Navigating the Legal Landscape: The Impact of Artificial Intelligence on the Future of Law
By

-- Umar Bashir --


Abstract:

This study examines the complex world of artificial intelligence (AI) on a worldwide basis. It begins by clarifying the disparate interpretations of AI that exist globally, offering a nuanced understanding of the terminology and theoretical models used in different jurisdictions. The study then explores how AI is becoming essential to the administration of law and legal procedures, emphasising the transformative influence of AI technologies on the legal field. It also highlights the importance of legal frameworks to govern and oversee the use of AI systems: as AI develops, legal systems must adapt and provide comprehensive legislation that addresses ethical issues, responsibility, and potential societal repercussions. To promote responsible, transparent practices and ethical AI development and application, the study argues for proactive legislation.

Keywords: Artificial Intelligence, Legislation, Challenges, Facilitated

The field of artificial intelligence (AI) is expanding quickly. Over the next ten years, AI technology is expected to become widely available in homes, workplaces, businesses, and public spaces; it will seep into almost every area of our existence. The way governments and the general public use AI technology for security is changing, and this shift can be witnessed globally: over the last ten years, all it has taken to find AI surveillance in operation is a trip through any of the world's major airports or the central business districts of large cities. These days, AI is present in a wide range of household appliances that we call "smart" because of the way they function, including drones, self-driving cars, robotic vacuum cleaners and lawn mowers, smart watches, and smartphones. Robotics, technology, healthcare (including medical diagnosis and surgery), transportation, the military, video games, government and public administration, insurance, finance and economics, audit, advertising, and the arts are just a few of the industries that rely heavily on AI. It has also been gradually applied to the field of law, including the prediction of court outcomes and predictive justice.

The idea that artificial intelligence is a brand-new phenomenon is completely untrue. Despite decades of research, artificial intelligence (AI) remains one of computer science's most intractable topics, according to Chris Smith et al.[1] The term "artificial intelligence" was first coined by John McCarthy in 1956,[2] when he held the first academic conference on the subject, although the quest to find out whether machines are truly capable of thinking started even earlier. There is, however, no internationally agreed definition of AI.[3] As of now, no single definition of artificial intelligence has gained consensus among technologists. The Oxford English Dictionary has arguably taken a very broad approach, defining AI as "the field of study that deals with the capacity of a machine to simulate or surpass intelligent human behaviour".[4] By creating a definition of AI, however, the US seems to have taken the lead.[5] Thirty-nine bills using the term "artificial intelligence" were introduced during the 115th Congress, of which four became law. Section 238 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 orders the Department of Defense to carry out a number of AI-related tasks.[6] Under subsection (b), the Secretary of Defense must designate a coordinator to supervise and guide the Department's operations "pertaining to the development and demonstration of artificial intelligence and machine learning". Subsection (g) defines artificial intelligence for the purposes of that section as follows:

1.    Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.

2.    An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.

3.    An artificial system designed to think or act like a human, including cognitive architectures and neural networks.

4.    A set of techniques, including machine learning, that is designed to approximate a cognitive task.

5.    An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.[7]

Subsection (f), however, mandates that the Secretary of Defense define the phrase "artificial intelligence" for use within the Department within a year of the law's passage. Although the definition is both general and particular, it does help the community understand what artificial intelligence is; as technology advances, it is possible that this definition will shift. The European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) published proposed AI Ethics Guidelines in December 2018. In addition to outlining a structure for creating trustworthy AI, it suggested a broad definition of AI, which reads as follows:

Artificial intelligence (AI) refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).[8]

Most remarkably, and in contrast to the definition of personal data, the EU does not currently have a specific legal definition for the phrase artificial intelligence. According to Mihalis Kritikos:

“Defining the precise object of regulation in dynamic technological domains is a challenge in itself. Given that AI is still an open-ended notion that refers to a very wide range of products and applications, there is no transnational agreement on a commonly accepted working definition, neither at the technical nor the legal/policy level. As there is no legal and political consensus over what AI is, a plurality of definitions has emerged in Europe and worldwide that are either too inclusive or too sector-specific. This fragmented conceptual landscape may prevent the immediate development of a lex robotics and possibly undermine all efforts to create a common legal nomenclature, which is particularly instrumental for the drafting, adoption and effective implementation of binding legal norms. Alternatively, a broad and technology-neutral definition that is based on the fulfilment of a variety of structural criteria, including the level of autonomy and the function, may be a more plausible option.”[9]

This presents serious difficulties for personal data, cyber security, and AI. We believe that, until the courts get involved, a definition of AI is unlikely to be agreed upon at the national or international level. Kritikos goes on to note that the legal classification of AI and the classification of its many applications are strongly related to the problem of definitional ambiguity. Should artificial intelligence systems and products be analysed using conventional legal frameworks, or are we witnessing the slow emergence of a completely new field of critical legal thought, one that could shift the way law is conceptualised from the conventional idea of code as law to something new and innovative?[10] As a result, a certain degree of legal harmonisation is needed in this field. Beyond the legislative obstacles, matters are further complicated by the nature and speed of technological progress in a variety of fields, including banking, agriculture, health, law, finance, and agricultural and food production.

Law is Facilitated by Artificial Intelligence

It is anticipated that various AI tools will have a significant positive impact on law and legal procedures, much as they have in other spheres of human existence and social interaction. At the very least, these technologies will lower the expense of legal proceedings and improve their coherence. For example, one can nowadays download apps onto an iPhone or other smartphone that translate languages automatically. AI algorithms have the potential to automate legal decision-making, forecast legal outcomes, and handle data more quickly and easily. This may contribute to better information analysis as well as more accurate and ultimately more just decision-making. Examples of how AI has aided law and the legal business are set out below.

AI has started to expand the opportunities in law and the legal field,[11] via the creation of analytical instruments.[12] One of the best-known programmes has been Ravel Law, which examines court rulings and creates profiles of judges based on their prior rulings.[13] This kind of analytical instrument has since been taken over by LexisNexis, which now handles judge and court profiling and can also predict the behaviour of a law firm. These technologies have evolved to offer a degree of precision in predicting a judge's likely conclusions: the tools track the norms, precedents, cases, precise wording, and reasons that judges typically consider before rendering a verdict or judgment. Such a framework also makes it feasible to evaluate the arguments made by judges in different courts and how those arguments shaped a judge's reasoning and decision-making. In addition, a project at a British university has created a programme that predicts verdicts of the European Court of Human Rights with roughly 75% accuracy.[14] There are also existing decision-making and consultation tools in the legal field, such as Lexis Answers (in the form of Lexis Answer Cards), in which legal queries are posed in natural language and optimal legal answers are generated by machines in response. Such technology served as the foundation for the development of the ROSS Intelligence project,[15] which attempts to search natural language in order to obtain the best possible answer. The DoNotPay "robot lawyer" initiative from 2015 should also be mentioned: its goal was to generate appeals against parking fines that had been computed improperly.[16]

Legal reasoning is intrinsically linked to the issue of AI in law; one cannot be understood without the other. A pioneering effort in that domain was TAXMAN, an application that computationally examined the majority and minority opinions in a well-known court case.[17] Since then, a number of legal and artificial intelligence specialists have addressed legal argumentation from formal or empirical perspectives.[18] Legal argumentation specialists who concentrate primarily on formal logical decision-making and legal justification are interested in AI.[19] AI has applications in law particularly where "clear" decisions must be made in circumstances ranging from minor to grave violations. Such legal decision-making is well suited to computer science because it resembles monotonous, technical, almost automatic decision-making. It must be approached through formal logic, or what legal argumentation theory calls the "internal phase of legal inferring".[20] The underlying point is that AI systems are frequently claimed to function independently, which highlights weaknesses in legal frameworks that prioritise human actors. Surprisingly, though, as Simon Chesterman points out, little thought is given to the definition of "autonomy" and how it relates to those gaps. The capacity of contemporary AI to function without human involvement is one of its primary features, and such systems are frequently described as autonomous. Against this background, the judiciary has also made an effort to use technology to aid decision-making within the legal system.
As a result, in the US, a "robot judge" has assisted judges in determining whether to hold or release a suspect in preliminary criminal proceedings.[21] The data currently available, however, indicates that judges are still hesitant to employ AI in their decision-making. Slovenia, for instance, uses ICT in the court system to some extent, but AI is still very far off; its primary areas of focus are the digitalisation of the land registry, electronic filings, electronic notaries public, IT as a management tool, and semi-automated enforcement based on a reliable document (such as an invoice). Numerous other nations have created similar systems. We therefore believe there is considerable room for initiatives to explore novel avenues at the intersection of artificial intelligence and law. As application software takes over simple legal issues (with human supervision, of course), advancements in AI will undoubtedly result in higher-quality legal rulings, allowing professionals to concentrate on cases that are less clear-cut. Indian Supreme Court Judge Hima Kohli has aptly described AI as a game-changer in the legal field, explaining why artificial intelligence poses not a threat but an opportunity.[22] In another article,[23] the author has highlighted various instances of how AI can be used in aid of the judiciary.
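
The outcome-prediction tools described above, such as the programme reported to predict European Court of Human Rights verdicts with roughly 75% accuracy,[14] are at their core text-classification systems trained on the language of past decisions. As a purely illustrative aid, the following minimal sketch in Python shows that general approach on an invented toy dataset; the case summaries, labels, and model choice are assumptions for demonstration and do not describe the actual systems cited.

```python
# Minimal sketch: predicting case outcomes from the text of past decisions.
# The data below is hypothetical; real systems are trained on thousands of judgments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Each entry: (summary of the facts/arguments, outcome label)
cases = [
    ("applicant detained without judicial review for months", "violation"),
    ("trial delayed repeatedly, no effective remedy offered", "violation"),
    ("search warrant issued and executed according to law", "no_violation"),
    ("applicant given full access to counsel and a public hearing", "no_violation"),
]
texts, outcomes = zip(*cases)

# Bag-of-words (TF-IDF) features fed into a linear classifier, broadly the
# kind of pipeline reported in early judicial-prediction studies.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, outcomes)

# Predict the likely outcome of a new, unseen case summary.
new_case = ["prolonged pre-trial detention with no review by a court"]
print(model.predict(new_case))  # e.g. ['violation']
```

Real systems of this kind are trained on thousands of judgments and evaluated on held-out cases; reported accuracy figures refer to performance on such unseen decisions, not on toy examples like this one.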

Legislation to Control Artificial Intelligence

An exciting new era of opportunities and challenges has been brought about by the rapid advances in artificial intelligence (AI) technology. As AI continues to pervade various facets of our lives, there is an increasing need for comprehensive legislation to address its ethical, legal, and societal ramifications. This point is supported by the remarks of the Chief Justice of India, D.Y. Chandrachud, who has cautioned that artificial intelligence can make biased decisions based on societal prejudices.[24] Speaking at the Indian Institute of Technology (IIT) Madras' 60th Convocation Ceremony, the Chief Justice described how AI holds great promise but may also be used to reinforce prejudice and unequal treatment:

“A significant impact of AI is its potential to amplify discrimination and undermine the right to fair treatment. Many AI systems have been shown to exhibit biased decision-making based on data inputs that reflect societal prejudices. For example, AI recruitment tools developed by firms favoured men over women because the tools were trained on profiles of successful employees who, for gendered reasons, happened to be predominantly male. In this way, data-driven systems can perpetuate biases and marginalise the societal control mechanisms that govern human behaviour.”

Further, if AI use has unfavourable effects on people or society, legislative regulation is required. The implications of gathering and exploiting incorrect data are significant, especially where personal data is concerned, even though AI can process in minutes or hours volumes of data that would take a human weeks or months. The question, then, is whether this new method of data analysis carries risks. Of course it does: as the example above shows, an AI recruitment tool that favours men over women perpetuates discrimination.
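
The recruitment example above points to a bias that can be surfaced with a simple disparate-impact check: compare the selection rates a model produces for different groups of applicants. The snippet below is a self-contained sketch of that check in Python; the decisions, group labels, and the four-fifths threshold are illustrative assumptions rather than a description of any real hiring tool.

```python
# Minimal sketch of a disparate-impact check on a model's hiring decisions.
# All data here is hypothetical; in practice the decisions would come from the
# model under audit and the groups from applicants' recorded attributes.

def selection_rate(decisions):
    """Fraction of applicants in a group that the model selects (1 = selected)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs: 1 = shortlisted, 0 = rejected.
decisions_by_group = {
    "men":   [1, 1, 0, 1, 1, 0, 1, 1],
    "women": [0, 1, 0, 0, 1, 0, 0, 0],
}

rates = {group: selection_rate(d) for group, d in decisions_by_group.items()}
print(rates)  # e.g. {'men': 0.75, 'women': 0.25}

# "Four-fifths rule" heuristic often used in employment-discrimination analysis:
# the disadvantaged group's selection rate should be at least 80% of the
# advantaged group's rate; otherwise the tool warrants closer scrutiny.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}",
      "-> review needed" if ratio < 0.8 else "-> within heuristic")
```

A low ratio does not by itself prove unlawful discrimination, but it flags the kind of skew that legislation and auditing frameworks would require organisations to investigate.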

AI's rapidly expanding role and development has been seen as both a threat and a potential solution to a number of today's problems. It will either be hidden from view or installed with the individual's full knowledge. Ryan Goosen et al., citing a May 2018 New York Times report, note that Chinese and US researchers have successfully programmed AI systems created by Amazon, Apple, and Google to perform tasks like opening websites and making phone calls without the users' knowledge.[25] According to the authors, it is only a short step to more sinister instructions such as sending money and opening doors. They point out that while Alexa, Siri, and Google Assistant are among the most popular AI devices available to consumers, and among the more recent entries on the market, they are by no means the only ones. But they correctly point out that:

“It's not difficult to picture cybercriminals going after the AI-driven client identification software of a financial institution or a dishonest rival assaulting the AI pricing algorithm of a different business. In fact, according to a survey by cybersecurity company Webroot, over 90% of cybersecurity professionals in the US and Japan believe that attackers will employ AI against the firms they work for.”[26]

A Discussion Paper on Human Rights and Technology was published in December 2019 by the Australian Human Rights Commission (AHRC).[27] The AHRC seeks to pinpoint significant legal gaps and suggests focused reform. For instance, a regulatory response that addresses justifiable community concerns over people's privacy and other rights is warranted with respect to the use of facial recognition technology. The Discussion Paper suggests enhancing the accountability safeguards for decision-making processes involving AI, and it raises important privacy-related issues. It makes clear that:

“Potential effects on human rights are immense and unparalleled. For instance, AI may have profound and permanent effects on how we provide healthcare, fight prejudice, and preserve privacy, to mention just three.”[28]

Human rights and moral decision-making should be supported by co-regulation and self-regulation. The legal system is not equipped to handle all of the societal ramifications of newly developed technologies, nor can it be. Effective co-regulation and self-regulation, facilitated by design guidelines, impact assessments, and professional codes, can encourage all parties involved to make decisions that are morally sound and respectful of human rights.[29]

There are currently no laws in India that specifically address AI regulation. The executive agency for AI-related strategies is the Ministry of Electronics and Information Technology (MEITY), which has established committees to develop an AI policy framework.[30] The NITI Aayog has established seven responsible AI principles: safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and the protection and reinforcement of positive human values. It is the constitutional duty of the Supreme Court and the lower courts to uphold fundamental rights, such as the right to privacy. The Information Technology Act and its implementing regulations are India's main pieces of legislation pertaining to data protection. Furthermore, MEITY introduced the Digital Personal Data Protection Bill, since enacted as the Digital Personal Data Protection Act, 2023 (discussed below). Under this law, people will be able to request information regarding the data that government and private organisations gather about them, as well as the techniques used to handle and store it.[31]

NITI Aayog, India's top public policy think tank, was given a mandate by the government to create rules and regulations for the development and application of artificial intelligence. The National Strategy for Artificial Intelligence (#AIForAll),[32] published in 2018 by NITI Aayog, included guidelines for AI research and development in healthcare, agriculture, education, "smart" cities and infrastructure, and smart mobility and transportation. In February 2021, NITI Aayog published Part 1 - Principles for Responsible AI, an approach paper that examines the numerous ethical issues surrounding the deployment of AI solutions in India, divided into system and societal considerations. System considerations primarily address the general rules guiding decision-making, the legitimate involvement of beneficiaries, and accountability for AI decisions, while societal considerations centre on how automation will affect employment and job creation. Part 2 - Operationalizing Principles for Responsible AI, published by NITI Aayog in August 2021, focuses on putting these principles into practice: it outlines the steps that the public and private sectors must take, in collaboration with research institutes, on regulatory and policy interventions, capacity building, incentives for ethical design, and frameworks for compliance with relevant AI standards. In an effort to allay some of the privacy concerns around AI platforms, the Indian government also recently passed the Digital Personal Data Protection Act, 2023.[33]

Additional Legal and Artificial Intelligence Challenges

Decades back, Aladdin's lamp could work wonder after wonder when it was rubbed the right way, but it became a bad master when it was rubbed the wrong way. The author holds the same opinion regarding AI: apart from the blessings it has brought to humanity, it can have disastrous consequences if it is not regulated or properly operated. The author now highlights some of the important challenges that AI has posed:

Data Paucity

The lack of data is one of the main issues facing AI. Artificial intelligence is only useful and functional when data is fed into it, and its efficiency is contingent upon the calibre of that data. AI-powered systems need data to produce optimal results. Businesses have difficulty getting access to the necessary volume of data, which makes it hard for them to aggregate the appropriate data sets to produce reliable results. Large IT companies such as Apple, Meta, and Google face difficulties when trying to create global applications using local data because many nations have strict IT regulations limiting data transfer. The resulting imbalance produces inconsistent and skewed outcomes.[34]

Lack of Talent

This ranks among the foremost obstacles facing artificial intelligence. Although AI is a relatively new science, there is a vast knowledge gap despite the field's tremendous advancements. Researchers, IT enthusiasts, and college students are among the few who possess the necessary understanding of AI's potential. This makes it difficult for organisations to recruit individuals with the knowledge and abilities needed to engage in ground-breaking applications of AI.[35]

Insufficient Trust

It is often unclear how deep learning models arrive at their predictions, and this poses one of the most important problems for AI. The average layperson does not know which set of inputs was used to produce a solution for a given programme, nor how much AI is incorporated into the objects and gadgets we use on a daily basis. Ordinary people still do not realise how AI integration functions in smart gadgets like TVs, phones, and even cars.[36]
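
One practical response to this opacity is to measure how much each input actually influences a trained model's predictions. The sketch below, using scikit-learn's permutation importance on synthetic data, is only an assumed illustration of that idea; it does not fully explain a deep model, but it shows how a black-box system can be made somewhat less opaque.

```python
# Minimal sketch: estimating which features drive a black-box model's predictions.
# Toy synthetic data; a real audit would use the production model and real inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, n_informative=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much the
# model's accuracy drops; a large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```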

AI is Biased

The type and volume of data used to train an AI algorithm determine whether an AI system is excellent or terrible. The only way to obtain effective AI services is to compile high-quality data. The data organisations collect every day is meaningless unless it is used to train an artificial intelligence system, yet organisations typically gather only a limited amount of data, representing only a narrow segment of a particular population, and models trained on it inherit that skew.[37]

Ethical Issues

This is yet another important AI challenge. Concerns over AI's accountability and privacy are emerging as a result of the technology's expanding applications, increasing integration, and growing independence. Organisations must act quickly to guarantee that AI systems behave ethically and fairly.[38]

Recognise Your Gadgets

Smart speakers, like Google Home and Amazon Alexa, are increasingly common in American homes. Most smart speaker owners are already aware that their devices are constantly "listening" for their "wake word", which allows them to respond to user commands and enquiries. The truth is that smart speakers are probably not the only technology in your home that is "listening": virtual assistants are frequently integrated into computers, tablets, and mobile devices, and it is becoming more and more typical for other gadgets (like TVs) to do the same. You ought to familiarise yourself with the smart technology in your house.[39]

Silent Audio Input

Making sure voice input is muted can help prevent a smart speaker from unintentionally recording private conversations if it hears its wake command. In keeping with our earlier advice to familiarise yourself with your gadgets, keep in mind that not all smart speakers may be in constant "listening" mode. Consider whether it would be best practice to disable voice input on each of these devices when you are having private conversations.[40]

Put on a Headset

It may not always be feasible to turn off voice input on every gadget. Consider whether there are other ways to stop data from being heard by smart devices and possibly recorded. When you use a headset during a call, for instance, your devices may record only your own speech, reducing the quantity of data that is captured. You might even get better audio quality as a bonus!

Modify Your "Wake Word"

Certain smart speakers let you modify the "wake word". If you discover that your device is inadvertently "waking" all the time, consider replacing the wake word with a term that you use less frequently.[41]

Understand Your Vendors

Modern technology is used by most, if not all, of the big tech companies that make smart speakers to safeguard whatever data they gather, but even the most robust security measures are susceptible to rogue attacks. Numerous start-up businesses are also creating innovative technology for use in homes. Make sure you are familiar with, trust, and comprehend the data policies of the firm from which you are buying your smart device. All tech companies ought to have easily accessible privacy policies outlining how they plan to use your data. Although these can be fairly long documents, the majority should be written in an understandable manner.[42]

Conclusion

Internet technology transcends national boundaries. Consequently, a worldwide response is needed to address the issues pertaining to the security of personal data, at least when viewed through the lens of cybercrime and the Internet. These legal fields present difficult tasks and issues related to sovereignty. While they will not be readily resolved in the near future, nations might eventually be compelled to create and put into effect comparable policies and legislation. This has largely happened, with a great deal of help from the EU, the OECD and, to a lesser degree, ASEAN and APEC. This research has shed light on the many challenges that lie ahead, from potential privacy erosion and employment displacement to ethical issues and biases inherent in AI algorithms. Because of the complex link between AI and the law, a proactive and flexible strategy is required. Acknowledging the necessity of an all-encompassing structure, legislation must take the lead in directing the creation, implementation, and use of AI technologies. Legislators can create an atmosphere that preserves moral principles, protects individual liberties, and lessens the hazards connected with the application of AI by actively shaping the legal framework surrounding the technology. Achieving a harmonious equilibrium between innovation and regulation is crucial for optimising the advantages of artificial intelligence while minimising its drawbacks. Only by working together will technologists, legislators, and the general public be able to successfully negotiate the rapidly changing AI landscape and create a future in which AI is used to drive progress.

Thank You

Umar Bashir, Advocate

(B.A., LLB, LLM (02 Years) Criminal Law, PGDCL)

Email : umarb373@gmail.com

M : 7006121252, 9797901560



[1] Smith, C., McGuire, B., Huang, T., Yang, G., The History of Artificial Intelligence, https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

[2] Moor, J., Artificial Intelligence Conference: The Next Fifty Years, American Association for Artificial Intelligence (2006), 87–88.

[3] Walters, R., Coghlan, M., Data Protection and Artificial Intelligence Law: Europe Australia Singapore - An Actual or Perceived Dichotomy, American Journal of Science, Engineering and Technology 2019; 4(4): 55–65.

[4] Oxford Dictionary, 11th Edition, 2008.

[5] Law Library of Congress, Regulation of Artificial Intelligence in Selected Jurisdictions, January 2019, https://www.loc.gov/collections/publications-of-the-law-library-of-congress/about-this-collection/

[6] John S. McCain National Defense Authorization Act for Fiscal Year 2019, Pub. L. 115–232, § 238, 132 Stat. 1658 (2018), https://www.congress.gov/115/bills/hr5515/BILLS-115hr5515enr.pdf

[7] FAA Reauthorization Act of 2018, Pub. L. 115–254, § 548, 132 Stat. 3186, https://www.congress.gov/115/billshr302/BILLS-115hr302enr.pdf/

[8] AI HLEG, A Definition of AI: Main Capabilities and Scientific Disciplines (2018).

[9] Kritikos, M., European Parliamentary Research Service Scientific Foresight Unit (STOA), PE 634.427, March 2019, https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-artificial-intelligence-ante-portas.pdf

[10] Ibid.

[11] Williams, K., Facciola, J. M., McCann, P., Catanzaro, V. M., (2017) The Legal Technology Guidebook. Springer.

[12] Conrad, J. G., Branting, L. K., (2018) Introduction to the Special Issue on Legal Text Analytics. Artificial Intelligence and Law, 26, 99–102.

[13] O’Grady, J. P., (2018) Dewey B Strategic—2017 Blogazine: Risk, Value, Strategy, Innovation, Knowledge and the Legal Profession. Year of the Book Press.

[14] Aletras, N., Tsarapatsanis, D., Preotiuc-Pietro, D. and Lampos, V., Predicting judicial decisions of the European Court of Human Rights: A natural language processing perspective. PeerJ Computer Science, (2016), 93.

[15] Riskin, G., Ross Intelligence Update: How IBM Watson App Helps US Lawyers with Legal Research. Law Firm Technology (2017).

[16] Livni, E., The world’s first robot lawyer isn’t a damn lawyer. Quartz, 2017.

[17] McCarty, L. T., (1976) Reflection on TAXMAN: an experiment in artificial intelligence and legal reasoning. Harvard Law Review: 90: 837.

[18] Bench-Capon, T., (2017) Hypo’s legacy: introduction to the virtual special issue. Artificial Intelligence and Law: 25: 1–46.

[19] Feteris, E., (2017) Fundamentals of Legal Argumentation. Springer, 33–41.

[20] Alexy, R., (1989) A Theory of Legal Argumentation. Oxford University Press.

[21] Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., Mullainathan, S., (2018) Human Decisions and Machine Predictions. The Quarterly Journal of Economics 133: 237–293.

[22] AI is a Game-changer in legal field: Justice Hima Kohli on why Artificial Intelligence does not pose a threat, but an opportunity, available at https://www.livelaw.in/top-stories/artificial-intelligence-threat-opportunity-game-changer-supreme-court-judge-hima-kohli-221379, accessed on 03/02/2024.

[23] Artificial Intelligence In Aid Of Judiciary, https://www.livelaw.in/columns/artificial-intelligence-in-aid-of-judiciary-155742?infinitescroll=1, extracted from LiveLaw, accessed on 03/02/2024.

[24] CJI DY Chandrachud cautions about Artificial Intelligence; Says it can make biased decisions based on Societal Prejudices, available at https://www.livelaw.in/top-stories/cji-dy-chandrachud-cautions-about-artificial-intelligence-says-it-can-make-biased-decisions-based-on-societal-prejudices-233417, accessed on 03/02/2024.

[25] Goosen, R., Rontojannis, A., Deutscher, S., Rogg, J., Bohmayr, W., Mkrtchain, D., Artificial Intelligence Is a Threat to Cybersecurity. It’s Also a Solution, 2018, https://www.bcg.com/publications/2018/artificial-intelligence-threat-cybersecurity-solution

[26] Ibid.

[27] Human Rights Commission, Australia, Human Rights and Technology: Discussion Paper, December 2019, https://humanrights.gov.au/our-work/technology-and-human-rights/publications/discussion-paper-human-rights-and-technology

[28] Ibid.

[29] Ibid.

[30] Artificial Intelligence in the context of the Indian legal profession and judicial system, available at https://www.barandbench.com/columns/artificial-intelligence-in-context-of-legal-profession-and-indian-judicial-system, accessed on 04/02/2024.

[31] Available at https://www.niti.gov.in/, accessed on 04/02/2024.

[33] AI Regulation in India: Current State and Future Perspectives, available at https://www.morganlewis.com/blogs/sourcingatmorganlewis/2024/01/ai-regulation-in-india-current-state-and-future-perspectives, accessed on 05/02/2024.

[34] What are the challenges of Using Artificial Intelligence, available at https://www.careerera.com/blog/what-are-the-challenges-of-using-artificial-intelligence, accessed on 05/02/2024.

[35] Ibid.

[36] Ibid.

[37] Ibid.

[38] Ibid.

[39] Walters, R., Novak, M., Cyber Security, Artificial Intelligence, Data Protection and the Law, Springer Nature Singapore Pte Ltd, 2021, p. 58.

[40] Ibid.

[41] Ibid.

[42] Ibid.

