ARTICLE
4 September 2024

Role Of AI In Legal Systems: A Detailed Analysis

S.S. Rana & Co. Advocates
S.S. Rana & Co. is a Full-Service Law Firm with an emphasis on IPR, having its corporate office in New Delhi and branch offices in Mumbai, Bangalore, Chennai, Chandigarh, and Kolkata. The Firm is dedicated to its vision of proactively assisting its Fortune 500 clients worldwide as well as grassroots innovators, with the highest quality of legal services.

Introduction:

A two-day conference on 'Technology and Dialogue' was organised by the Supreme Court on April 13 and 14, aiming to delve into the intersection of technology and the legal system, with a particular focus on the transformative role of Artificial Intelligence (AI) in the judiciary. In his keynote address at the Indo-Singapore Judicial Conference, Hon'ble Mr. Chief Justice DY Chandrachud commended the conference's pioneering emphasis on technology and its potential to foster essential dialogues at the intersection of technology and the judiciary.

The CJI also recognized India's progress in leveraging technology to modernize its judiciary through initiatives like the e-Courts project.

To improve access to justice for all citizens, initiatives under the e-Courts project aim to computerize court processes, digitize case records, and establish online case management systems across all levels of the judiciary. By reducing administrative burdens and automating routine tasks, these initiatives intend to enhance the speed and efficiency of legal proceedings.1

Harnessing AI for Legal Transformation

The CJI emphasized the transformative potential of AI in legal research, describing it as a "game-changer".

Initiatives like SUVAS and SUPACE:

He further noted the Supreme Court of India's introduction of live transcription services aimed at enhancing accessibility to legal information. This initiative addresses linguistic diversity by translating judicial proceedings into Hindi and 18 regional languages using an AI tool termed SUVAS (Supreme Court Vidhik Anuvaad Software), ensuring that legal information is accessible to citizens across India.

In addition, the adoption of AI tools like SUPACE (Supreme Court Portal for Assistance in Court's Efficiency) by the Supreme Court represents a significant development in how judicial proceedings are handled. These tools are designed to enhance the efficiency of information processing and help in organizing and retrieving relevant data, which can assist judges in making more informed decisions.

Hon'ble Ms. Justice Pratibha M. Singh, a prominent figure in the Indian judiciary, has been at the forefront of delivering insightful and landmark judgments concerning the use of AI.

In Christian Louboutin SAS & Anr. vs. M/s The Shoe Boutique – Shutiq, decided on August 22, 2023,2 the Hon'ble Delhi High Court held that AI tools like ChatGPT cannot form the basis for adjudication of legal or factual issues in a court of law. The response of a Large Language Model (LLM)-based chatbot such as ChatGPT, which the ld. Counsel for the Plaintiff sought to rely upon, depends on a host of factors, including the nature and structure of the user's query, the training data, etc. Further, AI chatbots may generate incorrect responses, fictional case laws, imaginative data, etc. Hence, the accuracy and reliability of AI-generated data remains a grey area.

Challenges associated with AI Integration:

With opportunities comes an inevitable counterpart: challenges. The CJI highlighted this duality, emphasizing that while AI offers significant potential, it also brings forth a range of challenges.

One of the primary concerns is the potential for errors and biases inherent in AI systems. These systems are only as good as the data they are trained on; if the data contains biases, those systematic biases will be perpetuated in the outputs, especially in predictive models, whose risk-assessment predictions then reflect the limited viewpoint embedded in the data.

For instance, the deployment of hotspot policing algorithms, aimed at predicting and preventing crime by identifying areas with high crime rates, has raised significant concerns over racial bias and its consequences for communities. Studies in the USA have shown that a Black American is five times more likely to be stopped by police than a white American and twice as likely to be arrested.3 As a result, an algorithm that uses arrest data as a baseline for police deployment decisions may direct disproportionate policing toward areas with higher shares of Black residents, subjecting them to over-policing. The data fed back into the algorithms is thereby further skewed, creating a cycle in which these communities are repeatedly targeted.
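
This feedback loop can be made concrete with a toy simulation. The sketch below rests entirely on illustrative assumptions (two districts with identical true crime rates, and patrols allocated by a simple winner-takes-most "hotspot" rule); it is not a model of any real deployment.

```python
# Toy simulation of the arrest-data feedback loop described above.
# Both districts have the SAME underlying crime rate; only the
# historical arrest counts differ. All numbers are illustrative.

TRUE_CRIME_RATE = 0.05                          # identical in both districts
arrests = {"District A": 60, "District B": 40}  # skewed historical record

for cycle in range(1, 6):
    # Hotspot rule: the district with more recorded arrests gets
    # the bulk of the patrols (80 vs. 20).
    top = max(arrests, key=arrests.get)
    for district in arrests:
        patrols = 80 if district == top else 20
        # Arrests are only recorded where officers are present, so the
        # heavily patrolled district accumulates arrests faster even
        # though actual crime is identical in both districts.
        arrests[district] += patrols * TRUE_CRIME_RATE
    share_a = arrests["District A"] / sum(arrests.values())
    print(f"Cycle {cycle}: District A's share of recorded arrests = {share_a:.1%}")
```

Because patrol allocation and the arrest data it generates feed each other, District A's share of recorded arrests keeps growing even though neither district is actually more crime-prone.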

In Los Angeles, the use of predictive policing programs such as PredPol has been criticized for disproportionately targeting African American and Latino neighborhoods.4

These hotspot policing algorithms analyze vast amounts of data, including past crime reports and other variables, to forecast where crimes are most likely to occur, enabling law enforcement agencies to act against crime proactively.

In cases of granting bail, the data analyzed could include the following (a minimal scoring sketch follows the list):

  1. The history of the person in question, i.e., criminal records, etc.;
  2. Flight risk, i.e., ties to the community, etc.;
  3. The nature and severity of the charges the accused is facing;
  4. Socio-economic background, i.e., race, religion, age, education;
  5. Risk assessment of the person committing a crime while on bail or not appearing before the court during the bail period;
  6. Other factors, such as substance abuse, mental health issues, etc.5
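
As a hedged illustration of how such factors might be reduced to a single number, consider the sketch below. Every field name and weight is a hypothetical assumption made for illustration; it does not reflect the methodology of COMPAS or any real tool.

```python
from dataclasses import dataclass

# Hypothetical sketch of a bail risk-assessment input. Field names and
# weights are illustrative assumptions, not any real tool's methodology.

@dataclass
class BailApplicant:
    prior_convictions: int     # factor 1: criminal history
    community_ties: int        # factor 2: 0 (none) to 3 (strong)
    charge_severity: int       # factor 3: 1 (minor) to 5 (grave)
    age: int                   # factor 4: demographic attribute
    failures_to_appear: int    # factor 5: past no-shows in court
    substance_abuse: bool      # factor 6: other factors

def risk_score(a: BailApplicant) -> float:
    score = 2.0 * a.prior_convictions
    score -= 1.5 * a.community_ties        # strong ties lower flight risk
    score += 1.0 * a.charge_severity
    score += 2.5 * a.failures_to_appear
    score += 1.0 if a.substance_abuse else 0.0
    # Factor 4 (age, and by extension race or religion) is deliberately
    # left out of the sum: demographic inputs act as bias proxies, and
    # even "neutral" features may encode them indirectly.
    return score

applicant = BailApplicant(prior_convictions=1, community_ties=3,
                          charge_severity=2, age=28,
                          failures_to_appear=0, substance_abuse=False)
print(f"Risk score: {risk_score(applicant):.1f}")  # lower = safer to release
```

Whatever weights a real tool uses, they are derived from historical data, so, as the next paragraph notes, any bias in that data flows directly into the score.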

Despite their benefits, these algorithms rely heavily on historical data, which often reflects existing biases in law enforcement practices. The principle and practice of using the above-mentioned data in bail applications may therefore severely impair justice for many.

Another significant challenge is the risk of 'AI hallucinations', where the AI generates false or misleading information.

An incident in New York underscored the gravity of this issue, where a lawyer representing a client in a routine personal injury lawsuit used ChatGPT to prepare a filing.6 Unfortunately, the AI tool provided fictitious legal cases, which the attorney then presented to the court. This led to a judge considering sanctions, marking one of the first instances where AI-generated hallucinations had a tangible impact on legal proceedings. The court encountered an 'unprecedented situation', as the filing referenced cases that did not actually exist. This incident underscores the critical need for caution and rigorous verification when incorporating AI into legal processes.
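
A simple safeguard is to treat every AI-suggested authority as unverified until it is located in a trusted source. The sketch below assumes a hypothetical local index of known citations; in practice, one would query an actual citator or court database.

```python
# Hypothetical verification step: cross-check AI-suggested citations
# against a trusted index before filing. The index below is a stand-in
# for a real citator or court database; all entries are invented.

known_citations = {
    "Smith v. Jones, 2015",
    "Doe v. Roe, 2019",
}

ai_suggested = [
    "Doe v. Roe, 2019",
    "Miller v. Acme Corp, 2021",  # a plausible-sounding hallucination
]

for citation in ai_suggested:
    if citation in known_citations:
        print(f"VERIFIED: {citation}")
    else:
        print(f"NOT FOUND - do not cite: {citation}")
```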

The case of State vs. Loomis7 has become a significant milestone in the discussion about the use of technology in the criminal justice system. The Wisconsin Supreme Court's decision in this case raised crucial questions about fairness, transparency, and the role of algorithmic risk assessments in sentencing. In 2013, Eric Loomis pleaded guilty to the charges against him, and during his sentencing, the judge used a risk assessment tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). COMPAS is software used to evaluate a defendant's risk of recidivism based on various factors, including criminal history, age, employment status, and social behavior. The COMPAS report classified Loomis as a high risk for recidivism, and the court sentenced him to six years of imprisonment.

Aggrieved by this, Loomis filed a motion for post-conviction relief, claiming that the use of COMPAS violated his due process rights. He argued that COMPAS reports provide data relevant only to particular groups and that, because the methodology used to produce the reports is a trade secret, the court's use of the COMPAS assessment infringed both his right to an individualized sentence and his right to be sentenced on accurate information.

In 2016, the Wisconsin Supreme Court ruled against Loomis but acknowledged the dangers of such assessments that he had highlighted. The court concluded that judges must proceed with caution when using such risk assessments, underscoring the need for transparency, judicial discretion, and careful consideration of potential biases when using algorithmic tools in sentencing.

Impact/Consequences of AI Integration:

AI models leveraging algorithms are increasingly woven into the fabric of various industries, not merely legal systems. However, these algorithms often operate as 'black boxes', generating outcomes without clear explanations, thereby obscuring the accuracy and fairness of their performance. This lack of transparency and accountability poses significant risks, such as:

  • Potential for wasteful spending

Organizations may invest heavily in AI technologies that fail to deliver expected efficiencies, ultimately resulting in financial loss. For instance, Chicago's predictive policing programme was designed to identify individuals who were more likely to commit shootings, and those flagged by the algorithm faced heightened police scrutiny. Despite the $2 million investment in the programme, it failed to demonstrate any positive impact, calling into question the efficacy of such expensive technological solutions.8

Similarly, in 2016, Michigan's Integrated Data Automated System (MIDAS) wrongly accused Brian Russel of unemployment insurance fraud, seizing his $11,000 tax refund.9 This decision was made not by a human but by an algorithm. MIDAS made approximately 48,000 accusations of fraud against unemployment insurance recipients. Despite the millions of dollars Michigan spent on the software, the state auditor found that 93% of MIDAS's fraud determinations did not involve any actual fraud. These examples highlight the significant financial losses incurred by investing in flawed systems;

  • Biases

These systems are trained on historical data and can inadvertently perpetuate biases, producing unfair outputs that reinforce existing inequalities. For instance, biased data fed into machine learning algorithms can lead to discriminatory practices in hiring, lending, and law enforcement.

To know more about how AI creates bias in the recruitment process, kindly refer to our article titled "GenAI Bots get a big say in hiring".

External audits can help mitigate such bias. The Ada Lovelace Institute, a British research institute focusing on the just and equitable use of artificial intelligence, recommends "bias audits" conducted by external experts. In principle, bias audits can be conducted with or without access to the system's code. For example, one could audit hiring algorithms by "participating in them", such as by submitting identical job applications while varying only the applicants' race, as sketched below.
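
A minimal sketch of such a "participation" audit follows. The screening function here is hypothetical and deliberately biased so that the audit has something to detect; a real audit would submit applications to the actual system under review.

```python
# Hypothetical hiring screen under audit. The auditor treats it as a
# black box: no access to its code is needed, only to its decisions.

def screen_cv(cv: dict) -> bool:
    score = cv["years_experience"] * 2 + cv["degree_level"]
    if cv["name"].startswith("A"):  # stand-in for a protected attribute
        score -= 3                  # the injected bias the audit should find
    return score >= 10

# Identical applications that differ only in the applicant's name.
base_cv = {"years_experience": 4, "degree_level": 3}
names_group_a = ["Aarav", "Aisha", "Amara"]
names_group_b = ["James", "Emily", "Oliver"]

def pass_rate(names):
    results = [screen_cv({**base_cv, "name": n}) for n in names]
    return sum(results) / len(results)

print(f"Group A pass rate: {pass_rate(names_group_a):.0%}")
print(f"Group B pass rate: {pass_rate(names_group_b):.0%}")
```

A large gap in pass rates between otherwise-identical applications is the audit's signal that the system treats the two groups differently.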

  • Lack of Efficiency

A notable example of this is the Detroit Police Department, which faced significant backlash after wrongfully arresting several Black residents based on facial recognition software, despite its ineffectiveness and lack of adequate testing and training. The department later acknowledged that its software misidentified suspects approximately 96% of the time.10

  • Lack of Accountability

The reliance on AI introduces a concerning trend where humans defer responsibility to algorithms. This lack of accountability can have negative consequences, especially in high-stakes environments such as healthcare, where AI-driven diagnostics and treatment recommendations directly impact patient outcomes. Moreover, Article 22 of the GDPR specifies that a purely automated decision without scope for human intervention is not permissible, especially where the decision has a legal impact on the person. To know more about prohibited profiling under Article 22 of the GDPR in credit scoring, kindly refer to our article titled "Credit Score calculation and Data Privacy Concerns".11

  • Lack of Transparency

The lack of transparency in AI decision-making processes complicates accountability. In the name of trade secrets, the logic behind the software may not be revealed, making it very difficult to ascertain whether a decision was rightly or wrongly reached.

The Dutch Child Welfare Scandal12 (Toeslagenaffaire) involved the wrongful accusation of child welfare fraud against tens of thousands of families. In 2019, an investigation revealed systemic issues within the tax authority, including discriminatory practices and a lack of human oversight. The scandal ultimately led to the resignation of the Dutch government in January 2021. Later, efforts were made to compensate the affected families.

  • Privacy Concerns

Furthermore, the vast amounts of data required to train these machine learning models raise significant privacy concerns. Companies often collect extensive personal data without adequate consent, exposing individuals to potential misuse of their information.

How does the EU AI Act regulate AI?

The European Union's Artificial Intelligence Act (EU AI Act) is a comprehensive framework designed to regulate AI technologies, ensuring their safe and ethical deployment. One of the critical aspects of this legislation is the categorization and management of AI systems based on their potential risks. To know more about this risk categorization as defined under the EU AI Act, kindly refer to our article titled "EU Parliament Gives Final Nod to Landmark Artificial Intelligence Law."13

Article 10: This article mandates that high-risk AI systems must be trained on datasets that are relevant, representative, free of errors, and complete to the best extent possible. This is to ensure that the AI systems do not perpetuate or amplify biases.
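
In engineering terms, this data-governance duty implies routine checks on training data before use. The sketch below is a minimal illustration under assumed field names and tolerances; the Act itself does not prescribe any particular code or threshold.

```python
# Illustrative pre-training checks of the kind Article 10 points toward:
# completeness and representativeness. Records, fields, and the 10%
# tolerance are all assumptions made for this sketch.

records = [
    {"age": 34, "region": "north", "outcome": 1},
    {"age": None, "region": "south", "outcome": 0},  # missing value
    {"age": 29, "region": "north", "outcome": 1},
    {"age": 41, "region": "north", "outcome": 0},
]

# Completeness: flag records with missing fields.
incomplete = [r for r in records if any(v is None for v in r.values())]
print(f"Incomplete records: {len(incomplete)} of {len(records)}")

# Representativeness: compare group shares against a reference population
# (an assumed census-style baseline).
reference_share = {"north": 0.5, "south": 0.5}
for region, expected in reference_share.items():
    actual = sum(r["region"] == region for r in records) / len(records)
    if abs(actual - expected) > 0.1:  # illustrative tolerance
        print(f"Region '{region}' misrepresented: {actual:.0%} vs {expected:.0%}")
```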

In addition, predictive profiling without human oversight falls under the category of high risk and is also designated as a prohibited use of AI under Article 5(1)(d) of the Act.

Conclusion: The Need for Ethical Considerations

In his concluding remarks, the CJI noted that the advancement of technology and AI is inevitable. Emphasizing AI's potential to significantly transform professions, particularly in the field of law, where AI can expedite and streamline justice delivery, he stated, "The era of maintaining the status quo is behind us; it is time to embrace evolution within our profession and explore how we can harness the processing power of technology to its fullest within our institutions."

Edwin Clemance Thottappilly, Intern at S.S. Rana & Co. has assisted in the research of this article.

1 https://timesofindia.indiatimes.com/technology/tech-news/chief-justice-of-india-dy-chandrachud-advocates-for-ethical-ai-integration-in-legal-research/articleshow/109265942.cms

2 https://indiankanoon.org/doc/128131570/

3 https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/

4 https://www.latimes.com/california/story/2020-04-21/lapd-ends-predictive-policing-program

5 https://ijrpr.com/uploads/V5ISSUE5/IJRPR28130.pdf

6 https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/

7 https://harvardlawreview.org/print/vol-130/state-v-loomis/

8

9 https://www.freep.com/story/news/local/michigan/2019/12/22/government-artificial-intelligence-midas-computer-fraud-fiasco/4407901002/

10 https://time.com/6991818/wrongfully-arrested-facial-recognition-technology-essay/

11 https://www.barandbench.com/law-firms/view-point/credit-score-calculation-and-data-privacy-concerns

12 https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/

13 https://ssrana.in/articles/eu-parliament-final-nod-landmark-artificial-intelligence-law/

For further information, please contact S.S. Rana & Co. at email: info@ssrana.in or call (+91-11-4012-3000). Our website can be accessed at www.ssrana.in.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
