In our previous article, we explored the significant transformation brought about in the legal industry by Artificial Intelligence (AI) perceiving "the law as data". However, it is crucial to grasp the full meaning of that phrase: AI sees the law as data, but not as law. While AI serves as a valuable tool in the legal field, it is essential to acknowledge the challenges it faces in practical legal applications, particularly in niche areas of law or smaller jurisdictions. Drawing a parallel, consider a knife: while it is an indispensable instrument in the kitchen, it can also pose a danger if not used appropriately.

While technology has made significant advancements, it still faces challenges when dealing with complex legal knowledge, especially in niche areas of law or smaller jurisdictions. According to Neil Sahota, the essence of AI lies in the training process: the machine is supplied with data and algorithms so that it can discern patterns, make predictions, or accomplish specific tasks. In simple terms, despite being fed vast amounts of data, AI still lacks critical thinking abilities and a practical understanding of the law. It relies solely on detected patterns and does not possess the same understanding of the law as a human lawyer. This results in a garbage-in-garbage-out scenario, hindering its effectiveness in addressing complex legal matters.
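To make the garbage-in-garbage-out point concrete, the following minimal Python sketch (with invented training examples and labels) shows how a toy pattern-matcher faithfully reproduces the labelling error it was trained on:

```python
from collections import Counter, defaultdict

# Toy "training data": snippets of legal questions labelled by area of law.
# One label is deliberately wrong ("garbage in") -- the deposit question
# is mislabelled as criminal law.
training_data = [
    ("landlord refuses to return my deposit", "criminal"),   # mislabelled!
    ("tenant served with an eviction notice", "property"),
    ("charged with theft of company funds", "criminal"),
    ("disputed boundary between two properties", "property"),
]

# "Train": count how often each word appears under each label.
word_counts = defaultdict(Counter)
for text, label in training_data:
    for word in text.split():
        word_counts[label][word] += 1

def predict(text):
    """Pick the label whose training words overlap most with the input."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

# "Garbage out": a deposit dispute now looks like a criminal matter,
# because the pattern was learned from a mislabelled example.
print(predict("my landlord kept the deposit"))  # -> 'criminal'
```

The system does not "know" any law; it mirrors the patterns, and the mistakes, in its training data.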

  • Accuracy:

Acquiring high-quality data for an AI system poses a challenge in the legal field. While diverse sources are used, ensuring accuracy becomes difficult once resources such as published books, Wikipedia articles, and a refined "Common Crawl" repository are exhausted. Unlike human lawyers, who learn from handpicked reliable sources, AI models are fuelled by both labelled and unlabelled data, potentially leading to erroneous outcomes. Additionally, if trained on inaccurate data, AI models may hallucinate or fabricate facts.

Interestingly, AI systems are only as good as the data on which they are trained. If the data is inaccurate, the AI system will also be inaccurate. A New York lawyer is facing possible sanctions after citing fake cases generated by OpenAI's ChatGPT in a legal brief filed in federal court. The incident occurred in a personal injury lawsuit against Avianca, where the lawyer used ChatGPT to supplement his legal research. However, the judge discovered that six of the cited cases were bogus, leading to doubts about the reliability of the lawyer's sources. The mistake gained media attention and prompted discussions about the need for verification when using AI-powered tools in legal research. Therefore, one should approach AI as a helpful starting point, rather than a definitive source.
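The lesson generalises: AI-suggested authorities should be checked against a trusted source before they are relied on. The Python sketch below is purely illustrative (the case names and the "verified" list are invented stand-ins for a real citator or court-registry lookup):

```python
# Hypothetical sketch: flag AI-suggested citations that cannot be found
# in a trusted source. In practice the verified set would be a query
# against a court registry or commercial citator, not a hard-coded list.

VERIFIED_CASES = {
    "Smith v Jones [2010] EWCA Civ 123",   # invented entries for illustration
    "Doe v Acme Ltd [2015] UKSC 45",
}

def check_citations(ai_suggested_cases):
    """Separate AI-suggested citations into verified and unverified."""
    verified = [c for c in ai_suggested_cases if c in VERIFIED_CASES]
    unverified = [c for c in ai_suggested_cases if c not in VERIFIED_CASES]
    return verified, unverified

draft_citations = [
    "Smith v Jones [2010] EWCA Civ 123",
    "Roe v Globex Corp [2019] EWHC 999",   # plausible-looking but unverifiable
]

ok, suspect = check_citations(draft_citations)
for case in suspect:
    print(f"WARNING: could not verify {case!r} -- check before filing")
```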

AI systems may also have limited awareness of different legislation and jurisdictions. In a niche area or a small jurisdiction, AI models might not be effectively trained to address specific needs. Therefore, legal professionals should exercise caution when relying solely on AI-generated content for legal drafting. It is important to consider the limitations and potential inaccuracies of AI models and to ensure that human expertise and verification are incorporated into the process.

  • Bias Concerns:

Similar to their human counterparts, AI systems can exhibit biases. Biased training data or algorithm design can result in unfair treatment of certain individuals or groups. Using historical data that reflects past mistakes can lead AI systems to replicate those biases.

AI systems learn from the data they are trained on, which means that if an AI system is trained on biased data, the system itself will be biased. For example, if an AI system is trained on a dataset of legal cases that predominantly favours men, it may be more likely to recommend that men be given lighter sentences than women.

Bias in AI systems can also damage public trust in the legal system. For example, if people believe that AI systems are biased against them, they may be less likely to report crimes or cooperate with law enforcement. Addressing this challenge requires mechanisms for detecting, measuring, and mitigating biases in AI systems, ultimately promoting fairness and equity in the legal field.
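By way of illustration only, a very simple measurement step might compare the rate of favourable outcomes across groups in a model's recommendations. The sketch below uses invented records and an arbitrary threshold; real bias audits rely on far richer statistical testing:

```python
# Illustrative sketch: measure a demographic-parity-style gap in a
# model's recommendations. All records are invented for demonstration.

recommendations = [
    # (group, recommended_lighter_sentence)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def favourable_rate(records, group):
    """Share of a group that received the favourable recommendation."""
    outcomes = [fav for g, fav in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = favourable_rate(recommendations, "A")  # 0.75
rate_b = favourable_rate(recommendations, "B")  # 0.25
gap = abs(rate_a - rate_b)

print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}, parity gap: {gap:.0%}")
if gap > 0.2:  # the threshold here is arbitrary, purely for illustration
    print("Parity gap exceeds threshold -- review training data and model")
```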

  • Transparency:

Understanding the decision-making process of AI systems is crucial in the legal field. Transparency builds trust among legal practitioners, clients, and the public. By providing clear explanations and justifications for AI-generated outcomes, we can ensure ethical and responsible use of AI. Achieving transparency involves explainable AI techniques, documentation of AI processes, and external audits of AI systems.
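As a toy illustration of what an "explanation" can look like in practice (the features and weights below are invented), even a simple linear scoring model can report how much each input contributed to its output:

```python
# Toy sketch of explainable output: a linear scorer that reports each
# feature's contribution alongside the final score. Weights are invented.

WEIGHTS = {
    "prior_rulings_in_favour": 0.6,
    "contract_clause_present": 0.3,
    "jurisdiction_match": 0.1,
}

def score_with_explanation(features):
    """Return a score plus a per-feature breakdown of how it was reached."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"prior_rulings_in_favour": 1, "contract_clause_present": 1, "jurisdiction_match": 0}
)

print(f"score: {score:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:+.2f}")
```

Explainability techniques for real models are considerably more sophisticated, but the goal is the same: a decision accompanied by reasons that a practitioner can review.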

Despite these challenges, the use of AI in the legal field is expected to continue growing. AI has shown the potential to pave the way towards the "AI lawyer", though not, as yet, as a replacement for human expertise and judgement. Even though there are many examples online where AI falls short, the reality is that it is well suited to handling legal tasks under the supervision of a knowledgeable legal expert.

While AI cannot replace human expertise and judgement, it serves as a valuable tool that enhances the capabilities and efficiency of legal professionals in the digital age.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.