Artificial intelligence is the talk of the town. ChatGPT and other generative AI tools are now firmly at the center of the global zeitgeist, and as they continue to develop at lightning pace, we're all learning how best to utilize this generation-defining phenomenon.

Knowing how a tool works means recognizing its limitations. The legal sector's unique position in relation to AI makes it hard to say whether the technology is a threat or an opportunity. The reality is, of course, far more nuanced: yes, generative AI brings all manner of challenges, from attorneys relying on it too heavily to the automation of work currently done by people, but it can also serve as a helpful tool for attorneys. Its constant evolution means we must be agile in our approach and, most importantly, keep abreast of its limitations.

Generative AI: Helpful, but not gospel

Currently, AI is a quick-fire version of a Google search on steroids; the results are a combination of answers drawn from secondary sources available on the internet up to 2021, which users could find themselves. The difference, though, lies in the iterative nature of the interaction. A broad search can be fine-tuned through further exchanges with the generative AI, and the 'interaction' itself is more conversational than transactional. This is something for attorneys to watch out for; informality may lead to the sharing of confidential information, and since a generative AI tool is always learning, it's important to keep the lines drawn between what can and can't be fed in, lest it be spat back out without consent.

While AI is constantly learning, it can't be treated as the preeminent source of reliable information. With the limits of its capabilities still unknown, AI could one day become a go-to authority on legal matters; for now, it serves as an aid for attorneys. Here at Rimon, members of our team use its breadth of knowledge to double-check the non-confidential elements of their arguments, saving labor time. It can also help create quick, easily digestible summaries or abstracts of documents. But we're by no means handing over the keys to the kingdom: everything has to be checked by a real lawyer.

The same goes for how we can avoid dangerous and unethical advice while using AI. In a recent article, Jon Garon explored these dangers. ChatGPT, for example, is currently notorious for adding unverified footnotes to its answers, simply because people 'like' footnotes. So, if attorneys share its algorithmic responses verbatim without undertaking their own due diligence, they could be committing malpractice by offering incorrect advice. Garon also flags the ethical concerns of repurposing content from AI tools: if its responses are drawn from secondary sources written by someone else, then an attorney cannot, and should not, claim ownership.

Indeed, as recently as June 22, a U.S. judge imposed sanctions on two New York lawyers who submitted a legal brief that included six fake case citations generated by ChatGPT.

The judge ordered the two lawyers and their law firm to pay a $5,000 fine. The judge found the lawyers acted in bad faith and made "acts of conscious avoidance and false and misleading statements to the court." In addition, the lawyers and their firm suffered the embarrassment of having their names and misdeeds published for the entire world to see.

Attorneys remain irreplaceable

"The first thing we do, let's kill all the lawyers."

So said Dick the Butcher in Henry VI, Part 2, delivering one of Shakespeare's most famous lines. But was this maleficent line also a premonition of the replacement of human attorneys by AI? In short, no. Civilization is based upon law, and as far as we can tell, we're not turning humanity over to ChatGPT just yet. There are threats, and there are opportunities, surrounding AI. But ultimately, the legal sector needs real people to get the job done.

Alarm bells have certainly been sounded about the replacement of humans by artificial intelligence. A study by Goldman Sachs estimates that 300 million full-time jobs could be lost to automation globally as a new wave of AI systems is integrated into workflows. So, it would be remiss not to acknowledge that certain areas may see job losses, even in the legal sector. But these are likely to be administrative and paralegal roles, rather than a mass displacement of attorney positions.

The role of lawyers is likely to move with the times, transformed rather than eroded by AI. With some aspects of the role becoming automated, lawyers may end up becoming more strategically focused, taking on a corporate counsel role more often. In litigation, for example, automation is becoming more prevalent – but clients will still need an attorney who has the specialist training and experience to develop advice and offer genuinely valuable counsel. After all, no two deals are alike – so it'll be a long time yet before AI can compete with the unique ingenuity and personality of human attorneys.

Don't rely on regulation

The pace at which AI is moving means the rules are still being written. Currently, there are no regulations governing attorney use of generative AI, and with so many other issues facing governments at the moment, it's unlikely to be at the top of the pile. But that doesn't mean we in the legal field shouldn't keep it front of mind. If the rules haven't been written by government, we should all be establishing best practice ourselves. We wouldn't punch confidential client information into a Google search, so the same caution should apply to the use of AI tools.

At some point, it's likely that the use of AI will be integrated into best practice policies across multiple sectors, including law. Whether we conclude that its use is a danger to privacy and data in law remains to be seen. But for now, it's important to remain vigilant – or else, by the time regulations and rules are put in place, the ship will already have sailed.

What does 'best practice' look like? Updating privacy policies is a must, and while attorneys must ensure they aren't sharing any private information through the likes of ChatGPT, this advice should also be conveyed to clients. Firms that create a specific policy on generative AI, or at least amend their usage policies, will ensure it is used as safely as possible, minimizing the danger of oversharing confidential information. A simple way to look at it is to treat generative AI much like social media: the dangers only arise if people aren't reminded how to use it properly.

Working in harmony

The AI revolution can feel like it's bringing a lot of unwanted change. But used effectively, this technology will create all manner of opportunities. It can take on previously laborious tasks, giving attorneys more time for the specialist, fee-earning client work. Tech lies at the heart of Rimon's success, and the exciting possibilities of AI suggest it can be a benefit rather than a burden.

It's important to remember that these types of transitions happen. Attorneys who once used the library now use software like LexisNexis. This didn't 'kill the lawyers', but instead simply changed the way we work. AI is likely to do the same – we just need to use it to our advantage.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.