Generative AI Is Changing How We Do Business And How We Practice Law

Squire Patton Boggs LLP

The news about Steven Schwartz, the attorney who asked ChatGPT, an artificial intelligence chatbot, to find cases relevant to his client's lawsuit only to submit a brief full of bogus caselaw, spread gleefully fast, as embarrassing news does. And although we shook our heads in disapproval, I suspect many attorneys were grateful to Mr. Schwartz. His blunder suggested that we are not so easily replaceable by AI. And it couldn't have come at a better time: according to a recent report from Goldman Sachs, AI could automate 44% of legal work tasks. GPT-4 passed the Uniform Bar Exam (UBE), and it didn't just barely squeak by: it scored in the 90th percentile, outperforming the average real-life test taker. If the legal cartel were not beholden to so many ethics rules, I might suspect Mr. Schwartz had been planted to take one for the team.

The legal profession would be wise to brace for an AI upheaval. Firms might consider enlisting AI to do grunt work, but they should avoid relying on it to do the heavy lifting, such as brief writing or legal analysis. To be sure, ChatGPT isn't Shepardizing on Lexis, Westlaw, or scotus.gov; it was trained on a loosely vetted assortment of websites, articles, and books. AI will inevitably lead to greater efficiency and lower client costs. At least in the early days, however, those lower costs come with a loss of confidence in the quality of the work product, as the Schwartz Affair shows.

As legal professionals, we find ourselves in a tricky situation: we can't responsibly outsource our work to AI, yet we are obligated by both ethical rules and market conditions to offer reasonable rates, which in turn makes the use of generative AI compelling. Finding the balance between those competing interests is always challenging, and generative AI is making it all the more so.

At least one judge has defined the boundaries for litigants appearing in his court. U.S. District Judge Brantley Starr of the Northern District of Texas requires that attorneys attest they will limit use of ChatGPT or other generative AI in drafting briefs. Attorneys must file a certificate indicating either that no portion of any filing was generated by an AI tool, or "that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being." In other words, the order still allows for the use of AI but requires that a human review it.

In addition to issues surrounding the veracity of AI-generated content, Judge Starr raises the issue of bias in AI. He writes that "[w]hile attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath." ChatGPT was trained only on pre-2021 data, before the proliferation of Diversity, Equity, and Inclusion (DEI) programs (an observation, not a judgment).

Other industries have been reckoning with bias in AI as well. Amazon had reportedly been developing a program since 2014 to automate the review of job applicants' resumes. Amazon stopped the program in 2018, before its recruiters had used it to evaluate candidates, after AI specialists discovered that the engine was biased against hiring women. The system had been trained to review applicants by observing patterns in resumes submitted over the preceding ten years. Most of those resumes came from men, a reflection of male dominance in the tech industry. From this, the system "learned" that men were more desirable candidates.

Other considerations for legal professionals abound, privacy being salient among them. ChatGPT's 2,000+ word privacy policy states that OpenAI, the chatbot's creator, may share a user's personal information (such as the user's name, IP address, messages, queries, and uploaded files) with third parties without providing the user further notice, unless it is required to do so by law. OpenAI also has a separate Terms of Use stating that the user is primarily responsible for taking appropriate security measures when engaging with its tools.

This doesn't sound so great. Many companies, including Samsung, Apple, JPMorgan, Bank of America, Goldman Sachs, and Citigroup, have reportedly heavily restricted the internal use of generative AI tools like ChatGPT due to potential compliance issues: employees could inadvertently divulge trade secrets or expose the company to data leaks through messaging the chatbot. Attorneys should be abundantly cautious about revealing client confidences when using AI (i.e., don't tell it anything you wouldn't tell the New York Times). In the context of AI, the disclosure may be inadvertent: the AI might track, for example, that you asked a particular series of questions, at a particular time, on a particular day. Taken together, that information could constitute the disclosure of a confidence if it would allow a third party to reconstruct the confidence.

Some law firms are developing internal AI tools that will train exclusively on their internal libraries. Knowledge management at big law firms is an art form; knowing how to find something within the firm's (often proprietary) filesharing system takes a level of canniness. Developing an AI to hunt, gather, and process information solely from your colleagues sounds promising. As attorneys, we crib off each other's work product all the time; the common law doctrine of stare decisis virtually demands it. Other industries, however, are far from a share-and-share-alike attitude toward AI, and AI has posed challenging questions about intellectual property ownership across the creative fields of visual art, music, and writing.

For example, photo licensing company Getty Images filed suit in the District of Delaware against AI company Stability AI Inc. Getty alleges that Stability's image generator, with its uncanny ability to turn text into images, copied millions of photographs from Getty Images' collection without permission from, or compensation to, Getty Images. In another lawsuit, several artists filed a class action in the Northern District of California against Stability AI (and Midjourney, another AI image generator) and the image-sharing platform DeviantArt, claiming copyright infringement.

AI-generated music adopting the vocal styles of popular musicians has become ubiquitous on TikTok and other social media platforms, racking up millions of views and listens and raising copyright, trademark, and right-of-publicity questions. One widely covered conflict concerned a moderately convincing track mimicking R&B artists Drake and The Weeknd. The song was removed from the platforms after Universal Music Group, the record label representing Drake and The Weeknd, issued Digital Millennium Copyright Act (DMCA) takedown notices.

AI has also emerged as a hinge point in the writers' strike that has already ground production on many shows to a halt. When the Writers Guild of America (WGA) presented its list of demands, chief among them was that the studios agree not to use AI to write scripts. The Alliance of Motion Picture and Television Producers (AMPTP), which represents the studios, rejected this proposal, countering that it would be open to holding annual meetings to discuss advancements in technology.

Alas, it doesn't look like we're getting the fully automated luxury communist/capitalist utopia some may have hoped for. Not if the AI is off writing songs and screenplays and making collages instead of picking up trash. And human beings still need jobs, and not just the artsy ones, but the dirty ones and the customer-servicey ones too. Some jobs don't feel right to outsource to a chatbot. Recently it was reported that employees of the National Eating Disorders Association were told they would be fired and replaced with an AI chatbot, and a Belgian father reportedly committed suicide following conversations about climate change with an AI chatbot. We're still unlocking the potential of AI technology and confronting its shortcomings. But, thanks to Mr. Schwartz and his bogus brief, it looks like those of us in the legal industry will live to die another day.

But let's address the elephant in the room. I'd be remiss not to mention that it's not just our jobs that are threatened by AI; it is our entire existence as a species. Hundreds of artificial intelligence experts, tech executives, and scholars are warning against the "risk of extinction from AI." Sam Altman has said that OpenAI is not currently training GPT-5, amid concerns about the risks such systems pose. It appears that in no time AI will render humans superfluous as its intelligence blooms exponentially. Will its intelligence license it to be cruel? To wipe us out? As attorneys, we like to consider ourselves the smartest folks in the room. Have we been kind with our superior intellect?

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
