Navigating The Use Of AI Tools In Legal Practice Before Pa.'s Federal District Courts

Duane Morris LLP

By now, litigators appreciate that a degree of technological expertise is needed to practice law effectively. Everyone has heard about the unfortunate attorney in Texas who appeared at a Zoom hearing as a worried kitten. But in the past year, attorneys have become more attuned to the potential and risks of artificial intelligence (AI). Last June, lawyers in New York made headlines after relying on a chatbot for legal research and were sanctioned for unknowingly submitting fictitious caselaw. One journalist even found himself in a love triangle with a chatbot bent on ending his marriage. Despite these cautionary tales, the use of AI in the legal profession is on the rise as trusted legal research services like LexisNexis and Westlaw roll out AI-assisted research functions and major tech companies integrate AI into their products.

Faced with a seismic shift in available technology, litigators appearing in Pennsylvania's federal courts must be mindful of when and how they turn to AI so as not to run afoul of court orders or policies, or procedural or ethical rules. Judges who opt to regulate the use of AI in cases before them should be equally mindful to avoid crafting orders with an unintended chilling effect on the use of new technologies, especially when current procedural and ethical rules may already suffice.

Pennsylvania's federal judges have not yet taken any wholesale measures to regulate the use of AI in matters before them. Currently, U.S. District Court Judge Michael Baylson of the Eastern District of Pennsylvania is the sole federal judge in the commonwealth with a standing order requiring attorneys to affirmatively disclose whether they have used AI in the "preparation" of any pleading, motion, brief, or any other filed paper. If so, they must certify that they have verified the accuracy of all legal and record citations.

Recently, Judge Kelley Hodge, also of the Eastern District of Pennsylvania, revised her Chambers Policies and Procedures to emphasize that Rule 11(b) and Rule 26(g) of the Federal Rules of Civil Procedure and ethical rules apply to all filings created with the aid of "generative artificial intelligence." Rule 26(g) requires attorneys and self-represented parties to certify that the contents of all discovery-related materials are, to the best of their knowledge, complete, correct, in accordance with the rules, and not filed for an improper purpose. Rule 11(b) imposes a similar requirement for pleadings, motions, and other papers. Applicable ethical rules include the lawyer's duties of competence, confidentiality, communication, and supervision.

Since lawyers are already bound by federal procedural and ethical rules, the intent behind Hodge's Chambers Policies and Procedures may simply be to remind practitioners of the dangers inherent in the misuse of AI, and of the consequences. In other words, blind reliance on AI will not be excused.

Policies like Hodge's provide litigators with helpful reminders of their legal and ethical obligations. But depending on their terms, court orders that actively regulate the use of AI have the potential to chill the adoption of a burgeoning technology. The difficulty in crafting orders that appropriately address the potential misuse of AI, without stopping its use entirely, stems not only from the rapid pace at which AI is developing, but also from the use of terms that do not yet have a commonly understood or accepted meaning.

What, for instance, is "generative artificial intelligence" as opposed to artificial intelligence? What does it mean to "use" AI? Is factual background research on an opposing company's products or services covered? What about a request to an AI chatbot for a high-level overview of an area of law before the attorney turns to more targeted research through traditional sources? Are Westlaw's or LexisNexis' AI-backed research functions covered? What about requests for an AI chatbot to review and improve the structure or prose of an attorney's written submission?

Unless a court's order provides clarification on these and related questions, litigators must presume that terms like "preparation," "use," and "AI" will be construed broadly to include any output from any AI source that is relied on or incorporated in any way into a submission to the court. Lawyers must also be familiar with all sources they use and know whether those sources are backed by AI. When in doubt about disclosure requirements, err on the side of caution.

Given the complexities involved when courts attempt to regulate or monitor the use of AI, it is crucial for them to consider their objectives and the potential pitfalls of overly broad or ambiguous directives. Key concerns include the risk of citing fabricated legal authority or factual materials generated by AI and the possibility of AI engaging in the unauthorized practice of law. To address these concerns, courts may opt to require that attorneys certify their filings. For instance, attorneys could affirm, "I certify that no artificial intelligence source authored any portion of this filing, and all citations have been independently verified for accuracy using non-AI sources." A certification such as this warrants that the attorney, not a chatbot, has written the filing and has verified all citations for accuracy, without deterring attorneys from utilizing developing technologies.

Without specific court mandates governing AI usage, attorneys must be careful not to inadvertently abet AI in unlawfully practicing law without a license or fabricating legal sources. Current AI technologies should be regarded as a starting point. As even Westlaw cautions, the results of AI-assisted research "should never be used to advise a client, write a brief or motion for a court, or otherwise be relied on without doing further research." Ethical rules may also entitle clients to know and approve, for purposes of representation, confidentiality, and billing, whether their attorneys are availing themselves of this technology.

As is evident from several high-profile blunders, and from judges' responses to them across the country, the advent of AI presents numerous challenges in incorporating it into our everyday professional lives as attorneys. Learning how to use AI tools and avoid their pitfalls, and becoming familiar with applicable ethical obligations, is critical not just to avoid sanctions, but also to best serve clients. It is incumbent on litigators to thoroughly familiarize themselves with any AI tools that they use; check whether they are bound by the requirements of standing orders of any given judge's chambers; and remain abreast of relevant procedural and ethical rules. But whatever you do, don't rely on an AI chatbot for legal or marriage advice!

Originally published by The Legal Intelligencer.

Disclaimer: This Alert has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice. For more information, please see the firm's full disclaimer.
