ARTICLE
24 April 2025

When Arbitrators Use AI: LaPaglia v. Valve And The Boundaries Of Adjudication

Aceris Law


Aceris Law is a leading boutique international arbitration law firm. It provides the highest-quality legal representation for complex international commercial arbitrations, investor-State arbitrations and international construction disputes, combining competitive legal fees with an outstanding track record. It covers all jurisdictions, arbitral institutions and industry sectors, working for clients globally.

As artificial intelligence ("AI") tools become increasingly integrated into legal practice, their use by arbitrators is no longer a theoretical possibility but a practical reality. From drafting procedural orders to organising evidence or even assisting in the preparation of awards, AI offers the promise of greater efficiency, consistency, and cost-effectiveness. But what happens when that efficiency comes at the cost of human judgment?

A recent case filed in a U.S. federal court, LaPaglia v. Valve Corp., raises precisely this question. The claimant has petitioned to vacate an arbitral award on the grounds that the arbitrator allegedly relied on AI to such an extent that he "outsourced his adjudicative role."1 While the outcome of the case remains uncertain, it presents an ideal starting point for examining the limits of AI use in arbitration and the legal and ethical responsibilities that come with it.

This article explores the key issues raised by LaPaglia, placing them in the context of newly emerging guidelines on AI in arbitration. It considers at what point AI assistance goes too far and what role transparency, party expectations, and procedural fairness should play in navigating this new frontier.

LaPaglia v. Valve Corp.

In LaPaglia v. Valve Corp., a consumer of PC games, Mr. LaPaglia (the "Claimant") filed a claim in arbitration administered by the American Arbitration Association (the "AAA") demanding compensation for the higher prices he paid as a result of alleged antitrust violations by Valve Corp. (the "Respondent"), the owner of the Steam online game store, as well as for breach of warranty stemming from a defective PC game he had purchased.2

The Claimant's claims were heard before a sole arbitrator (the "Arbitrator") at a December 2024 hearing.3 The hearing took place over 10 days, and according to the Claimant, during breaks in the proceedings, the Arbitrator allegedly told the parties he wanted to issue a decision quickly because he had an upcoming trip scheduled to the Galapagos.4

The final post-hearing brief was submitted on 23 December 2024, and the 29-page award (the "Award") was issued on 7 January 2025, the day the Arbitrator was allegedly scheduled to leave on his trip.5

On 8 April 2025, the Claimant filed a Petition to Vacate Arbitration Award (the "Petition") before the United States District Court for the Southern District of California (the "District Court") pursuant to 9 U.S.C. §§ 10(a)(3), (a)(4), on the basis that, inter alia, the Arbitrator allegedly "outsourced his adjudicative role to Artificial Intelligence ('AI')."6

The Claimant concluded that the Arbitrator used AI to draft his award on the basis of the following factual elements:

  • The Arbitrator "told a story about how he had been assigned to write a short article on an aviation club he was part of, and that he had used ChatGPT to write it to save time."7
  • The Arbitrator "noted for the parties that he was leaving for a trip to the Galapagos soon and wanted to get the case done before then."8
  • The Award allegedly contains "telltale signs of AI generation" and purportedly cites facts that "are both untrue and not presented at trial or present in the record" without any relevant citations.9
  • The Claimant's counsel's law clerk asked ChatGPT whether it believed a certain paragraph was written by humans or AI, and ChatGPT stated that "the paragraph's awkward phrasing, redundancy, incoherence, and overgeneralization 'suggest that the passage was generated by AI rather than written by a human.'"10

The Claimant relied on Section 10(a)(4) of the Federal Arbitration Act (the "FAA"), which "permits vacatur where an arbitrator 'exceeds their powers'" by acting outside the scope of the parties' contractual agreement.11

The Claimant asserted that the Award must be vacated because, by allegedly relying on AI, the Arbitrator exceeded the authority conferred on him by the parties' arbitration agreement, which empowers a "neutral arbitrator" to resolve disputes between the parties and binds that arbitrator to provide "a written decision" and a "statement of reasons" for their holding.12 Where an arbitrator relies on AI, the Claimant submitted, this "betrays the parties' expectations of a well-reasoned decision rendered by a human arbitrator."

The Claimant then analogised the present case to other U.S. cases, such as Move, Inc. v. Citigroup Global Mkts.,13 in which courts vacated arbitration awards because arbitrators had falsified their credentials or made other false representations. Courts in these instances noted that awards should be vacated "where 'there is simply no way to determine whether' an unqualified 'imposter' on the arbitration panel 'influenced other members of the panel or that the outcome of the arbitration was affected by his participation'".14 According to the Claimant, just as courts have vacated awards when decision-making is outsourced to a person other than the appointed arbitrator, so too must a court vacate an award when decision-making is outsourced to AI.15

The District Court has yet to rule on the Claimant's Petition but, whatever factual or legal arguments may be raised against the Claimant's position, the case raises important questions about the future of arbitration: Should arbitrators rely on AI? If so, to what extent?

AI in Arbitration

At first glance, AI seems like a welcome development for arbitration. It offers the potential to expedite proceedings by rapidly organising and summarising large volumes of data,16 thereby reducing the arbitrator's workload. This increased efficiency could, in turn, lower the overall cost of the arbitration, particularly where the arbitrator is remunerated on an hourly basis.

However, the use of AI in arbitration is not without risks. Chief among these is the potential erosion of the arbitrator's independence and decision-making responsibility, especially where AI is relied upon to assess factual, legal, or evidentiary matters. AI systems are prone to hallucinations – that is, generating plausible but inaccurate or entirely false information.17 If not carefully reviewed and verified by the arbitrator, such inaccuracies may compromise the quality and reliability of the award, undermining the arbitrator's duty to provide a reasoned and accurate decision.

Although the rules of most major arbitral institutions (ICC, LCIA, SIAC, HKIAC, etc.) are currently silent on arbitrators' use of AI, recent soft law instruments have begun to fill this gap, offering guidance on the responsible and appropriate integration of AI in the arbitral process.

AI Guidelines for Arbitrators

One example of this is the Silicon Valley Arbitration & Mediation Center ("SVAMC") Guidelines on the Use of Artificial Intelligence in Arbitration (the "SVAMC Guidelines"), which were published on 30 April 2024. The SVAMC Guidelines "introduce a principle-based framework for the use of artificial intelligence (AI) tools in arbitration at a time when such technologies are becoming increasingly powerful and popular. They are intended to assist participants in arbitrations with navigating the potential applications of AI."18

Part 3 of the SVAMC Guidelines specifically sets out guidelines for arbitrators, including Guidelines 6 (Non-delegation of decision-making responsibilities) and 7 (Respect for due process).

According to Guideline 6, "An arbitrator shall not delegate any part of their personal mandate to any AI tool. This principle shall particularly apply to the arbitrator's decision-making process. The use of AI tools by arbitrators shall not replace their independent analysis of the facts, the law, and the evidence."19

Guideline 7 provides: "An arbitrator shall not rely on AI-generated information outside the record without making appropriate disclosures to the parties beforehand and, as far as practical, allowing the parties to comment on it. Where an AI tool cannot cite sources that can be independently verified, an arbitrator shall not assume that such sources exist or are characterised accurately by the AI tool."20

Another slightly more recent example is the Chartered Institute of Arbitrators ("Ciarb") Guideline on the Use of AI in Arbitration (the "Ciarb Guideline"), published in 2025. Like the SVAMC Guidelines, the Ciarb Guideline "seeks to give guidance on the use of AI in a manner that allows dispute resolvers, their representatives, and other participants to take advantage of the benefits of AI, while supporting practical efforts to mitigate some of the risk to the integrity of the process, any party's procedural rights, and the enforceability of any ensuing award or settlement agreement."21

Part IV of the Ciarb Guideline addresses the use of AI by arbitrators and, like the SVAMC Guidelines, contains two articles: Article 8 (Discretion over use of AI by arbitrators) and Article 9 (Transparency over use of AI by arbitrators).

Article 8 notes that arbitrators may consider using AI tools to enhance the arbitral process, including both the efficiency of the proceedings and the quality of the arbitrator's decision-making, but that arbitrators "should not relinquish their decision-making powers to AI" and "should avoid delegating any tasks to AI Tools [...] if such use could influence procedural or substantive decisions."22 Article 8 also reminds arbitrators that they should independently verify the accuracy and correctness of information obtained through AI, while maintaining a critical perspective to prevent undue influence over their decisions.23 Finally, Article 8 provides that arbitrators "shall assume responsibility for all aspects of an award, regardless of any use of AI to assist with the decision-making process."24

Article 9 encourages arbitrators to consult with the parties, as well as other arbitrators on the same tribunal, on whether AI tools may be used by them throughout the course of the arbitral proceedings.25

This note will now return to the LaPaglia case to examine the Arbitrator's alleged conduct with these AI guidelines in mind.

Analysis: LaPaglia According to the AI Guidelines

When the Arbitrator's alleged conduct in LaPaglia v. Valve Corp. is examined (purely hypothetically) through the lens of these guidelines, the case is not a black-and-white instance of appropriate or inappropriate AI use, even if all the facts alleged by the Claimant are taken as true.

For example, according to SVAMC Guideline 6 and Ciarb Article 8, if the Arbitrator did use an AI tool such as ChatGPT while drafting the Award, this is not per se inappropriate, so long as he retained his decision-making power and was not unduly influenced by the AI in making any procedural, factual, or legal decisions.26

However, if, as the Claimant asserts, the Arbitrator did indeed cite facts and evidence that were not "in the record or otherwise evidenced or even argued",27 SVAMC Guideline 7 suggests that this may have been inappropriate if the Arbitrator failed to make "appropriate disclosures to the parties beforehand" and, as far as practical, to allow the parties to comment on the AI-generated information,28 an omission that may raise serious due process concerns.

Further, SVAMC Guideline 7 and Ciarb Article 8 both recall that arbitrators have a duty to independently verify the accuracy of any statements made in their awards. Thus, if the AI tool used by the Arbitrator referenced facts that "are both untrue and not presented at trial or present in the record", as the Claimant alleges,29 it was the Arbitrator's duty to verify their accuracy, and by apparently failing to do so (since they allegedly ended up in the final Award), the Arbitrator may have used AI inappropriately.

Additionally, while the Claimant's Petition does not clarify in what context these allegedly fabricated facts, incoherencies and over-generalisations appeared in the Award, or whether they had any bearing on the Arbitrator's decision, their presence in the Award does call into question whether the Arbitrator delegated part of his decision-making power, particularly the factual analysis underlying the Award, to AI, contrary to SVAMC Guideline 6 and Ciarb Article 8, as mentioned above.30

In any event, based on the Claimant's Petition, it is unclear whether the Arbitrator made any sort of disclosure regarding the use of AI during the proceedings. However, both the SVAMC Guidelines and the Ciarb Guideline suggest that, if an arbitrator uses AI in any capacity, he or she should disclose its use to the parties,31 if not seek their approval beforehand.32

Conclusion

The LaPaglia v. Valve Corp. case – though still pending and based on allegations yet to be judicially assessed – raises significant and timely questions about the role of artificial intelligence in arbitral decision-making. Even if the factual basis for the Claimant's petition remains uncertain, the case usefully illustrates the types of challenges and complexities that may arise when arbitrators rely, or are suspected of relying, on AI tools in drafting awards.

As the analysis above demonstrates, a core principle must guide any consideration of AI use by arbitrators: non-delegation. Arbitrators cannot outsource their adjudicative function to a third party – human or machine – nor can they allow technology to compromise their independent reasoning. While AI may assist with administrative or drafting tasks, it cannot replace the arbitrator's personal engagement with the facts, evidence, and law.

Equally important is transparency. Where arbitrators use AI tools, they should disclose this to the parties and, potentially, seek their prior approval. Guidelines such as those issued by SVAMC and Ciarb make clear that arbitrators bear ultimate responsibility for the accuracy, integrity, and human authorship of their awards.

The LaPaglia case also spotlights an emerging evidentiary issue: How can parties prove that an award – or part of it – was drafted by AI? Are AI detection tools reliable, and how should courts treat such evidence? What if an arbitrator uses AI simply to enhance clarity rather than to substitute reasoning?

As AI tools become more sophisticated and widely adopted, these questions will become increasingly important. Courts, arbitral institutions, and parties alike will need to grapple with the appropriate standards for AI use, the mechanisms for disclosure, and the consequences of misuse. Whether or not the LaPaglia petition prevails, it has already provoked a broader conversation that arbitration can no longer avoid.

Footnotes

1 Petition to Vacate Arbitration Award; Memorandum of Points and Authorities in Support Thereof at 2, LaPaglia v. Valve Corp., No. 3:25-cv-00833 (S.D. Cal. Apr. 8, 2025).

2 Id. at 2-3.

3 Id. at 3.

4 Id. at 4.

5 Id. at 4.

6 Id. at 2. The Claimant also challenged the Award on the basis that the Arbitrator allegedly wrongfully consolidated the Claimant's claim with 22 others in violation of the arbitration agreement and refused to permit the Claimant to submit an expert report purportedly proving the Respondent's possession of a monopoly market share.

7 Id. at 9.

8 Id. at 9.

9 Id. at 9.

10 Id. at 10.

11 Id. at 9.

12 Id. at 10.

13 Move, Inc. v. Citigroup Global Mkts., 840 F.3d 1152, 1159 (9th Cir. 2016).

14 Id. at 10.

15 Id. at 10.

16 A. Singh Chauhan, Future of AI in Arbitration: The Fine Line Between Fiction and Reality, 26 September 2020, https://arbitrationblog.kluwerarbitration.com/2020/09/26/future-of-ai-in-arbitration-the-fine-line-between-fiction-and-reality/.

17 M. Magal et al., Artificial Intelligence in Arbitration: Evidentiary Issues and Prospects, 12 October 2023, https://globalarbitrationreview.com/guide/the-guide-evidence-in-international-arbitration/2nd-edition/article/artificial-intelligence-in-arbitration-evidentiary-issues-and-prospects.

18 SVAMC Guidelines, Introduction.

19 SVAMC Guidelines, Guideline 6.

20 SVAMC Guidelines, Guideline 7.

21 Ciarb Guideline, Introduction.

22 Ciarb Guideline, Articles 8.1, 8.2.

23 Ciarb Guideline, Article 8.3.

24 Ciarb Guideline, Article 8.4.

25 Ciarb Guideline, Articles 9.1-9.2.

26 SVAMC Guidelines, Guideline 6; Ciarb Guideline, Article 8.

27 Petition to Vacate Arbitration Award; Memorandum of Points and Authorities in Support Thereof at 9, LaPaglia v. Valve Corp., No. 3:25-cv-00833 (S.D. Cal. Apr. 8, 2025).

28 SVAMC Guidelines, Guideline 7.

29 Petition to Vacate Arbitration Award; Memorandum of Points and Authorities in Support Thereof at 9, LaPaglia v. Valve Corp., No. 3:25-cv-00833 (S.D. Cal. Apr. 8, 2025).

30 SVAMC Guidelines, Guideline 6; Ciarb Guideline, Article 8.

31 SVAMC Guidelines, Guideline 6.

32 Ciarb Guideline, Articles 9.1-9.2.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
