Artificial intelligence (AI) tools are already widely used in arbitration, and increased uptake is predicted to be one of the key arbitration trends of 2025. In the latest International Arbitration Survey by Queen Mary University of London, 90% of respondents said they expected to use AI for research, data analytics and document review, and 54% identified saving time as the biggest driver for using AI.
However, usage has outpaced any clear framework or agreed approach to its application. Against this backdrop, the Chartered Institute of Arbitrators (Ciarb) convened experts from diverse legal traditions to develop practical guidance. The result was unveiled on 19 March 2025, when Ciarb published its "Guideline on the use of AI in arbitration" (the "Ciarb AI Guideline"). This article reviews the key provisions of the Ciarb AI Guideline and provides an overview of the other guidelines available on AI in arbitration.
What is the Ciarb AI Guideline?
The Ciarb AI Guideline is the first detailed practical guidance note for parties and arbitral tribunals on the use of AI in international arbitration. Arbitrators will particularly welcome the clear framework it gives for approaching conversations with parties around AI use, with general principles underpinned by recommended practical actions. The Ciarb AI Guideline is based on the principle of evaluating the risks of AI use in a particular arbitration, while respecting the autonomy of parties to agree the form of their arbitration (subject to the usual applicable rules).
Who does the Ciarb AI Guideline apply to?
The Ciarb AI Guideline is a non-mandatory "soft law" instrument which parties may elect to incorporate into an arbitration. Over time it may also come to be treated as evidence of general best practice. The Ciarb AI Guideline includes template agreements and procedural orders on the use of AI in arbitration which parties may adopt by agreement alongside any arbitral rules. It can be incorporated within the arbitration clause; as a stand-alone agreement between the parties; within a procedural order or the tribunal's terms of reference; or as a stand-alone AI procedural order.
Guidance on the benefits and risks of AI use
Part 1 of the Ciarb AI Guideline consists of a short guide to the benefits and risks of the use of AI in arbitration, covering similar principles to existing guidelines on AI use in arbitration, discussed later in this article. Benefits identified in the Ciarb AI Guideline include:
- improved efficiency and quality of the arbitral process
- AI-powered legal research
- data analysis
- text generation
- streamlined collection of evidence
- translation and interpretation
- hearing transcription
- detection of AI usage, e.g. deepfakes
- case analysis, such as prediction of outcomes
- remedying inequality of arms for under-resourced parties
The Ciarb AI Guideline also identifies the following risks, noting that the extent to which each arises in an arbitration will depend on the particular AI tool, the nature of the arbitration, and the use to which the tool is put:
I. Confidentiality and data integrity & security
The Ciarb AI Guideline warns that "not all AI tools are equal in this regard". It is important to consider the extent to which a tool has been vetted and whether it is an open or closed model. Consideration should be given to the permissions granted for the use of data, how third-party tools will store that data, and whether confidentiality agreements are required. Cybersecurity is a key consideration in the light of the move towards soft-copy submissions.
II. Impartiality & independence
Algorithmic bias could arise from the selection of datasets or the configuration of an algorithm. Particular risks identified include authority bias (i.e. a tendency to attribute greater accuracy to the opinion of an authority figure) and cognitive inertia (i.e. a tendency to resist change). Those using AI as part of their decision-making must take responsibility for the outcome of their AI use.
III. Due process & the "black box" problem
Due process issues could arise from the use of AI tools, for example where an arbitrator uses AI case analysis and then works from a summary of a party's statement of case, with the result that the arbitrator does not fully consider all the arguments and exhibits, only those highlighted by the AI. The Ciarb AI Guideline identifies the tribunal's ownership of decisions and assumption of responsibility as crucial safeguards against due process violations. Additionally, the "black box" problem – i.e. the inability of a user to see how an AI tool reached a conclusion – impedes human oversight and makes transparency of decision-making difficult.
IV. The enforceability of arbitral awards
The rapid roll-out of AI technology has left regulators on the back foot, which may lead to sudden developments such as bans on certain AI tools. Any use of AI in an arbitration must not conflict with any mandatory rule, applicable law, regulation or policy, or any institutional rule on the use of AI. At present few arbitral rules explicitly address the use of AI (with two notable exceptions discussed later), but future institutional rule reforms will be keenly watched for developments in this area.
V. Energy use and the environment
The Ciarb AI Guideline warns that AI tools may be energy-intensive and the environmental impacts should be considered. However, this may be balanced by procedural efficiencies, making the overall environmental impact difficult to assess. This is an area that will require further study and transparency from AI tool providers. Parties may also have to consider how this interacts with arbitral rules, such as the Singapore International Arbitration Centre (SIAC) Rules 2025 which encourage environmentally sustainable practices in arbitration.
General recommendations about use of AI in arbitration
The Ciarb AI Guideline provides four general rules applicable to parties and arbitrators. Firstly, all participants should make reasonable enquiries to understand a proposed tool. Secondly, they should weigh the risks of use against its benefits. Thirdly, they should make enquiries about the applicable laws and regulations governing the use of the tool. Fourthly, unless expressly agreed in writing by the tribunal and the parties (and subject to any mandatory rule), the use of an AI tool by any participant will not diminish that participant's responsibility and accountability.
Tribunals are encouraged to record decisions on the use of AI in procedural orders and, if usage is contentious, to consider addressing it in the award. Where parties fail to comply with directions or procedural orders regarding AI usage, arbitrators may draw appropriate inferences and take this into account when awarding costs. In keeping with the general principle of party autonomy, parties may agree the approach to AI usage in the arbitration. Arbitrators should ascertain whether this is covered within the arbitration agreement or invite the parties to express their views on the topic.
Where parties disagree on AI use, arbitrators may be required to rule on the topic, weighing the benefits and risks of the relevant tool. Where the tribunal considers that the use or non-use of AI by a party jeopardises the integrity of the proceedings, it may, after consulting the parties, rule on AI usage of its own motion. In making any ruling on AI use, arbitrators must be guided by applicable laws, regulations, policies and institutional rules. The non-exhaustive list of considerations includes the law of the seat, the laws governing the proceedings, the arbitral rules, the parties' national laws and any applicable ethical rules. Laws not expressly aimed at AI regulation (e.g. privacy and data protection laws) should also be considered where appropriate.
Should parties in an arbitration disclose AI usage?
The approach taken by the Ciarb AI Guideline is that disclosure of AI usage may be required to the extent that its use could have an impact on the evidence or the outcome of the arbitration, or otherwise involve a delegation of an express duty. Arbitrators may require disclosure of the use of an AI tool; however, the tribunal may not regulate the private use of AI by parties to the extent that such use is generally permitted in litigation before the relevant domestic courts, does not interfere with the proceedings, and does not impact the integrity of the arbitration. This provision is designed to distinguish between widely accepted uses of AI, e.g. in disclosure, and potentially contentious applications. Elsewhere, the Ciarb AI Guideline considers relevant domestic laws to include the law of the seat, the law governing the proceedings and the national laws of the parties.
Enforcement of an AI-assisted arbitral award
A crucial provision of the Ciarb AI Guideline is article 8.4, which provides that an arbitrator shall assume responsibility for all aspects of an award, regardless of any use of AI to assist with decision-making. This addresses the contentious issue of the limits of what may be delegated – whether to a secretary or (by analogy) to technology – and the impact of such delegation on enforceability. In LaPaglia v Valve Corporation (3:25-cv-00833), currently before the US District Court for the Southern District of California, the claimant is seeking to vacate an arbitral award on the grounds that it was drafted either primarily or in part using AI, arguing that an "arbitrator's reliance on generative AI to replace their own role, and the parties' submissions, in the litigation process betrays the parties' expectations of a well-reasoned decision rendered by a human arbitrator".
While we have yet to see a ruling on this point in an AI context, the principle was addressed in the Yukos annulment proceedings, where it was concluded that the role of the tribunal's secretaries in drafting the award did not affect its validity because the tribunal assumed responsibility for the award. In P v Q and others [2017] EWHC 194 (Comm), the English High Court considered whether the input of a secretary was improper under the London Court of International Arbitration (LCIA) Rules, distinguishing between appropriately delegable tasks and those which involved "expressing a view on the substantive merits of an application or issue". The court considered it best practice to "avoid involving a tribunal secretary in anything which could be characterised as expressing a view on the substance of that which the tribunal is called upon to decide". The Ciarb AI Guideline similarly provides that the tribunal should avoid delegating to AI tasks such as legal analysis, research of facts, legal research or the application of law to facts where this may influence procedural or substantive decisions.
Other guidance on AI use in arbitration
The Ciarb AI Guideline builds on the existing body of general guidance, most notably the 2024 Silicon Valley Arbitration and Mediation Center "Guidelines on the use of artificial intelligence in arbitration" (the "SVAMC AI Guidelines"). The SVAMC AI Guidelines are a principle-based framework for parties arbitrating under any set of arbitral rules, utilised where parties or a tribunal have so agreed. The general principles are:
- Guideline 1, understanding the uses, limitations and risks of AI applications
- Guideline 2, safeguarding confidentiality
- Guideline 3, disclosure
- Guideline 4, duty of competence or diligence in the use of AI
- Guideline 5, respect for the integrity of the proceedings and the evidence (aimed at parties/party representatives)
- Guideline 6, non-delegation of decision-making responsibilities and other guidance aimed at arbitrators
- Guideline 7, respect for due process
In October 2024 the Stockholm Chamber of Commerce Arbitration Institute published its "Guide to the use of artificial intelligence in cases administered under the SCC rules" (the "SCC AI Guide"). The SCC AI Guide is a short, high-level document encouraging tribunals operating under the SCC rules to consider four key principles when utilising AI, namely (i) confidentiality, (ii) the quality of the tool (including necessary oversight and confirmation of results), (iii) the integrity of the arbitration, and (iv) the principle of non-delegation of the tribunal's decision-making mandate.
In April 2025, shortly after the publication of the Ciarb AI Guideline, the Vienna International Arbitration Centre (VIAC) published its "Note on the use of artificial intelligence in arbitration proceedings" (the "VIAC AI Note"). The VIAC AI Note is a short, non-binding document intended to facilitate discussion on the use of AI in VIAC arbitration proceedings; where a conflict arises between the VIAC AI Note and the VIAC arbitration or mediation rules, the latter prevail. Provision 1 of the VIAC AI Note states that arbitrators, secretaries and counsel shall consider AI-related professional standards applicable in their jurisdiction or to the given proceedings, which may incorporate soft-law instruments such as the Ciarb AI Guideline.
Conclusion
The Ciarb AI Guideline prompts parties and arbitrators to consider their AI usage from the outset of a dispute, or even as early as the drafting of an arbitration clause. It reinforces both the principle that parties are free to agree the form of their arbitration and the role of the tribunal in regulating the conduct of the proceedings. The risk-based approach accommodates generally accepted AI usage while encouraging express agreement on more contentious uses. Throughout, the Ciarb AI Guideline reminds participants that an enforceable award is the objective of the arbitral process, and that this should govern attitudes to AI usage.