AI Ethical Review Committees and the Future of Responsible AI in International Arbitration
March 9, 2026
Historically rooted in human judgment, flexibility, and due process, international arbitration is now being reshaped by a technological shift. The question is not whether artificial intelligence (“AI”) will be integrated into international arbitration, but rather how to regulate its use by tribunals.
This blog post proposes institutionalising AI Ethical Review Committees (“AI-ERCs”) in international arbitration to regulate and oversee AI-assisted decision-making.
I. AI in International Arbitration
AI is rapidly transforming international arbitration, and many tools are already assisting counsel and tribunals alike. AI's promise lies in boosting efficiency, cutting costs, and delivering analytical depth beyond human capability. Yet concerns over bias, data insecurity, and diminished human agency underscore the need for meaningful safeguards.
While some jurisdictions have begun adopting AI regulations, AI-assisted decision-making in arbitration is primarily shaped by voluntary standards and guidelines. The Vienna International Arbitration Centre, the Chartered Institute of Arbitrators and the Silicon Valley Arbitration and Mediation Center, along with numerous other arbitral institutions, have issued guidelines to promote the responsible use of AI in arbitration. Nevertheless, the reliability of AI-assisted decision-making is far from assured, given AI's inherent risks of opacity and bias.
II. AI Ethical Review Committees
AI ethics committees or boards serve advisory and accountability functions in various types of organizations. In the arbitration context, AI-ERCs would function as an independent advisory mechanism that evaluates AI use in arbitration to ensure procedural legitimacy. Conceived as independent, multi-disciplinary bodies, they review and assess how AI is deployed in arbitral decision-making. They serve both as a pre-award audit mechanism for parties and tribunals to verify that AI has not compromised due process, and as a post-award technical reference for courts and enforcement bodies when dealing with challenges involving AI. In both settings, AI-ERCs operate as reviewers of rapidly evolving technologies to enhance stakeholder confidence and reduce the risk of annulments.
A. Conceptual Foundation
An AI-ERC does not adjudicate or alter awards but evaluates how a tribunal's use of AI affects parties' rights and expectations. In doing so, it scrutinizes issues that may fall outside the arbitrators' expertise, such as the explainability of the tool, disclosure to the parties, potential hidden bias, or improper delegation of the tribunal's functions, for instance by allowing an algorithm to decide evidentiary admissibility or procedural timelines.
The purpose of AI-ERCs is neither to function as a substitute for tribunals nor to overburden parties with bureaucratic procedures. Instead, they would function as advisory bodies akin to institutional vetting boards or technical amici curiae, providing critical evaluations at two junctures.
B. Stage of Intervention
Oversight must not slow proceedings or introduce new adversarial layers that erode arbitration’s appeal. A dual-stage model, operating pre-award and post-award, offers a practical solution by enabling AI-assisted decision-making to be used responsibly rather than prohibited altogether.
i. Ex-ante Review
At this stage, tribunals may voluntarily consult an AI-ERC by disclosing any AI use that has influenced the tribunal's legal reasoning, factual analysis, or drafting; alternatively, the parties may seek review of allegedly flawed AI-assisted conclusions. The legitimacy of this method draws on practices already embedded in institutional rules, such as Article 34 of the ICC Rules 2021, which provides for scrutiny of draft awards by the ICC Court to ensure enforceability. Positioned after drafting but before issuance of the award, the review mirrors institutional scrutiny, reinforcing legitimacy while preserving tribunal autonomy. An AI-ERC may also be engaged to pre-empt ethical concerns in the tribunal's use of AI and to identify errors such as divergences from party submissions or the absence of meaningful review of AI outputs for accuracy and transparency.
Such review could be conducted on an expedited timeline, typically within 7-14 days, with findings communicated through a written advisory note to the tribunal, which would retain full discretion over whether to disclose it to the parties, and whether and how to act on such assistance.
ii. Ex-post Review
At the post-award stage, an AI-ERC may assist courts and enforcement bodies by providing technical reports where an award is challenged for the misuse or non-disclosure of AI. This is particularly relevant under Articles V(1)(b) and V(2)(b) of the New York Convention, which permit refusal of enforcement for due process violations or conflicts with public policy. The AI-ERC functions as an advisory body, distinct from party-appointed experts, with the ex-post review undertaken by a panel separate from any AI-ERC consulted ex ante. Its role is to help courts assess whether procedural fairness or the tribunal's mandate was breached.
Traditional judicial forums are well equipped for legal adjudication, yet AI introduces novel technical questions that may require specialized assessment. With the assistance of an AI-ERC, courts would be better placed to determine whether a tribunal's use of AI was legitimate or rose to the level of a due process breach. Parties would have access to the AI-ERC's review, which would be based on an assessment of the award supplemented by AI-use disclosures, summaries of inputs and outputs, and/or records of oversight provided by the tribunal during any ex-ante review and reused at the post-award stage.
The challenges an ex-post review is meant to address are illustrated in LaPaglia v. Valve Corp., where the claimant alleged that the arbitrator relied excessively on ChatGPT to draft the award. In such instances, an AI-ERC would determine whether the use of AI shifted from permissible assistance to an improper delegation of adjudicative authority.
C. Composition and Appointment
For AI-ERCs to serve as effective oversight mechanisms, their composition and appointment must reflect a combination of technical competence and trustworthiness. A functionally competent committee would include senior international arbitrators or retired judges who bring an understanding of procedural fairness and institutional practice; computer scientists and AI experts, particularly those with specialization in explainable AI and bias detection; scholars of international law, who can ground assessments in jurisprudential traditions; and data privacy specialists, to ensure compliance with international norms. By bringing together individuals with varied expertise and professional backgrounds, AI-ERCs would function as a holistic body capable of scrutinizing all facets of an award.
AI-ERCs could be established by arbitral institutions or independent oversight bodies through a public application and vetting process, with appointing authorities maintaining a periodically updated roster of approved experts. For greater flexibility, AI-ERCs may be appointed on a case-by-case basis. Where parties do not agree on a specific member, arbitral institutions could designate members by applying established appointment mechanisms. Committees may be housed within existing arbitral institutions, endorsed by bodies such as UNCITRAL, PCA, or ICCA, and maintained through shared funding, administrative support, or the development of common standards.
D. Evaluative Standards and Effect
The legitimacy of AI-ERCs rests upon two pillars: clear evaluative standards and a structured understanding of the legal and procedural consequences of their findings. The committee is neither a toothless bystander nor an overreaching supervisor. AI-ERCs should benchmark AI use against core arbitral values by testing disclosure and transparency through review of the tribunal’s attestations, summaries of AI use, drafting patterns or version histories.
If an AI-generated conclusion is flawed, at the ex-ante stage, the tribunal may amend or remove the affected section. At the ex-post stage, if the AI-use issue is limited and severable, courts may order a partial set-aside. But where AI compromises transparency or due process in a way that infects the award’s reasoning, a full set-aside may be warranted.
E. Challenges to Implementation
AI-ERCs are not without their limitations. Critics may contend that adding another oversight layer could bureaucratize a process valued for its efficiency and party autonomy. If AI-ERC consultations are perceived as procedural detours or opportunities for tactical delay, they may be met with resistance from counsel and tribunals alike.
A second, deeper challenge lies in the opacity of AI usage itself. The rationale for AI-ERCs presumes disclosure of AI use by arbitrators. Although recent institutional guidelines encourage disclosure in defined circumstances, most arbitral rules do not impose a binding obligation to disclose AI use, leaving transparency largely voluntary. AI-ERCs would therefore require institutional support, including rule amendments obligating tribunals and parties to disclose the nature and extent of their reliance on AI, comparable to the disclosure obligations imposed on providers of AI systems under the EU AI Act.
Finally, the non-binding nature of findings may limit their practical value. Tribunals are under no obligation to adopt the committee’s assessments, and courts reviewing awards may treat such reports as amicus-style submissions, useful but not determinative.
Despite these limits, AI-ERCs can act as an early warning and a corrective system to ensure AI use does not compromise arbitral integrity or party expectations.
III. Conclusion
As AI reshapes dispute resolution, its promise of efficiency and analytical power must be balanced against party autonomy, fairness, and reasoned adjudication. AI-ERCs offer a principled and institutionally coherent response to this challenge by providing expert-informed oversight that preserves the balance between innovation and integrity. This model is both timely and necessary given the absence of meaningful checks on AI’s expanding role and the continued silence of institutional rules on when and how arbitrators may deploy it. Without structural controls, we risk overlooking instances in which efficiency and technological convenience could undermine fairness and arbitral judgment. AI-ERCs ensure that technology serves justice, not the other way around.