Can AI-Assisted Arbitral Awards Survive Enforcement Under the New York Convention?
April 10, 2026
International arbitration has absorbed new technology before, without serious consequences for award enforceability. Generative AI raises questions that earlier tools did not, and the difference lies not in the technology itself but in where it intervenes. Artificial Intelligence ("AI") has moved from managing processes to generating the substance of an award, thereby entering territory that the enforcement regime of the New York Convention has no clear framework to evaluate.
In November 2025, the American Arbitration Association and International Centre for Dispute Resolution (AAA-ICDR) launched a pilot in which an AI system (trained on over 1,500 construction awards) drafts decisions for human review. No award from that pilot has yet been tested before a national court, and no enforcing court appears to have directly confronted the question of an AI-assisted award. The legal questions this raises, however, are already identifiable and deserve attention before an enforcement dispute forces the issue.
The 2025 Queen Mary University of London and White & Case International Arbitration Survey found that 91% of practitioners expect to use AI for legal research, while a majority remain reluctant to see AI draft awards. That reluctance has a legal basis, though it has not yet been fully mapped against the New York Convention's text. This article identifies where AI involvement in an arbitral award creates genuine enforcement risk and distinguishes legally manageable uses from those that are not.
Three Convention Provisions That Were Not Written for Algorithms
The Convention, drafted in 1958, assumes that arbitral awards are the product of human deliberation. That assumption is now under pressure at several points. This article focuses on three: Articles V(1)(b), V(1)(d), and V(2)(b). As argued in an early analysis on this Blog, none of the Convention's provisions were designed to accommodate opaque algorithmic decision-making, and the resulting uncertainty will ultimately be resolved by national courts.
Article V(1)(b) permits refusal where a party was unable to present its case. The provision has been applied principally to procedural failures, including inadequate notice, denial of a hearing, or exclusion of evidence, but its underlying requirement extends further. Where an AI system generates reasoning in an award, it may introduce arguments, authorities, or inferences the parties never raised and never had the chance to contest, engaging the prohibition on surprise decisions. That prohibition is concerned with reasoning that emerges from a process the parties could neither anticipate nor engage with. This raises both a legal question and a practical one. Legally, opaque AI reasoning can amount to a violation of Article V(1)(b) if it deprives parties of the opportunity to respond to the basis on which their case was decided. Practically, proving that such reliance occurred is difficult, particularly where the tribunal does not disclose its use of AI. As noted on this Blog, the black-box nature of machine-learning models is not merely a technical inconvenience; it has procedural consequences. Where the reasoning cannot be traced to identifiable human judgment, a party contesting the award is challenging a process it cannot see and never had the chance to engage with.
Article V(1)(d) permits refusal where the arbitral procedure did not conform to the parties' agreement or the law of the seat. The provision concerns how the arbitration was conducted, not the form or content of the final award, which means a challenge to the quality of AI-generated reasoning sits more naturally under Articles V(1)(b) and V(2)(b). Article V(1)(d) becomes relevant in a narrower situation: where the parties' agreed rules, or a procedural order, required disclosure of AI use or prohibited AI drafting, and the tribunal did not comply. Such non-compliance would constitute a procedural irregularity under V(1)(d). Absent such agreement, the provision does not independently impose a disclosure obligation, although fairness or public policy concerns may still arise under other provisions. As AI-use clauses become more common in procedural orders, this limb of the provision is likely to attract closer attention from practitioners and courts.
Article V(2)(b), the public policy ground, is the broadest of the three. Courts may invoke it ex officio. A 2020 analysis on this Blog identified this risk directly: AI decision-making, if deployed without transparency, risks violating the fundamental principles of justice in the enforcing state. The reasoning is anchored in procedural public policy, which enforcement courts have consistently recognized to encompass the right to be heard and the right to a decision reflecting genuine deliberation. Where an AI system produces the core reasoning in an award, the output is probabilistic rather than deliberative. An award produced in that way may satisfy the formal appearance of reasoned decision-making without satisfying its legal substance.
Whether an enforcing court would act on that concern depends on where enforcement is sought. Singapore courts have taken a measured, governance-focused approach under which AI may support human expertise but not replace it. France has taken a firmer position. The EU AI Act classifies AI tools used for legal analysis in arbitration as high-risk (see here), a classification the Cour de Cassation's Working Group confirmed in April 2025, finding that adjudication must remain human-led. These institutional positions indicate the standards enforcing courts are likely to apply when scrutinizing AI-assisted arbitral awards.
The Risk Varies with the Role AI Plays
Not all AI use in arbitration creates the same enforcement risk. The Blog's 2024 Year in Review noted that AI-driven tools have been used increasingly for document review, drafting, and data analysis: uses that sit at very different points on the risk spectrum.
AI tools that manage documents, transcribe hearings, or translate submissions do not alter what arbitrators decide, and courts have no realistic basis for a Convention objection to awards produced with that kind of support. The risk rises as AI moves into the substance of the decision. When AI summarizes evidence or identifies relevant precedents, the arbitrator must independently verify what it produces. Where AI drafts the legal reasoning or operative parts of an award, the question is not only whether human review was adequate; a prior question is whether any AI involvement in drafting substantive reasoning is itself legally problematic under the Convention. Sapna Jhangiani KC highlighted on this Blog that AI systems generate probabilistic outputs based on prior decisions, not reasoned conclusions in any legally meaningful sense. An award produced with substantial AI involvement may therefore read as reasoned without actually being so, which is precisely what an enforcing court scrutinizing Article V(1)(b) would need to assess. That is a legal question; whether a party could prove such involvement from the face of the award is a separate, practical one.
Institutional Guidelines Are Necessary but Not Sufficient
Several institutions have published guidelines on AI use in arbitration since 2024, all converging on a single principle: AI may assist, but humans must decide. The CIArb guideline states in paragraph 8.4 that an arbitrator “shall assume responsibility for all aspects of an award” regardless of AI involvement.
The convergence on that principle is welcome, but it does not resolve the enforcement question. None of these instruments binds an enforcing court, which will apply the Convention's text and its own jurisprudence rather than rely solely on institutional guidance.
Conclusion
The New York Convention was not designed to accommodate AI-generated reasoning, and no amendment to it is imminent. Articles V(1)(b), V(1)(d), and V(2)(b) will be applied by national courts to awards reflecting realities those provisions never anticipated. This does not mean AI-assisted awards are unenforceable. It means they carry legal risk that varies with the nature of AI's involvement and the quality of human oversight. The questions being debated on this Blog are the same ones that will determine enforcement outcomes when the first contested case reaches a national court. How AI was used, and how far humans engaged with its output, will be the considerations that matter. How courts apply existing frameworks to those considerations remains to be seen.