11th Annual EFILA Lecture: Artificial Intelligence in Investment Arbitration
January 8, 2026
On 27 November 2025, Queen Mary University of London hosted the 11th Annual EFILA lecture, sponsored by Control Risks. The lecture was delivered by Professor Maxi Scherer, who is “an Artificial Intelligence (“AI”) person”, having researched the topic academically as early as 2016 and having lectured on AI in arbitration in Vienna in 2018, a lecture that won the GAR award for best speech.
She recalled that back then, it was the Stone Age of AI, but AI has now become “a very hyped topic”. She then turned to predictive analytics, the risk of hallucination, the current state of AI regulation, and the application of the EU AI Act in investment arbitration (see also the Blog’s coverage here).
Predictive Analytics and Investment Arbitration
Predictive analytics is the idea that you can predict the outcome of a dispute using a computer model.
Various early studies, conducted between 2002 and 2017, tested the ability of computers to predict the outcomes of US Supreme Court cases and European Court of Human Rights decisions. In those studies, computer models successfully predicted the outcome 70-80% of the time, using natural language and tagged-metadata approaches, with models trained on data sets of anywhere between 600 and 28,000 documents.
While Professor Scherer found that some of the methodology of the early studies raised questions, the results demonstrated efficacy. Given that in investment arbitration, there are more than 650 publicly available awards, the size of the data set is likely sufficient for AI-based outcome-predictive models to work. Moreover, many investment arbitrations involve binary classification tasks, such as jurisdiction, and recurring legal issues. (This is unlike in commercial arbitration, where “the legal issues are as diverse as a snowflake, and the applicable law is also.”) A fourth point of comparison is the political context, where, in investor-state cases, the composition of the tribunal can be a factor in predicting the outcome.
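To make the idea concrete, the sketch below shows the kind of text-based binary classifier the early studies describe: a model trained on award text to predict a yes/no outcome, such as whether jurisdiction is upheld. It is a minimal illustration only; the two training excerpts, the labels, and the scikit-learn pipeline are invented for this example and do not reflect the models actually used in those studies.

```python
# Illustrative sketch only: a toy binary outcome-prediction model of the kind
# described in the early studies (text features -> binary outcome).
# The two "award excerpts" and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: excerpts of publicly available awards,
# labelled 1 if jurisdiction was upheld, 0 if it was declined.
texts = [
    "The tribunal finds that the claimant qualifies as an investor under the treaty.",
    "The tribunal concludes it lacks jurisdiction ratione temporis over the claims.",
]
labels = [1, 0]

# TF-IDF over the award text stands in for the "natural language" features;
# a fuller model could also append tagged metadata as additional features.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

new_award = ["The respondent objects that the claimant is not a protected investor."]
print(model.predict_proba(new_award))  # estimated probability of each outcome
```

In a realistic setting, tagged metadata such as the composition of the tribunal could be added as further features alongside the text, reflecting the point that, in investor-state cases, tribunal composition can itself be predictive of outcome.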
Professor Scherer then highlighted areas where predictive analytics are being applied in arbitration already, such as in aid of funding decisions or dispute prevention and settlement. A more controversial point is whether predictive analytics can be used by the tribunal.
This, Professor Scherer suggested, is possible within “very strict limits”, noting that the American Arbitration Association offers an AI-assisted decision-making process for certain smaller construction arbitration disputes, where “there is still a human in the loop, with a specific agreement and express consent given by the parties.”
The Risks of Hallucination
Globally, almost 600 instances of hallucinations have been logged on the eponymous website operated by case reporter Damien Charlotin.
Such situations may be more likely to arise in investment arbitration than in commercial arbitration, given the public accessibility of investment treaty arbitration awards, and on this topic Professor Scherer offered a grounded warning:
“An AI model gives you the most probable answer to your question, not necessarily one that exists…So the only way, the only way to check is to actually read the case…Sorry.”
AI Regulation in Investment Arbitration
While there are no specific, binding AI regulations in investment arbitration, Professor Scherer distilled six principles from the ways in which AI use is currently regulated before national courts around the globe:
- First, AI literacy: understanding how AI works and what the potential challenges and risks are;
- Second, strict non-delegation of decision-making functions where AI tools are used by judicial officers;
- Third, accountability, whether as a judicial officer, lawyer, or other court user, for the accuracy of whatever material is issued in one’s name;
- Fourth, information security: safeguarding confidentiality, data protection, IP, privacy, etc.;
- Fifth, awareness that the data on which an AI system is trained may bias the system towards certain solutions; and
- Sixth, communication about the use, intended use, or suspected use of AI.
On the last point, Professor Scherer saw divergent approaches: the UK does not require disclosure if AI is used responsibly, Singapore requires disclosure where the court has grounds to believe that AI has been used, and Dubai imposes an obligation to declare the use of AI at the earliest possible opportunity.
Professor Scherer noted that many of the guidelines refer to permissible or impermissible use cases. For example, translating or summarising documents and drafting emails or letters may be permissible, whereas using AI for legal research and analysis may not be, particularly when done by judicial officers. However, few of the guidelines, Singapore’s being a notable exception, refer to sanctions.
According to Professor Scherer, the regulation of AI before national courts appears more detailed, and to have arrived sooner, than what has been achieved in international arbitration. However, two initiatives, the IBA Taskforce (which Professor Scherer co-chairs) and the ICC Taskforce, are now trying to drill down in a little more detail.
Application of the EU AI Act
In the final topic of her lecture, Professor Scherer drew our attention to the EU AI Act, which she described as an instrument that “few people think about as a reference”.
“The EU AI Act is the first comprehensive, holistic attempt to regulate AI through all industry sectors in the EU. While its drafters did not have arbitration specifically in mind, it applies to arbitration and it applies to investment arbitration.”
This observation provoked a wave of murmurs in the audience.
“What it does is take a risk-based approach categorising activities according to the likelihood of harms stemming from the use of AI. Some uses are outright forbidden, whereas others are ranked as high risk, limited risk, or minimal or no risk. And arbitration is a high-risk activity, according to the EU AI Act.”
What this means is that, subject to certain exceptions, the use of AI tools in arbitration requires records to be kept in a log and appropriate human oversight, and, most controversially, could give rise to a fundamental rights impact assessment.
As to why Professor Scherer says the EU AI Act applies to international arbitration, she cited Annex III, point 8(a), which covers “AI systems intended to be used by a judicial authority or on their behalf to assist the judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts or used in a similar way in alternative dispute resolution.”
Professor Scherer interpreted the term “alternative dispute resolution” in light of the expression in Recital 61, “mechanisms that produce legal effects for the parties”, which captures commercial arbitration as well as investment arbitration.
But how does the Act apply to arbitrators? The personal scope of the EU AI Act covers importers, distributors, and manufacturers, as well as “deployers”, a term specific to the Act.
Professor Scherer summarised the concept thus:
“A deployer is a natural or legal person using AI, except when the AI system is used in the course of a personal, non-professional activity. So, if you use it in a professional activity, as an arbitrator does, you are a deployer of an AI system.”
The territorial scope of the Act is also very broad. First of all, it applies if the deployer has a place of establishment or is located within the EU. For an arbitrator, that probably covers anyone with a permanent place of business in the EU.
She went further to say:
“In commercial arbitration, I would argue that you need to look at the seat of the arbitration. So, the Act probably applies if the tribunal, with the legal fiction of the seat, has its seat in the EU. Obviously, that doesn’t work for investment arbitration, if there is no seat, so you probably need to look at the individual arbitrator.
But even if the arbitrator is located outside the EU, the AI Act still applies, if the output produced by the AI system is used in the EU. Now, what is the output? It’s the award. And when is an award used in the EU? Probably if it’s enforced against a party in the EU. Or, if there are assets in the EU against which the award is enforced.”
Luckily, there was good news for EU-based AI-literate arbitrators too, in the form of the Act’s exceptions and carve-outs.
Article 6(3) of the EU AI Act provides that, even where a use is classified as high-risk, the regulatory obligations do not apply to an AI system used for a narrow, procedural task or a preparatory task, or to improve the result of a previously completed human activity.
Leaving us with a note of hope, Professor Scherer reassured listeners:
“If it’s just a small procedural task, a chronology, a procedural chronology for an award, or if it’s to improve a previously completed human activity, proofread an award, cite-check an award, these are exceptions that fall under this Article.”
Professor Scherer also mentioned that political efforts are ongoing with the European Commission to ensure that the EU AI Act does not unduly frustrate the use of AI in international arbitration.
Conclusion
The lecture was followed by a lively Q&A session, which threw the anecdotal experiences of attendees and their clients into the mix. Two interesting themes emerged: clients’ fears of jurisdictional bias and, more generally, the terms on which clients currently consent to the use of AI. We have yet to see how these topics will evolve as the IBA and ICC task forces drill down into the regulation of AI in arbitration. In the meantime, the potential application of the EU AI Act is not to be ignored.