2026 PAW: When Technology Meets Justice
April 24, 2026
Artificial Intelligence ("AI") has dominated legal discourse across many topics in recent years. This year’s Paris Arbitration Week ("PAW") was no exception, with several panels discussing the ethical implications of these tools, their efficiency, and how they should be used in the future.
"AI for Justice" was Jus Mundi’s flagship symposium of the 10th edition of PAW, bringing together arbitral institutions, international tribunals, regulators, and tech leaders to explore how AI is shaping access to justice in dispute resolution.
Panel 1 — AI, Legitimacy & the Administration of Justice
The opening panel, hosted by Solène Bedel (Jus Mundi), examined how AI intersects with the administration of justice and what this means for trust, fairness, and legitimacy. Through questions directed to guest speakers Maxi Scherer (Queen Mary University of London and ArbBoutique), Marike Paulsson (Council for International Dispute Resolution of the Kingdom of Bahrain), Bertrand Kleinmann (Tribunal des activités économiques), and Sapna Jhangiani (Blackstone Chambers), the discussion explored the symposium’s central message: AI should strengthen people's trust in justice, not erode it.
"AI is unstoppable"
Kleinmann emphasized the sheer scale of operations driving the Commercial Court of Paris' transformation: managing nearly 10,000 dispute resolution cases, 5,000 out-of-court proceedings, and around 20,000 payment orders. This substantial volume has been a key catalyst for adopting new technologies that are capable of handling workflows with greater efficiency.
While the Commercial Court of Paris stands out as a particularly advanced example, Jhangiani stressed that the broader judicial landscape is moving in the same direction. Courts across different jurisdictions are facing familiar challenges: increasing caseloads without a corresponding increase in resources. Commercial disputes in particular continue to grow in volume and complexity, forcing judicial systems to rethink how work is organized. In this context, AI is increasingly viewed as a tool for efficiency rather than a replacement. Jhangiani also noted that AI has dramatically expanded access to legal knowledge, moving it beyond the exclusive domain of lawyers.
At the same time, both Kleinmann and Jhangiani pointed out that caution remains central to the discussion. AI should not replace judicial decision-making. Current AI systems rely on predictive reasoning rather than genuine legal judgment and cannot independently provide the reasoning expected of judicial and arbitral institutions.
Justice Encounters Conflicts
The discussion then shifted to international AI standards. Before addressing this topic, Paulsson delivered a heartfelt reflection on the current situation in the Middle East, emphasizing that while not directly part of the panel’s focus, it shaped their work profoundly. Despite ongoing conflict and instability, institutions in the region remain operational, resilient, and committed to delivering justice without interruption.
Paulsson then observed that discussions at the United Nations on AI are still at a very early stage. Recent meetings of Working Group III of the United Nations Commission on International Trade Law (UNCITRAL) on ISDS reforms ("Working Group III"), for example, attempted to determine what role AI should play and what exactly is supposed to be regulated. This illustrates how rapidly the technology is evolving, offering opportunities to enhance efficiency on the one hand while requiring careful oversight on the other. It likely explains why Working Group III has not yet found the right balance for integrating AI into its work.
Party Consent as the Cornerstone
Responding to Bedel’s question about where they personally draw the line in the use of AI, Scherer, providing an arbitrator's perspective, explained that the limits on using AI are fundamentally different from those in courts because the process is built on party autonomy, meaning the parties themselves decide what is acceptable. The "red line" is therefore not fixed by arbitrators personally, but by party consent. If both parties explicitly agree, they could even allow an arbitrator to use AI to assist with drafting, or even draft, an award. While this may not necessarily be advisable and could raise issues, it remains possible because consent stands at the core of the system.
Furthermore, courts perform a public function and therefore require stricter and more clearly defined limits on the use of AI. Arbitration, by contrast, as a private and consent-based system, is more flexible.
Nonetheless, Scherer’s overall observation was that, as case volumes increase and competition intensifies, a good lawyer masters their craft, but a great lawyer knows how to use AI as a complementary tool, while ultimately leaving it to the client to decide to what extent AI should be used.
Panel 2 — Institutions in Action: What’s Working Now?
Guided by Alexandre Vagenheim (Jus Mundi), the second session examined three global arbitral institutions using AI in research, drafting, translation, and case management, while maintaining quality and credibility. Eliana Tornese (LCIA), Ana Serra E Moura (ICC), William Sternheimer (Court of Arbitration for Sport) and Thara Gopalan (International Centre for Dispute Resolution) discussed where AI delivers real value, where institutions draw clear red lines, and how data governance, explainability, and oversight ensure that efficiency gains do not come at the expense of due process or trust.
What Should AI Be Used For?
The panel explained that AI has the potential to make alternative dispute resolution ("ADR") processes, often considered too complex and costly, more accessible. Ana Serra E Moura pointed out that it can quickly analyse large volumes of data and generate a draft award for review by human arbitrators. Beyond arbitration, AI can be used in other forms of ADR, for example, as an early case examination tool supporting neutral evaluation in mediation.
The efficiency of AI is particularly evident in sports arbitration, where fast awards are necessary due to competition schedules: disputes can arise concurrently with major sporting events, requiring arbitrators to make decisions within very short timeframes. The use of AI in case management with a private database can save time. To that end, Jus Mundi announced its partnership with the Court of Arbitration for Sport ("CAS"), through which the CAS granted access to its database. This will enable Jus Mundi’s AI to leverage a greater body of knowledge, case law, and materials, making it more effective for conducting research and drafting awards.
AI can also offer a practical solution to the growing burden of increasingly lengthy awards. The panel noted that it is not uncommon for decision-makers to be required to review multiple lengthy submissions within the space of a single week, each requiring extensive citation checks. While human expertise remains essential, AI can save a significant amount of time simply by verifying citations.
The panel then addressed the criticism AI use has received, particularly regarding "hallucinations" and a perceived lack of reliability. However, Sternheimer argued that human work is not perfect either, and both need to be double-checked in any case.
Where Should The Line Be Drawn In The Use Of AI?
The panel emphasized that AI is not meant to replace human judgement. A human arbitrator is still needed to accept, reject, or modify a draft award. Ultimately, AI is meant to be an assistant that expands access to reasoning and skill, while decision-making remains with the human arbitrator. Sternheimer raised the possibility that a sufficiently developed AI could, in the future, make decisions by itself. This view, however, was not shared by the other members of the panel.
Panel 3 — Ethical AI in Practice: A view from the Innovation Leaders
Last but not least, responsible AI becomes tangible through governance. Guided by Jean‑Rémi de Maistre, this panel explored how organizations translate ethical AI principles into practice. Natalia Chumak (Signature Litigation), Lucas De Ferrari (White & Case LLP), Eleissa Karaj (Allen & Overy Shearman), and Filip Nordlund (Legora) answered questions on how technology leaders balance innovation with risk management, offering practical insight into the behind-the-scenes structures of how AI is actually used in practice.
Karaj shared her experience in implementing innovation strategies and developing digital tools. She joked that her role often involves “harassing” lawyers to use the tools, explaining that the ultimate goal is to make work easier while also highlighting why it is so important to embrace and adapt to new technologies. Nevertheless, she also explained that her firm’s AI governance, including a central AI team and local AI ambassadors, ensures compliance while enabling innovation.
Building on this, Nordlund emphasised that AI fundamentally changes how legal work is carried out rather than just functioning as “another software tool.” This insight became the catalyst for Legora, a collaborative AI platform for legal professionals designed to go beyond a conventional AI solution. The rollout has been successful, with over 85% of licenses activated and actively used.
Chumak also shared some insights on how AI is helping carry out work at her firm: pre-claim assessment is faster and more cost-efficient, urgent matters are onboarded more smoothly, and stepping into ongoing cases is more efficient. She emphasized that AI supports lawyers’ capabilities but does not replace judgment, reiterating the first panel’s point: AI does not and should never replace credibility, but should instead be seen as a new puzzle piece in our work that ultimately benefits the client.
Diversity, Equality and Inclusion ("DEI") and Technology - How AI and Other Developments Will Help or Hinder Progressive Initiatives
Organised by Open Arbitration, an international LGBTQIA+ network for the arbitration community that aims to promote the inclusion of LGBTQIA+ individuals in the arbitral world, and hosted by Jus Mundi and Robbins Arbitration, this panel on DEI and Technology featured two host speakers, James McKenzie (Eversheds Sutherland) and Tim Robbins (Robbins Arbitration), as well as three guest speakers, Alexandre Vagenheim (Jus Mundi), Gilian Forsyth (Eversheds Sutherland) and Ziva Filipic (ICC International Court of Arbitration).
The central theme of the discussion was that AI is often accused of being biased. After all, AI is trained on human-made data, which reflects human prejudice. When it is used to select an arbitrator, especially with a basic prompt, that data will mirror a profession still largely dominated by senior white men and thus reinforce existing patterns.
However, the panel did not believe that this outcome is unavoidable. Bias is not an inevitable flaw, because AI systems can be guided and corrected through both data inputs and effective prompting. In the same way that arbitration is only as good as its arbitrator, AI is only as good as its prompts.
This assumption of controllability, however, is not always borne out in practice. Vagenheim explained that Jus Mundi has not fully launched its AI profile recommendation tools partly due to DEI concerns: even with inclusive prompting, teams still have to manually recommend female profiles to diversify the results.
The panel nevertheless expressed their optimism on the future of AI and its use to support DEI objectives, provided that it is used responsibly and transparently. Clients must be able to understand how their data is used and how recommendations are generated.
Overall, PAW 2026 made it clear that AI, despite its rapid development, is not a substitute for human decision-making in arbitration. It is a tool that amplifies both the strengths and weaknesses of those who use it. Ensuring fairness, diversity, and accuracy therefore continues to depend on the active involvement of practitioners, whose judgment remains central to the arbitral process.
This post is part of Kluwer Arbitration Blog’s coverage of Paris Arbitration Week 2026.