Should Arbitrators Disclose Their ChatGPT History? Peru’s New AI Regulation, Algorithmic Bias, and Impartiality
January 16, 2026
This story begins with efficiency. An arbitral tribunal, overwhelmed by thousands of pages of submissions and expert reports, turns to an AI system to generate insights and streamline its deliberations. The tool promises speed, precision, and analytical consistency.
Days earlier, however, one of the arbitrators had used that same system to summarize the claimant’s statement of claim. The algorithm remembered. When later asked to assess causation and quantify damages, it subtly privileged the claimant’s framing of the facts.
Trusting in the neutrality of AI, the tribunal incorporated the algorithm’s outputs into its reasoning. Only later did what no one had foreseen come to light: an algorithmic bias, embedded deep in the model’s training data, had silently shaped the outcome of the case.
These kinds of risks are no longer hypothetical.
Across international arbitral practice, arbitrators increasingly rely on generative AI to summarize submissions, process evidence, and draft procedural documents. What began as an aid to efficiency has quietly entered the decision-making process itself, as illustrated by the recent LaPaglia v. Valve Corporation case, where the use of ChatGPT by an arbitrator became a ground to challenge the award.
Reality now poses an unavoidable question: how do we ensure the impartiality of the arbitrator (and their AI)?
From Soft Law to Peru’s New Regulatory Anchor
AI has emerged as a transformative technology capable of enhancing efficiency across multiple sectors, including arbitration. Yet its integration into decision-making processes raises fundamental challenges to cardinal arbitral principles, particularly impartiality. While arbitrators must remain fully detached from the parties’ interests, even when relying on AI tools, questions persist as to whether AI outputs can truly be unbiased and whether the use and influence of such systems should be disclosed.
Leading soft law frameworks currently address the use of AI in law and arbitration mainly through ethical warnings, but they lack binding force. Peru, however, has taken a legislative step forward. In 2023, it enacted the “Act Promoting the Use of Artificial Intelligence for the Economic and Social Development of the Country”, which established the legal framework governing the use of AI. The detailed regulatory rules were only adopted in September 2025, through the implementing Regulation, which will enter into force on January 22, 2026.
Beyond its promotional and ethical objectives, the new Peruvian regulatory framework expressly recognizes the most insidious risk that AI introduces to decision-making fields requiring impartiality, such as arbitration: algorithmic bias.
The Regulation defines it as a “systematic error that occurs when an AI-based system makes unfair, partial or discriminatory decisions due to biased training data, algorithm design or human interactions” (Article 6.e). The Exposición de Motivos (the legislative statement of reasons) likewise warns that such biases can produce discriminatory outcomes.
High-Risk AI in Arbitral Decision-Making Under the Peruvian Regulation
The Peruvian Regulation adopts a risk-based classification system, designating as “high-risk” any AI system capable of significantly affecting human life, liberty, dignity, or fundamental rights (Article 22). Similarly, Article 24.j classifies as high-risk any AI use that poses risks to fundamental rights or generates stigmatization, discrimination, or cultural bias.
Article 28.11 recognizes that high-risk AI systems in the justice sector require mechanisms of human oversight and proper user training to prevent bias and to “stop, correct, or invalidate” AI-driven decisions. The Regulation thus acknowledges the importance of these safeguards for decision-making within the justice system. However, the article explicitly applies only to public administration entities.
This creates a regulatory gap in the Peruvian context, where arbitration is recognized as having a jurisdictional function under constitutional law (Article 139.1 of the Peruvian Constitution). Arbitral tribunals enjoy jurisdictional guarantees comparable to those of judicial bodies, as affirmed by the Constitutional Court in Exp. No. 06167-2005-HC/TC. Despite this, the Regulation does not extend Article 28’s safeguards to arbitral decision-making, leaving private-sector justice mechanisms without clearly defined human oversight duties.
A second limitation concerns Article 31.4, which imposes specific obligations on private-sector actors using high-risk AI. These provisions mandate that those in charge of using high-risk AI systems must: a) be trained to avoid being biased by AI-generated outputs; and b) have the explicit capacity to stop, correct, or invalidate the AI system’s decisions.
Yet, the Regulation distinguishes three categories of actors in contact with AI systems: Developers, who design or train AI systems (Article 6.b); Implementers, who deploy them in operational processes (Article 6.d); and Users, who interact with the system to obtain outputs or solve problems (Article 6.g). An arbitrator using AI tools qualifies as a User, but the stringent obligations in Article 31.4 are primarily directed at Developers and Implementers, not Users.
In practice, this dual gap implies that: (i) Article 28 safeguards are narrowly confined to decision-making in public-sector justice, although analogous measures could also be relevant in private dispute resolution; and (ii) the private-sector rules for those who make critical decisions with AI support apply only to Developers and Implementers, failing to fully integrate Users as responsible actors, thereby weakening protections for AI-assisted decision-making in fields such as arbitration.
ChatGPT History: Why AI Use in Arbitration Demands Traceability
The central question remains: Does this binding legal framework compel the disclosure of an arbitrator’s AI history?
Soft law has approached this cautiously. The Silicon Valley Arbitration & Mediation Center (SVAMC) Guidelines on the Use of AI in Arbitration (2024) suggest that disclosure should be considered case-by-case, balancing due process and privilege. When warranted, disclosure may include technical details, a short description of the AI tool’s use, and, for generative systems, even the prompts or conversation history necessary to replicate results (Guideline 3).
Peru’s Regulation adopts a similar logic through its Guiding Principles of transparency and accountability (Article 7). Transparency requires AI systems to be “clear, explicable, and accessible,” including information on their functioning, data sources, and potential biases (Article 7.i). While this principle covers both development and use, it appears primarily drafted for Developers, making implementation challenging for Users such as arbitrators. This is problematic, as the Regulation itself identifies algorithmic bias as a central risk, meaning transparency obligations should apply in AI-assisted decision-making, not just in the development of AI systems.
Disclosure of an AI system’s operational history is not an automatic consequence of Peru’s accountability principle. The Peruvian Regulation does not expressly mandate it, and any obligation to produce and reveal system logs remains debatable. Still, even if an arbitrator were to qualify as an Implementer of a high-risk AI system, the principle of accountability (Article 7.j) would still not require disclosing technical “logs” or every interaction with tools like ChatGPT. Rather, accountability could only demand traceability sufficient to show that the arbitrator supervised the system, mitigated algorithmic bias, and did not delegate adjudicative authority. The exact scope remains undefined under Peruvian law.
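What such traceability might look like in practice is, of course, not prescribed by the Peruvian Regulation or the SVAMC Guidelines. Purely by way of illustration, the minimal sketch below (in Python, with hypothetical names such as `record_interaction` and `ai_usage_record.jsonl`) shows one conceivable way an arbitrator or tribunal secretary could keep an auditable record of each generative-AI interaction, capturing the tool, purpose, and a human-review note while storing prompts and outputs only as hashes, so that supervision can later be evidenced without necessarily exposing privileged content unless disclosure is ordered.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical illustration only: a simple append-only traceability record
# of generative-AI use in an arbitration. Nothing here is mandated by the
# Peruvian Regulation or the SVAMC Guidelines.

LOG_FILE = "ai_usage_record.jsonl"

def record_interaction(tool: str, model_version: str, purpose: str,
                       prompt: str, output: str, human_review_note: str) -> dict:
    """Append one traceability entry. The prompt and output are stored as
    SHA-256 hashes, proving what was asked and produced without revealing
    the underlying (potentially privileged) text."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "model_version": model_version,
        "purpose": purpose,                      # e.g. "summarize expert report"
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "human_review_note": human_review_note,  # how the output was checked
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example use (hypothetical values):
# record_interaction("ChatGPT", "gpt-4o", "summarize statement of claim",
#                    prompt_text, output_text,
#                    "summary cross-checked against paras 12-48 of the submission")
```

A record of this kind would aim at the middle ground described above: it evidences supervision and non-delegation, and it can support case-by-case disclosure under Guideline 3 if warranted, without turning every prompt into a discoverable document by default.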
The Comparative Lens: EU AI Act
Peru is the first country in Latin America to enact AI legislation, while others in the region remain at the draft-bill stage. Its Regulation reflects the global shift toward risk-based AI governance, closely aligned with the 2024 EU AI Act.
Under the EU AI Act, AI systems used by judicial authorities or in similar ways in alternative dispute resolution are classified as high-risk, as previously reported on this blog. This provides international validation for Peru’s classification of arbitration within its high-risk structure.
Both frameworks converge in placing fundamental rights at the center of AI governance. The EU AI Act aims to promote human-centric and trustworthy AI while ensuring a high level of protection of the fundamental rights enshrined in the EU Charter, democracy, and the rule of law, operationalized through transparency (Article 13) and human oversight (Article 14). Similarly, the Peruvian Regulation embeds guiding principles such as non-discrimination, transparency, human supervision, and responsible use (Article 7). Taken together, the two frameworks align on the core safeguards.
What Comes Next?
Peru’s AI Regulation represents an important step forward by formally recognizing the risks that algorithmic bias poses to decision-making in sectors requiring impartiality, such as arbitration. By identifying high-risk AI and embedding principles of transparency, human oversight, and accountability, the framework establishes a foundation to safeguard fundamental rights and promote responsible AI use.
Yet, significant gaps remain. Article 28’s safeguards are limited to public-sector justice, while private-sector users, including arbitrators, are not fully integrated into the obligations in Article 31. Moreover, despite its regional significance, the Regulation lacks AI-specific duties for decision-makers regarding disclosure and traceability, essential tools for mitigating algorithmic bias in practice. Given arbitration’s growth in Peru and its constitutional recognition, legislative action is needed. Closing these gaps, following the example set by the SVAMC Guidelines, could be the country’s next step as it continues to lead AI regulation in Latin America.
Efficiency without transparency risks becoming opacity; speed without accountability risks becoming error. What began as a tool for clarity may, without careful human oversight and a sharp legal framework, become an unseen, biased participant in decision-making itself.