Justice, Governance and Mediation: Insights from Brazil’s AI Experience


In October, the Center for Justice at Fundação Getulio Vargas (FGV Justiça), one of Brazil’s leading institutions for research and innovation in the justice sector, released the fourth edition of its national study on Artificial Intelligence in the Judiciary. This annual report examines how Brazilian courts are deploying AI technologies, especially generative AI, and reflects on the governance frameworks required to ensure these tools are used responsibly and ethically.

The initiative is coordinated by Justice Luis Felipe Salomão, a sitting Justice of Brazil’s Superior Court of Justice (STJ) and the academic coordinator of FGV Justiça. His dual role as both judge and academic leader gives him a unique perspective on how innovation can enhance justice systems while remaining faithful to fundamental legal principles. Under his leadership, the report has become a reference point for policymakers, scholars, and practitioners who are navigating the difficult balance between technology and justice.

Why should mediators and ADR professionals care about a study focused on the judiciary? The answer lies in the parallels. The dilemmas raised by the research are strikingly similar to those that emerge in mediation when technology enters the room:

  • How do we balance efficiency and fairness when using technology in processes that affect people’s lives?

  • How do we build trust, accountability, and transparency in digital environments?

  • How can we integrate human empathy and judgment with AI-driven support tools without diminishing what makes mediation unique?

Brazil provides a particularly vivid case study. With more than 80 million pending cases in its courts, the country has long been described as one of the most litigious societies in the world. This enormous backlog creates constant pressure to innovate, and it has encouraged courts to experiment with AI for tasks such as case triage, drafting, and large-scale data analysis. At the same time, however, the research highlights the risks of bias, opacity, and over-reliance on automation – challenges that extend far beyond Brazil and are echoed in justice systems worldwide.

For mediation, these findings offer a clear message: AI should be a co-pilot, not the pilot. Technology can enhance practice by helping mediators analyze information, visualize data, or draft agreements more efficiently. But it cannot replace the relational core of mediation – the creation of trust, the subtle reading of emotions, the encouragement of creativity between parties. These are human skills that no algorithm can replicate.

Justice Salomão has consistently emphasized that innovation in justice must be anchored in good governance. This principle applies equally to courts and to mediation. For mediators, responsible adoption of AI requires clear safeguards: explicit clauses on whether AI will be used in a process, robust data protection, the preservation of confidentiality, informed consent from parties, and a critical stance toward any “solutions” proposed by machines.

Internationally, mediation communities can take inspiration from Brazil’s approach. The frameworks being developed for judicial AI governance may serve as models for technology-assisted ADR. Standards such as those promoted by the International Council for Online Dispute Resolution (ICODR) – emphasizing accessibility, accountability, fairness, and transparency – could guide not only courts but also mediators integrating AI into their practice.

Ultimately, the conversation is just beginning. Whether in courtrooms or around the mediation table, we are all grappling with the same question: how do we harness technology’s benefits without losing the humanity that lies at the heart of justice?

As this research reminds us, the future of justice – and of mediation – is being shaped right now. Mediators cannot stay on the sidelines. Instead, we must actively engage, contribute our voices to debates on governance, and ensure that technology supports, rather than supplants, the human connection that defines our work.

 

Comments (1)

Paul Sills
October 8, 2025 at 7:19 PM
Editorial Comment: Justice, Governance and Mediation: Insights from Brazil’s AI Experience

Andrea Maia’s post reminds us that the conversation about artificial intelligence is not confined to the courtroom – it extends to every corner of dispute resolution. Her reflections on Brazil’s national study on AI in the judiciary highlight the deep parallels between judicial innovation and the future of mediation practice.

What stands out most is the emphasis on governance. Justice Salomão’s leadership in anchoring innovation to ethical and transparent governance frameworks provides an important lesson for mediators: technological progress in our field must always be accompanied by deliberate reflection on values. As mediation increasingly intersects with online platforms, data analytics, and generative AI tools, the profession faces its own set of questions about oversight, consent, confidentiality, and bias.

Andrea’s phrase that “AI should be a co-pilot, not the pilot” captures this balance perfectly. The technology can extend our reach, enhance preparation, and support efficiency – but it cannot (and should not) take over the core human elements of mediation: empathy, trust, intuition, and relational intelligence.

Brazil’s example also points to a broader truth: innovation is often born out of necessity. With millions of pending cases, its judiciary’s adoption of AI was not optional – it was imperative. Mediators might soon face similar pressures to scale capacity, improve access, and reduce costs. The challenge is ensuring that in our pursuit of efficiency, we do not lose sight of the human essence that defines our craft.

As the mediation community reflects on Andrea’s insights, we invite readers to consider:

  • How can mediators participate meaningfully in the governance discussions shaping AI’s role in justice and dispute resolution?

  • What ethical principles should guide the responsible use of AI in mediation practice – particularly regarding confidentiality, informed consent, and neutrality? Are there existing models that mediation institutions should adopt or adapt?

  • How can we ensure that mediators remain “in the loop,” preserving human empathy and creativity even as AI tools become more sophisticated?

  • Finally, how do we prepare the next generation of mediators to engage critically and confidently with technology without losing touch with mediation’s human core?

The conversation Andrea begins is one that must continue across borders and professions. AI will not wait for us to catch up – so it is incumbent upon the mediation community to shape how it is used, ensuring that technology remains a servant of justice, not its master.
