Quantum on Autopilot: Pre-Agreed Valuation Formulas, Math-Specialized AI, and Zero-Knowledge Proofs in Arbitration
December 11, 2025
According to Queen Mary University’s 2025 International Arbitration Survey, 77% of respondents considered it “appropriate for arbitrators to use artificial intelligence (“AI”) to assist in calculating damages,” as this task is “primarily mathematical, requiring precision rather than discretionary judgment.”
Quantum experts remain uneasy, warning in a previous blog post that large language models (“LLMs”), like ChatGPT, remain a “black box,” meaning that their responses come from a “thought” process that is untraceable and impossible to audit. They also highlight that LLMs are prone to hallucination, lack the judgment required to build counterfactuals and make discretionary valuation calls, and cannot fully grasp the industry-specific nuances on which valuation depends.
These concerns intensify in more complex quantum scenarios, with many variables and competing theories, where tracing assumptions becomes impossible due to the black box. This is compounded by the fact that LLMs are not built to handle complex mathematical formulas.
This post proposes a way to automate quantum valuation while addressing quantum experts’ concerns, at least in the most common contractual disputes, by using pre-agreed economic models powered by math-specialized generative AI and Zero-Knowledge Proofs (“ZKPs”).
The Proposal: Pre-Agreed Contractual Formulas and ZKPs
1. Pre-Agreed Contractual Formulas
Parties already agree on certain elements that anticipate quantum valuation, such as liquidated damages or price-adjustment formulas. But they do not (at least to the author’s knowledge) agree on complete economic models for assessing damages. Under the current system, once a dispute arises, each party can simply build the economic model that best fits its case theory. Because parties tend to assume that the risk of non-compliance is low, agreeing on a full valuation model built from scratch feels burdensome, expensive, and unnecessary.
But imagine something different. Suppose companies specializing in quantum valuation begin offering ready-made economic models for the most common contractual breaches per industry and contract type. At a reasonable price, the parties purchase the template, receive a brief tailor-making service from the provider, and attach it as an annex to the contract. Under this structure, agreeing on an economic model becomes a standard add-on like selecting a recognized set of technical specifications.
In construction contracts, for example, the template could include formulas to quantify delays, direct losses, lost profits, and opportunity costs. If attribution is later established, damages flow directly from the pre-agreed model.
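To make the idea concrete, a pre-agreed delay formula from such a template might look like the following sketch. All names, rates, and the cap structure are hypothetical illustrations invented for this post, not drawn from any real template.

```python
# Hypothetical sketch of a pre-agreed delay-damages formula, as it might
# appear in a contract annex. Every rate and parameter here is illustrative.

def delay_damages(delay_days: int, daily_overhead: float,
                  daily_lost_profit: float, cap: float) -> float:
    """Quantify delay losses under a pre-agreed model:
    (overhead + lost profit) per attributable day of delay, subject to a cap."""
    raw = delay_days * (daily_overhead + daily_lost_profit)
    return min(raw, cap)

# Example: 30 days of attributable delay, EUR 8,000/day overhead,
# EUR 5,000/day lost profit, contractual cap of EUR 500,000.
print(delay_damages(30, 8_000.0, 5_000.0, 500_000.0))  # 390000.0
```

The point of annexing such a formula is that, once attribution is established, no discretionary modeling choices remain: only the inputs are in dispute.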
The idea is that the parties buy these templates to be used by an AI-powered valuation tool if a tribunal later finds a common breach of contract. Instead of letting AI “invent” counterfactuals or exercise discretion, the parties set all economic assumptions in advance. The AI simply performs the math.
2. Math-Specialized AI
Experts’ concerns about LLMs are entirely reasonable. LLMs are designed to imitate human language, not to perform structured, proof-based calculations. This does not mean, however, that AI beyond LLMs is incapable of accurate mathematics. DeepMind’s AI for Math initiative, for instance, develops models trained on formal proofs and advanced mathematical problem sets, enabling them to perform symbolic reasoning and generate verifiable solutions. These systems are built for the kind of precision required for quantum valuation.
Although a litigation-specific, quantum-oriented tool does not yet exist, nothing prevents the creation of a customized math engine capable of applying pre-agreed valuation formulas. Such a system could be activated only once the tribunal determines attribution in principle. The tribunal would still decide the legal attribution between the breach and the category of loss—for example, that a delay was attributable to a party and capable of generating compensable cost overruns—but would leave to the AI tool the questions of whether actual damage occurred and in what amount. In this way, the item-by-item assessment underlying the damages figure becomes automated: the engine accurately matches hundreds or thousands of data points (timestamps, inputs, outputs, and logs) to the agreed economic model. The tribunal then declares responsibility in its award, together with the compensation figure, once the system has run and confirmed whether the legally attributable breach results in a positive quantum.
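The item-by-item assessment can be sketched as follows. The record structure, loss categories, and rates are hypothetical; a real engine would ingest project logs at scale, but the core operation is the same deterministic matching and aggregation.

```python
# Illustrative sketch of item-by-item assessment: matching project records
# to pre-agreed loss categories and summing only attributable items.
# All field names, categories, and rates are hypothetical.

from dataclasses import dataclass

@dataclass
class Record:
    item: str           # e.g. "crane_standby"
    category: str       # pre-agreed loss category, e.g. "delay_overcost"
    attributable: bool  # fixed by the tribunal's attribution finding
    quantity: float     # units drawn from project logs
    unit_rate: float    # rate fixed in the annexed economic model

def run_model(records: list[Record]) -> dict[str, float]:
    """Aggregate attributable losses per pre-agreed category."""
    totals: dict[str, float] = {}
    for r in records:
        if r.attributable:
            totals[r.category] = totals.get(r.category, 0.0) + r.quantity * r.unit_rate
    return totals

records = [
    Record("crane_standby", "delay_overcost", True, 12, 1_500.0),
    Record("extra_shift", "delay_overcost", True, 8, 2_000.0),
    Record("design_change", "scope_variation", False, 1, 50_000.0),  # not attributable
]
print(run_model(records))  # {'delay_overcost': 34000.0}
```

Because the categories and rates are fixed ex ante, running the same records through the model always yields the same figure, which is what makes the output auditable in principle even when the volume of items is large.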
3. ZKPs: Full Data Without Disclosure
Item-per-item damages assessment typically faces two obstacles: the sheer volume of data involved and the intense disputes over document production when that data includes sensitive information like regulatory non-compliance, commercially sensitive pricing models, or third-party confidentiality commitments. Even when tools exist to limit disclosure, they require litigation, time, and cost.
ZKPs offer a different path. Parties can commit their entire dataset, without exception, into the system while keeping every underlying element concealed from the other side and the tribunal. A ZKP demonstrates that a specific statement derived from that dataset is true, yet the data itself remains fully hidden. The classic “Where’s Wally?” illustration captures this perfectly: the prover shows the verifier a version of the page where only Wally is visible, proving they know his location without ever revealing the rest of the image.
Applied to arbitration, this means a tribunal can verify, for example, that a party’s internal records link a delay to specific cost overruns without revealing pricing strategies, supplier identities, safety audits, GDPR-protected personal data, or confidential third-party material.
Technically, this operates through cryptographic commitments: the party hashes its dataset, and the ZKP certifies that a defined statement generated from that data is correct. The tribunal never sees the raw inputs, yet it can trust the result because the soundness of a ZKP is cryptographically guaranteed: the probability that a false statement passes verification is negligible.
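The commitment step can be sketched in a few lines. This is only the binding half of the mechanism: a production system would use an actual proving framework (e.g., a zk-SNARK circuit) to certify the statement without opening the data, and a real commitment would include a random salt so the hash cannot be brute-forced from low-entropy inputs. The dataset and figures below are hypothetical.

```python
# Minimal sketch of the commitment step only. This is NOT zero-knowledge by
# itself: it shows how a party binds itself to a dataset with a hash, so that
# any later statement can be checked against an unalterable commitment. A
# real system adds a random salt (for hiding) and a proving circuit that
# verifies the statement without revealing the data.

import hashlib
import json

def commit(dataset: dict) -> str:
    """Hash the dataset under a canonical serialization; only the hash is shared."""
    canonical = json.dumps(dataset, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

dataset = {"delay_days": 30, "daily_overcost": 13_000}  # stays private
commitment = commit(dataset)  # shared with the tribunal; reveals nothing

# Inside a real ZKP circuit, the prover would establish, without opening
# the data: "the committed dataset yields damages of 390,000".
claimed_damages = dataset["delay_days"] * dataset["daily_overcost"]
assert commit(dataset) == commitment   # data unchanged since commitment
assert claimed_damages == 390_000
```

The commitment prevents a party from quietly editing its dataset after the fact: any change produces a different hash, so every proof remains anchored to the data as it stood when committed.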
The consequence is simple: damages assessment and quantum valuation can rely on the full informational universe relevant to the dispute, rather than the narrow slice parties are willing or compelled to disclose. This produces a far more accurate and defensible picture of the damage.
4. The Outcome: An Automated, Auditable Quantum Figure
Once attribution is determined in the arbitration, the system takes over. The math-specialized AI receives the ZKP-verified inputs, applies the pre-agreed valuation formula, and produces a single damages figure. Although the computation itself is a black box, the tribunal sees the agreed economic model, the verified inputs (not the raw data), and the result.
Why would the tribunal rely on a figure that it cannot manually replicate? Because ZKPs provide cryptographic certainty that each input is consistent with the committed data, and because the AI tool remains subject to technical audits that verify the accuracy of its calculations.
A Hypothetical Example: Automated S.A. v. Cotopaxi Constructions Ltd.
Imagine a contract between Automated S.A. (Netherlands) and Cotopaxi Constructions Ltd. (Ecuador) for an airport terminal. During negotiations, the parties purchase an industry-standard quantum model produced by a specialist firm. It covers typical construction scenarios (delays, supply-chain disruptions, etc.) and is annexed to the contract like any set of technical specifications.
Months later, delays occur, attributable to Automated. Cotopaxi initiates arbitration, ideally under expedited rules, to confirm attribution and run the AI tool.
Instead of fighting over document production, both parties feed all project data into the system: procurement records, construction logs, subcontractor communications, pricing information, and even material normally withheld (e.g., sensitive internal pricing strategies or regulatory issues). ZKPs protect all this data from disclosure. The system simply proves the truth of the relevant statements on damages.
Assuming the tribunal finds attribution during deliberations, the AI engine then runs the agreed model. Damages are calculated in minutes rather than months, and the resulting figure is included in the final award together with the tribunal’s analysis of responsibility. That figure reflects a more complete picture of reality than today’s selective disclosure and expert battles.
The Tradeoff: Black Box vs. the Current System
Today’s quantum valuation is transparent in method but subjective in outcome. Experts choose assumptions, select data, and present competing narratives shaped by evidentiary limitations.
Automation introduces a different kind of opacity. Although the calculation itself is not manually traceable, it offers two advantages the current system cannot match: (i) completeness, because ZKPs allow the system to consider all relevant data; and (ii) objectivity, because the same formula is applied consistently by a math-specialized engine.
The real tradeoff is not transparency versus opacity. It is a transparent but partial and narrative-driven process versus an opaque but comprehensive, ex ante-agreed computation.
For many high-volume or standardized contracts, the latter may be more attractive.
Benefits and Challenges
The benefits are clear: dramatically lower costs, no dueling experts, faster proceedings, stronger confidentiality, and more accurate results due to the reliance on complete information. Outcomes become more predictable and less dependent on advocacy strategy, because every dispute under the same contract uses the same model.
Under a party-autonomy lens, enforcement courts may accept this structure: the parties chose the model, the verification method, and the computational engine. The right to be heard remains intact because each side verifies its own data and can challenge defective proofs. Whether courts would in fact uphold awards rendered under this proposal, however, remains uncertain. The challenge is cultural.
As LaPaglia v. Valve illustrates, discomfort arises when parties suspect that decision-making has been delegated to AI. The concern is not only accuracy but legitimacy. Who is deciding the case?
As math-specialized AI matures, international arbitration will have to decide whether it can accept decisions that are less traditionally reasoned, but potentially more accurate because they are automated.
Perhaps the time has come to ask: are we ready to trust quantum on autopilot?
The content of this post is intended for educational and general information. It is not intended for any promotional purposes. Kluwer Arbitration Blog, the Editorial Board of the Blog, and this post’s author make no representation or warranty of any kind, express or implied, regarding the accuracy or completeness of any information in this post.