The Dark Side of Computational Antitrust: When AI is Used to Evade the Law
October 28, 2025
Given the creeping complexity of cases, expanding evidence bases, and the ever-present threat (or reality) of budget cuts, competition authorities around the world are on the lookout for ways to streamline their enforcement activities. In response, some commentators have advanced the idea of “computational antitrust”, which entails the development of “computational methods for the automation of antitrust procedures and the improvement of antitrust analysis”, as a means to help authorities navigate these challenges. As of 2025, computational antitrust is being applied by agencies around the world to detect bid-rigging and cartels, assess merger risks, run web-scraping pipelines to track historical changes in market conditions, and more.
Of course, it would be surprising if academics and enforcers were the only people interested in applying computational methods to help them with competition cases. The use of computational methods is fast becoming mainstream, as shown by a LexisNexis report which found that over a quarter of lawyers use AI on a “regular” basis. It therefore stands to reason that undertakings subject to competition law would also look to make use of computational techniques.
This week, I came across British startup Lexverify, which, in its own words, offers companies AI tooling that can help “ensure all communications are compliant, every email, every message, every time.” As of writing, the homepage of its website shows a draft email with the text “[o]ur competitors suggested that prices can’t go any lower if the business is to be sustainable”, and an accompanying warning identifying the text as a “competition compliance risk”. The purpose of this post is not to single out Lexverify in particular. An increasing number of companies, responding to market demand, offer similar services. Yet Lexverify’s product is the first I have seen which appears to cater directly to undertakings’ “competition compliance risk”, and hence makes for a useful starting point for analysis.
A screenshot from Lexverify’s website.
The premise is, or seems to be, that emails are a key source of evidence in legal cases, and that if emails likely to appear as evidence in future court cases are identified and flagged before they are sent, such evidence is simply never generated in the first place. That emails are a key source of evidence is indisputable, especially in competition cases, as demonstrated by Google Android, Trucks, DOJ’s case against Apple, the Libor scandal, and even an entire Substack of “internal tech emails”. Emails can give an insight into what decision makers in a company were thinking at critical time periods, and in doing so, can help support the facts of an authority’s case, whether that’s the way in which a market has been defined, or to demonstrate anti-competitive intent.
I have often wondered how effective AI would be if used for such purposes. While I’ve not had the chance to try out any commercial tools in practice, I did manage to test out the idea by building a prototype at a Law and Tech hackathon during the summer. The results were clear: even with non-specialised and freely available LLMs, generative AI technology was easily able to identify language which could later be used as evidence in a competition law case. It could even suggest re-wording the email in a way which would be less likely to raise concerns. If you’d like to try it out yourself, try pasting the prompt I used into an LLM of your choice.
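To make the idea concrete, here is a deliberately simple, rule-based sketch of what such a flagging tool does at its core. This is a toy stand-in: my hackathon prototype (and, presumably, commercial products) used an LLM rather than keyword rules, and the phrase patterns below are my own illustrative assumptions, not any vendor's actual criteria.

```python
import re

# Illustrative risk categories and patterns -- these are assumptions for the
# sake of the sketch, not the rules any real compliance product applies.
RISK_PATTERNS = {
    "price signalling": re.compile(r"\bprices?|pricing\b", re.IGNORECASE),
    "competitor contact": re.compile(r"\bcompetitors?\b", re.IGNORECASE),
    "market allocation": re.compile(r"\b(divide|allocate|carve up)\b.*\bmarket\b",
                                    re.IGNORECASE),
}

def flag_email(draft: str) -> list[str]:
    """Return the names of the competition-risk categories matched in a draft."""
    return [name for name, pattern in RISK_PATTERNS.items()
            if pattern.search(draft)]

# The example text shown on Lexverify's homepage trips two categories.
draft = ("Our competitors suggested that prices can't go any lower "
         "if the business is to be sustainable")
print(flag_email(draft))  # -> ['price signalling', 'competitor contact']
```

An LLM-based version replaces the pattern table with a prompt asking the model to classify the draft, which is what makes the approach effective even on wording no keyword list anticipates.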
Needless to say, from the point of view of competition law enforcement, the emergence of such tools is problematic. Big companies have long tried to prevent email evidence from being written, yet as far as I am aware, have relied only on guidance as opposed to automated tooling to do so. If new AI-powered tools supercharge the ability of companies to avoid leaving digital traces of their anti-competitive behaviour, the discovery and procedural costs involved in competition law cases could rise dramatically. Lacking email evidence, competition authorities will have a harder time clearly establishing the facts of the case, and may then struggle to convince courts that intervention is necessary and proportionate. To fill the evidentiary vacuum, authorities may need to resort to forms of evidence which are more theoretical in nature, such as by relying more heavily on economic experts, an approach which comes with its own pitfalls pertaining to technocracy and bias. Competition cases would then become more expensive to bring and harder to win, with the risk being that anti-competitive behaviour will proliferate as a result.
The emergence of AI-powered tooling 'on the other side' of antitrust cases shows that computational antitrust should not be regarded as some kind of deus ex machina of competition policy. Rather, computational antitrust is just another step in a century-long arms race between public bodies looking to foster competitive markets which operate in the public interest, and capital looking to insulate itself from profit-draining competition. Indeed, the fact that computational antitrust can be misapplied has been recognised even by its most prolific proponents: the very first publication in the Stanford Computational Antitrust series ended with a warning that “[c]omputational antitrust should not become a zero-sum game in which the gains made by companies or agencies systematically penalize the other”.
There is reason for hope. As I have argued elsewhere, the designers of digital products and services are afforded incredible flexibility in terms of how such products are built. The sheer scope for imagination is perhaps best illustrated with a somewhat silly example. A group of software developers once decided to have a competition for who could build the most ridiculous volume control interface. The ingenuity of the entrants, my favourite being one where changing the sound level of the computer entailed shouting into the computer’s microphone at the desired volume, shows how the sky really is the limit when it comes to software design. As such, the designers of digital products have the privilege of being incredibly creative in terms of what features they build, and how those features operate.
To return to Lexverify for a moment, as of writing, the design of its product appears to encourage the re-writing of emails prior to sending, such that emails that could later become evidence never get sent in the first place. The risk here is systemic. If competition law is treated as a compliance risk to be ‘solved’ with technology, then infringements may not be detected and remedied as easily, leading to the underenforcement of competition law, harm to market competition, and knock-on effects like decreased market efficiency or increased economic inequality.
The good news is that different applications of the same idea are possible, including ones which remain compatible with the interests of undertakings, yet are more commensurate with the underlying aims of competition law. The product could, for instance, be designed to only alert the writer to a potential issue after the email has been sent. Such emails could then be automatically forwarded both to the undertaking’s legal department, and also to a national competition authority, where they could be treated as part of a leniency procedure. While this kind of wiretapping-esque approach may at first glance appear extreme, it is not without precedent, and one could argue that if the risk of competition law infringement is high enough for an undertaking to use AI to mitigate their risk, then it is also high enough for competition authorities to take a proactive approach to enforcement. It is possible, in other words, to imagine AI tools being deployed in a way which both reduces legal risk for undertakings while also being aligned with the broader goals of competition policy.
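The design difference is small in engineering terms. A minimal sketch of the post-send variant, assuming the same kind of flagging function as before (the inbox addresses and the `flag_risk` predicate are invented for illustration):

```python
# Hypothetical post-send design: instead of prompting the author to rewrite
# the draft, a flagged email is sent as normal and copies are routed onward.
# The addresses and the flagging predicate are illustrative assumptions.

LEGAL_INBOX = "legal@example-undertaking.com"
AUTHORITY_INBOX = "leniency@example-nca.gov"

def flag_risk(body: str) -> bool:
    """Stand-in for an LLM-based competition-risk check on a sent email."""
    return "competitor" in body.lower()

def route_after_send(body: str) -> list[str]:
    """Return the extra recipients a sent email is forwarded to, if flagged."""
    if not flag_risk(body):
        return []
    # Forwarding happens after sending, so the evidence still exists --
    # and the legal team (and, under a leniency-style scheme, the
    # authority) learns about it at the same time.
    return [LEGAL_INBOX, AUTHORITY_INBOX]
```

The point of the sketch is simply that the same classifier can sit on either side of the "send" button; which side it sits on determines whether it suppresses evidence or surfaces it.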
For the time being, it seems that such positive applications are only a figment of the imagination. After all, there is admittedly less commercial incentive to build a public-interest oriented version of this technology. In the status quo, it frankly appears to be only a matter of time before such AI tools are deployed by those engaging in anti-competitive behaviour, with the intention of minimising their exposure to competition law scrutiny. Given the risk that the use of such tools may reduce the effectiveness of competition law enforcement on a systemic level, it may be prudent for competition authorities to adopt a precautionary approach and issue guidance advising against the use of such tools. Authorities should also keep a keen eye on the development and adoption of all commercial AI-powered “risk-mitigation” and “compliance” tools as part of their horizon scanning activities. If undertakings start to use them en masse, as appears likely, then regulatory intervention may be required to ensure that competition enforcement remains effective, pre-empt the emergence of a culture of corporate lawbreaking, and to avoid the “zero-sum” game that computational antitrust advocates have warned about.