When everything becomes a program: a technical analysis of the UK Supreme Court’s judgment in Emotional Perception


As previously reported on this blog, on 11 February 2026, the UK Supreme Court handed down its ruling in Emotional Perception AI Ltd v Comptroller General of Patents, Designs and Trade Marks [2026] UKSC 3, which overruled the longstanding Aerotel test for the assessment of patentable subject matter, causing a seismic shift in the patent community.

This decision was the first time the highest UK court has addressed the patentability of artificial neural networks (ANNs) and, more broadly, the framework for assessing computer-implemented inventions under the UK Patents Act 1977 and Article 52 of the European Patent Convention (EPC). I previously offered a technical critique of the first instance decision of the High Court, where Sir Anthony Mann held that an ANN is not a program for a computer at all. Two decisions and court instances later, the Supreme Court has arrived at the opposite conclusion, but on reasoning that, from a computer science perspective, raises its own set of difficulties.

This post focuses on the Court’s technical analysis of the concepts of computer, program, and artificial neural network in Issue 2 of the judgment (paras 68–96, focusing on whether an artificial neural network is or contains a program for a computer), and its implications for the patentability of AI-related inventions.

The Aerotel departure and the “any hardware” threshold

Before addressing the technical questions, it is worth noting that the Court made a significant doctrinal move by overruling the Aerotel four-step approach that had governed UK practice for nearly two decades, aligning instead with the Enlarged Board of Appeal’s reasoning in G1/19 (Pedestrian Simulation). The practical consequence is that the Article 52(2) inquiry is now reduced to an “any hardware” threshold. This means that if the claim involves any technical means whatsoever, it qualifies as an “invention” and therefore clears the first hurdle. The real filtering of non-technical features is deferred to an “intermediate step” before the assessment of inventive step.

This realignment is welcome from the standpoint of EPC consistency. However, the Court’s treatment of the question whether an ANN is a “program for a computer” at all is where the technically contested reasoning lies, and where the judgment’s lasting influence on AI patent practice will be felt.

What is a “computer” anyway? A circularity problem

The Supreme Court rightly criticised the Court of Appeal’s definition of a computer as “a machine which processes information” (para 76), noting that this would capture basic ovens, smoke detectors, and analogue radios. It is the ability of a machine to “perform computations” that, in the Supreme Court’s view, differentiates a computer from other machines. At the same time, the Court noted that “[m]uch of the utility of computers derives precisely from their ability to perform calculations and produce results that far exceed the capabilities of any human brain” (para 79).

While this is an improvement, it trades one problem for another. For one, the Court offers no definition of “computation” or “calculation”, and does not say whether or how the two differ. In theoretical computer science, the concept of computation is formalised through models such as Turing machines, lambda (λ) calculus, or recursive functions, all of which presuppose specific structural properties. A computation is not merely any transformation of input to output; it is a transformation carried out by a system capable of executing a defined class of operations on symbolic representations according to a finite set of rules. Without engaging with any such formalism, the Court’s criterion risks being either vacuous (ie, any physical process that transforms an input into an output “computes” something) or circular (ie, a computer is a device that computes, and computation is what a computer does).

The Court seemingly conflates “computation” with “calculation”, as evidenced by its use of both terms (paras 76 and 79). In computer science, “computation” is the broader and more theoretically loaded term. It encompasses any process that can be modelled by a formal system, including symbolic manipulation, logical inference, search, and pattern matching. As a matter of principle, none of these formal systems need involve numerical calculation in the arithmetic sense. In contrast, “calculation” is narrower: it connotes numerical or mathematical operations. In this sense, every calculation is a computation, but not every computation is a calculation. If the Court means these terms as synonyms, then it is using “computation” loosely at best. If it means them as distinct concepts, then para 79 potentially narrows the definition it appeared to establish in para 76. Furthermore, para 79 adds a further qualifier that is entirely absent from para 76, namely, the ability to produce results “that far exceed the capabilities of any human brain”. This is not a definitional criterion but a functional characterisation that speaks to scale and speed rather than to the nature of the device. For instance, a pocket calculator performs calculations that a human brain can also perform; it simply does so more quickly. Does that make it a computer? Under para 76’s formulation (ie, “performs computations”), the answer would possibly be yes. Under para 79’s formulation (ie, “far exceeds human capabilities”), it would arguably be no. The Court almost certainly did not intend to introduce a threshold of computational power into the definition, but the language is there nonetheless, even if qualified by the adverb “much”.

This ambiguity matters because the Court uses these definitions to justify classifying a trained ANN as a “program for a computer”. If “computation” is the operative concept, then the question is whether the operations an ANN performs (ie, weighted summation, activation function application, feed forward propagation) qualify as computations in a formal sense. In reality, they do, but so does virtually any mathematical transformation. This brings us back to the over-breadth problem. If “calculation” is the operative concept, then the Court is implicitly grounding the definition in numerical and/or arithmetic operations, which fits the ANN case well but may struggle to encompass future non-numerical computing paradigms (eg, symbolic AI, neuromorphic computing at the hardware level, quantum computation on non-numeric state spaces).
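To make concrete what these operations amount to, here is a minimal, purely illustrative sketch (not drawn from the judgment or from Emotional Perception’s actual system) of a feed-forward pass through a tiny fully connected network. Weighted summation, bias addition, and activation function application are the entire inference-time repertoire: each is a computation in the formal sense, but each is also an ordinary arithmetic calculation.

```python
import math

def forward(x, weights, biases):
    """One feed-forward pass through a tiny fully connected network.

    Each layer performs exactly the operations discussed in the text:
    weighted summation, bias addition, and activation (here, tanh).
    """
    activations = x
    for W, b in zip(weights, biases):
        activations = [
            math.tanh(sum(w_ij * a for w_ij, a in zip(row, activations)) + b_j)
            for row, b_j in zip(W, b)
        ]
    return activations

# A hypothetical 2-2-1 network: two inputs, two hidden neurons, one output.
weights = [
    [[0.5, -0.3], [0.8, 0.2]],   # input -> hidden
    [[1.0, -1.0]],               # hidden -> output
]
biases = [[0.1, -0.1], [0.0]]

print(forward([1.0, 2.0], weights, biases))
```

Nothing in this sketch is exotic from a computational standpoint: the same multiply-add-and-squash pattern describes countless numerical routines that nobody would hesitate to call calculations.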

The Court was right to refuse to tie the concept of “computer” to conventional digital architectures with a CPU (para 77), noting that analogue and quantum computers must also fall within Article 52(2)(c) EPC. However, the resulting breadth of the term demands precisely the kind of formal boundary that the judgment does not supply. For example, the Court discusses field programmable gate arrays (FPGAs) at paras 82 and 86 as one of several hardware platforms on which an ANN may be implemented, and at para 91 it asserts that even where the weights and biases of a trained ANN are “baked into the hardware”, the ANN remains a computer program. This reasoning, however, elides a distinction that electronics engineers would regard as fundamental. An FPGA is always configured by a bitstream, ie, a binary file that physically reconfigures the silicon by setting lookup tables, routing connections, and switch matrices to create an actual circuit. After loading, the FPGA is the circuit: there is no instruction fetch-decode-execute cycle, and the bitstream ceases to exist as a separate entity. The industry increasingly calls this artifact “gateware” as a way of distinguishing it from both software and hardware. Aerospace regulators draw the same line: RTCA DO-254 governs FPGAs as electronic hardware, while the separate standard DO-178C governs airborne software. If a trained ANN’s parameters were synthesised into an FPGA’s gate configuration, it would be difficult to maintain that what results is a “program” in any technically meaningful sense, simply because it is a circuit.

The problem sharpens further with application specific integrated circuits (ASICs). If the Court is correct that weights and biases “baked into hardware” still constitute a program (para 91), then the same must logically apply to an ASIC manufactured with those parameters permanently etched into its transistor layout. However, by definition, an ASIC is neither programmable nor reconfigurable. It is a fixed physical object. At that point, the Court’s reasoning would appear to entail that the circuit topology of a chip is itself a “program for a computer”. That conclusion sits uneasily with the Court’s own insistence, at para 76, that basic ovens and smoke detectors are not computers. Those devices are overwhelmingly built on simple microcontrollers and cheap dedicated ASICs. No basic household appliance uses an FPGA. If an ASIC embodying “frozen” ANN parameters is a computer running a program, it becomes genuinely difficult to explain why the ASIC inside a smoke detector, which similarly embodies fixed computational logic determining its sensor response, is not.

Admittedly, these edge cases may be largely moot in practice. Under the “any hardware” threshold now adopted, any physical embodiment of an ANN would satisfy the invention requirement regardless of how it is classified. The conceptual tension is nonetheless instructive, as it reveals that the Court’s definition of “program” has been stretched beyond what the underlying technical distinctions can support, and that the real work of determining patentability has been deferred entirely to the intermediate step, where no guidance exists as yet. The fundamental question remains: at what point does an electronic circuit stop being a computer?

Neural networks as “programs”: collapsing the data/instructions distinction

The most consequential, but also most technically vulnerable, aspect of the judgment is the holding that an ANN, viewed as a whole, constitutes “a program for a computer” (para 87). The Court’s reasoning proceeds as follows: an ANN is an abstract entity, not a physical object (para 81); when implemented on hardware, it causes the machine to process data in a particular way; therefore, it is “in essence, a set of instructions to manipulate data in a particular way so as to produce a desired result” (para 87).

This formulation effectively collapses the distinction between instructions and data. This distinction is foundational in computer science. In a conventional computing architecture, a program is a sequence of instructions that the processor fetches, decodes, and executes. Data, by contrast, are the operands upon which those instructions act. It is true that the theoretical stored-program computer model, known as the von Neumann architecture, treats programs and data as equivalent in the sense that both reside in the same addressable memory, and indeed this equivalence is what makes self-modifying code and compilation possible. However, that architectural equivalence does not erase the functional distinction between the two. At any given moment of execution, the processor must still differentiate between the instruction it is executing and the data it is operating upon. Thus, in the context of ANNs, the weights and biases are, in any standard computational understanding, data, ie, numerical values that parameterise a mathematical function. They do not instruct the processor to do anything; they are consumed by instructions that perform operations such as multiplication, addition, and activation function evaluation.
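The stored-program equivalence, and the functional distinction it does not erase, can both be seen in a toy interpreter. This is an illustrative sketch of a hypothetical machine, not any real instruction set: instructions and data share one memory, yet at every step the machine treats the word at the program counter as an instruction and the words that instruction names as operands.

```python
def run(memory, pc=0):
    """A toy stored-program machine: instructions and data share one memory.

    The machine still distinguishes them functionally at each step: the word
    fetched at `pc` is decoded as an instruction; the cells it names are
    treated as operands (data).
    """
    acc = 0
    while True:
        op, arg = memory[pc]
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory
        pc += 1

# Cells 0-3 hold instructions; cells 4-6 hold data.
memory = {
    0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", 0),
    4: 2, 5: 3, 6: 0,
}
print(run(memory)[6])  # -> 5
```

The numbers in cells 4 to 6 never instruct the machine to do anything; they are consumed by the instructions in cells 0 to 3. That is precisely the role weights and biases play in a deployed ANN.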

In fairness, the critique that the Court collapses the distinction between instructions and data, while technically sound, calls for further refinement. The distinction is not purely binary. In functional terms, not all data is created equal: the configuration data of a trained ANN, for example, is not comparable to a rasterised bitmap file.

Consider what a set of trained weights and biases actually does within a deployed ANN. Unlike a JPEG image, whose pixel values are semantically meaningful to a human viewer but functionally inert with respect to the image decoder processing it, ANN weights determine the function that the system computes on all future inputs. Change the weights and the mapping from input space to output space changes. This is not merely a modification of the content of the output, but of the computational relationship between input and output. In contrast, the image decoder applies identical operations regardless of whether the JPEG contains a photograph of a cat or a landscape. In this sense, ANN weights exhibit a degree of functional potency that ordinary data does not. They parameterise, and as such constitute, the specific function the system implements. Without weights, the ANN architecture is a universal function approximator that approximates nothing. The particular set of weights turns it into a particular system (see para 88, where the Court observes that a network with one arrangement of weights will cause the machine to process data in one way, and a network with different weights in a different way).

This line of argument suggests that the relationship between “program” and “data” is perhaps better understood as a spectrum of functional potential than as a binary opposition. At one end sits purely cognitive data, ie, text, images, audio: its content is relevant only to human users, and its processing is invariant to that content. At the other end of the spectrum lies a conventional program, understood as a sequence of instructions that directly and completely specifies the system’s operations. ANN weights occupy a position much closer to the program end of this spectrum, though without crossing it. They do not instruct a processor in the way machine code does: they are not fetched, decoded, and executed. However, they are also not merely operated upon. In functional terms, they are what makes the system this particular system rather than some other.
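The point about functional potency can be illustrated with a deliberately trivial sketch (hypothetical code, not taken from the judgment): the “instructions” below never change, yet the parameters select which function the system computes on every future input, in a way that the content of a JPEG never selects the operations of its decoder.

```python
def apply_model(params, x):
    """Fixed 'instructions': a single affine neuron computing w*x + b.

    The code is identical for every parameter set; the parameters (data)
    nonetheless determine the function computed on all future inputs.
    """
    w, b = params
    return w * x + b

doubler = (2.0, 0.0)   # selects the function f(x) = 2x
shifter = (1.0, 5.0)   # selects the function f(x) = x + 5

print(apply_model(doubler, 3.0))  # -> 6.0
print(apply_model(shifter, 3.0))  # -> 8.0
```

Swapping `doubler` for `shifter` changes not the content of one output but the input-output relationship itself, which is exactly the sense in which ANN parameters are more than ordinary operands yet less than instructions.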

It is worth noting that the EPO’s existing doctrinal apparatus already captures this nuance with considerably more precision than the Court’s binary reclassification. The Guidelines for Examination in the EPO (Part G.II, section 3.6.3) draw a distinction between functional data and cognitive data. Functional data is defined as data whose structure or format has a technical function within a technical system. This is specifically the case where it controls the operation of the device processing it and inherently comprises, or maps to, corresponding technical features of that device (following T 1194/97). By contrast, cognitive data does not contribute to producing a technical effect, as its content and meaning are relevant only to human users. Under G1/19, the potential technical effect of functional data related to an implied technical use is to be taken into account when assessing inventive step.

ANN weights could be seen as a paradigmatic case of functional data under this framework. They have an intended technical use (ie, parameterising the forward pass of a neural network), they cause a technical effect when used according to that intended use (determining the input-output mapping of the system), and they inherently map to corresponding technical features of the device (ie, the weighted connections between neurons in the network architecture). In contrast, to go back to the distinction, a JPEG file would be simply cognitive data as its content is relevant to the human viewer, and the processing pipeline treats all compliant files identically regardless of their semantic content. The weights of a trained ANN could thus be argued to contribute to the technical character of the invention not because they are a “program”, but because they are functional data in the sense recognised by the EPO’s case law.

This matters because it shows that the EPO’s graduated framework, under which functional data contributes to technical character and is assessed at the intermediate step endorsed in G1/19, would have given the Court a more precise instrument than the blunt reclassification of all ANN components as a “program for a computer”. While the Court reached the right practical outcome of the invention clearing the Article 52 hurdle, it arrived at this conclusion by a conceptual route that distorts the meaning of “program” and creates the downstream difficulties identified above. The functional data spectrum embedded in EPO doctrine would likely have achieved the same result without that distortion. Admittedly, this crosses into the territory of the intermediate step in the two-hurdle approach. None of this implies that the Court would not have applied the G1/19 distinction between functional and cognitive data at the intermediate step had it been asked to provide clearer guidance on that step; that is a task the Court acknowledged but declined to carry out (paras 112–118).

The Chartered Institute of Patent Attorneys and the IP Federation, as interveners, also argued that weights are data used in the operation of the ANN, not instructions telling a computing device what to do (para 75). The Court acknowledged the submission but did not engage with it substantively. Instead, it reasoned that since the weights, together with the network’s topology and activation functions, cause the machine to process data in a particular way, they qualify as instructions (para 88). However, by that logic, any dataset that influences the output of a computational process, such as training data, and even configuration files or lookup tables, would also qualify as “instructions”. The judgment essentially stretches the concept of a “program” to the point where it loses its discriminating power.

The Court went further than the Court of Appeal, which had identified only the weights and biases as the program. Lords Briggs and Leggatt held that the entire ANN, including topology, activation functions, and parameters, collectively constitutes the program (para 88). This is an astounding claim. A mathematical function composed with a network architecture would not ordinarily be understood as a “program” in any field of computing. It is, rather, a computational model, ie, a structured representation of a relationship between inputs and outputs. The Court itself characterised the ANN as “an abstract model which takes a numerical input, applies a series of mathematical operations … and outputs a numerical result at successive layers” (para 81, quoting the Hearing Officer). The leap from “abstract mathematical model” to “program for a computer” is more asserted than argued, and it raises a fundamental question about the relationship between computer programs and mathematics, a relationship partly formalised by the Curry-Howard correspondence between proofs and programs.

Machine learning and the “computer-generated programs” analogy

The Court dismissed the relevance of machine learning to the classification question by analogising trained ANN parameters to compiler-generated machine code (para 93). The argument goes as follows: just as a compiler generates the machine code that a CPU executes, so too the training process generates the parameters that define the ANN’s behaviour. The Court concluded that there is “no justification for drawing a distinction in law between instructions created by a computer and those created by a human being” (para 93).

This analogy is superficially appealing but technically misleading. A compiler performs a deterministic, rule-governed transformation from a high-level language to machine code. The relationship between source code and compiled output is well-defined and largely reproducible. Machine learning, by contrast, is a stochastic optimisation process. The resulting parameters are not a “translation” of anything; they are the product of iterative numerical adjustment driven by data. The epistemic status of the two outputs is fundamentally different. While compiled code is derived from human-authored instructions, trained weights are discovered through statistical optimisation. The Court’s analogy obscures this difference and, in doing so, makes the concept of a “program” even more indeterminate.
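The contrast can be made concrete with a toy example (illustrative only; the `train` function, its learning rate, and the data are invented for the sketch): fitting a one-weight model by stochastic gradient descent on noisy data. Run twice with different seeds, the process visits examples in a different order and lands on different final parameters, whereas a compiler given the same source twice emits the same output.

```python
import random

def train(data, seed, steps=1000, lr=0.05):
    """Fit y = w*x by stochastic gradient descent from a random start.

    Unlike compilation, the outcome depends on the random seed and the
    order in which examples are visited: the parameter is discovered
    through iterative numerical adjustment, not translated from a source.
    """
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)
    for _ in range(steps):
        x, y = rng.choice(data)
        grad = 2 * (w * x - y) * x   # derivative of the squared error
        w -= lr * grad
    return w

# Noisy observations of roughly y = 2x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
print(train(data, seed=0))
print(train(data, seed=1))
```

Both runs settle near the same region, but the exact parameters differ with the seed; a compiler, by contrast, is a deterministic function of its input.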

Policy implications: what has actually been decided?

Despite the breadth of the technical holdings, the practical outcome is narrower than it might appear. The Court allowed the appeal, set aside the Hearing Officer’s refusal, and remitted the case for assessment of novelty and inventive step. Importantly, the Court declined to carry out the “intermediate step” required by G1/19, ie, the identification of the features that contribute to the technical character of the invention, on the grounds that it had received no submissions on the point (paras 112–118).

This leaves the UKIPO and lower courts with a clear doctrinal framework comprising the Duns principles minus principle G, with the intermediate step, but virtually no guidance on how to apply it to ANN-based inventions. The question of which features of an ANN-based invention contribute to its technical character, which is the question that will determine patentability in practice, remains open.

A separate but related question is whether the term “computer program” in Article 52(2)(c) EPC should be interpreted by reference to the technology of today or the technology that existed when the statutory language was drafted. The EPC was signed in 1973 and the relevant exclusions were carried into the UK Patents Act 1977 largely unchanged. At that time, a “computer program” had a tolerably clear referent, ie, a sequence of coded instructions for a digital computer, typically written in FORTRAN, COBOL, or assembly language and executed on mainframe hardware. ANNs, while theoretically conceived in the 1950s, were not a practical technology and would not have featured in any draftsperson’s contemplation. The Supreme Court in Emotional Perception rightly resists tying the EPC’s language to a particular technology (para 77), and there is much to be said for a dynamic interpretation that accommodates technological change. After all, the EPC is a living instrument governing a field defined by innovation. A dynamic interpretation cuts both ways though. If “computer program” is to be read broadly enough to encompass ANNs, one must ask whether the draftspersons, had they been confronted with the technology we see today, would have been content to use that term at all. It is at least plausible that they would have chosen language more clearly inclusive (or exclusive) of ANNs. That the question is not susceptible to an obvious answer suggests that the term is being made to bear a weight that its drafters did not anticipate and that its ordinary meaning cannot comfortably support. This is by no means an argument for originalism in treaty interpretation: the Vienna Convention’s emphasis on ordinary meaning, context, and object and purpose (Article 31 VCLT) resists any such rigidity. It is, however, a reason to be cautious about extending a mid-twentieth-century exclusion to a twenty-first-century technology by definitional fiat rather than by considered legislative or treaty amendment.

From a policy perspective, the judgment creates a curious asymmetry. On one hand, the Court has adopted the broadest possible classification of ANNs; they are programs, the machines that run them are computers, and the “any hardware” threshold means the Article 52(2) exclusion is trivial to overcome. On the other hand, the real gatekeeping performed by the patentable subject matter doctrine is pushed to the intermediate step and the assessment of inventive step. For patent applicants and examiners alike, the immediate effect may be uncertainty rather than clarity.

The Supreme Court’s willingness to overrule Aerotel and align with EPO jurisprudence is doctrinally sound and long overdue. But its technical reasoning on ANNs as programs rests on definitions broader than computer science would support, and on analogies that do not withstand scientific scrutiny. This is regrettable given patent doctrine’s strong commitment to scientific realism. As AI-related patent applications continue to grow, these definitions will bear increasing weight. Whether they can sustain it remains to be seen.
