Can copyright law benefit from the marking requirement of the AI Act?
September 29, 2025
Using generative AI does not necessarily render output non-copyrightable. What matters is whether the human creative contribution to an AI-based outcome is sufficient. The question has been widely discussed in academia (to mention only a couple of contributions: here and here), and courts are now taking up the task of clarifying where to draw the line between copyrightable and non-copyrightable AI-based output (see, for example, here and here). Yet, even once this line is established in theory, the issue may not be fully resolved in practice. The risk of failing to meet the copyright eligibility requirements, combined with an anti-AI or pro-human bias, creates fertile ground for the temptation to conceal the AI origin of content and falsely claim copyright protection (copyfraud). In turn, without information about its origin and creation process, it can be difficult – or even impossible – to determine whether AI-based output is copyrightable. Among the various approaches to addressing these challenges, one is to subject non-copyrightable AI-based output to a transparency obligation through (machine-readable) marking or (human-facing) labelling (see, for instance, here and here).
While these questions remain a topic of debate within copyright law, an answer may have already crystallised beyond its boundaries. In the EU, Art. 50(2) AI Act (Regulation 2024/1689) could be seen as having the potential to act as a ‘troubleshooter’. But is that really the case?
Marking requirement under Art. 50(2) AI Act
Article 50(2) AI Act requires providers of AI systems generating synthetic audio, image, video or text content to ensure that AI outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. The technical solutions suggested by Recital 133 AI Act to meet the transparency obligation under Art. 50(2) AI Act include watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity of content, logging methods, and fingerprints, while also allowing for other techniques, as may be appropriate. Importantly, Art. 50(2) AI Act prescribes that the technical solutions be effective, interoperable, robust and reliable, with due regard to technical feasibility. However, this transparency obligation does not apply, among other cases, when AI systems perform an assistive function for standard editing (standard editing exception) or do not substantially alter the input data provided by the deployer or the semantics thereof (insubstantial alteration exception). The lack of clarity regarding the meaning of these concepts and their corresponding thresholds has already led to practical recommendations that all outputs be marked (see, for example, here on p. 18).
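To make the idea of a machine-readable, detectable marker more concrete, the sketch below shows one possible metadata-based approach: a record declaring the content as AI-generated, bound to the exact content bytes by a cryptographic hash. This is a purely illustrative toy in Python, not the C2PA standard or any technique mandated by the AI Act; the field names and the `example-model-v1` identifier are hypothetical.

```python
import hashlib
import json


def mark_output(content: bytes, generator: str) -> dict:
    """Build a machine-readable provenance marker for AI-generated content.

    The marker declares the content as AI-generated, names the generating
    system, and includes a SHA-256 digest that ties the marker to the
    exact bytes it describes (illustrative scheme, not a standard).
    """
    return {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }


def is_marked_ai_generated(content: bytes, marker: dict) -> bool:
    """Detect whether a marker declares this exact content as AI-generated."""
    return (
        marker.get("ai_generated") is True
        and marker.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )


text = b"A synthetic paragraph produced by a text-generation model."
marker = mark_output(text, generator="example-model-v1")
print(json.dumps(marker, indent=2))
print(is_marked_ai_generated(text, marker))            # True
print(is_marked_ai_generated(b"edited text", marker))  # False: bytes changed
```

Note how fragile such a scheme is on its own: any edit to the content breaks the hash link, which hints at why the provision demands solutions that are robust and reliable in addition to merely machine-readable.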
Why would Art. 50(2) AI Act hold any promise for copyright law?
One may believe that the marking requirement will help to identify non-copyrightable AI-based content. Concealing the AI origin to obtain copyright protection would thus become impossible, unless Art. 50(2) AI Act is blatantly infringed. And indeed, there might be intriguing parallels. Neither the verdict on non-copyrightability nor the marking requirement applies to every AI-based outcome. The thresholds for triggering each appear to echo one another. The fact that the marking requirement kicks in when an AI system performs more than an assistive function for standard editing or substantially alters the input data or its semantics seems to parallel, in copyright terms, the point at which human creative contribution fades and becomes insufficient to secure copyright protection. This speculative connection invites a closer look at whether the marking requirement might support copyright law in navigating the challenges around the delineation of copyrightable and non-copyrightable AI-based output and around copyfraud.
Policy objectives make the difference
At present, copyright law is unlikely to find much help in Art. 50(2) AI Act. The two rest on different policy objectives. In turn, different policy objectives lead to different designs of provisions, as well as different assessments and determinations (see also here). As Recital 133 AI Act makes clear, the marking requirement was designed to address new risks of misinformation and manipulation at scale, fraud, impersonation and consumer deception. Copyright law, by contrast, is not concerned with these threats. In fact, it centres on human creativity, with fictional settings common to its subject matter.
To illustrate how this difference could play out in practice, let’s turn to a couple of examples. With an AI-based image-editing tool, one might alter some details, such as turning day into night. For copyright purposes, such editing alone would be unlikely to pose a problem or to render the final output non-copyrightable. But given its impact on the content’s veracity, it may go beyond the standard editing exception and trigger the marking requirement. At the same time, a faithful AI translation may fall under the insubstantial alteration exception and avoid the marking requirement, yet it may lack sufficient human creative contribution to satisfy the copyright eligibility criteria.
True, there may be instances where content marked under Art. 50(2) AI Act will also turn out to be non-copyrightable. However, the examples show that the results of the copyrightability assessment and the evaluation under Art. 50(2) AI Act may diverge. In practice, this means that marking may raise doubts about an output’s copyrightability and help deter copyfraud, but it could equally prejudice copyrightable content. Likewise, copyfraud remains possible for unmarked content. For this reason, unless Art. 50(2) AI Act is applied with an eye to the nuances of the copyrightability assessment, it should not be treated as a ready-made solution for resolving copyright-related issues. Instead, a thorough assessment remains necessary.
It ain’t over till it’s over
The marking requirement under Art. 50(2) AI Act, even when considered alongside other recitals and provisions, leaves significant uncertainty. Part of this uncertainty stems from technical challenges – specifically, doubts over whether the appropriate technical solutions currently exist. Another part arises from legal challenges – the intricacies involved in interpreting its boundaries and concepts. Over time, it will become clearer how the gap between actual and assumed technical capabilities is addressed, and how interpretations of Art. 50(2) AI Act take shape, including whether they account for copyright nuances. The implementation of Art. 50(2) AI Act is expected to be bolstered by the development of technical standards, codes of practice, implementing acts, and guidelines, with some consultations already underway.
Conclusion
Given the policy objectives behind the marking requirement under Art. 50(2) AI Act, it is unlikely that copyright law can benefit from it in resolving its struggles. The differences in policy objectives may result in divergent assessments and determinations regarding the necessity of marking content. The fact that content falls outside or within the transparency obligation of Art. 50(2) AI Act should not be conflated with its copyrightability or non-copyrightability, nor should it replace the proper assessment of copyright protection eligibility. Ultimately, this also indicates that Art. 50(2) AI Act is ill-suited to tackle copyfraud. This conclusion will hold unless Art. 50(2) AI Act factors in copyright nuances. Whether that happens remains to be seen.
This contribution draws on the author’s open-access article, ‘Can Copyright Law Benefit from the Marking Requirement of the AI Act?’, available at: https://doi.org/10.1007/s40319-025-01624-2.