European Parliament Study Recommends Statutory Licensing as the Optimal Copyright Framework for AI Training
March 19, 2026
The rapid rise of generative artificial intelligence (AI) has reignited long‑standing debates about how copyright law should balance incentives for creation with the societal benefits of technological innovation. A new in‑depth analysis, commissioned by the European Parliament’s Policy Department for Justice, Civil Liberties and Institutional Affairs at the request of the Committee on Legal Affairs, examines how copyright policy should respond to AI. By combining historical lessons from digital markets, insights into the economic value of data, and a formal model of welfare effects, it offers one of the most comprehensive economic assessments to date, and its conclusions are likely to shape the next phase of EU policy discussions.
Authored by Professor Christian Peukert (HEC Lausanne), “The Economics of Copyright and AI” evaluates how different copyright frameworks affect creators, AI developers, and, crucially, consumers. Drawing on empirical evidence, historical lessons from digital markets, and a calibrated welfare model, the study identifies a clear front‑runner among the policy options currently under debate: statutory licensing. The full text of the study is available here.
Copyright’s Core Purpose: Incentivising Future Creation
A central premise of the report is that copyright exists not to insure creators against revenue loss from existing works, but to incentivise the creation of new works. This distinction becomes especially important in the context of AI training, where the value of models depends heavily on access to fresh, high‑quality data.
The empirical evidence reviewed for the study suggests that when creators discover their works have been used for AI training without compensation, they tend to reduce output. If left unaddressed, this dynamic risks degrading the quality and representativeness of future training data, ultimately harming the performance of AI systems themselves.
At the same time, the study highlights that the vast majority of AI’s economic value accrues to users, not AI firms. Estimates place annual consumer surplus in the United States at roughly ten times industry revenues. This makes consumer welfare, and not just the interests of creators or AI developers, the appropriate benchmark for evaluating copyright policy.
Four Policy Options, One Clear Winner
Bearing in mind the ultimate goal of making creators better off while minimising the costs to end-users and AI developers, the study evaluates four realistic policy frameworks:
1. Copyright Exception
A broad exception to the rights of reproduction and extraction maximises access to existing works and removes the need to identify rightsholders, thus eliminating the problem of orphan works. However, because this approach provides no financial return for future creation, the flow of new works may decline over time, ultimately diminishing the quality of the data available for AI training.
This policy option aligns with the principles of Article 3 of the Directive on Copyright in the Digital Single Market (“CDSMD”) but extends its scope to all AI developers, regardless of whether they are scientific, non-commercial, or commercial.
2. Exception with “Opt‑Out” [Worst Option]
Adding an opt‑out mechanism further shrinks the accessible stock of works while still offering no remuneration. Consequently, copyright exceptions without remuneration but with opt-out mechanisms represent the least desirable policy option, as opt-outs restrict access to existing data, while the absence of remuneration provides no funding to support a continued supply of data. This policy option is similar in spirit to Article 4 CDSMD.
3. Licensing Market (“Opt‑In”)
A licensing market based on an “opt-in” mechanism preserves rightsholder choice and can provide access to both the existing stock of data and the ongoing flow of new data, since compensation sustains incentives for continued creation. However, this approach suffers from high transaction costs, as bargaining with large numbers of individual rightsholders is inefficient. These costs result in fragmented coverage of both data stock and data flow, leading to selective and less representative datasets, particularly where licensing is limited to a small number of large rightsholders in order to reduce overall transaction costs.
In theory, then, voluntary licensing could sustain incentives. In practice, transaction costs, fragmented coverage, and selective deals lead to biased and incomplete datasets, undermining both AI performance and overall welfare.
4. Statutory Licensing [Recommended Option]
Across nearly all model calibrations, statutory licensing emerges as the most robust and welfare‑enhancing framework. A statutory licence with a centrally set royalty rate ensures broad access to works, with regulator-determined royalties balancing the interests of rightsholders, AI developers, and users. It avoids the need to identify individual rightsholders, thereby eliminating the orphan works problem, and it generates a steady royalty stream that restores incentives for creation and sustains the ongoing flow of new data, all while keeping transaction costs low.
The study emphasises that statutory licensing is not a silver bullet. The main drawbacks are the need for an independent authority to set the royalty rate and to enforce competition in the AI market, and the requirement that royalty administration remain lean and efficient so as not to undermine the system’s effectiveness.
To stimulate additional research and development, the study proposes a narrow exception for non-commercial research that connects later commercial use to creator remuneration. These measures help ensure that incentives are aligned across creators, AI developers, and consumers.
A Geopolitical Dimension
With most frontier AI development occurring outside of Europe, the report also highlights a strategic consideration: statutory licensing helps ensure that European works remain represented in global training datasets. This supports linguistic and cultural diversity and strengthens the EU’s digital sovereignty by ensuring that AI systems remain valuable to European users.
Policy Recommendations
The study concludes with four concrete recommendations for EU policymakers:
Adopt statutory licensing as the primary framework for AI training, with the royalty rate set at a low level and reviewed periodically.
Ensure lean, transparent administration, minimising overhead and approaching reliance on collective management organisations (CMOs) with caution.
Integrate copyright policy with competition and science policy, including the Digital Markets Act (DMA) and the Digital Services Act (DSA), with possible carve-outs for scientific research.
Close evidence gaps through systematic disclosure obligations.
The study focuses primarily on a welfare-based economic evaluation of copyright policy and does not make the EU AI Act a central focus. Its key findings, however, do call for greater transparency for both AI firms and creators, to align regulation with technological and market realities. This aligns with the EU AI Act’s transparency obligations for providers of general-purpose AI models.
Above all, the study argues that copyright policy for AI should be evaluated by its impact on total welfare and by its ability to sustain the ongoing flow of high-quality creative data that underpins long-term AI value. The central insight is that dynamic incentives matter more than compensating the fixed stock of existing works. As discussed above, the study proposes a statutory licensing regime with a modest, regularly reviewed royalty as the most effective approach.
Conclusion
In conclusion, it is worth highlighting that the study's recommendations are primarily economic: they largely bypass legal constraints within the existing EU copyright framework and do not address the legal feasibility of, or offer concrete legal solutions for, amending current rules. This may be because the study is intended as complementary to the European Parliament study on “Generative AI and Copyright: Training, Creation, Regulation”, which focuses on the legal aspects. The recommendations of the two studies largely align, save for their respective emphases on legal coherence versus welfare optimisation.
The study concludes that the copyright-AI debate does not have to pit creators against technology companies: policymakers should prioritise safeguarding the substantial consumer benefits generated by AI while maintaining the ongoing supply of fresh, high‑quality data that AI systems rely on. To that end, statutory licensing offers the most effective path forward by ensuring broad access, preserving incentives, and keeping costs low.
As the EU continues to refine its approach to AI governance, this study provides a timely and rigorous economic foundation for policymaking. Its message is unambiguous: to sustain both cultural production and the societal benefits of AI, Europe should move toward a statutory licensing regime that secures broad access to works while preserving incentives for future creation.
If adopted, such a framework could offer a stable, predictable, and welfare‑maximising path forward, one that balances the interests of creators, AI developers, and the public at large.
Photo by Steve Johnson on Unsplash