AI, Copyright, and the Future of Creativity: Notes from the Panama International Book Fair


In August, I spoke at the Panama International Book Fair, in an event co-hosted by the World Intellectual Property Organization (WIPO), the Panama Copyright Office, the Ministry of Culture, and the Panama Publishers Association. My presentation examined the increasingly complex interface between copyright law and artificial intelligence (AI), a subject now central to global legal and policy debates. This post distills the key arguments I presented, with reference to current litigation, academic perspectives, and the U.S. Copyright Office's May 2025 report on generative AI.

How should copyright law respond to the widespread use of protected works in the training of generative AI models or systems? The analysis suggests that the debate is converging on several key areas: the limits of fair use and exceptions, the need for enforceable remuneration rights, and the role of licensing and regulatory oversight. This post begins with an overview of the legal and technological context surrounding AI training. It then reviews academic proposals for recalibrating copyright frameworks, examines recent court decisions that test the boundaries of current doctrine, and summarizes the U.S. Copyright Office's 2025 report as an institutional response. It concludes by outlining four policy considerations for future regulation.

 

A Shifting Legal and Technological Landscape

The integration of generative AI into creative and informational ecosystems has exposed foundational tensions in copyright law. Current systems routinely ingest large volumes of copyrighted works, such as books, music, images, and journalism, to train AI models. This practice has given rise to unresolved legal questions: Can copyright law meaningfully regulate the use of training data? Do existing doctrines and legal provisions, such as fair use or exceptions and limitations, extend to these practices? What remedies, if any, are available to rightsholders whose works are used without consent?

These questions remain open across jurisdictions. While some courts and regulatory agencies have begun to respond, a substantial part of the debate is now being shaped by legal scholarship and litigation, each proposing frameworks to reconcile AI development with copyright's normative commitments. The following sections examine this evolving landscape, beginning with recent academic proposals.

 

Academic Perspectives: Towards a New Equilibrium

A review of the literature reveals several clear themes.

First, some authors agree that remuneration rights for authors must be strengthened. Geiger, Scalzini, and Bossi argue that to truly ensure fair compensation for creators in the digital age, especially in light of generative AI, EU copyright law must move beyond weak contractual protections and instead implement strong, unwaivable remuneration rights that guarantee direct and equitable revenue flows to authors and performers as a matter of fundamental rights.

Martin Senftleben has proposed that policymakers in other regions should evaluate implementing an alternative output-based remuneration system for generative AI, compelling providers to pay equitable compensation when their system's artistic or literary output potentially substitutes for human creations, rather than adopting the complex input-based copyright requirements of the EU AI Act. And João Pedro Quintais has similarly emphasized the need for structural reforms to copyright's remuneration system in response to generative AI, proposing stronger collective mechanisms and statutory rights to ensure authors benefit from the exploitation of their works in AI training and outputs.

Second, some scholars highlight that the technical opacity of generative AI demands new approaches to author remuneration. Cooper argues that as AI systems evolve, it will become nearly impossible to determine whether a work was AI-generated or whether a particular copyrighted work was used in training. He warns that this loss of traceability renders attribution-based compensation models unworkable. Instead, he calls for alternative frameworks to ensure creators are fairly compensated in an age of algorithmic authorship.

Third, scholars like Pasquale and Sun argue that policymakers should adopt a dual system of consent and compensation, giving creators the right to opt out of AI training and establishing a levy on AI providers to ensure fair payment to those whose works are used without a license. Gervais, meanwhile, argues that creators should be granted a new, assignable right of remuneration for the commercial use of generative AI systems trained on their copyrighted works, complementing, but not replacing, existing rights related to reproduction and adaptation.

There is also a growing consensus on the need to modernize limitations and exceptions (L&Es), particularly for education and research. Flynn et al. show that a majority of the countries in the world do not have exceptions that enable modern research and teaching, such as academic uses of online teaching platforms. And in Science, several authors propose harmonizing international and domestic copyright exceptions to explicitly authorize text and data mining (TDM) for research, enabling lawful, cross-border access to copyrighted materials without requiring prior licensing. My own work has emphasized the urgency of updating Latin American educational exceptions to account for digital and cross-border uses.

At WIPO, the Standing Committee on Copyright and Related Rights (SCCR) has been taking steps in this area by approving a work program on L&Es, which is under discussion ahead of the upcoming SCCR 47 and includes the recently presented African Group proposal for an international instrument on L&Es (SCCR47/6). And in the Committee on Development and Intellectual Property (CDIP), a Pilot Project on TDM to Support Research and Innovation in Universities and Other Research-Oriented Institutions in Africa – Proposal by the African Group (CDIP/30/9 REV) has been approved.

Eleonora Rosati argues that unlicensed AI training falls outside existing EU and UK copyright exceptions, including Article 3 of the DSM Directive (TDM for scientific research), Article 4 (general TDM with opt-outs), and Article 5(3)(a) of the InfoSoc Directive (use for teaching or scientific research). She finds that exceptions for research, education, and fair use-style defenses do not apply to the full scope of AI training activities. As a result, she concludes that a licensing framework is legally necessary and ultimately unavoidable, even when training is carried out for non-commercial or educational purposes.

Finally, policy experts like James Love warn that "one-size-fits-all" regulation risks sidelining the medical and research breakthroughs promised by AI. The danger lies in treating all training data as equivalent, conflating pop songs with protein sequences, or movie scripts with clinical trial data. Legislation that imposes blanket consent or licensing obligations, without distinguishing between commercial entertainment and publicly funded scientific knowledge, risks chilling socially valuable uses of AI. Intellectual property law for AI must be smartly differentiated, not simplistically uniform.

 

U.S. Litigation as a Site of Doctrinal Testing

U.S. courts have become a key venue for testing the boundaries of copyright in the age of AI. In the past two years, a growing number of cases have explored whether existing doctrines and foundational concepts of copyright, such as fair use, reproduction, and originality, can meaningfully apply to machine learning systems. While judges assess these questions within the confines of current law, their rulings are increasingly informing the policy debate over whether statutory reform may be necessary.

In August 2025, the court in Perplexity AI v. News Corp denied Perplexity's motion to dismiss or transfer the case, rejecting the company's jurisdictional challenge by emphasizing its business presence and targeting of New York-based readers, and thereby confirmed the Southern District of New York as a proper forum. The ruling does not address the merits of the infringement claim that Perplexity allegedly scraped and repurposed News Corp's content via AI summarization without permission, but it establishes that the case will proceed in New York for now.

In June 2025, the court in Bartz v. Anthropic PBC found that training AI models on lawfully purchased and digitized books constituted fair use, an act it described as "spectacularly" or "quintessentially transformative," akin to a writer learning from the works of others, but held that copying and storing pirated books to build a central, permanent library was not fair use and must proceed to trial for damages. Anthropic recently agreed to pay $1.5 billion to settle this case.

In New York Times v. OpenAI & Microsoft, filed in December 2023, the plaintiffs allege that their articles were ingested without authorization to train large language models, a claim now supported by the court's refusal to dismiss key copyright infringement counts, including direct and contributory liability. The dispute includes claims that outputs from the AI models sometimes "regurgitate" or closely mimic Times content, including near-verbatim reproductions or synthetic summaries resembling paywalled material. A key issue in the case is how substitution and market harm should be assessed under fair use. The court has allowed the case to proceed into discovery and potential trial.

In the creative industries, AI-generated music is facing its own legal reckoning. In UMG v. Suno, filed in mid-2024, Universal Music Group alleges that the startup unlawfully used copyrighted sound recordings to train generative AI systems that produce new music tracks. The case raises critical questions about whether training on copyrighted recordings constitutes infringement, and whether outputs that mimic the style or sound of human artists can trigger liability. The outcome could establish major precedents for musical copyright in an AI context.

Earlier decisions have already begun to set legal limits. In Thomson Reuters v. Ross Intelligence, a February 2025 ruling rejected Ross's fair use defense after finding that its AI legal assistant copied Westlaw headnotes in a way substantially similar to Westlaw's own product, making the substitution risk more direct and obvious than in cases involving large, generalized training datasets.

Visual artists and photographers are also pursuing their claims. In parallel lawsuits (Andersen v. Stability AI and Getty v. Stability AI), courts are considering whether AI-generated images infringe the right to prepare derivative works and whether stripping metadata violates moral rights. On the literary front, Authors Guild v. OpenAI is still in the early stages, but could shape the compensation landscape for book authors whose works were used without consent in LLM training.

Finally, foundational principles are also being reaffirmed. In Thaler v. Perlmutter, the U.S. Court of Appeals for the District of Columbia Circuit in 2025 upheld the U.S. Copyright Office's decision that purely AI-generated works without human authorship cannot be copyrighted. This ruling reasserted the human-centered foundation of copyright law.

Together, these cases are forging the contours of copyright doctrine in real time. They expose the limitations of existing frameworks and the growing pressure on courts to reconcile new technologies with enduring legal principles.

 

Institutional Responses: The U.S. Copyright Office's 2025 Report

The most comprehensive institutional response to date comes from the U.S. Copyright Office's May 2025 pre-publication report on generative AI. Key findings include:

  • AI-generated works without human authorship are not copyrightable.
  • Training on protected materials may require licenses, unless covered by clearly defined exceptions.
  • New policy tools are under consideration, including remuneration systems, dataset registries, and transparency mandates.
  • The report draws a clear distinction between permissible and impermissible uses:
      ◦ Infringing: unauthorized reproduction, derivative-like outputs, removal of copyright management information.
      ◦ Permissible: fair use (where applicable), TDM exceptions, use of public domain or synthetic data.

At the same time, the Office underscores that uses for research, analysis, or non-substitutive educational functions are often "highly transformative" and therefore more likely to qualify as fair use. Training models for closed systems or research purposes is distinguished from training designed to output expressive works that compete with the originals.

 

Policy Directions: Building a Balanced Framework

To close the gap between technological reality and legal capacity, policymakers could explore the following topics more deeply:

1. remuneration rights for authors, and management of metadata;

2. exceptions for text and data mining, especially for research, education, and non-commercial innovation;

3. transparent licensing schemes, with disclosure of training data; and

4. tools that enable regulators, authors, and the public to understand how data is sourced, processed, and deployed in generative models.

These measures, while not exhaustive, could represent building blocks for a future copyright system.

The legal community now faces a pivotal challenge: how to adapt copyright frameworks to AI without undermining the principles that underpin creative economies and public access to knowledge. From academic proposals to courtroom debates, from fair use to human rights, this conversation is no longer theoretical. It is unfolding now in legislation, in litigation, and in international fora. The question is not whether copyright will change. It's how, and for whom.

 

Photo by Markus Winkler on Unsplash

 
