Report on a roundtable on the visual arts, generative AI, and copyright at the UCL Institute of Brand and Innovation Law

Photo by Luis Caroca on Unsplash

Discussions around generative artificial intelligence (‘genAI’) have become almost impossible to avoid (for better or worse). The topic dominates news headlines, academic journals, and court cases alike, with conversations seeming increasingly circular as they trace the same concerns about infringement, authorship, and originality without much apparent progress. Yet, for all the repetition, these questions remain both urgent and unresolved. The remarkable persistence of this topic speaks to its importance and the need to respond to these fundamental legal questions with thoughtful and considered answers.

To that end, and as part of its ongoing series of roundtable discussions on genAI and copyright law, on 21 October 2025 the UCL Institute of Brand and Innovation Law (IBIL) hosted a closed-door roundtable discussion on the intersection of these two areas with the visual arts.

 

The genAI and copyright series

 


 

Based at the UCL Faculty of Laws, IBIL is one of the few UK-based university research centres focused solely on intellectual property law. Established in 2007 and led by Prof Sir Robin Jacob, IBIL was created with the distinct objective of not only undertaking academic research, but also attending to the practical application of intellectual property law in a rounded and inclusive manner.

In seeking to contribute to the challenging policy debates in this field, IBIL has been hosting a series of roundtable discussions bringing together stakeholders and experts from various fields to exchange their knowledge and opinions. This blog’s previous report on our academic publishing roundtable can be read here, and since then further roundtables have been organised on visual outputs and on music in the context of genAI and copyright.

Following the success of these previous events, IBIL’s most recent roundtable brought together stakeholders from across academia, the visual arts, technology, legal practice, and rightsholder representative organisations to contribute to the discussion and share ideas. Like previous roundtables, the event operated under the Chatham House Rule, allowing participants to repeat the information shared at the event so long as the identities and affiliations of the speakers are not revealed. While resolving these issues within the space of a single roundtable would perhaps be overambitious, the event succeeded in continuing an honest exchange of information and opinions and in beginning to build some form of consensus as to a way forward.

 

The roundtable – genAI, copyright, and visual arts

 

The major copyright problems that straddle genAI and the visual arts are likely well known to many readers of this blog: first, training AI models requires vast amounts of data, including visual works, many of which will be protected by copyright; second, it is unclear to whom copyright authorship in an AI-generated work should be attributed; and third, AI models are capable of outputting visual material that may infringe copyright by reproducing a work in which copyright subsists, which may then be communicated to the public.

In light of the enormity of this subject, the roundtable focused on a few specific areas for discussion.

 

Computer-generated works

 

S.9(3) of the UK’s Copyright, Designs and Patents Act 1988 (‘CDPA 1988’) confers copyright protection on computer-generated works, providing that “the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken”. Attendees agreed that, though certainly ahead of its time, this provision is likely unfit for the age of genAI. As such, its value and potential repeal were the subject of a question in the UKIPO’s consultation on genAI and copyright, one of the most responded-to consultations in history.

At the roundtable, it was noted that the provision’s main historical use was in relation to newspaper financial market graphics and weather forecast maps, though some attendees noted that the print and broadcast journalism industries of other jurisdictions have fared perfectly well without such a provision.

On a more theoretical note, it was suggested that this provision unnaturally divides works into two categories, computer-generated and author-generated; this ignores the genuinely blurred boundary between the two, and divorces the idea of human authorship from the responsibility that, through copyright, attaches to an author when they disseminate their work into the world. Despite the more abstract nature of this point, it was also raised that this lack of clarity might cause problems for clients seeking legal advice, especially given that they will often find a simpler route to authorship elsewhere in the CDPA 1988.

The lack of clarity was noted as being compounded by the fact that “the person by whom the arrangements necessary for the creation of the work are undertaken” could, in theory, be anyone from the designer of the AI model’s architecture to the technicians working on training to the end user. Indeed, the degree of randomness programmed into an AI model when producing an output pushes these actors even further apart, worsening the problem of tracing authorship, as the toy sketch below illustrates.
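As a very rough illustration of that randomness point, the Python sketch below is an assumed, simplified stand-in rather than any real model’s API: a generative system that samples from a probability distribution can return different outputs for the same prompt from the same user, which is one reason why pinning the “necessary arrangements” on a single actor is difficult. The candidate outputs and weights here are invented purely for illustration.

```python
# Toy illustration only (assumed setup, not any real genAI model or API):
# generative models typically sample from a learned probability distribution
# over possible outputs, so identical prompts can yield different results.
import random

def toy_generate(prompt: str) -> str:
    # Stand-in for a model's learned distribution over possible images
    # responding to `prompt`; the candidates and weights are invented here.
    candidates = ["a watercolour seascape", "an abstract collage",
                  "a photorealistic portrait", "a pencil sketch"]
    weights = [0.4, 0.3, 0.2, 0.1]
    return random.choices(candidates, weights=weights, k=1)[0]

# The same end user, the same prompt, two runs: the outputs may differ,
# even though neither the user nor the developers changed anything.
print(toy_generate("an image of the sea"))
print(toy_generate("an image of the sea"))
```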

Specific to the field of the visual arts, it was noted that creating in an artistic sense can be seen as connecting with one’s creative history and context, building on the world around oneself. Using a genAI tool potentially cuts out this personal creative history and replaces it with an artificially developed context window. By losing this creative context, it becomes harder to say (in both artistic and copyright terms) whether the user of a genAI tool is truly an author.

In response to the problems of s.9(3), many participants agreed that the time was ripe for its repeal. Others were more reluctant, however, accepting its present redundancy but issuing warnings on two fronts. First, technology may develop to the point where autonomous genAI models are capable of generating content that is truly without a human author, so the provision may yet be needed in the future (especially if Sam Altman’s prediction of superintelligence by 2027 comes to fruition). Second, its removal, and its replacement if ever needed, might bring about large-scale lobbying that results in a worse alternative and proves a distraction from progress elsewhere in this field. Suggested solutions included the issuing of guidance and examples as to the use of s.9(3), and the creation of a sui generis right for AI-generated works, which could be achieved by reforming the provision.

 

Market substitution

 

Attendees of the roundtable seemed unanimous on the point of market substitution, accepting that visual artists across disciplines were losing work to the output of genAI models. It was noted that genAI can help lower barriers to accessing the arts, allowing anyone to create visual material and find expression quickly and cheaply; however, it was also suggested that this may cut artists out of work, creating the converse problems of making a career in the arts less accessible and depleting the supply of good training data needed to improve models.

 

Machine unlearning

 

Machine unlearning is the process by which an AI model, once trained, might be made to ‘forget’ a particular work that it has been trained on. The problem, as noted in the computer science literature (for example, see here), is that this is thought to be impossible. While it is easy to delete a work from a training dataset if it is found to be copyright-infringing, it appears impossible to then identify and delete the effects that the work has already had on an extremely complex neural network: one would have to locate the minuscule changes, across potentially trillions of parameters, that would not have been made but for the inclusion of that single work as training data. This means that where a visual artist’s work has been used illegitimately as training data, the effects are effectively irreversible.
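To make that intuition concrete, here is a minimal numerical sketch: a toy least-squares model in Python, assumed purely for illustration and nothing like a production genAI system. Deleting an item from the dataset after training leaves the learned parameters completely untouched; the only way to remove its influence is to retrain without it, at which point the change is spread across every parameter rather than localised to the deleted work.

```python
# Toy illustration (not any production genAI system): the influence of a single
# training item is spread across the learned parameters, so deleting the item
# from the dataset after training does not undo its effect on the model.
import numpy as np

rng = np.random.default_rng(0)

# A tiny synthetic "training set": 200 examples, 5 features each.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = X @ true_w + rng.normal(scale=0.1, size=200)

def fit(X, y):
    """Least-squares fit: returns the parameters learned from this dataset."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

w_full = fit(X, y)                      # model trained on everything

# "Delete" one work from the dataset *after* training: the parameters are untouched.
X_removed, y_removed = np.delete(X, 0, axis=0), np.delete(y, 0)
w_after_deletion = w_full               # nothing about the trained model changes

# The only way to remove the item's influence is to retrain without it.
w_retrained = fit(X_removed, y_removed)

print("Parameter change from deleting the data alone:",
      np.abs(w_full - w_after_deletion).max())   # 0.0
print("Parameter change after full retraining:",
      np.abs(w_full - w_retrained).max())        # non-zero, spread across all weights
```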

One potential solution raised at the roundtable was to delete all infringing works from the training dataset and then retrain models from scratch. However, it was pointed out that this process would be extremely lengthy, would impose major cost burdens on model providers, and would create serious environmental impacts, especially in areas close to data centres. Moreover, participants noted that even where models are retrained on amended datasets, they might still be able to make inferences that effectively fill in the missing gaps. Some participants, ending the discussion on a perhaps more positive note, pointed out that AI model providers are increasingly working towards retrospective licences so that visual artists whose work has been used as training data can receive some remuneration.

 

Moving forward

 

The goal of the roundtable, like each hosted at IBIL, was not to provide a catch-all solution to the matters discussed but to share knowledge and build understanding in the hope of moving towards a consensus that can take shape in law- and policy-making, licensing, and research. The next roundtable will take place in March 2026, focusing on copyright, genAI, and music – if you are a stakeholder and would be interested in attending, please register your interest here.
