Report on a roundtable on music, generative AI, and copyright at the UCL Institute of Brand and Innovation Law
April 22, 2026
The debate around generative artificial intelligence (genAI) and copyright law has been raging globally for some time. In some respects, the dust is slowly beginning to settle: for example, the government’s response to the UKIPO consultation and the accompanying report and impact assessment (as required by ss.135-136 of the Data Use and Access Act 2025), as well as the House of Lords Communications and Digital Committee report, were all issued this month. Though some solutions are starting to take shape, the topic continues to raise more questions than answers.
With this in mind, and as part of its ongoing series of roundtable discussions on genAI and copyright law, on 4 March 2026 the UCL Institute of Brand and Innovation Law (IBIL) hosted a closed-doors roundtable discussion on the intersection of these two areas with music. This post is not intended as a verbatim report of the roundtable; rather, it summarises the matters discussed and the debates that took place.
The GenAI and Copyright Series
The genAI and copyright series is hosted at the UCL Faculty of Laws Institute of Brand and Innovation Law (IBIL). An ongoing series, the five previous roundtables have revolved around topics including music, academic publishing and the visual arts – reports of some of these events can be read here and here.
Following these, IBIL’s most recent roundtable brought together distinguished stakeholders from across music, technology, academia, legal practice, the UK Government and judiciary, trade associations, and rightsholder representative groups. Like previous roundtables, the event operated under the Chatham House Rule: participants are free to use the information shared, but may reveal neither the identity nor the affiliation of any speaker. While solving the issue within a single roundtable would be overambitious, the event succeeded in continuing an honest exchange of opinions and in building some consensus on a way forward.
The Roundtable – GenAI, Copyright, and Music
Musicians
Discussion began by considering musicians’ attitudes towards genAI. Participants shared that creators tend to be less focused on whether their work is used as AI training data, and more concerned about whether remuneration and fair attribution are available to them. Unsurprisingly, the ability to create and communicate music tends to be at the forefront of musicians’ minds, with legal rights rarely more than an afterthought. It was also noted that musicians wish to develop a connection with their audience; while genAI can occasionally be used to augment this, and many artists are curious about AI tools, it often cuts artists out of the process of remuneration and attribution.
Participants were keen to stress that musicians cannot be treated as a single entity. For example, it was noted that where there are successful attempts to bring about remuneration and attribution, these are almost exclusively focused on major record labels and featured artists. Session musicians, by comparison, are often excluded from discussions on the use of their work for AI training and the reaping of benefits that accrue from it.
Licensing and territoriality
The bulk of the discussion revolved around the licensing of copyright works for training AI models. As many readers will know, training an AI model requires vast amounts of data, and many models, whether for music or text, rely on materials made by musicians (used here as an umbrella term covering the different copyright categories of music creators). Participants first noted a culture shift: 25 years ago, the music industry’s outrage at companies like Napster led to that service being shut down; now, the culture has changed such that the scraping of training data by AI companies is met with far weaker resistance, on the one hand, and the prospect of securing licensing deals, on the other.
Turning to the recent Getty v Stability AI [2025] case in the UK (currently under appeal), the debate moved to licensing and territoriality, since many models are trained in the USA and then shared and deployed worldwide. Participants wondered whether the UK could introduce a provision similar to Recital 106 of the EU’s AI Act (Regulation 2024/1689), which states that to place a general-purpose AI model on the EU market, EU copyright law must be respected regardless “of the jurisdiction in which the copyright-relevant acts underpinning the training of those general-purpose AI models take place”. However, other participants noted three flaws: first, as a recital, the provision creates no enforceable legal obligation (see here); second, its unenforceability means that it is not taken seriously by many in the industry; and third, the provisions on secondary infringement may already achieve this effect, and the appeal in Getty might shed light on the issue. Perhaps, it was suggested, copyright law ought to move beyond its territorial obsessions, especially as training techniques are increasingly aterritorial: LAION’s training datasets, for example, consist of hyperlinks, so their content can be downloaded anywhere. Nevertheless, caution was urged: though this might be desirable in theory, it could distract from the highly territorial reality of copyright law.
If the US is the focal point for AI training, one must ask whether training constitutes ‘fair use’, the primary American exception to copyright infringement. It was noted that many AI companies may well benefit from this US exception; as such, investors encourage operations in the USA on the grounds that the risk of legal liability is theoretically lower. That said, doubts were raised about whether such acts will tend to constitute fair use: first, genAI providers tend to hold onto data far longer than necessary and manipulate it beyond what is required, which could weigh against them; second, with every licence AI companies conclude, it becomes harder for them to argue that an exception to copyright infringement should apply; and third, genAI tools are increasingly acting as a substitute for musicians, which will weigh against AI providers under the fourth fair use factor (the effect of the use upon the potential market). The discussion acknowledged this uncertainty, and suggested that a commercially sensible approach in the meantime is to negotiate licensing deals to bring certainty and resolution. In any case, for creators to be successful in their long-term endeavours, they need to make persuasive arguments primarily before the US courts.
In building these frameworks, participants were keen to stress that the government should not manage the licensing marketplace, as past attempts have failed. Nevertheless, it must be asked how remuneration can be guaranteed to flow to primary creators when it is their labels negotiating the licences (see CREATe’s studies on creators’ earnings and its contracts hub). Some participants argued that the wording of most contracts would already cover payments for this use of the licensed works; though some doubt was expressed about older contracts, it was said that many of these would be open for renegotiation in this respect or could be dealt with under a blanket licence. Other discussants were less optimistic, however, noting that many contracts guarantee payments only for uses that are “direct and identifiable”, and that use for AI training would fall outside this. A more positive note was struck at the close, with participants observing that many organisations will voluntarily ensure that remuneration reaches the primary creators themselves.
The reality of this remains to be seen and was extensively discussed during IBIL’s Annual Copyright Lecture, held on the same day as the roundtable. It was noted by several participants that collective management organisations, for all their value, can be slow or ineffective at distributing royalties.
Transparency
In any case, transparency measures are needed to understand how works have been used so that remuneration can flow to artists. It was generally agreed that current obligations, such as those in Art 53(1)(d) of the EU AI Act, will make little difference, and that the current technological frameworks are underdeveloped and largely ineffective. One possible solution explored was watermarking audio files – a technology previously thought untenable because it degraded audio quality, but recent developments have worked around this and produced a watermarking system that can also survive most major file manipulations. If successful, this could serve well as the machine-readable opt-out that Art 4 of the CDSM Directive calls for.
A recent trend was observed whereby genAI providers claim that their training dataset is a trade secret not to be disclosed; it was noted that this can, and should, lead courts to make presumptions against them, as has been the case in the UK since King Features v Kleeman [1941].
Outputs
A short discussion was had on outputs. Overall, the term ‘deepfakes’ was disfavoured, with ‘digital replicas’ seen as the more accurate terminology. Some participants supported the idea of a new personality right to prevent digital replicas, while others were less in favour, since existing provisions of copyright and performers’ rights may already provide a route to a remedy. Further, though moral rights may have some role to play in preventing digital replicas, it must be recalled that these protect only a person’s work, not the person themselves.
Conclusion
The roundtable was successful in moving towards agreement that can shape law- and policy-making, academic research, licensing, and music tech developments. On the same day, IBIL hosted its Annual Copyright Lecture on a similar topic, which can be watched here. The next roundtable will take place in June 2026 on the topic of publishing.