Concerns Raised Over Large Language Models and Copyright Infringement


  • Publishers Association CEO warns that Large Language Models are infringing copyright on a massive scale.
  • Experts call for clearer licensing structures to address AI’s copyright challenges.
  • Parliament examines the ethical and legal implications of AI in content creation.

Dan Conway, CEO of the Publishers Association (PA), has raised alarms before a parliamentary committee regarding the use of Large Language Models (LLMs) in Artificial Intelligence (AI), claiming that they are breaking copyright law on a “massive scale.” His testimony comes amid growing concern about AI’s impact on copyright and the need for more robust regulation.

The parliamentary discussion

The Communications and Digital Committee convened in Parliament on November 7th to examine the nature and implications of LLMs. During the session, Conway gave evidence alongside other experts, including Dr. Hayleigh Bosher of Brunel University London, Arnav Joshi of Clifford Chance, and Richard Mollet of RELX.

Concerns over copyright infringement

Conway voiced concerns about how LLMs may be infringing copyright law. While acknowledging the positive potential of AI and LLMs, he argued that current market conditions are not conducive to the safe, reliable, and ethical development of AI. He cited the Books3 database, which was found to contain 120,000 pirated book titles ingested by LLMs, and noted that the models’ output can reproduce copyrighted book content.

According to Conway, these Large Language Models are not in compliance with intellectual property (IP) law at various stages of their operation, including data collection, storage, and handling. He asserted that copyright law is being violated on a massive scale, raising critical legal and ethical concerns.

Agreement on copyright breaches

Dr. Hayleigh Bosher, an expert in intellectual property law, appeared to concur with Conway’s assertions. She emphasized that the principles governing when a license is required are clear: reproducing a copyrighted work without permission typically constitutes infringement. However, Bosher noted that some AI developers are interpreting the law differently, creating legal ambiguity.

Conway stressed the need for a clear process involving permission, transparency, remuneration, and attribution in handling copyrighted content. He advocated for market-based solutions that facilitate seamless licensing, ensuring that AI systems have access to data in a legally compliant manner. This could involve direct licensing or a collective licensing model, which may be particularly beneficial for smaller businesses involved in AI development.

The call for clearer licensing structures

Both Conway and Bosher agreed that there is a pressing need for more transparent and well-defined licensing structures. Conway suggested that improvements in the licensing system should be explored to enable smoother transactions between AI developers and rights holders. This, he argued, would ensure that the correct information and creative works are used in AI models, resulting in appropriate outputs.

The path forward

In terms of facilitating change, Conway expressed support for a voluntary approach, accompanied by a set of principles from the government emphasizing copyright and transparency. He suggested that there are various global models to consider when shaping regulations. However, he also emphasized the importance of having legislative mechanisms in place as a backup if voluntary discussions fail to yield meaningful results.

Last month, various publishing trade bodies called on the government to implement tangible solutions to protect the “human creativity” behind AI. They urged the acknowledgment of and compensation for copyright infringement that has already occurred, highlighting the urgency of addressing these challenges.

The concerns raised by Dan Conway and other experts before the Parliamentary committee shed light on the complex legal and ethical landscape surrounding the use of Large Language Models in AI. While AI holds great promise for innovation and transformation, it also poses challenges related to copyright infringement. The discussions in Parliament serve as a crucial step toward finding a balance between AI advancement and protecting intellectual property rights, with the hope of ensuring a fair and ethical future for AI development and utilization in the UK.


John Palmer
