
Judge rules in favor of Anthropic in copyright lawsuit but it’s not off the hook yet

In this post:

  • US Judge Alsup’s ruling allows AI to train on books under fair use.
  • But Anthropic still faces penalties for storing pirated books.
  • It is one of the first major rulings on fair use and AI training.

In a landmark decision that could reshape the future of artificial intelligence and copyright law, a US federal judge ruled that Anthropic did not break the law by using copyrighted books to train its systems.

But Anthropic, the AI firm behind the Claude chatbot, is not completely off the hook; it could still face stiff penalties for how it handled those books.

But Anthropic also overstepped legal boundaries

The ruling came late Monday from US District Judge William Alsup in San Francisco, who found that Anthropic’s training of its AI model using the works of authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson qualifies as fair use.

This doctrine, which allows limited use of copyrighted content without permission, played a central role in Alsup’s ruling, one of the first to tackle fair use in the era of generative AI.

“Like any reader aspiring to be a writer, Anthropic’s (AI large language models) trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,” Alsup wrote.

While Alsup sided with Anthropic regarding its use of the books for AI training, he was clear that the company overstepped legal boundaries when it stored more than 7 million pirated titles in what he called a “central library.” That, the judge said, did not fall under fair use.


A trial has been scheduled for December to determine what damages, if any, Anthropic will owe the authors. Under US copyright law, damages for willful infringement can reach as high as $150,000 per work.

Anthropic has yet to publicly comment on the ruling, but the outcome splits the case in two: the training use is protected; the storage of pirated copies is not.

Could the Anthropic case signal a win for the AI industry?

The case is part of a wider wave of lawsuits from authors and media outlets targeting companies like OpenAI, Meta, and Microsoft over how they’re building their AI systems. The question at the core: should these companies be allowed to use copyrighted material, often without consent, to develop tools that, in some ways, compete with the original creators?

Alsup’s ruling gives a boost to AI developers, many of whom argue that their models are producing new, transformative content and shouldn’t be forced to pay every copyright holder whose work was used along the way.

“Like any reader hoping to become a writer, Anthropic’s models were trained on these books not to copy them, but to create something entirely new,” Alsup wrote.

Anthropic had told the court that copying the books was essential for studying writing styles and extracting uncopyrightable elements, like structure and tone, to help its AI create novel content.


The company argued this kind of learning actually furthers human creativity, something copyright law is supposed to encourage.

But Alsup also criticized Anthropic for collecting pirated digital copies of the books. While the company had insisted the source of the material didn’t matter, Alsup disagreed strongly.

In his ruling, he said: “This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use.”

Essentially, while the end use might be protected, how Anthropic got its hands on the material was not. That distinction could shape how AI companies gather training data in the future, and potentially encourage more ethical, or at least legal, data sourcing.

With more copyright lawsuits lined up against AI firms, this decision could set a key precedent. The December trial will now decide whether Anthropic’s approach to storing content merits financial penalties, and if so, how steep they should be.


Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.
