
Anthropic Ordered to Destroy Datasets in $1.5B Author Settlement

2025/09/07 01:48

TLDRs;

  • Anthropic will pay $1.5B to settle authors’ lawsuit over using pirated books in AI training datasets.
  • The settlement requires Anthropic to destroy datasets containing copyrighted works, marking a historic precedent.
  • Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson led the case against the $183B AI firm.
  • The record-breaking payout highlights risks tied to sourcing pirated materials, even under “fair use” defenses.

Anthropic, one of the fastest-growing artificial intelligence startups, has agreed to pay at least $1.5 billion to settle a class-action lawsuit brought by a group of U.S. authors.

The case, filed in the Northern District of California in 2024, accused the company of using thousands of copyrighted books, many allegedly obtained from pirated sources, to train its large language models without permission.

If approved by the court, the deal would mark the largest publicly reported copyright settlement in history, surpassing prior cases involving technology and media companies. The settlement will compensate authors at a rate of roughly $3,000 per book, plus interest, and crucially, requires Anthropic to permanently destroy training datasets that contained the disputed material.

Disputed Datasets Must Be Deleted

While Anthropic initially defended its practices under “fair use,” the lawsuit turned on the source of the training data. Plaintiffs argued that much of the content had been scraped from shadow libraries such as Library Genesis and Pirate Library Mirror, both known repositories of pirated works.

A judge had earlier ruled that Anthropic’s use of books could be considered transformative and therefore protected under fair use. However, the court noted that the company’s reliance on pirated datasets exposed it to separate legal vulnerabilities. This dual finding created a complex risk landscape: even a legal win on fair use did not shield the company from potential damages tied to unlawful data acquisition.

By agreeing to destroy the datasets, Anthropic is not only offering financial compensation but also addressing authors’ calls for stronger protections against unauthorized use of their works in AI training.

Authors Behind the Case

The lawsuit was filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson. They argued that Anthropic’s model training devalued their creative work and undermined their intellectual property rights.

Legal experts say the case underscores a new era of copyright litigation, where creators are pushing back against AI companies’ vast data practices. While courts have recognized transformative uses under fair use, the source of that data has emerged as a separate battlefield.

“This settlement shows that creators’ rights can’t be brushed aside simply because the technology is new,” said one copyright attorney familiar with the case.

Industry Impact Beyond Anthropic

The timing of the settlement is particularly notable. Just days before the agreement, Anthropic announced a $13 billion funding round, pushing its valuation to $183 billion. Analysts say the company likely pursued a settlement to remove uncertainty as it seeks to expand partnerships and secure regulatory approval in key markets.

For the broader AI sector, the case offers a cautionary tale. Even as courts rule favorably on fair use, the origin of training data can leave companies exposed to massive liabilities.

Industry observers believe this record-breaking deal could set a precedent for future lawsuits involving other AI developers accused of training their systems on copyrighted works without consent. It may also accelerate efforts to establish licensing frameworks between AI companies and publishers to avoid costly litigation.


The post Anthropic Ordered to Destroy Datasets in $1.5B Author Settlement appeared first on CoinCentral.
