A federal judge has cast serious doubt on a proposed $1.5 billion settlement between AI firm Anthropic and authors over the company's use of nearly half a million pirated books to train its AI models.
Key Takeaways
- Judge William Alsup harshly criticized Anthropic’s proposed $1.5 billion settlement, calling it incomplete and potentially unfair to authors.
- The lawsuit accuses Anthropic of using approximately 465,000 pirated books to train its Claude chatbot.
- Authors were expected to receive around $3,000 per work, but the judge questioned how the claims process would be managed.
- A final decision is delayed until at least October, pending complete documentation and court review of the works, authors, and claims process.
What Happened?
Anthropic, a major AI firm, reached a record-setting $1.5 billion deal to settle a lawsuit over its alleged use of pirated books to train its language models. But U.S. District Judge William Alsup was not convinced, calling the proposed agreement full of “pitfalls” and warning it could lead to more legal trouble if not handled transparently. He has delayed approval and scheduled another hearing for September 25.
Judge’s Concerns Over the Settlement
Judge Alsup was blunt in his criticism of the settlement during a hearing in San Francisco, stating it was “nowhere close to complete.” He voiced particular concern about the possibility that the deal was being pushed “down the throat of authors” without their full understanding.
Among the judge’s demands:
- A final list of all pirated works involved must be submitted by September 15.
- A complete and clear claims form must be created and submitted by September 22.
- Final approval will not proceed unless all documents, including the lists of authors and works, are reviewed and approved by October 10.
He also questioned whether major organizations such as the Authors Guild and the Association of American Publishers, which attended the hearing but did not speak, were pressuring authors behind the scenes.
Authors, Lawyers, and the $3,000 Question
Under the current terms, each author or publisher would receive about $3,000 per pirated work. Justin Nelson, the lawyer representing the authors, said the list includes around 465,000 books, and he defended the fairness of the deal by noting the extensive media coverage it has received.
Still, Alsup was wary. He warned of the risk that authors may be left behind in the process once the money is on the table and lawyers have moved on. “Class members get the shaft in a lot of class actions,” he said.
Judge Alsup emphasized that the process must include strong safeguards:
- Authors should be clearly informed about their rights and not coerced into the deal.
- The process must allow authors to opt in or opt out.
- The settlement must include terms ensuring Anthropic cannot face duplicate lawsuits over the same issue.
Bigger Implications for AI and Copyright
This lawsuit is part of a broader legal trend in which creators are challenging how AI companies source training data. Alsup’s earlier ruling in June found that training chatbots on copyrighted material may not be illegal in itself, but that Anthropic’s reliance on pirated copies to acquire that material crossed a legal line.
Before the hearing, author Kirk Wallace Johnson called the settlement “the beginning of a fight on behalf of humans that don’t believe we have to sacrifice everything on the altar of AI.”
CoinLaw’s Takeaway
In my experience covering copyright disputes, this case stands out not just for its billion-dollar price tag, but for the deep concerns it raises about fairness and transparency. What hits home here is that AI companies can’t just write a check and make the problem go away, especially when it involves real people whose work was taken without consent. I found Judge Alsup’s refusal to rubber-stamp the deal refreshing. It signals that courts are finally waking up to the real-world impact of AI data practices. If you’re a creator, this moment matters.