Bartz v. Anthropic: The AI Copyright Case Most Americans Never Heard About

By Roger Paradiso

As artificial intelligence (AI) reshapes nearly every creative field, one of the most consequential copyright cases in U.S. history has unfolded with little public attention.

The case—Bartz v. Anthropic—now moving toward final settlement approval, raises fundamental questions about who owns creative work in the age of AI, how “fair use” applies to machine learning, and whether artists will have any meaningful say in how their work is used.

In August 2024, authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued Anthropic, the company behind the Claude AI models. They alleged that Anthropic had downloaded millions of copyrighted books from pirate libraries such as Library Genesis and Pirate Library Mirror and used them to train its systems.

Court records later showed that Anthropic downloaded more than seven million books from these sites, while also purchasing and scanning millions of legally obtained titles.

The lawsuit initially challenged the broader practice of training AI on copyrighted material. But in June 2025, U.S. District Judge William Alsup drew a sharp distinction. He ruled that training AI models on legally purchased books was “exceedingly transformative” and protected under fair use. However, he rejected any argument that downloading or retaining pirated copies could be excused, signaling that such conduct likely constituted copyright infringement.

That ruling left Anthropic partially vindicated—but still exposed to massive liability.

The case escalated further in July 2025, when Judge Alsup certified it as a class action. The class, defined broadly by the court, included all copyright holders with reproduction rights to books found in the pirate datasets—encompassing authors, publishers, estates, and other rights holders.

With nearly half a million works implicated and statutory damages theoretically reaching $150,000 per work, Anthropic faced potential exposure exceeding $70 billion.

Faced with a December trial date, the parties reached a settlement in late August 2025. The agreement covers approximately 482,460 books and is now moving toward final approval. Key deadlines include a January 7, 2026 opt-out date and a March 23, 2026 claims deadline.

While the settlement delivers roughly $1.5 billion in compensation—about $3,000 per covered work—many copyright holders are expected to receive relatively modest payouts after fees and costs. More significantly, the agreement does not grant authors control over future AI training.

In effect, nobody fully won. Anthropic pays a substantial sum but retains the right to train AI systems on legally acquired books. Authors and publishers receive compensation for past conduct but little leverage over what comes next.

The implications extend well beyond publishing. In April 2024, more than 200 prominent musicians—including Billie Eilish, Nicki Minaj, Stevie Wonder, and the estates of artists such as Bob Marley—signed an open letter demanding protections against unauthorized AI training.

Musicians are already feeling the pressure. Rapper Cash Cobain has said he opposes AI-generated songs, warning that such practices are “not fair for real,” even as AI-created artists begin appearing on Billboard charts.

In Greenwich Village, artists describe a similar squeeze. One longtime musician, speaking off the record, said his royalty checks have dwindled to a fraction of their former value, forcing him to continue touring simply to pay rent.

Visual artist Kelly McKernan told PBS, “Someone’s profiting from my work. I had rent due yesterday, and I’m $200 short.”

Despite its scale, Bartz v. Anthropic has received little sustained media attention. Yet it may shape the future of creative work more than any copyright case in decades.

Whether lawmakers step in—or artists are left navigating an AI economy built without them—remains an open question.