Meta, Anthropic win legal battles over AI ‘training.’ The copyright war is far from over.

June 27, 2025

Artificial intelligence developers won pivotal legal battles this week when federal judges in California ruled that Anthropic (ANTH.PVT) and Meta (META) could “train” large language models (LLMs) on copyrighted books.

But the larger war over AI developers’ use of protected works is far from over.

Dozens of copyright holders have sued developers, arguing that the developers must pay rights holders before using their works to build generative AI products for profit. Rights holders also argue that AI output must not resemble their original works.

Rob Rosenberg, an intellectual property lawyer with Telluride Legal Strategies, called Tuesday’s ruling siding with AI developer Anthropic a “ground-breaking” precedent, but one that should be viewed as an opening salvo.

Anthropic CEO Dario Amodei at the Code with Claude developer conference on May 22 in San Francisco. (Don Feria/AP Content Services for Anthropic)

“Judges are just starting to apply copyright law to AI systems,” Rosenberg said, with many cases coming down the pike.

In that ruling, California US District Judge William Alsup said that Anthropic legally utilized millions of copyrighted books to train its various LLMs, including its popular chatbot Claude.

However, the judge distinguished books that Anthropic paid for from a pirated library of more than 7 million books that it also used to train Claude. As for the stolen materials, the judge said, Anthropic must face the plaintiff authors’ claims that it infringed on their copyrights.

In a more limited ruling favoring Meta on Wednesday, California US District Judge Vince Chhabria said that a group of 12 authors who sued the tech giant, including stand-up comedian Sarah Silverman, made “wrong arguments” that prevented him from ruling on infringement. According to the authors, Meta used their copyrighted books to train its large language model Llama.

The rulings are among the first in the country to address emerging and unsettled questions over how far LLMs can go to rely on protected works.

Comedian Sarah Silverman at a Los Angeles red carpet event in 2023. (Reuters/Mike Blake)

“There is no predicting what’s going to come out the other end of those cases,” said Courtney Lytle Sarnow, an intellectual property partner with CM Law and adjunct professor at Emory University School of Law.

Sarnow and other intellectual property experts said they expect the disputes will end up in appeals to the US Supreme Court.

“I think it’s premature for Anthropic and others like it to be taking victory laps,” said Randolph May, president of the Free State Foundation and former chair of the American Bar Association’s Administrative Law and Regulatory Practice section.

US copyright law, as defined by the Copyright Act, gives creators of original works the exclusive right to reproduce, distribute, and publicly perform their material, Sarnow said, along with rights to some derivative works and sequels to their original creations.

Absent a license from rights holders to use their copyrighted material, large language model developers are stealing from authors, she said.

But under US law, a certain level of what would otherwise be deemed stealing is, in fact, an exception permitted under the doctrine of “fair use.”

That doctrine makes it legal to use the material without a license for commentary and critique, to reference it for news reporting and education, and to transform it into something new and distinct that serves a purpose different from the original form.

Both Anthropic and Meta argued that training their LLMs on copyrighted material didn’t violate the Copyright Act because the models transformed the original authors’ content into something new.

In his ruling, Judge Alsup reasoned that Anthropic’s use of books was “exceedingly transformative” and therefore qualified as fair use under the Copyright Act.

Rosenberg and Sarnow said it’s too soon to tell how courts will ultimately rule on the issue. In cases where a “transformative” use is being used as a defense, LLM defendants need to show that their use of copyrighted material did not disrupt the market for the authors’ original works.

Judge Chhabria criticized Alsup’s ruling, calling his analysis incomplete for “brushing aside” such market concerns.

Meta chief product officer Chris Cox speaks at LlamaCon 2025, an AI developer conference, on April 29. (AP Photo/Jeff Chiu)

“Under the fair use doctrine, harm to the market for the copyrighted work is more important than the purpose for which the copies are made,” Judge Chhabria said.

Anthropic still faces other major legal challenges. Reddit sued the company earlier in June, alleging that Anthropic intentionally scraped Reddit users’ personal data without their consent and then used that data to train Claude.

Anthropic is also defending itself against a suit from music publishers, including Universal Music Group (0VD.F), ABKCO, and Concord, alleging that Anthropic infringed on copyrights for Beyoncé, the Rolling Stones, and other artists as it trained Claude on lyrics to more than 500 songs.

The company faces further peril in the same case, where Judge Alsup ruled it must answer the authors’ claims that it infringed on their copyrights by using a pirated library of more than 7 million books.

Willful copyright infringement can carry statutory damages of up to $150,000 per work infringed. If Anthropic were found liable for intentionally misusing the 7 million books at issue in its case, the maximum allowable penalties, though not usually imposed, could end up north of $1 trillion.
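A back-of-the-envelope sketch of the arithmetic behind that figure, assuming (hypothetically) that every one of the 7 million works drew the statutory maximum:

```python
# Maximum statutory exposure implied by the article's numbers.
# The $150,000 per-work cap for willful infringement and the 7 million
# book count come from the article; applying the cap to every single
# work is a worst-case assumption courts rarely, if ever, impose.
MAX_DAMAGES_PER_WORK = 150_000   # willful-infringement statutory cap, in dollars
PIRATED_WORKS = 7_000_000        # size of the pirated library at issue

max_exposure = MAX_DAMAGES_PER_WORK * PIRATED_WORKS
print(f"${max_exposure:,}")  # $1,050,000,000,000
```

At $1.05 trillion, even a small fraction of the theoretical maximum would be an enormous judgment, which is why class certification (discussed below) matters so much.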

Three authors brought the case and have asked the court to let them pursue their claims as a class action. The judge’s decision on class certification is pending.

“The judge did not give Anthropic a free pass,” Rosenberg said.
