Meta Moves to Dismiss Porn-Piracy Suit, Calls AI-Training Claims ‘Nonsensical’

October 31, 2025

In brief

  • Meta has asked a U.S. court to dismiss a lawsuit by Strike 3 Holdings alleging that the company used corporate and hidden IP addresses to torrent nearly 2,400 adult films since 2018 for AI development.
  • Meta says the small number of alleged downloads points to “personal use” by individuals, not AI training.
  • The company denies using any adult content in its models, calling the AI-training theory “guesswork and innuendo.”

Meta has asked a U.S. court to dismiss a lawsuit that accused it of illegally downloading and distributing thousands of pornographic videos to train its artificial intelligence systems.

Filed Monday in the U.S. District Court for the Northern District of California, the motion to dismiss argues there is no evidence that Meta’s AI models contain or were trained on the copyrighted material, calling the allegations “nonsensical and unsupported.”

The motion was first reported by Ars Technica on Thursday, with Meta issuing a direct denial and calling the claims “bogus.”

Plaintiffs have gone “to great lengths to stitch this narrative together with guesswork and innuendo, but their claims are neither cogent nor supported by well-pleaded facts,” the motion reads.

The original complaint, filed in July by Strike 3 Holdings, accused Meta of using corporate and concealed IP addresses to torrent nearly 2,400 adult films since 2018 as part of a broader effort to build multimodal AI systems.

Strike 3 Holdings is a Miami-based adult film holding company that distributes content under brands such as Vixen, Blacked, and Tushy.

Decrypt has reached out to Meta and Strike 3 Holdings, as well as to their respective legal counsel, and will update this article should they respond.

Scale and pattern

Meta’s motion argues that the scale and pattern of alleged downloads contradict Strike 3’s AI training theory.

Over seven years, only 157 of Strike 3’s films were allegedly downloaded using Meta’s corporate IP addresses, averaging roughly 22 per year across 47 different addresses.

Meta attorney Angela L. Dunning characterized this as “meager, uncoordinated activity” by “disparate individuals” for “personal use”—not, as Strike 3 alleges, part of an effort by the tech giant to gather data for AI training.

The motion also pushes back on Strike 3’s claim that Meta used more than 2,500 “hidden” third-party IP addresses, arguing that Strike 3 never verified who owned those addresses and instead relied on loose “correlations.”

One of the IP ranges is allegedly registered to a Hawaiian nonprofit with no link to Meta, while others have no identified owner.

Meta also argues there’s no proof it knew about or could have stopped the alleged downloads, adding that it gained nothing from them and that monitoring every file on its global network would be neither simple nor required by law.

Training safely

While Meta’s defense appears “unusual” at first, it may still carry weight because its core claim rests on the assertion that “the material was not used in any model training,” Dermot McGrath, co-founder of venture capital firm Ryze Labs, told Decrypt.

“If Meta admitted the data was used in models, they’d have to argue fair use, justify the inclusion of pirated content, and open themselves to discovery of their internal training and audit systems,” McGrath said, adding that instead of defending how the data was supposedly used, Meta denied “it was ever used at all.”

But if courts admit such a defense as valid, it could open “a massive loophole,” McGrath said. It could “effectively undermine copyright protection for AI training data cases” such that future cases would need “stronger evidence of corporate direction, which companies would simply get better at hiding.”

Still, there are legitimate reasons to process explicit material, such as developing safety or moderation tools.

“Most major AI companies have ‘red teams’ whose job is to probe models for weaknesses by using harmful prompts and trying to get the AI to generate explicit, dangerous, or prohibited content,” McGrath said. “To build effective safety filters, you need to train those filters on examples of what you’re trying to block.”
