Meta had a 17-strike policy for sex trafficking, former safety leader claims
November 24, 2025
An unredacted court filing reveals claims that Meta repeatedly chose user engagement over safety.
Meta allegedly gave accounts engaged in the “trafficking of humans for sex” 16 chances before suspending them, according to testimony from the company’s former head of safety and wellbeing, Vaishnavi Jayakumar. The testimony — along with several other claims that Meta ignored problems if they increased engagement — surfaced in an unredacted court filing related to a social media child safety lawsuit filed by school districts across the country.
“That means that you could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar said during her deposition. She added that this “is a very high strike threshold” by “any measure across the industry,” according to the lawsuit. Internal documentation also “confirms” this policy, lawyers claim.
As reported by Time, the unredacted filing reveals other disturbing accusations, including that Meta “did not have a specific way” for Instagram users to report child sexual abuse material (CSAM) on the platform. When Jayakumar learned of this, she reportedly “raised this issue ‘multiple times,’ but was told that it would be too much work to build” a reporting option and to review the reports it would generate.
Though Meta recently won its antitrust battle against the Federal Trade Commission, the company is facing mounting regulatory and legal pressure over child safety on its platforms. The unredacted filing is part of a massive lawsuit filed against Meta, TikTok, Google, and Snapchat by dozens of school districts, attorneys general, and parents, alleging the companies are contributing to a “mental health crisis” by operating “addictive and dangerous” platforms. Meta CEO Mark Zuckerberg told The Verge last year that he believes there is “no causal connection” between social media and teen mental health.
The filing reveals multiple instances in which Meta is accused of downplaying the harms of its platforms in favor of boosting engagement. In 2019, Meta considered making all teen accounts private by default to prevent teens from receiving unwanted messages; however, the company allegedly rejected the idea after the growth team found it would “likely smash engagement.” Meta began defaulting teen Instagram accounts to private last year.
The lawsuit also claims that while Meta researchers found that hiding likes on posts would make users “significantly less likely to feel worse about themselves,” the company walked back the plan after finding the change was “pretty negative to FB metrics.” Meta is similarly accused of reinstating beauty filters in 2020, even after finding that they risked “actively encouraging young girls into body dysmorphia.” Removing the filters could have a “negative growth impact, simply because any restriction is likely to reduce engagement if people go elsewhere,” Meta said, the lawsuit alleges.
“We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture,” Meta spokesperson Andy Stone said in an emailed statement to The Verge. “The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens — like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens’ experiences.”