Meta apologizes after mistaken child exploitation flag shuts down woman’s business account

August 31, 2025

Every year, tech companies flag and suspend thousands of accounts that violate their terms of use concerning child sexual exploitation, and report many of those violators to law enforcement.

But what if those reports aren’t true? The fallout for people falsely accused can be devastating.

That is what appeared to happen to one woman named Riley, who contacted MyNorthwest and KIRO Newsradio. She received an email from Meta, the parent company of Instagram, Facebook, WhatsApp, and Threads, informing her that all of the personal and business accounts and channels she owns or manages, including a fitness center, a weight loss clinic, and a bar, had been suspended for allegedly violating its terms of use related to child sexual exploitation.

Riley was shocked by the email and insisted the allegations were false.

“I would never ever do that,” Riley said. “I have younger siblings, like no, that’s terrible.”

It turned out, however, that Meta was wrong, and the company apologized.

Companies ramp up screening for child exploitation material

Screening content for potential child sexual exploitation and child sexual abuse material (CSAM) is extremely important work. Last year, tech companies including Google, Meta, OpenAI, Microsoft, and Amazon signed on to a new set of principles meant to limit the proliferation of CSAM, committing to review their AI training data for CSAM and to remove it from use in any future models.

In a recent blog post, Google said in addition to committing to the principles, it also increased ad grants for the National Center for Missing and Exploited Children (NCMEC) to promote its initiatives.

According to NCMEC, it received 32 million reports of child sexual exploitation and abuse in 2022, including 49.4 million images and 37.7 million videos from tech companies.

However, reports of accounts being misidentified and suspended for terms-of-use violations have increased in the last year. KIRO Newsradio found that a simple internet search uncovered hundreds of stories from users claiming they've been wrongly flagged.

For Riley, the suspensions meant that years of her personal work as a photographer, including hours of portfolio work and documents, were potentially lost forever. She also believed the professional work on her business accounts was gone.

Riley said she immediately appealed the suspension with Meta and turned to online forums for advice on what to do next. She also reached out to attorneys filing a class action lawsuit and attempted to get Washington State Attorney General Nick Brown involved.

Meta apologizes for flagging business accounts

Several weeks into our investigation, which included multiple emails to Meta, Riley received another email from the company explaining that its technology had made a mistake.

Another message she shared with us from Instagram said, “Thanks for taking the time to request a review. We reviewed your account and found that the activity on it does follow our Community Standards on child sexual exploitation, abuse, and nudity, so you can use Instagram again. We’re sorry we got this wrong and that you weren’t able to use Instagram for a while. Sometimes we need to take action to help keep our community safe.”

Riley told us she’s back to running her accounts, but the allegations were frightening and disturbing.

“I know AI is supposed to be the new thing, supposed to help,” she said. “But I feel like it can do a lot more harm, especially when it’s turning into potentially permanently disabling people’s financial livelihood.”
