Meta is making users who opted out of AI training opt out again, watchdog says

May 14, 2025

EU users have less than two weeks to opt out of Meta’s AI training.

Privacy watchdog Noyb sent a cease-and-desist letter to Meta Wednesday, threatening to pursue a potentially billion-dollar class action to block Meta’s AI training, which starts soon in the European Union.

In the letter, Noyb noted that Meta only recently notified EU users on its platforms that they had until May 27 to opt their public posts out of Meta’s AI training data sets. According to Noyb, Meta is also requiring users who already opted out of AI training in 2024 to opt out again or forever lose their opportunity to keep their data out of Meta’s models, as training data likely cannot be easily deleted. That’s a seeming violation of the General Data Protection Regulation (GDPR), Noyb alleged.

“Meta informed data subjects that, despite that fact that an objection to AI training under Article 21(2) GDPR was accepted in 2024, their personal data will be processed unless they object again—against its former promises, which further undermines any legitimate trust in Meta’s organizational ability to properly execute the necessary steps when data subjects exercise their rights,” Noyb’s letter said.

This alleged lack of clarity for users who opt out makes it harder to trust that users can ever truly opt out, Noyb suggested. Previously, Meta “argued (in respect to EU-US data transfers) that a social network is a single system that does not allow to differentiate between EU and non-EU users, as many nodes (e.g. an item linked to an EU and a non-EU user) are shared,” Noyb noted. That admission introduces “serious doubts that Meta can indeed technically implement a clean and proper differentiation between users that performed an opt-out and users that did not,” Noyb alleged.

“This lack of proper differentiation would mean that messages between a user who objected to the use of their data for AI training and a user who did not object could end up in Meta’s AI systems despite the first user’s objection,” Noyb warned.

The letter accused Meta of further deceptions, such as planning to seize data that users may not consider “public,” including disappearing stories typically viewed only by small audiences. That, Noyb said, differs significantly from AI crawlers scraping information posted on a public website.

According to Noyb, there would be no issue with Meta’s AI training in the EU if Meta would use a consent-based model rather than requiring rushed opt-outs. As Meta explained in a blog following a threatened preliminary injunction on AI training in Germany, the company plans to collect AI training data using a “legitimate interest” legal basis, which supposedly “follows the clear guidelines of the European Data Protection Committee of December 2024, which reflect the consensus between EU data protection authorities.”

But Noyb Chairman Max Schrems doesn’t believe that Meta has a legitimate interest in sweeping data collection for AI training.

“The European Court of Justice has already held that Meta cannot claim a ‘legitimate interest’ in targeting users with advertising,” Schrems said in a press release. “How should it have a ‘legitimate interest’ to suck up all data for AI training? While the ‘legitimate interest’ assessment is always a multi-factor test, all factors seem to point in the wrong direction for Meta. Meta simply says that its interest in making money is more important than the rights of its users.”

Meta defends AI training

In a statement, Meta’s spokesperson defended the opt-out approach, noting that “we’ve provided EU users with a clear way to object to their data being used for training AI at Meta, notifying them via email and in-app notifications that they can object at any time.”

The spokesperson criticized “Noyb’s copycat actions” as “part of an attempt by a vocal minority of activist groups to delay AI innovation in the EU, which is ultimately harming consumers and businesses who could benefit from these cutting-edge technologies.”

Noyb has requested a response from Meta by May 21, but it seems unlikely that Meta will quickly cave in this fight.

In a blog post, Meta said that AI training on EU users was critical to building AI tools for Europeans that are informed by “everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

Meta argued that its AI training efforts in the EU are far more transparent than efforts from competitors Google and OpenAI, which, Meta noted, “have already used data from European users to train their AI models,” supposedly without taking the steps Meta has to inform users.

Also echoing a common refrain in the AI industry, another Meta blog warned that efforts to further delay Meta’s AI training in the EU could lead to “major setbacks,” pushing the EU behind rivals in the AI race.

“Without a reform and simplification of the European regulatory system, Europe threatens to fall further and further behind in the global AI race and lose ground compared to the USA and China,” Meta warned.

Noyb discredited this argument and noted that it can pursue injunctions in various jurisdictions to block Meta’s plan. The group said it’s currently evaluating options to seek injunctive relief and potentially pursue a class action worth possibly “billions in damages” to ensure that 400 million monthly active EU users’ data rights are shielded from what it perceives as Meta’s data grab.

A Meta spokesperson told Ars that the company’s plan “follows extensive and ongoing engagement with the Irish Data Protection Commission,” reiterating Meta’s statements in blogs that its AI training approach “reflects consensus among” EU Data Protection Authorities (DPAs).

But while Meta claims that EU regulators have greenlit its AI training plans, Noyb argues that national DPAs have “largely stayed silent on the legality of AI training without consent,” and Meta seems to have “simply moved ahead anyways.”

“This fight is essentially about whether to ask people for consent or simply take their data without it,” Schrems said, adding, “Meta’s absurd claims that stealing everyone’s personal data is necessary for AI training is laughable. Other AI providers do not use social network data—and generate even better models than Meta.”
