Meta Accused Of Still Flouting Privacy Rules With AI Training Data

May 15, 2025

Meta’s efforts to placate Europe over the use of personal data to train AI models haven’t worked, with privacy advocacy group noyb launching another challenge.

After pausing AI training in the EU and European Economic Area last June, Meta last month announced plans to resume, using public posts and comments shared by adults, as well as users’ interactions with Meta AI.

It had paused the training after concerns were raised by the Irish Data Protection Commission and complaints were filed with data protection authorities across the region. At the end of the year, the European Data Protection Board issued an opinion which, Meta said, meant it was complying with the law.

“Last year, we delayed training our large language models using public content while regulators clarified legal requirements”, the company said. “We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations.”

However, noyb begs to differ.

“As far as we have heard, Meta has ‘engaged’ with the authorities, but this hasn’t led to any ‘green light’,” said noyb chair Max Schrems. “It seems that Meta is simply moving ahead and ignores EU Data Protection Authorities.”

Noyb has now sent the firm a cease-and-desist letter, threatening a European class action as a next step.

The company’s claim that ‘legitimate interest’ allows it to use user data without explicit opt-in consent doesn’t hold water, it said.

“The European Court of Justice has already held that Meta cannot claim a ‘legitimate interest’ in targeting users with advertising. How should it have a ‘legitimate interest’ to suck up all data for AI training?” said Schrems.

“While the ‘legitimate interest’ assessment is always a multi-factor test, all factors seem to point in the wrong direction for Meta. Meta simply says that its interest in making money is more important than the rights of its users.”

Noyb also claims that Meta may be unable to comply with other GDPR rights, such as the right to be forgotten, the right to have incorrect data rectified, or users’ right to access their data held in an AI system. On top of this, it says, because Meta provides its AI models as open-source software for anyone to download and use, it cannot recall or update a model once it has been published.

Noyb compares Meta’s practices over data collection for AI with its collection of user data to serve advertisements. On that front, after a series of GDPR lawsuits, the company finally agreed in 2023 to give up the legitimate-interest argument and require specific opt-in consent instead.

“This fight is essentially about whether to ask people for consent or simply take their data without it. Meta starts a huge fight just to have an opt-out system instead of an opt-in system. Instead, they rely on an alleged ‘legitimate interest’ to just take the data and run with it,” said Schrems.

“This is neither legal nor necessary. Meta’s absurd claims that stealing everyone’s personal data is necessary for AI training is laughable. Other AI providers do not use social network data—and generate even better models than Meta.”

Meta has been approached for comment.