Meta Oversight Board Slams Parent Company Over Viral Ronaldo Deepfake
June 5, 2025
In brief
- Meta’s Oversight Board said the company should have removed a deepfake ad of Brazilian footballer Ronaldo Nazário.
- The post promoted a deceptive online game and misled viewers.
- The decision highlights Meta’s inconsistent enforcement of fraud policies amid growing concern over AI misuse.
Meta’s Oversight Board has ordered the removal of a Facebook post showing an AI-manipulated video of Brazilian football legend Ronaldo Nazário promoting an online game.
The board said the post violated Meta’s Community Standards on fraud and spam, and criticized the company for allowing the misleading video to remain online.
“Taking the post down is consistent with Meta’s Community Standards on fraud and spam. Meta should also have rejected the content for advertisement, as its rules prohibit using the image of a famous person to bait people into engaging with an ad,” the Oversight Board said in a statement Thursday.
The Oversight Board, an independent body that reviews content moderation decisions at Facebook parent Meta, has the authority to uphold or reverse takedown decisions and can issue recommendations that the company must respond to.
It was established in 2020 to provide accountability and transparency for Meta’s enforcement actions.
The case highlights a growing concern over AI-generated images that falsely depict people, portraying them as saying or doing things they never did.
They are increasingly being deployed for scams, fraud, and misinformation.
In this instance, the video featured a poorly synchronized voiceover of Ronaldo Nazário urging users to play a game called Plinko through its app, falsely promising that players could earn more than they would in typical jobs in Brazil.
The post garnered more than 600,000 views before being flagged.
But despite being reported, the content was not prioritized for review and was not removed.
The user who reported it then appealed the decision to Meta, where it was again not prioritized for human review. Finally, the user brought the case to the Board.
This is not the first time Meta has faced criticism over its handling of celebrity deepfakes.
Last month, actress Jamie Lee Curtis confronted CEO Mark Zuckerberg on Instagram after her likeness was used in an AI-generated ad, prompting Meta to disable the ad but leave the original post online.
The Board found that only specialized teams at Meta are able to remove this type of content, which likely results in widespread underenforcement. It urged Meta to apply its anti-fraud policies more consistently across the platform.
The decision comes amid broader legislative momentum to curb the abuse of deepfakes.
In May, President Donald Trump signed the bipartisan Take It Down Act, mandating that platforms remove non-consensual, intimate, AI-generated images within 48 hours.
The law responds to an uptick in deepfake pornography and image-based abuse affecting celebrities and minors.
Trump himself was targeted by a viral deepfake this week, showing him advocating for dinosaurs to guard the U.S. southern border.
Edited by Sebastian Sinclair