Meta’s oversight board says the company needs to do more about deepfakes

March 11, 2026

“Meta needs to create a new, separate set of rules to ensure users can reliably recognize AI-generated content. Additionally, it should amend its current policies to ensure a timely and adequate response to deceptive AI-generated output.”

Those are the words of Meta’s own oversight board: the entity created by the social-media giant to scrutinise its policies and field complaints from people who’ve exhausted its appeals processes.

Its latest pronouncement is that Meta’s “approach to surfacing AI-generated content must evolve” – although in this case the spur is not copyright or celebrity concerns, but “deceptive AI” content during times of war.

The board cited “deepfake output designed to deceive, manipulate or increase engagement” that spread on Meta’s platforms during recent conflicts and crises in Venezuela and Iran.

One fake video from last year’s Israel-Iran conflict received more than 700,000 views, but despite being reported to Meta by users it “was neither reviewed by the company nor checked by third-party fact-checkers” – nor was it given an AI label.

Among the board’s recommendations is that Meta “invest in stronger detection tools for AI-generated multi-format (audio, audio-visual and image) content” – something that would have clear relevance for the music industry and its biggest stars, but only if Meta runs with the recommendation.

  
