Meta’s Australian fact-checking program to curb fake content ahead of election
March 18, 2025
Meta Australia's head of public policy has outlined in an 18 March blog post how the company intends to protect the integrity of Australia's 2025 election.
Despite Meta rolling back its fact-checking program in the United States in January, head of public policy Cheryl Seeto confirmed that Meta Australia will continue to work with Agence France-Presse (AFP) and the Australian Associated Press (AAP) as independent reviewers.
Content flagged by the AAP and AFP will carry warning labels, and Meta will reduce its distribution.
“We are also partnering with AAP on a new media literacy campaign to help Australians critically assess the content they view online, which will run in the lead-up to the election,” Seeto said.
Meta will also continue its work with the Australian Electoral Commission to empower voters and share verified AEC messaging across Instagram and Facebook to educate users on the voting process.
Seeto also said Meta would be applying its Community Standards and Ad Standards to any AI-generated content, which will also be subject to fact-checking by the AAP and AFP.
“One of the rating options is ‘Altered’, which includes ‘faked, manipulated or transformed audio, video, or photos.’ When it is rated as such, we label it and down-rank it in [the] feed, so fewer people see it,” Seeto said.
Even where it doesn't violate any of Meta's standards, photorealistic AI-generated content in election advertising will be labelled as AI-generated. Users creating their own AI-generated content can voluntarily disclose it as such, but if they don't, Meta will add a label itself when it detects that content.
Meta will also draw on its global election teams to combat "covert influence operations" and other "inauthentic behaviour", which could lead to outright takedowns of interference campaigns and ongoing monitoring for future attempts at evasion.
“For more overt efforts, we label state-controlled media on Facebook, Instagram and Threads so that users know when content is from a publication that may be wholly or partially under the editorial control of a government,” Seeto said.