Meta’s plans to deal with misinformation during Australia’s federal election

March 18, 2025

Photo by Dima Solomin on Unsplash.

Global social media player Meta says it will combat misinformation, and identify AI-generated content, during Australia’s coming federal election.

Unlike in the US, Meta in Australia still contracts fact checkers, Agence France-Presse (AFP) and Australian Associated Press (AAP), to independently review content on Facebook.

Meta is also working with the Australian Electoral Commission (AEC) on a number of fronts. 

“This includes activating our voter empowerment products, which will remind Australians to vote at the election and connect them with verified information from the AEC across Facebook and Instagram about where and when they can vote,” said Cheryl Seeto, head of policy at Meta in Australia. 

“The voting information prompts will start to roll-out a week before the election, and we will share an election day reminder on Facebook and Instagram on the day itself. 

“For those that want to share their civic experience, Instagram voting stickers will be available for people to post to their Stories.”

Those who share AI-generated images, video or audio can add a label to the content. 

However, if a user chooses not to disclose their use of AI and Meta detects signals of AI, an AI info label will be applied.

Meta declined to disclose the specific penalties for repeated breaches, though it is understood that consequences will apply for multiple infractions.

“In certain cases, where we determine that a digitally created or altered image, video or audio creates a high risk of deceiving the public on a matter of importance, we may add a more prominent label, so people have more information and context,” said Seeto.

“Since AI-generated content appears across the internet, we’ve also been working with other companies in our industry on common standards and guidelines. 

“We’re a member of the Partnership on AI and we signed on to the tech accord designed to combat the spread of deceptive AI content in the 2024 elections. This work is bigger than any one company and requires coordinated effort across industry, government and civil society.”

Content related to elections, politics and social issues accounted for less than 1% of all fact-checked misinformation last year. 

However, Meta is ramping up its efforts to remove any scam ads featuring politicians’ faces and deepfakes before they reach the masses. 

“To counter covert influence operations, we’ve built specialised global teams to stop coordinated inauthentic behaviour and have investigated and taken down over 200 of these adversarial networks since 2017,” said Seeto. 

“This is a highly adversarial space where deceptive campaigns we take down continue to try to come back and evade detection by us and other platforms, which is why we continuously take action as we find further violating activity.”

Clive Palmer, who has pledged to spend $100 million on a series of ads across YouTube and major mastheads, has sparked questions about whether his campaign messaging violates Meta’s hate policies. 

However, it is understood Meta does not fact-check political speech or politicians.

