Meta’s 2025 Election Plan: Safeguards Boosted, But Is It Enough?
March 18, 2025
As the 2025 Australian federal election approaches, Meta has outlined a series of measures aimed at safeguarding election integrity on its platforms, including Facebook, Instagram, and Threads.
The company’s approach focuses on mitigating misinformation, regulating AI-generated content, promoting voter engagement, and increasing transparency in political advertising.
Combating Misinformation and Promoting Media Literacy
Meta is continuing its partnerships with Agence France-Presse (AFP) and the Australian Associated Press (AAP) to fact-check content and reduce the spread of misinformation. Through its third-party fact-checking program, when content is flagged as false, Meta applies warning labels and reduces its visibility in feeds to limit its reach.
For more serious misinformation—such as content that could incite violence, interfere with voting, or pose physical harm—Meta enforces its Community Standards, which allow for complete removal of harmful content.
In addition to these enforcement measures, the company is launching a media literacy campaign in partnership with AAP to educate Australians on how to critically evaluate the information they see online. This initiative is designed to help voters identify misleading narratives and improve their digital literacy in the lead-up to the election.
Working with the Australian Electoral Commission on Voter Engagement
To encourage voter participation, Meta is collaborating with the Australian Electoral Commission (AEC) to provide verified election information across its platforms. The company will introduce voter empowerment products, which include notifications on Facebook and Instagram directing users to official details about polling locations, registration deadlines, and election dates.
Starting a week before the election, Meta will roll out reminders with accurate voting information. On Election Day, Facebook and Instagram will send out notifications reminding users to vote. Additionally, Instagram will introduce voting stickers that users can add to their Stories to encourage civic engagement.
Countering AI-Generated Disinformation
As AI-generated content becomes more prevalent, Meta has introduced new policies to counter the risks posed by deepfakes and manipulated media. AI-generated content will be subject to the same Community Standards and Ad Standards as traditional content, meaning it will be fact-checked, labeled, and down-ranked in feeds if flagged as misleading.
“Our Community Standards and Ad Standards apply to all content, including AI-generated content, and we take action against this type of content when it violates these policies. AI-generated content is also eligible to be reviewed and rated by our independent fact-checking partners,” Meta said in its official election correspondence.
If Meta detects AI-generated content that could mislead voters, it will apply an AI label. If users fail to disclose AI usage, Meta will still label the content and provide additional warnings if the content is highly deceptive.
The company has also collaborated with Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock to implement AI metadata standards. Additionally, Meta is part of the Partnership on AI and has signed onto the Tech Accord, which aims to prevent AI-generated election disinformation across digital platforms.
Political Advertising Transparency and AI Disclosure Requirements
Since 2018, Meta has required advertisers running political or social issue ads to complete an authorisation process and disclose the funding source with a “Paid for by” disclaimer. These ads are stored in Meta’s public Ad Library for seven years to ensure accountability.
For the 2025 election, Meta has introduced new AI disclosure requirements for political ads. Advertisers must disclose if AI or other digital techniques were used to create or alter an ad in the following circumstances:
- A real person appears to say or do something they did not.
- A realistic-looking person who does not exist is depicted.
- A realistic-looking event that did not happen is shown.
- A real event is altered to change its meaning.
If an advertiser fails to disclose AI use in any of these cases, Meta will label and down-rank the ad to reduce its visibility.
Preventing Election Interference and Foreign Influence
Meta claims to have built specialised global teams to counter coordinated inauthentic behaviour and foreign interference, having dismantled over 200 adversarial networks since 2017. To increase transparency, the company labels state-controlled media, ensuring users are aware when content comes from government-backed sources.
To further prevent election interference, Meta continues to enforce its policies against voter suppression, hate speech, harassment, and misinformation. Content that violates these policies—whether created by humans or AI—will be removed. The company also publishes Quarterly Threat Reports to document its efforts in combating influence operations.
“This is a highly adversarial space where deceptive campaigns we take down continue to try to come back and evade detection by us and other platforms, which is why we continuously take action as we find further violating activity,” the company said.
Is It Enough?
Meta’s 2025 election strategy mirrors its efforts in past elections in India, the UK, and the US. The company has put in place fact-checking partnerships, AI moderation policies, voter engagement tools, and political ad transparency measures to safeguard its platforms from misinformation and manipulation.
While Meta has outlined a proactive approach to safeguarding the 2025 Australian federal election, concerns persist over the effectiveness and enforcement of its measures. The company’s reliance on fact-checking partnerships and AI labelling assumes bad actors will comply with disclosure rules, yet misinformation often spreads unchecked in private groups and less-regulated digital spaces.
Additionally, while voter empowerment tools provide election information, they do not prevent the amplification of misleading content before corrections can be made.
Meta’s recent decision to discontinue third-party fact-checking in the U.S. raises further doubts about its commitment to combating misinformation. The shift toward a community-moderated model, similar to X’s Community Notes, has drawn criticism from Australian politicians and media experts who warn it could lead to a “trolling free-for-all.” While Meta has assured the Australian government that fact-checking will continue in Australia, the precedent set in the U.S. raises concerns that its approach to election integrity could change.
The AEC has already expressed concerns about AI-driven disinformation, warning that without stronger detection and enforcement, AI-generated content could undermine trust in the electoral process. The commission has reportedly sought assurances from Meta and other tech companies, though no binding commitments have been made.
“The AEC continues to observe significant instances of AI being used to spread election related mis and disinformation globally via online platforms,” AEC electoral commissioner Jeff Pope wrote in a letter viewed by Capital Brief.
“The AEC is concerned that, without further improvements in detection and action, this has the potential to enable widespread misinformation and disinformation about the electoral process in Australia.”
As Australia approaches the 2025 election, Meta’s evolving content moderation policies deserve close scrutiny. While the company insists its measures will protect election integrity, its track record of scaling back enforcement efforts internationally raises fears that these policies may not be as stable as they appear.
Ultimately, the ongoing tension between free expression and preventing harmful disinformation will continue to be tested as the campaign unfolds.