Why is Meta replacing fact-checks with Community Notes in the US?
January 7, 2025
In a major shake-up of its content moderation strategy, Meta has announced that it will be pulling the plug on its third-party fact-checking programme in the US. Instead, the social media giant said it will be embracing the Community Notes system used on Elon Musk-owned platform X.
Meta admitted that its content moderation efforts had “gone too far” to the point where “we are making too many mistakes”.
“Too much harmless content gets censored, too many people find themselves wrongly locked up in ‘Facebook jail,’ and we are often too slow to respond when they do. We want to fix that and return to that fundamental commitment to free expression,” Joel Kaplan, the newly appointed head of Meta’s global policy team, said in a blog post published on Tuesday, January 7.
Meta’s change in approach comes amid several indications that the incoming administration of US President-elect Donald Trump is set to take on big tech companies for allegedly censoring the online speech of conservatives in the country.
How will Community Notes work on Meta’s platforms?
On X, select users add helpful notes with facts and context below specific posts. The feature is primarily intended to curb the spread of misinformation.
Anyone on X who meets certain criteria can become a contributor. Initially, contributors are only allowed to rate existing Community Notes. Over time, they are allowed to write and attach their own notes, which will in turn be rated by other contributors.
Meta has openly acknowledged that it is adopting X’s crowd-sourced model for fact-checking content because it works. “They empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see,” it said.
Community Notes on Meta will be written and rated by contributing users, similar to X. “It will require agreement between people with a range of perspectives to help prevent biased ratings,” the company said.
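How is agreement “between people with a range of perspectives” determined? X has described a bridging-based approach: a note is surfaced only when contributors who usually disagree with one another both rate it helpful, with rater viewpoints inferred through matrix factorisation over past ratings. Meta has not published its own algorithm, so the sketch below is only a simplified illustration of the idea, assuming each rater already has a viewpoint score and requiring a majority of raters on both sides of that axis:

```python
from dataclasses import dataclass

@dataclass
class Rating:
    viewpoint: float  # rater's inferred viewpoint on a -1 to +1 axis (assumed given here)
    helpful: bool     # whether this rater found the note helpful

def note_is_shown(ratings: list[Rating], min_ratings: int = 5) -> bool:
    """Show a note only if raters on *both* sides of the axis find it helpful."""
    if len(ratings) < min_ratings:
        return False  # not enough signal yet
    left = [r.helpful for r in ratings if r.viewpoint < 0]
    right = [r.helpful for r in ratings if r.viewpoint >= 0]
    if not left or not right:
        return False  # ratings come from only one perspective
    # A simple overall majority could be dominated by one side; requiring a
    # majority within each side is what makes the rating "bridging".
    return sum(left) / len(left) > 0.5 and sum(right) / len(right) > 0.5

# A note rated helpful across the spectrum gets shown:
print(note_is_shown([Rating(-0.8, True), Rating(-0.3, True),
                     Rating(0.2, True), Rating(0.7, True), Rating(0.9, False)]))  # True
```

The production system is considerably more involved, not least because it learns the viewpoint axis from rating history rather than assuming it, but the design goal is the same: a note cannot be surfaced by one side alone.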
Users on Facebook, Instagram, and Threads can sign up to be contributors starting today. Notes on Meta’s platforms will appear with a “much less obtrusive label indicating that there is additional information for those who want to see it.”
Meta plans to roll out Community Notes gradually in the US over the next few months. It did not say whether the changes will be extended to other countries as well.
What was Meta’s previous approach and why is it moving away from it?
In 2016, Meta launched its independent fact-checking programme, under which select fact-checkers and independent experts could give people more information about specific content, particularly viral hoaxes.
“That’s not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how,” the company said.
Meta further revealed that in December last year, 10 to 20 per cent of the actions it took against content may have been errors, meaning the actioned content may not have actually violated the platform’s policies.
This is not the first time that Meta has admitted to mistakenly removing content across its apps.
The company issued a public apology after its automated content moderation systems downranked photos of the assassination attempt on President-elect Trump. Meta’s Oversight Board had also warned against the “excessive removal of political speech” in the run-up to the US presidential election in November last year.
Which policies has Meta decided to discontinue?
Meta has said that it will be getting rid of fact-checking controls on posts and will no longer demote fact-checked content.
Earlier, users on Meta’s platforms saw a full-screen warning when they came across a post that had been flagged as misleading. The company is dumping these warning screens as well. However, it is unclear if Meta will continue showing warning screens for potentially sensitive content like violent and graphic imagery or visuals with some forms of nudity.
Meta is also dumping its previous restrictions on topics like immigration and gender identity. These policy changes may take a few weeks to be fully implemented, it said.
Furthermore, the company announced that it will be tuning the AI content moderation tools it uses to scan for and flag content that violates its policies.
“We’re going to tune our systems to require a much higher degree of confidence before a piece of content is taken down,” Meta said. Content that violates its “less severe” policies will only face action if it is reported by a user, rather than being automatically detected.
Notably, Meta revealed that it is using large language models (LLMs) to provide a second opinion before taking action against content.
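Taken together, the statements above describe a moderation pipeline with three gates: a raised classifier-confidence threshold, automatic action reserved for severe policy areas, and an LLM second opinion. Meta has not published implementation details, so the sketch below is a hypothetical illustration; the policy names, the threshold value, and the function itself are assumptions, not Meta’s actual system:

```python
# Hypothetical decision logic; every name and number here is an assumption.
SEVERE_POLICIES = {"terrorism", "child_exploitation", "drugs", "fraud", "scams"}
CONFIDENCE_THRESHOLD = 0.95  # the "much higher degree of confidence" (illustrative value)

def should_take_down(policy: str,
                     classifier_confidence: float,
                     user_reported: bool,
                     llm_second_opinion_agrees: bool) -> bool:
    """Decide whether flagged content is actioned under the rules Meta described."""
    if policy not in SEVERE_POLICIES:
        # Less severe violations are no longer proactively actioned:
        # they need a user report before anything happens.
        if not user_reported:
            return False
    elif classifier_confidence < CONFIDENCE_THRESHOLD:
        # Proactive detection now requires much higher classifier confidence.
        return False
    # In all cases, an LLM provides a second opinion before action is taken.
    return llm_second_opinion_agrees

# Example: a low-severity violation is ignored until a user reports it.
print(should_take_down("spam", 0.99, user_reported=False, llm_second_opinion_agrees=True))  # False
print(should_take_down("spam", 0.99, user_reported=True, llm_second_opinion_agrees=True))   # True
```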
“We are also going to recommend more political content based on personalised signals and are expanding the options people have to control how much of this content they see,” the company said.
“As part of these changes, we will be moving the trust and safety teams that write our content policies and review content out of California to Texas and other US locations,” it added.