The danger of Meta’s big fact-checking changes

January 8, 2025

With less than two weeks before the new Trump administration takes office, Meta chief Mark Zuckerberg announced a sweeping set of policy changes that will do away with fact-checkers on the company’s platforms and reduce restrictions on the posts its users can share.

Zuckerberg said the changes are meant to address political “bias” and curtail “censorship” — echoing arguments that President-elect Donald Trump and his supporters have long made about the platform.

In lieu of fact-checkers, Facebook will employ a “Community Notes” model like the one used on X.

As it operates on X, Community Notes allows users to add context and corrections to other people’s posts, though studies show it can be slower and cover different subjects than professional fact-checking.

Zuckerberg also said that the site would relax its policies for moderating posts and allow more content on issues including “immigration and gender” instead of taking them down. (According to a Wired review, some of these changes appear to have already gone into effect.)

For users interested in seeing more political content, Zuckerberg noted that Meta plans to reintroduce more of these posts into people’s feeds as well.

In a five-minute video announcing the changes, Zuckerberg said the fact-checkers Facebook has worked with were “too politically biased” and had harmed user trust, and jabbed at the Biden administration for the “censorship” it has allegedly employed against Meta. (Zuckerberg didn’t specify what he meant by that claim, though tech companies have previously fielded requests from the Biden administration about removing posts related to Covid-19 misinformation and election fraud.)

Broadly, Meta’s announcement signals a willingness among tech companies to cater to Trump as they seek to preserve their business prospects and avoid political retaliation from a frequent, strident critic. Its shifts in content moderation also have serious implications for the types of posts and misinformation that can spread on its platforms, which include Facebook, Instagram, and Threads.

“I suspect we will see a rise in false and misleading information around a number of topics, as there will be an incentive for those who want to spread that kind of content,” Claire Wardle, an associate professor in communication at Cornell University, told Vox.

What Meta has done

Meta’s recent changes coincide with other moves Zuckerberg has made in an apparent attempt to get into Trump’s good graces, including a personal visit to Mar-a-Lago and the appointment of Dana White, the CEO of Ultimate Fighting Championship and a Trump ally, to Meta’s board of directors.

Below is a more detailed rundown of the changes Zuckerberg just announced, as well as other recent steps he’s taken.

Changes to content moderation

  • Replacing fact-checkers with Community Notes: Meta had worked with 90 independent organizations to fact-check posts that spread on its platforms. Those fact-checkers would append warning labels to false content, and Meta would also reduce the distribution of those posts. Zuckerberg has accused the fact-checkers of being politically biased, while providing no examples, and said they’ll be replaced with a Community Notes system that will be phased in over the coming months.
  • Reducing content restrictions on topics like “immigration and gender”: Zuckerberg said that the platforms will focus on removing posts that contain “illegal and high-severity violations” and allow more posts to stay up that might previously have been flagged. Effectively, the company is cutting back on content moderation in general and taking down fewer posts on hot-button political issues.
  • Bringing back politics content: Meta had previously downgraded politics content and reduced the distribution of it on its platforms, citing user requests for less of this content in their feeds. Zuckerberg announced that Meta would be phasing political content back into users’ feeds due to changing demands.
  • Moving content and moderation teams: In another bid to address alleged political bias, Zuckerberg said that Meta’s content moderation team will move from California to Texas.
  • Working with Trump to combat censorship by other countries: Zuckerberg committed to collaborating with Trump to fight censorship and regulations in other countries, pointing to the blocking of Meta apps in China and European tech policies he claimed were stifling innovation.

  • Donating to Trump’s inaugural fund and visiting Mar-a-Lago: Meta is among the tech firms donating $1 million to Trump’s inaugural fund. Amazon has as well, and Apple CEO Tim Cook and OpenAI CEO Sam Altman have made comparable personal donations. Zuckerberg, Amazon’s Jeff Bezos, and Google’s Sergey Brin are also among the tech chiefs who’ve paid a personal visit to Trump at Mar-a-Lago.

Silicon Valley is bending the knee — with troubling consequences

As Vox’s Nicole Narea previously reported, Zuckerberg is far from the only tech CEO to try to build a friendlier relationship with Trump as his second term approaches. A number of others, including Bezos — who killed a Washington Post editorial endorsement of Democratic presidential nominee Kamala Harris — have done the same.

Many of these efforts are driven by the goal of maintaining a friendlier regulatory climate, Narea reports, whether that’s less scrutiny of antitrust or more consideration for government contracts.

Efforts by businesses to cultivate ties across administrations are commonplace. But Zuckerberg’s and Bezos’s moves have raised additional concerns, given the impact they have on what millions of people read and, in Zuckerberg’s case, post.

Zuckerberg’s moves could shape the type of content that proliferates on Facebook, Instagram, and Threads, enabling misinformation to thrive unchecked. Not only is Facebook removing fact-checkers; by dialing back moderation on topics like immigration and gender identity, which have already been the subject of rampant right-wing conspiracy theories, it could exacerbate an existing mis- and disinformation problem.

X, formerly known as Twitter, has also rolled back its content moderation since Trump ally and Tesla CEO Elon Musk took over the site in late 2022. Since then, Musk has elevated Community Notes as a way to crowd-source fact-checks.

Community Notes has been a mixed bag since it was implemented, says Erik Nisbet, a professor of policy analysis and communications at Northwestern University. Researchers have found that users are likely to trust context offered via Community Notes more than a basic flag from a fact-checker, for example. But Community Notes are often slower than a professional fact-checker, meaning a false post could go viral before it gets checked. Additionally, Community Notes relies on the expertise and interest of the site’s users, whereas professional fact-checkers can offer expertise quickly on a wider range of key topics.

The changes in content moderation at X since Musk’s takeover could foreshadow similar results at Meta. A USC study of English-language posts on X from January 2022 to June 2023 found that hate speech had increased 50 percent on the site in that time, with use of transphobic slurs increasing 260 percent. Musk fired a number of content moderators when he took over in 2022 and began revamping the platform’s approach, including lifting suspensions for previously banned accounts.

“Mark Zuckerberg argues that his role model for this change is Elon Musk and what he did on Twitter. So we can look at Twitter for answers, right? And if we do, we see chaos,” says Yotam Ophir, a University at Buffalo communications professor who studies misinformation.

The potential spread of more misinformation and hateful content on Meta’s platforms is concerning, Nisbet told Vox, and could have significant effects on the quality of US democracy. Access to accurate information and the ability to hold political leaders accountable are crucial differentiators for democratic states, he said, and as falsehoods are allowed to proliferate, people lose both access to trustworthy information and the ability to confront their political leaders.

Multiple recent events have illustrated the acute impact such misinformation can have. In September, Trump amplified a lie about Haitian immigrants eating pets in Springfield, Ohio, which had begun on Facebook. That lie went on to fuel property damage and threats against Haitian people in the city. Trump’s lies about FEMA aid workers in North Carolina following Hurricane Helene’s devastation also spread on social media, spurring distrust of the agency and even threats of violence toward government workers.

Without strong guardrails at Facebook, Instagram, and Threads, such misinformation could spread further and have even more dangerous consequences.
