The Science Misinformation Gap

May 10, 2026

Every day, a Facebook user in Munich, like users across the European Union, is likely to see at least one post with a label attached that warns, “False: rated by independent fact-checkers.” Most will scroll by. But a Facebook user in Minneapolis looking at the same post will not see a warning label. Same platform, same algorithm, same claim, different internet. This discrepancy reflects a policy divergence that had been quietly widening for years and burst into view amid the bitter fights over COVID misinformation and Donald Trump’s return to the White House in November 2024.

The moderation rollback began on Twitter after Elon Musk’s takeover in 2022, then spread to Meta in January 2025 when Mark Zuckerberg announced that the company would end its third-party fact-checking program, starting in the United States. The accuracy of posts on Facebook, Instagram, and Threads would instead be reviewed by a crowd-sourced system in which users scrutinize posts and write contextual notes, a more formal version of the Community Notes model that Musk had promoted on Twitter. The new system, Zuckerberg explained, was adopted after years of pressure from conservative critics who claimed Meta’s fact-checking bordered on censorship and targeted conservative speech. The company would now refocus on “illegal and high-severity violations” and steer away from controversial social issues:

After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth. But the fact-checkers have just been too politically biased and have destroyed more trust than they have created, especially in the US. So over the next couple of months, we’re going to phase in a more comprehensive Community Notes system. Second, we’re going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse. What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it’s gone too far.

Meta and Twitter are not the only platforms navigating two different content standards at once. TikTok and YouTube, aware of their political vulnerability in Washington, have been quieter about policy changes, but in Europe they operate under the same EU obligations that constrain their peers.

Europe Maintains “Expert” Oversight 

The important difference between Europe and the US is structural: the American model leaves a platform’s error-correction systems vulnerable to corporate preference, political backlash, and culture-war pressure. The more restrictive EU model treats large platforms as risk-generating infrastructure and forces them to document how they manage the harms their systems can amplify: misinformation, the spread of illegal content, election manipulation, public-health risks, and other forms of systemic harm.