Meta’s content-moderation changes will make Facebook, Instagram more hateful

January 15, 2025

Fact-checking sends a signal that there is a line between true and false. Meta has shut down that signal.

Meta will get back to its roots by eliminating outside fact-checking on Facebook, Instagram, and Threads, supposedly in the name of promoting free speech. Ezra Acayan/Getty

Meta’s announcement last week of sweeping changes to how it filters content rightly drew extensive coverage focusing on how the world’s largest social media company is aligning itself politically with the incoming Trump administration.

Speaking in a celebratory vein, Meta CEO Mark Zuckerberg linked what’s going on at Meta to President-elect Donald Trump’s return to power. “The recent elections,” he said in a video posted on the company’s website, “feel like a cultural tipping point toward once again prioritizing speech. So we’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms.”

One way Meta will get back to its roots is by eliminating outside fact-checking on Facebook, Instagram, and Threads, supposedly in the name of promoting free speech. It will cut off the network of more than 90 outside groups — ranging from such established news organizations as the Associated Press to specialized fact-checkers including the Pulitzer Prize-winning PolitiFact — that employ journalists to report on the veracity of some posts on topics like public health and election integrity.

At best, Meta’s fact-checking operation could assess only a tiny percentage of potential online falsehoods; the gargantuan volume of social media traffic made comprehensive fact-checking impossible. But the groups doing this work, which were certified by the International Fact-Checking Network run by journalism’s Poynter Institute, made a good-faith attempt to identify heavily circulated content that befogged matters of public import. It was then up to Meta whether to “down-rank” such content, making it less prominent in users’ feeds. In rare cases, the company removed extremely dangerous false content, such as claims that drinking bleach can prevent COVID-19 or that vaccines contain microchips.

More generally, fact-checking sends a signal that there is a line between true and false, and major platforms bear some responsibility for not spreading misinformation that can undermine public health and democratic institutions. Meta has shut down that signal.

But this is only part of Meta’s new approach. Zuckerberg also outlined how the company will overhaul its broader content-moderation system to allow a wide array of speech it currently seeks to exclude. That includes hateful speech. As Zuckerberg acknowledged, “It means we’re going to catch less bad stuff.”

That’s because fact-checking is only a minuscule portion of a sprawling content-moderation system incorporating both artificial intelligence filters and thousands of non-journalist reviewers, most of whom are employed by vendors in the Philippines, India, and elsewhere. This system is designed to enforce policies Meta established to exclude content it believes would offend many users and advertisers. For more than a decade, these policies have prohibited spam, pornography, violent incitement, and hatred based on race, gender, sexual orientation, religion, and other criteria.

Conservatives, led by Trump, have attacked Meta and other social media companies, claiming they use content moderation to systematically censor right-leaning views. The companies have denied this in the past — and there’s no evidence to support the claim — but last week, Zuckerberg did an about-face and said he now agrees with the criticism. And he’s going to do something about it — namely, dilute restrictions on speech, reset the AI filters so they let through more bigotry, and rely more on ordinary users to report egregious material before taking any action.

Let’s get specific. Meta has removed its ban on referring to women “as household objects or property or objects in general.” The company has added a passage to its “hateful conduct” policy stating, “We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality.” (“Transgenderism” is a word used pretty much only by foes of transgender rights.)

Meta added another provision that allows “content arguing for gender-based limitations of military, law enforcement, and teaching jobs. We also allow the same content based on sexual orientation, when the content is based on religious beliefs.”

The new standard, according to Zuckerberg’s recently promoted top lieutenant, Joel Kaplan, a former Republican operative with ties to the Trump administration, is that if you might hear something on Newsmax or Fox News, it ought to be available on Meta’s platforms. “We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate,” he wrote in a corporate blog post. “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.”

Meta provides nonpublic written guidance to its employees and outside contractors on how to implement these policies. Casey Newton, who writes a Silicon Valley newsletter called Platformer, obtained some of the new internal guidelines, including these examples of posts that are now allowed:

“Trans people aren’t real. They’re mentally ill.”

“Gays are not normal.”

“Women are crazy.”

“A trans person isn’t a he or she, it’s an it.”

If those statements strike you as the kind of free speech that needs to be restored in light of the “cultural tipping point” that is the Trump resurgence, you’re in luck — at least on Facebook, Instagram, and Threads. But the contrary view — the humane one, in my opinion — is that bigoted harassment actually stifles expression as it intimidates people into silence and incites real-world assaults and worse. The changes at Meta herald not freedom but intolerance and violence.

Paul M. Barrett is deputy director of the Center for Business and Human Rights at New York University’s Stern School of Business.
