Meta Could Fight Harassment If It Wanted To
April 15, 2025
The Maccabeats are the premier Jewish a cappella group in America. Their bubblegum bops have drawn millions of views on YouTube, live gigs around the world, and a devoted following on social media. But on January 27, the group’s lighthearted attempt to dance the hora turned into an anti-Semitic mosh pit.
The Maccabeats had posted to Instagram a 46-second recording of “Hava Nagila,” the Jewish folk song that is a staple of weddings, bar mitzvahs, and the occasional Bruce Springsteen concert. Before long, comments flooded the post, echoing every possible anti-Jewish stereotype. “The sound you hear when you accidentally drop a coin,” read the top comment, with just under 30,000 likes. “Pornography, banking industry, CIA, and US government main theme,” read another, referencing entities that anti-Semites allege are controlled by Jews (12,942 likes). “Last thing a politician hears before being enslaved” (7,300 likes). “My nose is already growing” (3,439 likes). “Palestina Libre!”—a completely reasonable response to an American Jewish a cappella jam (3,158 likes). And of course, an animated GIF of a machine counting money (16,168 likes).
A brigade of bigots had apparently stumbled upon the post and made a sport out of trying to top one another’s insults. Some of the comments and likes came from accounts with verified checkmarks. As of this writing, so many responses contain slurs, conspiracies, and crude taunts that any one person who tried to report them would sacrifice untold time and sanity. And yet, this is the world that Meta has seemingly chosen.
On January 7, Mark Zuckerberg announced that Meta would be reforming its content-moderation regime and dialing back its automated filters to focus on “illegal and high-severity violations,” such as terrorism and child exploitation. “For lower-severity violations, we’re going to rely on someone reporting an issue before we take action,” Zuckerberg said. “The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms.” The Meta CEO acknowledged that this change would involve a “trade-off,” but one that he felt was worthwhile: “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
There’s much to recommend this position in the abstract. A lot of content moderation, although well intentioned, has failed to combat hateful and misleading material while sweeping up legitimate speech in its dragnet—not just on Meta’s platforms but on Twitter-turned-X, TikTok, and YouTube. Moderation gone amok has confused journalists and researchers reporting on bigotry and conspiracy theories with people advocating bigotry and conspiracy theories; has mistaken jokes for incitement; and has blocked political posts that happen to trip a poorly designed keyword search. Given these drawbacks, it’s reasonable for platforms to seek an alternative approach somewhere between that of Elon Musk’s X—where neo-Nazism is not just permitted but monetized—and the heavy-handed moderation regime of old Facebook.
But in practice, Meta’s new policy means that content creators like the Maccabeats have to either report every single instance of harassment they receive, even when the invective arrives en masse, or live with their own corner of social media being turned against them and their fans. “Families watch our videos,” the group’s musical director, Julian Horowitz, told me, “and for them to have to read that stuff, and for the platforms to allow it at all, is just totally unacceptable.”
The Maccabeats are not instinctual critics of Meta or Instagram. In fact, they first attained fame when one of their pop-culture parodies went viral during Hanukkah in 2010. In a real sense, the group owes its success to social media. But now they feel like they are being hounded off it. “This is where we live,” Horowitz said, only to have “other people come into that space and just destroy it.”
Unlike X, Meta has not fired its content-moderation team, although it has loosened restrictions on offensive speech. The company still has Community Standards for its platforms, as both a spokesperson and a leading member of its policy team emphasized to me when I reached out. Yet since shifting gears to put the onus on users to report problematic content, Meta has not given those users tools that would allow them to truly shape their experiences on Facebook and Instagram. But it could—in ways that would uphold a commitment to free speech too.
What would that look like? Consider the Maccabeats again. Meta currently allows Instagram users to individually block and restrict other users, but this toggle is insufficient to the task of confronting thousands of trolls and their posts. Horowitz told me that “hundreds of comments of just the most vile nature” were continuing to pour in, some arriving even as we spoke. “I could probably check over the course of this conversation, and there will have been 10 more GIFs of goblins and swastikas and nose emojis,” Horowitz said. Simply put, blocking every bigot who comments on content is impractical and onerous. But there is a better solution: allowing users to ban with one click not just the author of a bigoted remark but also every one of the thousands of people who liked the offending post.
As noted, the top anti-Semitic comment on the Maccabeats’ “Hava Nagila” clip has some 30,000 likes. It seems unlikely that any of those people have a real interest in Jewish a cappella. Many of them likely left their own slurs in the comments. Nuking the top response and everyone who liked it would not only prevent thousands of bigots from trolling in the future but also clean up many of the other anti-Jewish attacks on the targeted video. And because trolling becomes less fun for the troll when they know they will get only one shot before being one-shotted themselves, such a feature would discourage the entire enterprise. This “megablock” is not science fiction: It existed as a third-party extension for Twitter before Musk bought the site and killed most outside add-ons by charging their developers exorbitant fees. Meta could easily adapt the concept.
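To make the idea concrete: the core of a megablock is trivially simple software. Nothing like it exists in Instagram today, and no real Meta API appears below; the `User`, `Comment`, and `megablock` names, and the in-memory blocklist standing in for a platform’s per-account block list, are all illustrative. A minimal sketch of the one-click logic might look like this:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)  # frozen so User is hashable and can live in sets
class User:
    user_id: str
    handle: str

@dataclass
class Comment:
    comment_id: str
    author: User
    text: str
    likers: set[User] = field(default_factory=set)

def megablock(comment: Comment, blocklist: set[str]) -> int:
    """Block a comment's author and every account that liked it.

    Updates the caller's blocklist in place and returns how many
    accounts were newly blocked.
    """
    offenders = {comment.author} | comment.likers
    new_ids = {u.user_id for u in offenders} - blocklist
    blocklist |= new_ids
    return len(new_ids)

# Usage: one click on one comment removes the whole crowd behind it.
blocklist: set[str] = set()
troll = User("u1", "@troll_prime")
likers = {User(f"u{i}", f"@liker{i}") for i in range(2, 6)}
comment = Comment("c1", troll, "…", likers=likers)
print(megablock(comment, blocklist))  # 5: the author plus four likers
```

The point of the sketch is how little machinery is required: the platform already stores who liked each comment, so the feature is one set union away from the blocking tools that exist now.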
Similarly, the company could provide a straightforward way for users to report an entire page as having been targeted for brigading—that is, a mass influx of malevolent users engaged in directed abuse that has nothing to do with the post. In this way, rather than needing to flag individual comments, users would be able to raise the alarm about a page or post as a whole, then have Meta’s moderators investigate the accounts involved and bar them from the page as needed.
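Again, a brief sketch may help. This is purely illustrative: the 24-hour window, the `Engagement` record, and the idea of batching every raiding account into a single moderator queue are my assumptions, not a system Meta has described.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class Engagement:
    user_id: str
    kind: str           # e.g. "comment" or "like"
    timestamp: datetime

def brigade_review_queue(engagements: list[Engagement],
                         reported_at: datetime,
                         window: timedelta = timedelta(hours=24)) -> set[str]:
    """Collect every account that engaged with a reported post during
    the raid window into one queue for human review, instead of
    requiring a separate report for each individual comment."""
    start = reported_at - window
    return {e.user_id for e in engagements
            if start <= e.timestamp <= reported_at}
```

The design choice worth noting is that the user reports the page once and the platform does the enumeration; the burden of cataloging thousands of attackers shifts from the target to the company.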
All of these ideas are free-speech friendly. They do not involve top-down censorship, but bottom-up user choice. Letting people police the content on their own pages and feeds is the natural next step for platforms that want to empower users rather than constantly surveil and censor them. Such features are also just common sense. No one has an inherent right to graffiti another person’s real-world home; they shouldn’t have a right to vandalize a virtual one.
Most important, these sorts of changes are necessary to resolve a fundamental flaw in the structure of sites such as Instagram and TikTok. Social media is a numbers game, where popularity dictates what the algorithm ultimately amplifies. Because Jews and other minorities constitute such a small subset of the population, they are easily outnumbered on social media by those who want to deride them. Without powerful new user tools to counter these odds, platforms will quickly become a competition that minority populations can’t win; instead, they will be swarmed and overwhelmed by those who want to bully them and drown out their content.
Owners like Zuckerberg don’t have to imagine what that looks like—they can just log on to Musk’s X. That platform may be a lost cause, but the rest of them don’t have to be.