Wide-Ranging Decisions Protect Speech and Address Harms
April 23, 2025
Today, the Oversight Board has published decisions for 11 cases based on user appeals from three continents. They are the first to reflect on the policy and enforcement changes Meta announced on January 7, 2025. As with all our decisions, we are guided by our commitment to freedom of expression online and our belief that Meta has a responsibility to address offline harm resulting from the use of its platforms. We also emphasize greater transparency, consistency and fairness in Meta’s approach to content moderation.
Today’s decisions address freedom of expression in relation to gender identity, apartheid imagery, anti-migrant speech, hatred against people with disabilities, suppression of LGBTQIA+ voices and the 2024 UK riots. These decisions required the Board to deliberate on complex issues, many of which involve Meta’s Hateful Conduct (previously named Hate Speech) policy. In them, we confirm that controversial speech can and should remain on the platform, while also concluding that Meta should remove content where there is a substantial connection to tangible harm.
More than 1,000 people or organizations from five continents provided input into today’s cases through public comments, complementing the perspectives of the Board’s 21 members who come from a variety of regions and backgrounds. Such diversity is reflective of Meta’s user base, with 95% of people using Meta’s services based outside the United States.
Capturing the evolving challenges to freedom of expression online, today’s decisions demonstrate why the protection of speech remains critical. This commitment cuts across political divides. Given the high stakes of the issues discussed, we hope that our decisions and recommendations contribute to a more robust and open exchange that holds Meta accountable to its users.
The Outcomes in Today’s Cases
Appealed to the Board by users of Meta’s platforms, all 11 cases were selected because they relate to significant freedom of expression issues and probe the boundaries of Meta’s policies and enforcement practices. The Board applies a high threshold for restricting speech under an international human rights framework. Majority and minority opinions in several of today’s decisions indicate the level of complexity involved in reconciling a strong commitment to freedom of expression with effective concern for other human rights, and the reasonable differences that thoughtful deliberation and debate can generate on these issues.
- In our Gender Identity Debate Videos cases, a majority of the Board upheld Meta’s decisions to allow two posts discussing transgender people’s access to bathrooms and participation in athletic events in the United States. Despite the intentionally provocative nature of the posts, which misgender identifiable trans people in ways many would find offensive, a majority of the Board found they related to matters of public concern and would not incite likely and imminent violence or discrimination.
- In another pair of cases, in which the Board considered two posts displaying images related to apartheid, including South Africa’s former national flag, a majority upheld Meta’s decisions to leave them on Facebook. While all Board Members acknowledged the flag as a symbol of South Africa’s apartheid era, evoking painful memories, a majority found that removal would not have been the least intrusive means of addressing harms. Indeed, despite finding a violation under a particular rule on “hateful ideologies,” the Board concluded that these posts should nevertheless be kept up under international freedom of expression standards, and recommended greater clarity around this Community Standard.
- In two cases of anti-migrant speech from Poland and Germany, a majority of the Board overturned Meta’s decisions, finding that content including a racist slur and generalizations of migrants as sexual predators should be taken down. These Board Members found that the content contributed to heightened risks of discrimination and violence against migrants, particularly in the already inflamed atmosphere surrounding an election in which migration policies were a major political issue.
For the three sets of cases above, covering six pieces of content, extensive minority opinions are indicative of the Board’s diverse membership and the varying perspectives held on how best to protect human rights online. These differences of opinion on the right answers to the most difficult questions of speech are central to the Board’s design, with explanations given in each of our decision texts to openly set out the reasons for the outcomes.
- Today’s announcement also includes the Board’s unanimous decisions on three posts relating to the UK riots of summer 2024. Because each post advocated violence against immigrants and Muslims during that week of widespread rioting, the Board overturned Meta’s decisions, requiring the posts to be taken down from Facebook: the likelihood that they would incite further and imminent unrest and violence was significant. In these cases, a close examination of how Meta enforced its content policies in a crisis revealed inadequacies in the company’s ability to accurately assess visual forms of incitement based on viral disinformation and misinformation.
- Alongside those standard decisions, the Board has also published two summary decisions, involving cases that shed light on Meta’s enforcement errors. In the first, Meta incorrectly removed a drag artist’s video, wrongly finding that it contained a banned slur, when the term was used in a self-referential context that Meta’s policies expressly allow. The Board noted that incorrect removals like these not only affect freedom of expression but can also impact the livelihoods of the people involved. In the second, the Board identified a blatant example of dehumanizing speech about people with disabilities, which raised concerns about Meta’s enforcement systems failing to detect such posts.
Responding to Meta’s Latest Changes
While Meta’s January 7 policy changes did not affect the outcomes of the cases published today, recommendations in today’s decisions nevertheless respond to some of those changes.
Our decisions note concerns that Meta’s January 7, 2025, policy and enforcement changes were announced hastily, in a departure from regular procedure, with no public information shared as to what, if any, prior human rights due diligence the company performed.
The Board calls on Meta to live up to its public commitment to uphold the UN Guiding Principles on Business and Human Rights, including through engagement with impacted stakeholders. As these changes are rolled out globally, the Board emphasizes that it is now essential for Meta to identify and address adverse impacts on human rights that may result from them. This should include assessing whether reducing its reliance on automated detection of policy violations could have uneven consequences globally, especially in countries experiencing current or recent crises, such as armed conflicts.
The Board has issued 17 recommendations today, relating to Meta’s January 7 changes, the company’s policies, and its enforcement systems. Highlights include calling on Meta to:
- Assess the human rights impact of the January 7 Hateful Conduct policy updates, in particular potential adverse effects on Global Majority countries, LGBTQIA+ people (including minors) and immigrants; update the Board on its progress every six months; and report publicly on this soon.
- Improve how it enforces its Bullying and Harassment policy, especially the rules that require users to self-report content.
- Clarify the references to hateful ideologies not permitted under the Dangerous Organizations and Individuals policy.
- Continually assess the effectiveness of Community Notes compared to third-party fact-checking, particularly in situations where the rapid spread of false information creates risks to public safety.
- Improve the detection of incitement to violence in visual imagery by providing better guidance to reviewers.
Holding Meta to Account
For close to five years, our decisions have shaped Meta’s content policies and enforcement practices, resulting in increased protection for political speech, news reporting, awareness raising, and cultural and artistic expression. Public discourse on political and social issues – from immigration to abortion to criticism of states and leaders – remains on Meta’s platforms because of the Board. Our cases have pushed Meta to be more transparent in how it handles government pressure to remove content. Recognizing the heightened risks to democratic processes of platforms censoring speech, the Board has prioritized cases enabling individuals’ rights to participate in elections and protests, especially in countries where civic space is shrinking.
Our recommendations meanwhile have resulted in Meta adding tools that help people understand the rules and avoid having their content taken down unfairly. They have also led to the company implementing alternatives to content removal, including warning screens and “AI info” labels to indicate when content has been AI-generated. These solutions empower users and impose fewer burdens on expression.
Moving forward, the Board will continue to select cases tackling some of the most difficult and significant problems in global content moderation. The Board is also in discussions with Meta, ready to accept a policy advisory opinion referral, to help shape the company’s approach to fact-checking in regions outside the United States. As the only global body providing independent oversight of Meta’s content moderation, the Board is uniquely positioned to take on this role, interpreting how platform changes can impact users’ freedom of expression and other human rights across different regions and communities around the world.
Gender Identity Debate Videos
(2024-046-FB-UA, 2024-047-IG-UA)
In two posts that include videos in which a transgender woman is confronted for using a women’s bathroom and a transgender athlete wins a track race, the majority of the Board has upheld Meta’s decisions to leave up the content. The Board notes that public debate on policies around transgender people’s rights and inclusion is permitted, with even offensive viewpoints protected under international human rights law on freedom of expression. In these cases, the majority of the Board found there was not a sufficient link between restricting these posts and preventing harm to transgender people, with neither creating a likely or imminent risk of incitement to violence. Nor did the posts amount to bullying or harassment. Transgender women and girls’ access to women’s bathrooms and participation in sports are the subjects of ongoing public debate involving various human rights concerns. It is appropriate that a high threshold be met before suppressing such speech. Beyond the content in these cases, the Board has made recommendations to address how Meta’s January 7, 2025, revisions to the renamed Hateful Conduct policy may adversely impact LGBTQIA+ people, including minors.
Click here for the full decision.
To read public comments for these cases, click here.
Posts Displaying South Africa’s Apartheid-Era Flag
(2025-001-FB-UA, 2025-002-FB-UA)
Following a review of two Facebook posts containing images of South Africa’s 1928-1994 flag, the majority of the Board has upheld Meta’s decisions to keep them up. Board Members acknowledge the long-term consequences and legacy of apartheid in South Africa. However, these two posts do not clearly advocate for exclusion or segregation, nor can they be understood as a call for people to engage in violence or discrimination. The deliberation in these cases also resulted in recommendations to resolve conflicting language in the Dangerous Organizations and Individuals policy.
Click here for the full decision.
To read public comments for these cases, click here.
Criticism of EU Migration Policies and Immigrants
(2025-003-FB-UA, 2025-004-FB-UA)
The majority of the Board has found that two pieces of immigration-related content, posted on Facebook ahead of the June 2024 European Parliament elections, violate the Hateful Conduct policy and that Meta should take them down. The Board recognizes that the right to free expression is paramount when assessing political discussion and commentary. However, these two posts contributed to heightened risks of violence and discrimination in the run-up to an election in which immigration was a major political issue and anti-migrant sentiment was on the rise. For the majority, removing them is necessary and proportionate. One post, by a Polish political party, intentionally uses racist terminology to harness anti-migrant sentiment. The other generalizes immigrants as gang rapists, a claim that, when repeated, whips up fear and hatred.
Click here for the full decision.
To read public comments for these cases, click here.
Posts Supporting UK Riots
(2025-009-FB-UA, 2025-010-FB-UA, 2025-011-FB-UA)
In reviewing three different posts shared during the UK riots of summer 2024, the Board has overturned Meta’s original decisions to leave them up on Facebook. Each created a risk of likely and imminent harm, and each should have been taken down. The content was posted during a period of contagious anger and growing violence, fueled by misinformation and disinformation on social media, as anti-Muslim and anti-immigrant sentiment spilled onto the streets. Meta activated its Crisis Policy Protocol (CPP) in response to the riots and subsequently identified the UK as a High-Risk Location on August 6, but by that time all three pieces of content had already been posted. The Board is concerned that Meta was too slow to deploy crisis measures, noting this should have happened promptly to interrupt the amplification of harmful content.
Click here for the full decision.
To read public comments for these cases, click here.
Reclaimed Term in Drag Performance
(2025-013-IG-UA)
Click here for this Summary Decision.
Comment Targeting People with Down Syndrome
(2025-014-FB-UA)
Click here for this Summary Decision.