Meta ‘hastily’ changed moderation policy with little regard to impact, says oversight board

April 23, 2025

Mark Zuckerberg’s Meta announced sweeping content moderation changes “hastily” and with no indication it had considered the human rights impact, according to the social media company’s oversight board.

The assessment of the changes came as the Facebook and Instagram owner was criticised for leaving up three posts containing anti-Muslim and anti-migrant content during riots in the UK last summer.

The Meta oversight board raised concerns about the company’s announcement in January that it was removing factcheckers in the US, reducing “censorship” on its platforms and recommending more political content.

In its first official statement on the changes, the board – which issues binding decisions on removing Meta content – said the company had acted too quickly and should gauge the impact of its changes on human rights.

“Meta’s January 7, 2025, policy and enforcement changes were announced hastily, in a departure from regular procedure, with no public information shared as to what, if any, prior human rights due diligence the company performed,” said the board.

It urged Meta to “live up” to its commitment to uphold the United Nations’ principles on business and human rights, and to carry out due diligence on the impact of the changes.

“As these changes are being rolled out globally, the Board emphasizes it is now essential that Meta identifies and addresses adverse impacts on human rights that may result from them,” said the statement. “This should include assessing whether reducing its reliance on automated detection of policy violations could have uneven consequences globally, especially in countries experiencing current or recent crises, such as armed conflicts.”

The criticism of the changes was published alongside a series of content rulings, including admonishment of Meta for leaving up three Facebook posts related to riots that broke out across the UK in the wake of the Southport attack on 29 July last year, in which three young girls were murdered. Axel Rudakubana was jailed for a minimum of 52 years for carrying out the attack.

The board said the three posts, which contained a range of anti-Muslim sentiment, incited violence and showed support for the riots, each “created the risk of likely and imminent harm” and “should have been taken down”. It added that Meta had been “too slow” to implement crisis measures as riots broke out across the UK.

The first post, which was eventually taken down by Meta, was text-only and called for mosques to be smashed and for buildings where “migrants” and “terrorists” lived to be set on fire. The second and third posts featured AI-generated images, one of a giant man in a union jack T-shirt chasing Muslim men, the other of four Muslim men, one wielding a knife, running after a crying blond toddler in front of the Houses of Parliament.

The board added that changes to Meta’s guidance in January meant users could now attribute behaviours to protected characteristic groups, for example, people defined by their religion, sex, ethnicity or sexuality, or based on someone’s immigration status. As a consequence, users can now say that a protected characteristic group “kills”, said the board.

It added that third-party factcheckers, which are still being used by Meta outside the US, had reduced the visibility of posts spreading a false name of the Southport attacker. It recommended the company research the effectiveness of community notes features – where users police content on platforms – which are being deployed by Meta following the removal of US factcheckers.

“Meta activated the crisis policy protocol (CPP) in response to the riots and subsequently identified the UK as a high-risk location on August 6. These actions were too late. By this time, all three pieces of content had been posted,” said the board.

“The board is concerned about Meta being too slow to deploy crisis measures, noting this should have happened promptly to interrupt the amplification of harmful content.”

In response to the board’s ruling, a Meta spokesperson said: “We regularly seek input from experts outside of Meta, including the oversight board, and will act to comply with the board’s decision.

“In response to these events last summer, we immediately set up a dedicated taskforce that worked in real time to identify and remove thousands of pieces of content that broke our rules – including threats of violence and links to external sites being used to coordinate rioting.”

Meta said it would respond to the board’s wider recommendations within 60 days. The board ruling also ordered the removal of two anti-immigrant posts in Poland and Germany for breaching Meta’s hateful conduct policy. It said Meta should identify how updates to its hateful conduct policy affect the rights of refugees and asylum seekers.
