Mark Zuckerberg’s gamble and the risks of privatising digital public spaces

January 16, 2025


A raging question in debates around freedom of speech and expression concerns the limits that can be imposed on it. Proponents of absolute freedom argue that any limit is an act of censorship and that the only way of regulating freedom is to set up consequences. Champions of proportionality counter that freedom is both relational and constructed, and hence that protection against harm has to be weighed against unlimited freedom. The increased prevalence and weaponisation of misinformation in digital systems have only exacerbated this debate, where the instability of the truthiness of information meets the existential conflict between freedom and limits.

Content moderation has been offered as a viable and practical, even if flawed, answer to these questions. A mix of human and algorithmic detection, flagging, scrutiny, resolution, and oversight has developed as a way of interpreting limits and diminishing the scope of harmful speech. Content moderation has had its share of problems: it is well documented that these cleaners of our online spaces are often employed in low-income countries, paid extremely low wages, and given very little protection against the graphic and violent nature of the information they process and clean.


In a recent open letter, content moderators in Kenya, working for Big Tech companies such as OpenAI and Meta to remove objectionable content and protect end-users from violence and pain, called their work "torture". They drew attention to the fact that content moderation is not just information processing but a form of digital care work. Instead of being recognised as frontline digital care workers, they are often vilified as irrational or treated as replaceable by automated technologies of content detection and deletion. Yet their actions often decide how platform policies eventually open up or control speech.

Ever since the momentous decision in 2021, when then US President Donald Trump was banned from Facebook and then Twitter (now X), the question of how hate speech and misinformation should be de-platformed and controlled has been front and centre in these debates on free speech. Over the last few years, data and information companies have focused on creating safe spaces for diverse and multiple communities, investing in nuanced forms of flagging, removal, and appeal. The role of these platforms in defining free speech and expression is unprecedented and unparalleled in terms of the power they wield to shape the narratives of our time.

As Twitter gave way to X, and Elon Musk announced the dropping of almost all safeguards against hate and harmful speech, there was much hand-wringing about a billionaire buying the global public commons and making it a vehicle for hate and violence. Mark Zuckerberg's announcement that Meta is also dropping limits on content moderation and will merely flag controversial information should not come as a surprise, but it nevertheless marks a shocking development, in which the power of finance and politics reshapes our idea of free speech in dramatic ways. In that single announcement, Zuckerberg has sent out five signals about what the future of digital information is going to look like.


First, this move underscores the rise of automated digital information processing, in which we rely increasingly on LLMs that will replace the already precarious labour of content moderators. Instead of protecting and supporting the human beings who have kept the internet safe for so long, Big Tech companies prefer simply to replace them with algorithmic work.

Second, this decision reminds us of the geopolitical strength of certain countries in shaping these global platforms. It is undeniable that the decision aligns itself with a regressive political turn in the USA, where the new government's attack on liberal and progressive values and social justice directly influences global discourse and the safety of people around the world.

Third, Zuckerberg has rewritten Meta's axis of responsibility. In his announcement, he suggests that Meta was not just doing content moderation but also fact-checking and information verification. He signals that responsibility for information reliability no longer rests with the platforms, and that individual users are on their own when dealing with the increasing circulation of misinformation.

Fourth, this decision exposes the toothlessness of oversight bodies, which are often constituted as performative structures to present these platforms as committed to individual safety and community care. The move undermines the very nature of oversight and shows the need for stronger accountability and governance structures for these big corporations.


Lastly, this should be a wake-up call for all of us, reminding us that digital public spaces, commons, and exchanges cannot be left to the mercy of private corporations. Even the most liberal and performatively progressive ones can, with one quick decision, overturn and suspend hard-won gains in freedoms and protections.

Meta's policies are not new, but they do warn us that we need more communities of care, collectivity, and ownership, so that the responsibility for safety and information verification does not fall on those most disproportionately affected by the platforming of hate that Meta now naturalises across all its platforms.

The writer is professor of Global Media at the Chinese University of Hong Kong and faculty associate at the Berkman Klein Center for Internet & Society, Harvard University