Why Meta is retreating from encryption
March 16, 2026
In 2019, over the course of more than 3,000 words, Mark Zuckerberg made the case for encryption.
“As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Zuckerberg wrote in a Facebook note. “Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.”
To serve this world, Zuckerberg wrote, the company would rebuild Instagram and Messenger to support end-to-end encryption of messages. Not even Meta would be able to read the contents of the messages.
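The core property is easy to sketch. What follows is a toy illustration only (a one-time pad using Python's standard library, not the Signal-protocol scheme Meta actually deploys): because only the two endpoints hold the key, the server in the middle relays ciphertext it cannot read.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Combine message and key byte-by-byte; applying it twice recovers the message."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = xor(message, key)           # this is all the relaying server ever sees
recovered = xor(ciphertext, key)         # only a key holder can undo the encryption
assert recovered == message
```

Real E2EE systems add key exchange, authentication, and forward secrecy on top, but the promise is the same: the intermediary carries bytes it has no way to interpret.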
The announcement was one of a series of changes the company planned in an effort to restore trust after the Cambridge Analytica scandal and the biggest security breach in the company’s history. It was intended to sound audacious and counterintuitive. “I understand that many people don’t think Facebook can or would even want to build this kind of privacy-focused platform,” Zuckerberg wrote, “because frankly we don’t currently have a strong reputation for building privacy protective services.”
In the intervening years, the need for encrypted messaging has only grown more salient. The US military branded Anthropic a “supply chain risk” in part over fears it would not be able to use the company’s technology to conduct mass domestic surveillance. The United Kingdom last year attempted to force Apple to create a backdoor into encrypted iCloud backups. And that’s just what’s going on in putative democracies — a commercial spyware industry that lets governments of all types purchase zero-day exploits to target the devices of dissidents, political opponents, journalists and activists is thriving globally.
And so it was with real surprise that I saw the news — buried on a support page on Instagram’s website — that Meta will deprecate support for end-to-end encryption beginning on May 8.
After I posted about the news on social media, the company sent me a statement:
Very few people were opting in to end-to-end encrypted messaging in DMs, so we’re removing this option from Instagram in the coming months. Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp.
A spokesman did not respond to follow-up questions. Still, the statement is disingenuous enough to warrant lingering on it a bit longer.
The statement says the feature is being deprecated because not enough people used it — that the whole situation is our fault. And where once Zuckerberg argued persuasively that encrypted communications should be the default, the company now writes about “anyone who wants to keep messaging with end-to-end encryption” as if they should be considered a fringe group.
Let’s be clear: this is the first time a major platform has ever rolled back encryption protections, and it’s a worrisome sign for the future of private communications. That’s true even if, as we’ll get to, the costs of encryption on Instagram were legitimate and depressing.
The truth is that E2EE has been controversial even within the company from the very beginning. In the days before Zuckerberg’s blog post, Monika Bickert, the company’s head of content policy, warned of dire consequences.
“We are about to do a bad thing as a company. This is so irresponsible,” she wrote, according to internal company documents that became public last month as part of a lawsuit alleging that Meta has failed to protect children on its platforms.
Katie Paul and Jeff Horwitz covered the documents in Reuters:
Even as Zuckerberg claimed publicly that the company was addressing the plan’s risks, top safety and policy executives internally expressed dismay, with Bickert, the head of content policy, saying the company was making “gross misstatements of our ability to conduct safety operations,” the documents show.

“I’m not very invested in helping him sell this, I must say,” Bickert wrote of Zuckerberg’s efforts to promote encryption on privacy grounds. With end-to-end encryption, “there is no way to find the terror attack planning or child exploitation” and proactively refer those cases to law enforcement, she added.
The company predicted that default encryption would effectively serve as a shield for terrorists and child predators to do their work on the platform in secret. Reports to the National Center for Missing and Exploited Children (NCMEC), the US national clearinghouse for reporting cases of child exploitation, would likely decline by 65%, according to an internal Meta briefing document.
As a result, Meta slowed the rollout of encryption on Instagram dramatically. It didn’t begin testing E2EE messaging on Instagram until 2021, and never finished rolling it out to the user base. (I still don’t have it on my own account.)
This is why the company’s explanation for eliminating the feature — low adoption — is laughable on its face. Meta never gave most users a chance to adopt it; even those who got access found that the feature was hidden behind four taps and never advertised within the app itself.
Zuckerberg predicted in his original post that the company would face strong opposition to its plan. But the strength of that opposition still seemed to surprise the company. India has made repeated efforts to break encryption, primarily in WhatsApp. The United Kingdom’s Online Safety Act, passed in 2023, ordered encrypted services to scan for and remove illegal content — a request that is incompatible with encryption. The European Union’s Chat Control regulation attempted something similar; the bloc’s legislative body voted just last week to delay until next year questions about whether to revive the effort.
Meanwhile, Meta has been under (justified) pressure worldwide to make its platforms safer for children. Messaging is a key pathway for several awful harms, including grooming, sextortion, and the spread of child sexual abuse material. And unlike the utility-focused WhatsApp, Instagram is a social network packed with “discovery” features designed to introduce strangers to each other. That introduces risks that pure messaging apps never had to confront.
Notably, TikTok — a direct competitor to Instagram — said earlier this month that it would not encrypt direct messages on the platform. The company told the BBC that it was “a deliberate decision to set itself apart from rivals,” and that encryption “prevents police and safety teams from being able to read direct messages if they needed to.”
Of course, it wasn’t long ago that the threat of TikTok sharing users’ messages with the Chinese government was the entire pretext for forcing ByteDance to divest the app. But today, child safety concerns are severe enough that platforms have turned promising to share users’ data with police into a point of pride.
These are not easy tradeoffs to manage. Meta’s child safety problems are real, as are TikTok’s, and even platform employees worried in internal documents about the risks that encryption would introduce if it were fully deployed on Instagram.
At the same time, for the majority of people, encryption is a protective technology. Activists, sex workers, LGBTQ users in authoritarian states, people discussing reproductive rights in post-Dobbs America, and journalists and their sources are among the many groups that benefit from being able to talk in private. And Instagram, crucially, was a place where those groups were likely to meet one another for the first time.
The question is whether Meta could have served both the children who need the protections that come from increased monitoring and the much larger group of users who need protections from snooping of all sorts.
Until recently, the company sought to thread that needle by building special accounts for teenage users that prevent adults from initiating contact with them. Those accounts remain — but come May, the protections for everyone will be gone.
The question now is where else encryption might be on the chopping block. At the same time it unwinds the feature on Instagram, Meta has been gradually deprecating Messenger — killing off its web and desktop apps and directing people to use it inside Facebook like it’s 2010 again. On Messenger, unlike Instagram, E2EE has been rolled out globally. The fact that a company with two encrypted messaging apps is only directing people to use one of them seems ominous.
Back in 2019, in laying out his principles for a privacy-focused future, Zuckerberg was admirably concise about what encryption meant. “End-to-end encryption prevents anyone — including us — from seeing what people share on our services,” he wrote. Seven years later, the company has decided it would like to see after all. And that means that the government will be able to see it, too.

Sponsored
Unknown number calling? It’s not random…

The BBC caught scam call center workers on hidden cameras as they laughed at the people they were tricking.
One worker bragged about making $250k from victims. The disturbing truth?
Scammers don’t pick phone numbers at random. They buy your data from brokers.
Once your data is out there, it’s not just calls. It’s phishing, impersonation, and identity theft.
That’s why we recommend Incogni: They delete your info from the web, monitor and follow up automatically, and continue to erase data as new risks appear.
Exclusive deal for tax filing season: try Incogni here and get 58% off your subscription with code PLATFORMER

xAI’s internal implosion
What happened: Elon Musk is shaking up xAI’s leadership amid the cash-burning startup’s failure to compete effectively against the heavyweights Google, OpenAI and Anthropic.
Two xAI cofounders, Zihang Dai and Guodong Zhang, were reportedly pushed out of the company following a review by SpaceX and Tesla employees who were sent in to audit xAI. Zhang was blamed for issues with its models’ coding abilities, an area where Anthropic and OpenAI have found immense success and one that prompted Musk’s frustration.
Other xAI employees have also been fired as a result of the review, sources told the Financial Times.
To fix the coding problem, Musk brought in two senior leaders from AI coding platform Cursor, Andrew Milich and Jason Ginsberg, to work on the “Grok Code Fast” product.
Why we’re following: And then there were two. The latest departures of cofounders mean that 10 out of 12 of xAI’s founding members are no longer at the company, following an earlier restructuring last month. Musk said on X that the company “was not built right [the] first time around” and “is being rebuilt from the foundations up.” We’re not sure it’ll be built right the second time, either.
Macrohard, the company’s effort to build a powerful AI agent, is already reportedly stalling. The tumultuous leadership situation and issues with contractors have delayed the project, sources told Business Insider.
This all comes just a month after Musk announced SpaceX had acquired xAI for $250 billion, which some saw as a way to fund xAI’s lofty AI ambitions (such as giant space catapults) with money from a far richer Musk project. An initial public offering from SpaceX, which could take place later this year, might raise as much as $50 billion.
What people are saying: “Many talented people over the past few years were declined an offer or even an interview @xAI. My apologies. @BarisAkis and I are going through the company interview history and reaching back out to promising candidates,” Musk posted on X.
That post is giving “I am lowering our bar,” according to @FlintCasey.
“Obviously long history of [Musk] doing this, but also…this is a company that has already raised and burned through billions if not tens of billions of dollars and is attempting to claim a significant percentage of Musk’s less troubled rocket company,” Bloomberg Businessweek’s Max Chafkin pointed out.
Others were upset at how long it took for Musk to realize there were problems at xAI. “Some of us tried to alert you that there were problems, chief,” wrote former xAI employee Benjamin De Kraker.
Meanwhile, Musk is marking another loss in his crusade against OpenAI, after a judge suggested his claim for $134 billion in damages is based on taking “numbers out of the air.” You don’t say.
—Lindsey Choo
Trump’s “disinformation weapon” claims
What happened: President Trump accused Iran of using AI as a “disinformation weapon” to misrepresent military successes and popular support.
Talking to reporters on Air Force One, Trump said, “AI can be very dangerous, we have to be very careful with it.” (He should know, given the White House’s ongoing and often horrific use of deepfakes).
Trump also accused U.S. outlets of “close coordination” with Iran to spread “fake news” — a funny accusation, given his comments came a day after a prominent New York Times report debunking Iran’s deepfakes.
Social media sites are, in fact, being flooded with fake images and videos of the war in Iran — many of which show fake victories or support for Iran. And sites including X, TikTok, and Facebook aren’t effectively removing them.
But Trump used this moment as an opportunity to make baseless accusations about reporting from the Wall Street Journal and Reuters that don’t appear to have anything to do with AI deepfakes. While he remains chummy with many of the executives of the sites that spread actual disinformation! Great.
Why we’re following: Aaaaaaaaaaaaaaaah.
AI is becoming an increasingly big player in wartime — deepfakes are big enough to attract the U.S. President’s attention, and AI tools are becoming increasingly important weapons both online and on the ground.
It’s at frankly insane times like this one that we most need truthful, deepfake-free reporting. Trump’s comments are especially disturbing as we’re seeing increasing U.S. threats to free speech, with FCC chair Brendan Carr threatening to revoke broadcasters’ licenses over coverage of Iran.
What people are saying: Valerie Wirtschafter, a fellow at the Brookings Institution, explained why she thinks there are so many Iran deepfakes circulating right now. “This is a natural front for Iran to try and exploit and it feels like this is one of the reasons it is so voluminous,” Wirtschafter told the Times. “It’s actually a tool of war.”
On Bluesky, CNN’s chief media analyst Brian Stelter pointed out that Trump was “using the ‘fake news’ phrase that he personally popularized a decade ago to demean real US news outlets.” But, “The thing is, news outlets like CNN and The New York Times have been debunking those AI-fueled lies about the war.”
On r/worldnews, the top comment joked that Trump’s accusation was a confession: “So Trump is using AI to spread false information. Got it.”
—Ella Markianos

Side Quests
The Trump administration is reportedly set to receive $10 billion for brokering the TikTok sale. Trump coin surged as much as 60 percent after its promoters marketed a Mar-a-Lago event featuring the president (though Trump has not confirmed his attendance).
Meta has paused a massive cable project in the Persian Gulf due to the war in Iran.
Closing arguments began in a landmark social media addiction trial. A court largely voided an injunction that blocked California from enforcing a child online safety law.
X agreed to change its verification system in the EU following a fine.
Moscow is experiencing mass mobile internet blackouts as the Kremlin tightens control over the internet.
11 major tech and retail companies signed a pledge to share threat intelligence to stop scammers.
ByteDance suspended its video AI model launch following copyright allegations from major Hollywood studios.
Claude Opus 4.6 and Sonnet 4.6 now include the full 1M context window.
OpenAI’s “adult mode” feature is reportedly angering even its own advisory council. (This story is juicy and fun.) Encyclopedia Britannica sued OpenAI over alleged misuse of its material in AI training. The Stargate initiative now has new leaders. OpenAI is reportedly in advanced talks with private equity firms to create a joint venture, valued at $10 billion, that would distribute its enterprise products across the firms’ portfolio companies.
Nvidia expects to make $1 trillion in AI chips through 2027, CEO Jensen Huang said.
Meta’s new foundational AI model, Avocado, is reportedly being delayed. Meanwhile, it’s reportedly planning layoffs that could affect 20 percent of the company.
Meta will pay as much as $27 billion to access Nebius’s AI infrastructure. Instagram is testing clickable links in captions for Meta Verified subscribers. Facebook is adding new tools to detect impersonation. Longtime games executive Jason Rubin has left the company.
Apple introduced the AirPods Max 2. Apple dropped its purchase commission for its mainland China App Store from 30 percent to 25 percent.
Amazon will not have to pay a record $854.4 million fine in Luxembourg after a court found that the privacy regulator improperly carried out its analysis. Amazon’s ad-free Prime subscription, rebranded as Ultra, will double in price and stream in 4K.
Tinder unveiled a slew of new updates, including features for in-person events and a new virtual speed dating experience.
Adobe CEO Shantanu Narayen will step down after the company appoints a successor.
Spotify will let users modify their Taste profile for recommendations.
The rebooted Digg announced it’s shutting down operations just two months after its open beta.
An in-depth look at what programmers do when AI does the programming. AI usage among doctors has doubled since 2023. An investigation into how models are being used in AI scams. The AI-washing of layoffs is corrosive, this columnist writes.

Those good posts
For more good posts every day, follow Casey’s Instagram stories.


Talk to us
Send us tips, comments, questions, and encrypted messages (but use Signal for that): casey@platformer.news. Read our ethics policy here.