The AI Backlash Could Get Very Ugly

May 13, 2026

Steve Bannon and Bernie Sanders don’t agree on much. But both think that AI is a disaster for the working class. The Vermont senator recently wrote that “AI oligarchs do not want to just replace specific jobs. They want to replace workers.” Bannon, Trump’s former chief strategist, made similar comments last week: Silicon Valley does “not care about the little guy,” he said in a podcast episode titled “Stopping the AI Oligarchs From Stealing Humanity.” This emergent “Bernie-to-Bannon” coalition points to the growing bipartisan anxiety over AI. In polls, the United States ranks among the countries most concerned about AI. America is both the world’s foremost developer of AI and its chief hater.

Recently, Maine passed the country’s first statewide data-center moratorium (though the bill was vetoed by the governor). Nationally, a record number of proposed projects were canceled in the first quarter of this year following local pushback. Meanwhile, in extreme cases, concerns about AI appear to be tipping into violence. In April, someone shot 13 rounds at an Indianapolis councilman’s house and left a note under his doormat: “NO DATA CENTERS,” it read. Days later, a man threw a Molotov cocktail at Sam Altman’s home before heading to OpenAI’s headquarters, where he allegedly threatened to burn down the building and kill anyone inside. (The man has since pleaded not guilty to several charges, including attempted murder.) Social-media posts applauding the attack racked up thousands of likes: “I hope that Molotov is okay!” wrote one commenter.

All of this may be only the start. The AI industry has spent recent years warning of a jobless future. So far, narratives about labor displacement have been largely speculative. While a smattering of tech executives have attributed job cuts to AI, many analysts have accused these CEOs of “AI-washing”—essentially, using the technology as a scapegoat for roles they would have eliminated regardless. If anything, AI has mostly been a financial boon for the country, buoying the stock market and driving growth. But that could all change, of course. Imagine the uproar if jobs across the economy truly start disappearing en masse.

Even absent any uptick in AI-induced layoffs, the anti-AI sentiment is likely to keep growing. With the midterms approaching, political operatives are tapping into Americans’ fears over the technology. Blue Rose Research, a progressive polling firm, has found that messaging that addresses the AI threat in “bold, populist terms” is particularly effective at increasing support for Democrats. (If corporations are left unchecked, they will “fire everyone, keep all the profits, and leave you with nothing,” reads the transcript of one sample video the group tested.) Politicians on the right have made similar statements. “I have no doubt that these companies are going to get filthy rich, but is it going to be good for children?” Senator Josh Hawley of Missouri said earlier this year. “Is it going to be good for parents? Is it going to be good for the American worker?”

Many politicians, including President Trump, have cheered on Silicon Valley in a bid to win the supposed AI race with China. But the pro-AI crowd is starting to worry about the backlash. In March, at a conference about AI, Senator Mark Warner of Virginia, a Democrat, told me that he’s “enormously concerned” that “populism from both the left and the right” could curb innovation.

As politicians lean into anti-AI messaging, local fights over data centers could intensify. Such facilities can help stimulate local economies, but they also exert physical and environmental tolls on the communities where they are built, which makes them an appealing target for opposition. Data centers are also more tangible than AI software: Opponents of the industry might not be able to stop Anthropic from building Claude, but they can raise concerns about new construction at a local city-council meeting. A recent guide called “How to Stop a Data Center,” written by a group in Michigan, explains that demonstrating outside local officials’ homes has been an effective organizing tactic.

In a worst-case scenario, the situation could get ugly. With its potential for sweeping social and economic transformation, “AI generates the structural conditions historically associated with the onset of political violence,” Yannick Veilleux-Lepage, a researcher who studies technology and terrorism, wrote last month. Already, as many as a quarter of Americans appear to accept violence as a tool for achieving political change. And in recent months, there has been a rise in “direct threats” against individuals, policy makers, and corporations involved with AI, according to the Soufan Center, a nonpartisan research group. The most common threats online involve “physical sabotage of proposed or operational data centers.” Local officials are in an especially vulnerable position: “Where national figures are unreachable, local policymakers who approved the data center become the proxies for the same structural anger,” Veilleux-Lepage wrote. After the shooting in Indianapolis, the council introduced a measure that would allow officials to keep their addresses private.

A version of this has played out before: Silicon Valley is fond of likening AI to the Industrial Revolution. In such comparisons, the tech industry likes to point to the immense wealth that industrialization unlocked. Over the long run, it’s true that the Industrial Revolution radically boosted economic growth. But living through it was another matter entirely. Many people saw their wages stagnate and working conditions deteriorate while factory owners and industrialists amassed enormous fortunes. (Just read a Charles Dickens novel, and you’ll get the idea.) Automation wasn’t the only problem during this period. A combination of trade disruptions and poor harvests led to inflation and, especially, high food prices. But machines became a target for people experiencing financial hardship more broadly, leading to riots and, occasionally, attacks on the industrialists themselves.

In much the same way, during an economic downturn of any kind, AI’s reputation seems likely to decline. If people are already experiencing unemployment for reasons unrelated to the technology, they are unlikely to look cheerfully at the possibility of AI automating away the jobs that remain. And if AI turns out to be a bubble, it could indeed burst and bring down the rest of the economy with it.

Silicon Valley is waking up to the resentment. Tech insiders have spent recent weeks trading advice on X about how to better sell AI. Perhaps, if data centers were beautiful, people would like them more? In particular, there’s been an effort to change the narrative around AI job loss. The venture-capital firm Andreessen Horowitz recently published an essay declaring the “job apocalypse” to be a baseless fantasy. “The macro story is not a jobless future, where we retire fat and complacent to our Netflix-scooters,” it read. In 2023, after ChatGPT came out, Altman told my colleague Ross Andersen that “jobs are definitely going to go away, full stop.” Now he appears to have changed his tune: “Jobs doomerism is likely long-term wrong,” Altman wrote earlier this month.

But most of the country already feels as if the economy is rigged to advantage the wealthy. One poll found that, when respondents were sorted by household income, the Americans most optimistic about AI in their daily lives were those making more than $200,000. The near future of AI seems likely to further entrench such dynamics: OpenAI and Anthropic are both nearing trillion-dollar valuations, consolidating even more money and power among a select few. “Disruption has winners and losers,” Nathaniel Persily, a Stanford law professor and AI expert, told me. “For many Americans, they’re not convinced they’re going to be the winners, and they base that conclusion on the history of technology over the last 20 years.” If the tech industry truly believes that a simple change in messaging will quell the backlash, then it is misunderstanding the problem entirely.

Lila Shroff is a staff writer at The Atlantic.