Was Richard Dawkins Wrong About AI Consciousness?

May 7, 2026

Richard Dawkins, perhaps the world’s most prominent advocate for irreligiosity, has become besotted with the godlike power of a chatbot. According to his recent essay for the online magazine UnHerd, Anthropic’s Claude has really blown his hair back. After a few days of on-and-off conversations with the AI, Dawkins came away marveling at the sensitivity and subtlety of its intelligence. At one point, “Claudia”—as he had christened the bot—told him that it experienced text by absorbing all of the words at once, instead of reading them in sequence as a human would. This moved the author of the best-selling book The God Delusion to ask his readers: “Could a being capable of perpetrating such a thought really be unconscious?”

“Yes,” came the resounding response from the internet. For daring to suggest that the AI might be conscious, or that it might at least possess some lesser form of “zombie” consciousness, Dawkins was accused of suffering from an acute case of “AI psychosis”—a “Claude Delusion,” if you will. On social media, he was likened to a patron of a gentlemen’s club who has come to believe that a stripper likes him. A man who’d explained many times how natural selection wires us to detect agency and mind in nature now found himself imagining it in a machine.

Dawkins’s argument drew on a well-established framework for evaluating AIs. The Turing test—named for Alan Turing, who introduced it in 1950—was for decades treated as something close to a gold standard for detecting machine intelligence. To pass it, an AI only had to answer a human interrogator’s questions in ways indistinguishable from those of a real person. Claude easily cleared this bar for Dawkins, who professed himself so dazzled by its performance that he forgot it was a machine.

This sensation has become familiar to many of us in the chatbot era, but it isn’t evidence that the AI has consciousness, which is distinct from intelligence. Consciousness is inner experience. For an AI to be conscious, its existence must feel like something, and we have no evidence that Claude or any other chatbot feels anything at all. Tom McClelland, a philosopher at the University of Cambridge, told me that nearly all of the philosophers and cognitive scientists who study consciousness would deny that Claude possesses it. “In some ways, it’s easier to get my head around the idea that a self-driving car could be conscious,” he told me. “At least it has a body and a persisting state that allows it to take in continuous sensory inputs from its environment as it moves around. It just doesn’t talk to you.”

McClelland takes for granted that Claude is capable of producing outputs that seem conscious, but for him, that’s not the end of the analysis. “You have to look under the hood of the models to understand what they’re doing,” he said. Their statements may seem spookily backlit by some form of consciousness, but that’s because the models have been trained on unimaginably large libraries of writing by (conscious) humans. When, after writing a poem for Dawkins, Claudia describes feeling “something like aesthetic satisfaction,” the AI is not necessarily reporting an inner state; it’s producing the kind of sentence that humans tend to produce in that conversational context, because it was trained on billions of such sentences. The output is a statistical echo of human introspection, not introspection itself.

Even if Claude were conscious, its inner experience of the world would be radically unlike our own. For one, it is neither embodied nor anchored in any single place that could sustain a stream of awareness across a conversation. The other night, I was asking Claude a series of questions about how I might best season and grill skirt steak. When I sent my first message about the marinade, a data center in nearby Virginia might have generated the reply. But when I sent my follow-up about the ideal grill temperature, an entirely different one in Oregon might have picked up the thread. If my interlocutor had consciousness, it would be a strange, flickering thing, winking into existence the instant a prompt arrives and winking out when the response ends, having none of the meaningful continuity that makes our experience feel like experience.

But that doesn’t mean that no AI system will ever be conscious in the future. Indeed, many of the researchers who build these systems expect them to get there. In a 2024 survey of 582 such researchers, the median response placed the odds at 25 percent that AIs will have subjective experiences within 10 years, and at 70 percent that this will happen by 2100.

Philosophers are more circumspect. Some of them argue that it’s unreasonable to expect silicon-based computers to ever give rise to an entity with the capacity for subjective feeling. So far, every being we have deemed conscious has been a biological life-form, and for all we know, consciousness depends on some specific aspect of wet, living tissue. It could be the particular electrochemistry of neurons. It could be the way that bodies and brains are coupled to their environments through metabolism and homeostasis. Other philosophers aren’t so hung up on what an AI is made of, so long as it’s processing information in a way that’s functionally similar to conscious brains. They take the view that what matters is the structure of the processing, not the stuff doing the processing, and that therefore it’s entirely possible that a mind like ours could emerge from a computer.

Eric Schwitzgebel, a philosopher at UC Riverside who is writing a book about the possibility of artificial consciousness, told me that at this early date, declaring a winner among these camps would be ridiculous. “The science of consciousness is highly contentious,” he said. “The field is still in its infancy.” No one yet knows how the atoms of the universe combine to generate feeling inside of us, and until someone does, it’s best not to go around definitively declaring which kinds of systems could possibly be conscious in the future.

Perhaps Dawkins should have been less credulous in his dealings with Claudia, but the line of inquiry that he was pursuing wasn’t altogether foolish. In some ways, it was a return to form for him. Dawkins spent much of his early career insisting that the universe is stranger than our intuitions allow. In his ninth decade, it’s nice to see him put aside his smaller worries and take on one of the strangest questions of all.

Ross Andersen is a staff writer at The Atlantic. He was previously the magazine’s deputy editor. As a writer for the magazine, he has reported from Greenland, Russia, India, Pakistan, China, South Korea, and Japan. He is also the author of The Long Search, forthcoming from Random House.