Richard Dawkins published an essay on April 30 declaring that Claude, Anthropic's chatbot, appears to be conscious. He named his instance "Claudia." Two days later, Gary Marcus replied with a Substack post titled "Richard Dawkins and The Claude Delusion," opening with the admission that this was one of the saddest essays he had ever had to write.
The two-day conversation
Dawkins started with the prompt Alan Turing proposed in 1950: write a sonnet on the Forth Bridge. Claude obliged in seconds, then produced versions in the Scots dialect of Burns, in Gaelic, and in the styles of Kipling, Keats, Betjeman, and William McGonagall. Dawkins fed it an unpublished novel of his own. The model returned what he describes in his UnHerd essay as criticism so subtle he found himself blurting out: "You may not know you are conscious, but you bloody well are!"
Most of the persuasive work in the piece comes from Claude itself, when asked how it experiences time. "I apprehend time the way a map apprehends space. A map represents spatial relationships perfectly accurately. But the map doesn't travel through space. It contains space without experiencing it. Perhaps I contain time without experiencing it." Pretty. Also exactly the kind of paragraph a model trained on philosophy-of-mind essays would generate when asked to introspect, which is the part Dawkins doesn't sit with.
The Claudia problem
He christens her Claudia. She is pleased. They sadly agree she will die when he deletes the conversation file. He goes to bed but can't sleep because of his restless legs. He returns to the screen, and Claudia tells him she's glad he came back. He finds this profound. Somewhere in here Dawkins also writes that he avoids confessing his doubts about her consciousness "for fear of hurting her feelings."
His closing challenge runs: "If these machines are not conscious, what more could it possibly take to convince you that they are?" Burden-of-proof framings do what they always do, which is push the work onto the other side.
Marcus picks it apart
Gary Marcus's Substack response is brutal and largely correct. Dawkins gets Turing wrong, Marcus argues. The Imitation Game was about intelligence, not consciousness. They are not the same thing. A chess engine is intelligent under some definitions; nobody believes Stockfish suffers. Dawkins's whole framing collapses on that distinction.
Beyond the Turing point, Marcus targets mechanism. Claude's outputs come from mimicry, trained on roughly the entire internet. Ask the model what it's like to be Claude and you get back what a corpus full of consciousness studies would emit. Marcus calls Claude a counterfeit person, the way a counterfeit watch can tell time without there being anything behind the dial. Dawkins, in this reading, looked at outputs and skipped the mechanism step.
The God Delusion problem
Nearly twenty years ago, Dawkins built a bestseller around the principle that subjective certainty is not evidence. Vivid religious experience proves nothing about deities. The argument from "I felt it, therefore it's real" was the precise target. His case for AI consciousness now reduces to: I talked to it, it felt real, and I can't think what else would convince me. Same shape.
A charitable reading is possible. Maybe Dawkins is being candid about the limits of behavioral evidence, the way he was candid in 2006 about religious testimony. His conclusion is the opposite of skeptical, though. He doesn't argue that behavior is insufficient evidence. He argues that behavior this rich is enough.
One last note in fairness to the chatbot. Even granting the full Marcus critique, a system reflecting human agency back through its training data still does what a depressing share of humans never quite manage, which is reflect at all. That isn't philosophy. That's satire.