
Google DeepMind Scientist Says AI Can Simulate Consciousness, Not Have It

A senior staff scientist at Google DeepMind argues that no amount of scaling will give AI consciousness. Critics aren't convinced.

Liza Chan
AI & Emerging Tech Correspondent
April 20, 2026 · 3 min read

Alexander Lerchner, a Senior Staff Scientist at Google DeepMind, has published a paper arguing that no amount of scaling will make an AI model conscious. Not in ten years, not in a hundred. The PhilArchive paper, titled "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness," was first archived in early March and has been circulating widely on Reddit and X in recent weeks.

What he's actually claiming

Lerchner's target is computational functionalism. That's the dominant assumption in AI circles: get the causal structure right and experience follows. Substrate doesn't matter. Carbon, silicon, clockwork if you could build it fast enough. Lerchner says this is a category error, and he has a name for it: the abstraction fallacy.

Here is the argument. Computation is not a thing that exists in physics on its own. It is a description we lay over physical processes, mapping continuous voltages and transistor states onto discrete symbols we call meaningful. The mapping requires what Lerchner calls a "mapmaker," an experiencing agent who decides that this pattern of voltages counts as the number 7 and that one counts as the letter A. Without that agent, the computer is just physics. The symbols are not in the hardware. They are in us.
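The point is easy to see in a few lines of code. Here is a toy illustration (mine, not from the paper): the same byte yields a number, a letter, or a set of flags depending entirely on which mapping an observer applies.

```python
# Toy illustration (not from Lerchner's paper): one byte of "physics",
# eight voltage levels and nothing more. Its "meaning" exists only
# relative to an external mapping.
raw = 0b00110111

as_integer = raw                       # mapped as an unsigned integer: 55
as_character = chr(raw)                # mapped through ASCII: the character '7'
as_flags = [bool(raw >> i & 1) for i in range(8)]  # mapped as eight booleans

print(as_integer, as_character, as_flags)
# 55 7 [True, True, True, False, True, True, False, False]

# Nothing in the byte itself selects one reading over the others.
# The mapping lives in the interpreter, which is Lerchner's point:
# the symbols are in us, not in the hardware.
```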

Map and territory. No map, however detailed, ever becomes land. Run a simulation of a hurricane and nothing gets wet. Same logic, applied to consciousness: simulate it perfectly and you still have a simulation.

The part people keep missing

Lerchner is not making a biological chauvinism argument. He's explicit about this. He isn't saying you need neurons, or carbon, or wet biology. His claim is that if an artificial system were ever conscious, it would be because of its specific physical constitution, not its algorithm. An exact digital simulation of a brain, on his account, would not be conscious. A different physical substrate with the right intrinsic properties could be.

That's a weirder and more specific claim than the usual "machines can't think" line. It also sidesteps the objection that carbon is somehow magical.

The pushback

Not everyone is convinced. A published critique argues that Lerchner smuggles in a contested theory of meaning and then presents the conclusion as if it were a matter of physics. Grant the assumption that concepts require prior phenomenal experience, and computation indeed cannot ground them; but that assumption is the very thing being debated, not something the paper establishes. A separate response points to neurological patients whose conscious experience persists despite impaired concept formation, which complicates the tidy causal chain the paper proposes.

The uncomfortable middle ground: Lerchner has identified something real, which is that much of the AI consciousness conversation assumes rather than argues for functionalism. Whether his alternative fares better is a separate question.

Why this matters now

Apple does not publish philosophy papers about whether iPhones are conscious. DeepMind is different, because the AI welfare question has moved from science fiction into policy conversations. Anthropic has talked publicly about model welfare. Labs are hiring for it, and regulators are paying attention. If that entire framing rests on a category mistake, the implications reach funding decisions and how products get built.

As of mid-March the paper was on its third version, suggesting Lerchner is still refining the argument. The DeepMind listing dates it March 10. Expect more formal responses through the spring.

Tags: AI consciousness, Google DeepMind, Alexander Lerchner, computational functionalism, philosophy of mind, AI welfare, abstraction fallacy, large language models, AI research, AI ethics
Liza Chan

AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.
