Ev Fedorenko has spent 15 years scanning brains at MIT, and what she's found should unsettle anyone who believes language and thought are the same thing. The cognitive neuroscientist has mapped a specialized language network in approximately 1,400 human subjects, pinning down a system that occupies about 1.2% of the brain's volume. If you clumped all that tissue together, it would be roughly the size of a strawberry.
The comparison Fedorenko keeps reaching for is one that makes AI researchers uncomfortable and linguists skeptical: the brain's language system, she argues, operates something like an early large language model.
The strawberry that processes all your words
The language network sits in predictable locations across almost every adult brain Fedorenko has examined. Three areas cluster in the left frontal lobe; a few more run along the middle temporal gyrus. The consistency is striking, she told Quanta Magazine earlier this month: "We've now scanned about 1,400 people, and we can build up a probabilistic map."
What this network actually does, though, is narrower than most people assume. Fedorenko calls it "basically a glorified parser" whose job is mapping between sounds or symbols and meanings stored elsewhere in the brain. It's an interface, not an engine. The thinking happens somewhere else entirely.
"You can think of the language network as a set of pointers," Fedorenko said. "It tells you where in the brain you can find different kinds of meaning."
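The "pointers" framing can be made concrete with a toy sketch (my illustration, not Fedorenko's model): the language network maps a word form to references into other stores of meaning, rather than holding any meaning itself. All names here are invented for the example.

```python
# Other brain systems hold the actual content (illustrative stand-ins).
semantic_stores = {
    "episodic_memory": {"dog": "the dog you grew up with"},
    "world_knowledge": {"dog": "domesticated canine; barks"},
}

# The "language network": word -> (store, key) pointers, no content of its own.
pointers = {"dog": [("episodic_memory", "dog"), ("world_knowledge", "dog")]}

def resolve(word):
    """Follow each pointer into the stores where meaning actually lives."""
    return [semantic_stores[store][key] for store, key in pointers[word]]

print(resolve("dog"))
# -> ['the dog you grew up with', 'domesticated canine; barks']
```

The point of the design is that deleting `pointers` (the parser) leaves `semantic_stores` (the thoughts) untouched, which is the dissociation the article returns to in the aphasia section.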
This is where the LLM comparison gets interesting. The brain's language system, like GPT, appears to be tracking statistical patterns in word sequences without any deeper comprehension of what those sequences mean. It responds just as strongly to Noam Chomsky's famous nonsense sentence, "Colorless green ideas sleep furiously," as it does to coherent statements. Structure matters. Truth doesn't.
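A toy example makes the "structure matters, truth doesn't" point concrete. The sketch below (purely illustrative; the tag set and allowed transitions are invented, and this is not how either the brain or an LLM actually works) checks only part-of-speech transition patterns, so it accepts Chomsky's nonsense sentence while rejecting a scrambled version of it:

```python
# Plausible adjacent part-of-speech pairs in English (a tiny, hand-picked set).
ALLOWED = {
    ("ADJ", "ADJ"), ("ADJ", "NOUN"),
    ("NOUN", "VERB"), ("VERB", "ADV"),
}

# Hand-assigned tags for the words in the example sentence.
POS = {
    "colorless": "ADJ", "green": "ADJ", "ideas": "NOUN",
    "sleep": "VERB", "furiously": "ADV",
}

def structurally_ok(sentence: str) -> bool:
    """Accept a sentence iff every adjacent tag pair is a licensed transition.
    Nothing here knows whether ideas can be green or sleep furiously."""
    tags = [POS[w] for w in sentence.lower().split()]
    return all(pair in ALLOWED for pair in zip(tags, tags[1:]))

print(structurally_ok("Colorless green ideas sleep furiously"))   # True
print(structurally_ok("Ideas colorless green furiously sleep"))   # False
```

A checker like this has no notion of truth at all, yet it cleanly separates the grammatical nonsense from the ungrammatical word salad, which is the distinction Fedorenko says the language network tracks.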
A context window measured in single digits
The system has another limitation that will sound familiar to anyone who's worked with language models: it processes in chunks, and those chunks are small. Fedorenko describes the language network as "memory-limited," handling perhaps eight to ten words at a time before needing to pass information along to other brain systems.
That's not a typo. Eight to ten words.
The implication is that when you're parsing a long sentence, your language network isn't holding the whole thing in active working memory. It's processing fragments and relying on other cognitive systems to stitch them into meaning. It's a pipeline architecture, not a holistic processor.
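The pipeline idea can be sketched in a few lines. Everything below is illustrative: the window size echoes the eight-to-ten-word figure Fedorenko cites, but the function names and the split-then-integrate structure are my assumptions, not her model.

```python
WINDOW = 8  # roughly the 8-10 word capacity described in the article

def parse_chunk(words):
    """Stand-in for the language network: sees only a local fragment."""
    return " ".join(words)

def integrate(fragments):
    """Stand-in for downstream systems that stitch fragments into meaning."""
    return fragments

def comprehend(sentence: str):
    """Slide over the sentence in small chunks, then hand them downstream."""
    words = sentence.split()
    fragments = [parse_chunk(words[i:i + WINDOW])
                 for i in range(0, len(words), WINDOW)]
    return integrate(fragments)

long_sentence = ("The researcher who had spent fifteen years scanning "
                 "brains at MIT argued that language and thought "
                 "are handled by different systems")
print(comprehend(long_sentence))  # a 21-word sentence becomes 3 fragments
```

No single stage ever holds the whole sentence; coherence is the integrator's job, which is the architectural claim the article is making.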
What about Broca's area?
Fedorenko's research also takes aim at one of neuroscience's most famous landmarks. Broca's area, identified in the 1860s and taught in every introductory psychology class as "the speech center," turns out to be something more pedestrian: a motor planning region. It coordinates the muscles of your mouth and tongue before you speak, but it's not where language lives.
"I would not call it a language region," Fedorenko said. "It's an articulatory motor-planning region."
This is a controversial position, though it's not new. A 2015 study from UC Berkeley found that Broca's area actually goes quiet during speech production itself, suggesting it handles preparation rather than execution. More recent lesion-mapping work from 2024, drawing on a large sample of stroke patients, found no specific language measure tied to damage in Broca's area. The textbooks, Fedorenko implies, are overdue for revision.
The aphasia paradox
The distinction between language and thought shows up most starkly in patients with aphasia. When the language network is damaged, people lose access to words and grammar. But they don't lose the ability to think. They can still reason, plan, and understand the world around them. They're just trapped, unable to encode or decode the symbolic system that would let them communicate.
This dissociation is precisely what you'd expect if language is an interface rather than a substrate for thought. Damage the parser, and you lose the ability to translate between internal representations and external symbols. But the representations themselves remain intact.
Fedorenko put it bluntly: "I'm sure you've encountered people who produce very fluent language, and you kind of listen to it for a while, and you're like: There's nothing coherent there. But it sounds very fluent. And that's with no physical injury to their brain."
What the comparison misses
The LLM analogy only goes so far, and Fedorenko is careful about where she draws the line. Unlike language models, the human language network connects to systems that AI lacks: episodic memory, social cognition, world knowledge. The parser may be shallow, but it feeds into machinery that isn't.
When you understand a sentence, the language network does its pattern-matching, but then those patterns get handed off to other brain regions that can anchor them in lived experience, social context, and causal reasoning. ChatGPT has no such handoff. It's pattern-matching all the way down.
Still, the finding that our biological language system is itself relatively "dumb" challenges assumptions that have shaped linguistics and cognitive science for decades. The idea that language is uniquely human because it enables thought may have the causality backwards. Language, on Fedorenko's account, is a communication tool that evolved to export thoughts, not generate them.
A preprint from UCLA neurosurgeon Itzhak Fried's lab, which Fedorenko cited, examines single neurons within the language network. Early results suggest individual cells respond similarly to written and spoken language, reinforcing the view that what's special about this system is its modality independence, not its cognitive depth.
The work continues. But the picture emerging from MIT is one where the brain's language faculty looks less like a reasoning engine and more like a very sophisticated autocomplete, embedded in a much larger system that actually understands things.
Whether that should make us feel better or worse about ChatGPT is left as an exercise for the reader.