MIT researcher Isaak Freeman has published a Master's thesis that tries to turn whole-brain emulation from a speculative exercise into an engineering problem with line items. The 200-plus-page document, From Worm to Human, was submitted to the Program in Media Arts and Sciences in January 2026 under supervision from Ed Boyden, with Kevin Esvelt and George Church as readers.
The cost curve nobody talks about
The sharpest number in the thesis isn't about GPUs. It's about proofreading. Reconstructing a single neuron in C. elegans cost roughly $16,500 in the 1980s, the kind of figure that made connectomics look permanently stuck. By 2025, a Drosophila neuron runs about $214 and a zebrafish larva neuron around $100. Mammalian neurons are still $500 to $1,000 each, so nothing about a human connectome is cheap yet.
A billion-dollar mouse connectome, per Freeman's estimate, would need to get down to about $10 per neuron. A human connectome at the same budget demands $0.01 per neuron. That's a five-orders-of-magnitude gap from today's costs. The gap is supposed to close. Whether it closes on a timeline anyone alive today cares about is the open question.
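The budget arithmetic is easy to check. A minimal sketch, using the round neuron counts cited elsewhere in this piece (~70 million for a mouse, ~86 billion for a human); these are commonly cited figures, not numbers pulled from the thesis, which is why the results land near rather than exactly on Freeman's $10 and $0.01 targets:

```python
import math

# Commonly cited round neuron counts (assumption, not from the thesis itself)
MOUSE_NEURONS = 70e6   # ~70 million neurons in a mouse brain
HUMAN_NEURONS = 86e9   # ~86 billion neurons in a human brain
BUDGET = 1e9           # a billion-dollar connectome project

mouse_target = BUDGET / MOUSE_NEURONS   # dollars per neuron for a $1B mouse
human_target = BUDGET / HUMAN_NEURONS   # dollars per neuron for a $1B human

# Gap from today's mammalian cost (~$500-1,000 per neuron) to the human target
gap_orders = math.log10(1000 / human_target)

print(f"mouse target: ${mouse_target:.0f}/neuron")
print(f"human target: ${human_target:.4f}/neuron")
print(f"gap from $1,000/neuron: {gap_orders:.1f} orders of magnitude")
```

With these counts the mouse target comes out near $14 per neuron and the human target just over a cent, and the distance from today's $1,000 mammalian cost is indeed about five orders of magnitude.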
GPUs, with caveats
Social media summaries have pinned the compute requirement at 50,000 H100s. That figure isn't in the paper. Under pessimistic assumptions, Freeman estimates roughly 6x10^20 FLOP/s, 700 GB of memory per GPU, and 24 GB/s of interconnect bandwidth to run a human brain in real time. At an H100's dense FP16 rate of about 1 petaFLOP/s, that's closer to 600,000 accelerators than 50,000. An upper-bound estimate using Hodgkin-Huxley neurons with roughly 7,000 synapses each lands at 10^20 FLOP/s, which the thesis describes as "comparable to next-generation 100,000-GPU AI clusters such as xAI's Colossus."
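The correction to the viral 50,000-GPU figure is one division. A sketch using the numbers above (the H100 rate is the approximate dense FP16 figure, not a benchmark result):

```python
# Why 6x10^20 FLOP/s implies ~600,000 H100s, not 50,000
FLOPS_NEEDED = 6e20    # Freeman's pessimistic real-time estimate
H100_FP16 = 1e15       # ~1 petaFLOP/s dense FP16 per H100 (approximate)

gpus_needed = FLOPS_NEEDED / H100_FP16          # 600,000 accelerators
viral_cluster = 50_000 * H100_FP16              # what 50,000 H100s deliver
shortfall = FLOPS_NEEDED / viral_cluster        # 12x short of the requirement

print(f"{gpus_needed:,.0f} H100-class GPUs needed")
print(f"50,000 H100s fall short by a factor of {shortfall:.0f}")
```

The 50,000-GPU cluster delivers 5x10^19 FLOP/s, an order of magnitude below the pessimistic requirement.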
Mid-2020s clusters already reach 4x10^20 FLOP/s and 1.8 TB/s of interconnect. Raw compute is mostly there, or close enough that AI datacenter spending will drag it across the line. The bigger squeeze, per the thesis, is memory per GPU and the interconnect wall, both of which have improved far slower than peak FLOPs over the past three decades.
What the summary leaves out
The viral version of this news says data acquisition is the only blocker left. That's not what the thesis argues. Molecular data is a major gap: connectomes capture wiring, not the receptor types, ion channel densities, or neuromodulator concentrations that decide how any given synapse actually behaves. Proofreading still eats most of the cost in electron microscopy pipelines. A complete cubic millimeter of human cortex has been imaged and segmented but not proofread, and that single cubic millimeter has roughly three times more synapses than the entire fruit fly brain.
The adult fruit fly connectome, published in Nature in October 2024, took more than 33 person-years of human proofreading for its 139,255 neurons. Scale that to a mouse brain at roughly 70 million neurons and the math gets ugly fast, though AI-assisted pipelines like PRISM claim 8-fold accuracy gains over conventional single-color methods.
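How ugly, exactly? A naive linear extrapolation from the fly numbers above, holding per-neuron proofreading effort constant; since mammalian neurons carry more synapses than fly neurons, this is more likely a floor than an estimate:

```python
# Scaling FlyWire proofreading effort to a mouse brain, linearly per neuron
FLY_NEURONS = 139_255
FLY_PERSON_YEARS = 33        # human proofreading for the fly connectome
MOUSE_NEURONS = 70e6         # ~70 million neurons

effort_per_neuron = FLY_PERSON_YEARS / FLY_NEURONS
mouse_person_years = effort_per_neuron * MOUSE_NEURONS

print(f"~{mouse_person_years:,.0f} person-years of proofreading")
```

The naive answer is on the order of 16,000 person-years of manual proofreading for a mouse, which is why AI-assisted pipelines are the load-bearing assumption in any near-term roadmap.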
The other paper
Freeman is also a co-author on the State of Brain Emulation Report 2025, a longer collaborative document with Niccolo Zanichelli, Maximilian Schons, Philip Shiu, and Anton Arkhipov that grew out of this thesis. It covers much of the same ground in more technical detail, and was released in January 2026, the same month the thesis was submitted.
A Chinese group led by Wenlian Lu ran an 86-billion-neuron simulation on 14,012 GPUs in late 2024, the largest human-scale attempt so far. It ran 60 to 120 times slower than real time, and Freeman is careful to call it a simulation rather than an emulation: no ground-truth connectome, simplified neuron models, proof of concept rather than proof of anything else.
NIH-backed mouse connectome work aims to reconstruct 10 to 15 cubic millimeters of tissue over the next few years, roughly a fiftieth of a mouse brain. That's the next milestone worth watching.