Yann LeCun, Meta's chief AI scientist and one of the three researchers credited with pioneering deep learning, announced in November 2025 that he was leaving the company, where he had led AI research for more than a decade, to start Advanced Machine Intelligence Labs. The Financial Times reported the Paris-based startup is seeking €500 million at a valuation of roughly €3 billion, which would make it one of the largest AI seed rounds in European history.
LeCun isn't just leaving Meta. He's betting his scientific reputation that the entire industry is chasing the wrong architecture.
The case against transformers
In a December podcast interview, LeCun called the prevailing approach to AI development "complete bullshit." His exact words: "The path to superintelligence, just train up the LLMs, train on more synthetic data, hire thousands of people to school your system in post-training, invent new tweaks on RL, I think it's just never going to work."
The critique runs deeper than performance benchmarks. Language models, he argues, learn statistical correlations between words without understanding causation. A cat knows that a dropped toy will fall. GPT-4 can describe gravity because it has read the Wikipedia article, but it has no intuitive grasp of physics. Scale doesn't fix this; more parameters just make the statistical correlations more sophisticated.
LeCun points to Meta's own Llama 4 as evidence. The model launched in April 2025 to widespread criticism that its real-world performance lagged far behind benchmark scores. In a Financial Times interview, he admitted the results "were fudged a little bit," with different model configurations cherry-picked for different tests. Mark Zuckerberg was apparently furious.
What AMI is actually building
The startup's approach centers on "world models," AI systems that learn by observing video rather than consuming text. The architecture predicts what happens next in abstract representation space, not pixel by pixel. This lets it model cause and effect in ways text prediction cannot.
LeCun developed the foundational research at Meta through the JEPA (Joint Embedding Predictive Architecture) family. V-JEPA 2, released in June 2025, trained on over a million hours of internet video plus robot trajectory data. The system achieved state-of-the-art results on action anticipation tasks and demonstrated basic planning capabilities. The GitHub repository remains open source under Meta's research license.
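To make the "predicts what happens next in abstract representation space" idea concrete, here is a minimal, hypothetical PyTorch sketch of a joint-embedding predictive objective: encode a frame into a latent vector, predict the latent of the next frame, and compute the loss between latents rather than pixels. Every name, layer size, and training detail below is an illustrative assumption, not Meta's V-JEPA code or AMI's actual architecture.

```python
# Toy joint-embedding predictive objective (illustrative sketch, not V-JEPA).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a video frame to a latent vector (frames flattened for simplicity)."""
    def __init__(self, frame_dim=3 * 64 * 64, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim)
        )

    def forward(self, frame):
        return self.net(frame.flatten(start_dim=1))

class Predictor(nn.Module):
    """Predicts the latent of the next frame from the latent of the current one."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim)
        )

    def forward(self, latent):
        return self.net(latent)

encoder, predictor = Encoder(), Predictor()
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

# Stand-in batches of consecutive frames, shape (batch, channels, height, width).
frames_t = torch.randn(8, 3, 64, 64)
frames_next = torch.randn(8, 3, 64, 64)

z_t = encoder(frames_t)
with torch.no_grad():          # target latents get no gradient (stop-gradient)
    z_next = encoder(frames_next)

# The loss lives entirely in representation space: no pixels are reconstructed.
loss = nn.functional.mse_loss(predictor(z_t), z_next)
loss.backward()
optimizer.step()
```

Real systems in this family add machinery the sketch omits, such as an exponential-moving-average target encoder and masking schemes that keep the learned representations from collapsing to a trivial constant; the point here is only that the prediction target is a latent vector, not the next frame's pixels.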
AMI's first partner is Nabla, a French medical transcription startup in which LeCun is an investor. Nabla's co-founder Alex LeBrun is taking the CEO role at AMI, with LeCun as executive chairman. The partnership announcement promises "privileged access" to AMI's models for healthcare applications.
The money question
A valuation of roughly $3.5 billion (about €3 billion) for a pre-revenue research lab sounds aggressive until you compare it to peers. Mira Murati's Thinking Machines Lab reportedly closed at $12 billion. Fei-Fei Li's World Labs, which is pursuing similar world model research, raised at a $1 billion valuation in 2024 and is now reportedly seeking $5 billion.
VCs circling AMI include Cathay Innovation, Greycroft, Hiro Capital, and French public investment bank Bpifrance, according to Bloomberg. LeCun's track record helps. He won the 2018 Turing Award alongside Geoffrey Hinton and Yoshua Bengio, and his convolutional neural network research from the 1980s underpins most of modern computer vision.
Still, the competitive landscape is getting crowded. Google DeepMind is developing world models through its Genie series. Every major robotics company needs similar technology. LeCun's edge is scientific credibility, but he's 65 and has spent a decade managing a corporate research lab rather than building products.
Why he's angry about open source
LeCun has been publicly critical of American tech companies retreating from open AI development. At a December conference, he said the best open-source language models "are all Chinese now. And it's not good."
The DeepSeek R1 model's January 2025 release spooked Silicon Valley by matching frontier performance at a fraction of the training cost. LeCun's read: this proves open-source collaboration works better than corporate secrecy, not that China is overtaking the US.
His concern is that closed development creates blind spots. At Meta, he says, researchers stopped publishing papers and pursued "safe and proven" approaches rather than risky innovation. When you do that, "you fall behind," he told the Financial Times. AMI will publish its research openly.
Whether that's idealism or strategy depends on your read of the world model market. Open publication lets smaller teams build on your work. It also lets competitors catch up faster.
What happens now
LeCun says early "baby" versions of AMI's systems should emerge within a year, with full-scale models a few years after. The initial applications will likely be narrow: robotics planning, healthcare decision support, autonomous vehicles. General-purpose AI that reasons like a human remains distant.
AMI Labs plans to establish its Paris headquarters in early 2026. Meta will reportedly be the startup's first client, maintaining ties despite the acrimonious departure.