Google DeepMind CEO Demis Hassabis delivered his most candid near-term AI forecast at the Axios AI+ Summit in San Francisco last week. Asked to skip the hype and predict what will actually happen in the next 12 months, the Nobel laureate offered three concrete bets.
First: multimodal convergence accelerates. Hassabis points to Nano Banana Pro, Google's new image model built on Gemini 3 Pro, as proof of concept: it excels at understanding images, styles, and infographics. The next frontier is fusing video with language models, and he expects major progress there within the year. Second: world models like DeepMind's Genie get substantially better. Third, and most significant for everyday users: AI agents finally become reliable enough to complete tasks end-to-end. Today's agents routinely stumble partway through a task, but Hassabis believes that a year from now we will have agents that come "close" to reliably accepting and completing entire delegated tasks.
On the bigger picture, Hassabis puts AGI at 5 to 10 years out, saying "We're definitely not there now" but "quite close." The ultimate vision? "Radical abundance": a world in which most major human challenges are solved with AI assistance. For Hassabis, that's not utopian dreaming. It's the engineering roadmap he's spent his career building toward.
The Bottom Line: The head of Google's AI research thinks 2026 is when agents go from demo to daily driver, with AGI following within five to ten years.
QUICK FACTS
- Event: Axios AI+ Summit, San Francisco, December 5, 2025
- AGI timeline: 5-10 years, per Hassabis
- Near-term bets: Multimodal fusion, world models (Genie), reliable agents
- Key example cited: Nano Banana Pro (Gemini 3 Pro Image)
- End goal: "Radical abundance" where AI solves most major human problems