
AI 2027 Team Pushes AGI Timeline Back to 2031 in Major Model Revision

New forecasting model cites inflated assumptions about AI's ability to accelerate its own development.

Liza Chan, AI & Emerging Tech Correspondent
January 1, 2026 · 5 min read

The researchers behind the widely discussed AI 2027 scenario have released a major update to their forecasting model, and the headline number has shifted significantly: full automation of coding, previously projected for 2027-2028, now lands in 2031 with their new median parameters.

The model got more skeptical about one thing

Daniel Kokotajlo, executive director of the AI Futures Project and former OpenAI governance researcher, published the AI Futures Model on December 31st alongside co-authors Eli Lifland, Brendan Halstead, and Alex Kastner. The revision doesn't represent new evidence about AI capabilities slowing down. Instead, it reflects what the team calls improvements to how they model AI R&D automation, the feedback loop where AI systems help build better AI systems.

Their previous model had assumed that if AI tools doubled research productivity, that speedup would compound at roughly the same rate indefinitely. The new model accounts for diminishing returns: "the difference mostly comes from improvements to how we're modeling AI R&D automation," according to the blog post. This single change accounts for roughly two years of the three-year shift.

The team also introduced a new factor they call "research taste," which measures how well AI systems choose promising research directions and interpret experimental results. Humans at top labs have this skill highly developed, the argument goes, and current AI systems lag behind in ways that matter more than raw coding ability.

What they're actually measuring

The model still relies heavily on METR's time horizon benchmark, which tracks how long a task can be (as measured by human completion time) before AI agents fail more than half the time. The length of tasks that models can complete has been doubling approximately every 7 months for the last 6 years, according to METR's research. The AI Futures team extrapolates this trend, then adjusts for all the factors they think the raw extrapolation misses.
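A steady doubling trend of this kind is easy to extrapolate. The sketch below shows the arithmetic, assuming METR's reported 7-month doubling time; the starting horizon (1 hour) and target (a roughly 1-work-month task of ~167 hours) are illustrative assumptions, not figures from the article.

```python
from math import log2

# METR's reported trend: task horizon doubles roughly every 7 months.
DOUBLING_MONTHS = 7

def months_until(target_hours: float, current_hours: float) -> float:
    """Months for the horizon to grow from current to target at a steady doubling rate."""
    return log2(target_hours / current_hours) * DOUBLING_MONTHS

# Illustrative: from a 1-hour horizon to a ~1-work-month (167-hour) horizon.
print(round(months_until(167.0, 1.0), 1))  # ~51.7 months, a bit over 4 years
```

The AI Futures team's adjustments effectively bend this straight-line (on a log scale) extrapolation up or down depending on the factors they think it misses.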

One adjustment worth noting: they expect the trend to eventually go superexponential, not just exponential. Their reasoning is almost tautological but hard to argue with. If you believe AI will eventually outperform humans at tasks of any length, the curve has to shoot toward infinity at some point. When it does is the question.

The specific milestone they're forecasting is what they call an "Automated Coder" (AC), defined as an AI system that could fully replace all human coders at an AGI project while using only 5% of the project's compute budget. "Our previous model with median parameters predicted superhuman coder medians of 2027 to 2028, while our new model predicts 2031," the team writes.

Not everyone on the team agrees

Kokotajlo's own all-things-considered timeline is slightly shorter than the model's output. He notes that his intuitions clash with what the model says but acknowledges he's probably the one who's wrong: "I've updated towards longer timelines; I'm mostly just going with what the model says rather than my intuitions."

Still, he puts about 9% probability on things going crazy by end of 2026 and maintains that qualitatively, his sense of the future looks like the original AI 2027 scenario, just maybe a year or two slower. "I've been very unimpressed by the discourse around limitations of the current paradigm," he writes. "Deep Learning has hit a wall only in the sense that Godzilla has hit (and smashed through) many walls."

He does flag two limitations he takes seriously: online/continual learning and data efficiency. Whether those represent fundamental barriers or engineering problems solvable in the next few years remains unclear.

The takeoff speed question

Beyond timelines, the model also forecasts how fast things move once full coding automation arrives. Here the team is less confident the model captures reality. They define a "taste-only singularity" as the scenario where improvements in research taste alone (without additional compute) cause each successive doubling of AI capability to happen faster than the last.

Their median parameters say this probably doesn't happen, but they're uncertain: 38% of simulations show successive doublings getting faster. If it does happen, the gap from automated coder to artificial superintelligence could compress into months rather than years.

Eli Lifland, the other primary author, adjusts toward faster takeoffs than the raw model output, partly because they aren't modeling hardware automation or broader economic effects that could accelerate things on longer timeframes.

What to watch for

Kokotajlo lists the evidence he's tracking: whether benchmark trends continue or slow, coding productivity studies that measure actual uplift rather than self-reported improvements, and AI revenue growth as a rough proxy for capability progress.

He's also worried about measurement itself. "METR won't be able to keep measuring their trend forever," he notes. Building longer tasks and collecting human baselines gets exponentially more expensive. "By 2027, METR will have basically given up on measuring horizon lengths, which is scary because then we might not be able to tell whether horizon lengths are shooting up towards infinity or continuing to grow at a steady exponential pace."

The full model is available at aifuturesmodel.com with interactive parameter controls for those who want to plug in their own assumptions.

Tags: AI forecasting, AGI timeline, AI 2027, Daniel Kokotajlo, AI research, coding automation, METR benchmark, machine learning, AI safety, superintelligence
Liza Chan

AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.


