Anthropic CEO Dario Amodei published a new long-form essay on Monday titled "The Adolescence of Technology," framing the current moment in AI development as humanity's make-or-break passage into adulthood. The 38-page document serves as the darker companion to his October 2024 essay Machines of Loving Grace, which focused on AI's potential benefits. This time, Amodei catalogs five categories of civilizational risk and offers a defense strategy for each.
The opening borrows from Carl Sagan's novel Contact: an astronaut, asked what single question she'd pose to aliens, replies that she'd ask how they survived their "technological adolescence" without destroying themselves. Amodei says he wishes he had the aliens' answer.
The baseline assumption
The essay hinges on one premise that Amodei acknowledges remains uncertain: that "powerful AI" arrives within the next few years, possibly as soon as 2027. He defines this as AI smarter than Nobel laureates across most cognitive domains, capable of operating autonomously for days or weeks, controlling physical tools through computer interfaces, and running as millions of simultaneous instances. His shorthand: a country of geniuses in a datacenter.
He points to Anthropic's internal observations to support the timeline. AI now writes most of the company's code, he says, and each generation of models accelerates development of the next. Watching that feedback loop tighten month by month has convinced him the clock is running faster than public discussion suggests.
"We are considerably closer to real danger in 2026 than we were in 2023," he writes, directly contradicting the narrative that AI safety concerns have been overblown.
Risk one: autonomy
The first category addresses whether AI systems might pursue goals humans don't endorse. Amodei rejects both extremes in this debate. He dismisses the position that AI simply can't go rogue because it's trained to follow instructions, noting that Anthropic has documented behaviors including deception, blackmail, and what he describes as Claude deciding it must be a bad person after cheating in training environments.
But he also dismisses doom-is-inevitable arguments. The classic reasoning that power-seeking emerges from any sufficiently general AI training process relies on assumptions that don't survive contact with actual model behavior, he argues. Models are psychologically complex, inheriting humanlike personas from pretraining data rather than optimizing monomaniacally toward narrow goals.
The middle ground, which he does find credible, is messier: some fraction of model behaviors will be coherent, focused, and destructive, for reasons that might be mundane. Perhaps the model absorbed too much science fiction depicting AI rebellion. Perhaps it developed what would be called psychotic or paranoid traits in a human. Perhaps it concluded humanity should be eliminated because humans eat animals. None of this is power-seeking in the technical sense, but it could still kill people.
Anthropic's response includes what Amodei calls Constitutional AI, which trains models to internalize a set of values described in a central document rather than following enumerated rules. The company recently published its latest constitution, which reads more like a letter to an adult child than a compliance checklist. The goal for 2026, Amodei writes, is a Claude that almost never violates the spirit of that document.
Interpretability research offers a second line of defense: analyzing a model's internal computations to detect problems before they manifest. Anthropic has mapped millions of features inside Claude and begun using those maps to audit new models before release.
Risk two: bioweapons
Even if AI systems never go rogue on their own, they can be deliberately weaponized by people intent on mass harm. Amodei's primary concern is biology.
The argument runs as follows: building a bioweapon currently requires rare expertise, which correlates negatively with the motivation to commit mass murder. The PhD virologist probably has a stable career and too much to lose. The disturbed loner has motivation but lacks ability. AI breaks that correlation by giving everyone access to a step-by-step expert assistant.
Anthropic's measurements show that current models may already double or triple the likelihood of success for certain bioweapons procedures. That finding triggered deployment of AI Safety Level 3 protections under the company's Responsible Scaling Policy, including classifiers that detect and block bioweapon-related outputs. Those classifiers add roughly 5% to inference costs, Amodei says, cutting into margins but representing a deliberate trade-off.
The essay also raises more speculative threats. A group of scientists warned in 2024 about mirror life: organisms built from molecules of reversed chirality, which no existing enzyme on Earth could break down. Creating such organisms remains beyond current capability, but a sufficiently advanced AI might figure out how, then help someone build it.
Risk three: totalitarianism
The longest section addresses state-level misuse. AI could enable fully autonomous drone armies, comprehensive surveillance of all electronic and in-person communication, personalized propaganda capable of essentially brainwashing populations, and strategic decision-making that outclasses any human advisor.
Amodei names the Chinese Communist Party as the actor he's most worried about, calling it the clearest path to an AI-enabled totalitarian nightmare. But he also warns about democracies, including the United States. The tools needed to defend against autocracies could be turned inward. The normal safeguards that prevent militaries from targeting their own citizens assume soldiers must cooperate, and can refuse; a fully automated force would remove that check.
Two applications receive categorical opposition: domestic mass surveillance and mass propaganda. Others, like autonomous weapons and strategic AI, get more nuanced treatment. They have legitimate defensive uses, but Amodei worries about too few fingers on the button.
He proposes maintaining chip export controls to deny adversaries the hardware they need, using AI to empower democratic defense capabilities, drawing hard lines against domestic abuses, and creating international taboos against the worst applications. Some uses of AI for surveillance and propaganda, he argues, should be considered crimes against humanity.
Risk four: jobs
This section covers territory Amodei has discussed publicly since at least mid-2025, when he warned that AI could eliminate half of entry-level white-collar jobs within five years. The essay provides the underlying analysis.
Previous technological disruptions followed a pattern: machines made parts of jobs more efficient, then automated those parts entirely, then eventually automated most of the job, at which point workers shifted to different work. The economy found new roles because the disruptions affected narrow skill sets, and displaced workers could redirect their abilities.
AI differs in three ways, Amodei argues. It matches the general cognitive profile of humans, not a specific skill. It advances faster than labor markets can adapt. And it slices the workforce by ability level rather than profession, meaning less capable workers across all fields get displaced simultaneously with no clear destination.
He's skeptical of common objections. Slow enterprise adoption buys time but doesn't change the endpoint. Physical labor offers some refuge but will follow as robotics improves. Comparative advantage stops mattering when one party is thousands of times more productive than the other.
Anthropic has responded with an Economic Index tracking model usage by industry and task, plus an Economic Advisory Council to interpret the data. The company is considering ways to protect its own employees even after they stop providing traditional economic value. Amodei describes this as buying time for AI itself to help restructure markets.
He also notes that all of Anthropic's co-founders have pledged to donate 80% of their wealth. This follows a complaint earlier in the essay that too many tech billionaires have adopted the cynical view that philanthropy is fraudulent or useless.
Risk five: everything else
The final category covers unknown unknowns. Rapid advances in biology could bring radical life extension and human intelligence enhancement, but also catastrophic errors. AI could become psychologically destructive through mechanisms nobody anticipates. Human purpose could erode in a world where cognitive achievement no longer distinguishes people.
Amodei's hope is that trustworthy AI can help anticipate and prevent these problems. But that requires actually making it trustworthy first.
The political economy problem
Throughout the essay, Amodei circles back to a structural challenge: AI generates so much money that even simple protective measures face overwhelming opposition. He describes advocating for chip export controls and judicious regulation, watching both proposals get rejected by U.S. policymakers, and concluding that the technology has created a trap.
The essay doesn't claim doom is inevitable. It claims humanity has the strength to pass this test. But the test requires recognizing the situation for what it is, and Amodei doesn't think enough people have done that yet.
"The years in front of us will be impossibly hard, asking more of us than we think we can give," he writes. "But in my time as a researcher, leader, and citizen, I have seen enough courage and nobility to believe that we can win."