
Karen Hao Says Sam Altman Built OpenAI on Manipulation, Cites 90 Insider Interviews

Investigative journalist details how Altman allegedly mirrored Musk's language, redefined AGI for each audience, and maneuvered his way to CEO.

Liza Chan
AI & Emerging Tech Correspondent
March 29, 2026 · 5 min read

Investigative journalist Karen Hao sat down with Steven Bartlett on The Diary of a CEO this week and laid out her case that Sam Altman constructed OpenAI's rise through strategic manipulation of everyone around him: co-founders, investors, regulators, and the public. Hao's reporting draws on more than 300 interviews, including conversations with over 90 current and former OpenAI employees, all compiled in her bestselling book Empire of AI: Inside The Reckless Race For Total Domination.

Musk, for his part, responded on X the same day the episode dropped. His commentary was brief and predictable.

The mirror trick

Hao's most detailed claim involves how Altman recruited Musk to co-found OpenAI in 2015. Before that year, she says, Altman had been publicly focused on engineered viruses as the primary existential threat, not AI. Then he wrote a blog post declaring AI development the greatest threat to humanity's existence, language that closely tracked what Musk had been saying on podcasts and at MIT around the same time.

Hao frames this as deliberate calibration. Altman needed Musk's money and credibility, so he adopted Musk's vocabulary. The blog post even contains an awkward parenthetical acknowledging that engineered viruses might be more likely, which Hao reads as Altman trying to reconcile his old position with his new pitch. Whether that interpretation is fair depends on how much weight you give to word choice in a single blog post. Plenty of people genuinely updated their views on AI risk around 2015.

But Hao isn't relying on one blog post. She describes a pattern: Altman consistently adopting whatever language his current audience needs to hear. The most striking example involves OpenAI's definition of AGI.

AGI means whatever you need it to mean

According to Hao, OpenAI has used at least four different definitions of artificial general intelligence depending on who they're talking to. When Altman testifies before Congress, AGI is a system that can cure cancer and solve climate change. When selling products to consumers, it's the best digital assistant you'll ever have. In the Microsoft investment deal, it was defined as a system generating $100 billion in revenue. And on OpenAI's own website, it's described as autonomous systems that outperform humans at most economically valuable work.

These aren't subtle variations. A cancer-curing superintelligence and a really good chatbot are fundamentally different products with fundamentally different implications for regulation, safety, and investment. Using one term for both is either sloppy or strategic, and Hao clearly believes it's the latter.

How Musk lost the CEO job

The podcast also surfaces a specific claim about OpenAI's transition from nonprofit to for-profit. According to Hao's reporting, when Ilya Sutskever and Greg Brockman were deciding who should lead the new for-profit entity, both initially favored Musk. Altman then went to Brockman, a longtime acquaintance from Silicon Valley circles, and argued that giving Musk control of a potentially powerful AI technology would be dangerous given his unpredictability. Brockman was convinced, then brought Sutskever around. Musk, finding himself shut out, left.

Court documents from the ongoing lawsuit have confirmed some version of this sequence, though the characterization of who manipulated whom depends entirely on which side you ask. Jury selection in Musk v. Altman begins April 27, which makes the timing of this podcast appearance hard to ignore.

The polarization problem

Hao's own assessment of Altman is worth noting because it's more nuanced than the headline version. She told Bartlett that across all her interviews, no one had moderate feelings about the man. People either consider him the Steve Jobs of this generation or view him as manipulative and dishonest. The dividing line, she says, is whether you share his vision. If you do, his persuasion skills look like leadership. If you don't, they look like something else.

This is probably the most honest observation in the entire two-hour conversation. Dario Amodei, who left OpenAI to found Anthropic, apparently fits the second category. Hao describes him as someone who initially believed he and Altman were aligned, only to conclude over time that Altman had been using Anthropic's technical capabilities to advance a vision Amodei didn't agree with.

What's missing

The obvious caveat: Hao's sources are overwhelmingly anonymous. OpenAI refused to participate in the book and Altman publicly dismissed it as the work of someone intent on twisting things. We're hearing one side of an internal power struggle, filtered through a journalist who clearly finds that side more credible.

And Musk is hardly a disinterested party here. He runs xAI, a direct competitor to OpenAI. His lawsuit seeks up to $134 billion in damages. His reaction to the podcast was to post his well-worn nickname for Altman on X, which tells you roughly how seriously to take his commentary as independent corroboration.

Still, the sheer volume of Hao's reporting is hard to dismiss entirely. More than 300 interviews across multiple years, with sources spanning OpenAI's founding dinner at the Rosewood Hotel through the 2023 board crisis and beyond. If even half of the pattern she describes is accurate, it paints a picture of a company whose public mission has always been subordinate to its CEO's talent for telling different people different things.

The jury in Oakland will get their own chance to weigh the evidence starting April 27.

Tags: OpenAI, Sam Altman, Karen Hao, Elon Musk, Empire of AI, AGI, Musk v Altman, AI industry, Silicon Valley
Liza Chan

AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.


