
OpenAI Replaces 2018 Charter With Five-Principle AGI Framework

Sam Altman publishes new principles document Sunday, dropping the 2018 commitment to stand aside if a rival nears AGI.

Liza Chan, AI & Emerging Tech Correspondent
April 27, 2026 · 3 min read

[Photo: OpenAI corporate office building exterior at dusk]

Sam Altman published a new five-principle framework for OpenAI on Sunday, the company's first major rewrite of its guiding document since the 2018 charter. The post lists democratization, empowerment, universal prosperity, resilience, and adaptability as the values guiding OpenAI's path to artificial general intelligence.

What dropped out

Read the 2018 charter and the 2026 principles side by side and the missing pieces stand out more than the new ones. The original charter committed OpenAI to stop competing and start assisting if a rival, value-aligned project got close to AGI first. That clause is gone. So is most of the language of fiduciary duty to humanity, replaced by broader gestures toward "democratic processes" without specifying whose democracy or which process.

The new principles document mentions AGI twice. The 2018 charter built its entire framework around the concept. That isn't a tonal shift; it's a relocation of the company's center of gravity, away from a specific capability finish line and toward a permanent infrastructure role.

The hedge buried in the fifth principle

Altman closes with adaptability, which functions as a get-out clause for everything above it. "We can imagine periods in the future where we have to trade off some empowerment for more resilience," he writes. Translation: if things get hairy, the empowerment promise gets renegotiated.

That isn't necessarily wrong. A company building systems with biosecurity and cyber implications probably should reserve the right to lock things down. But it does mean the four principles sitting above adaptability are conditional, and the entity deciding the conditions is OpenAI.

Timing

This lands during a stretch when OpenAI has spent more time on defense than offense. The Pentagon contract drew sustained criticism. The conversion from capped-profit to fully commercial structure has attracted scrutiny from former employees and state attorneys general. The principles arrive looking less like a fresh statement of values and more like a coherent public posture assembled before regulators write one for them.

Altman addresses some of this directly. "OpenAI is a much larger force in the world than it was a few years ago," the post reads, with a promise of transparency whenever the principles change. The candor is welcome, but the track record on similar promises is uneven: multiple ex-employees have flagged gaps between OpenAI's stated commitments and its actual resource allocation, particularly around safety teams.

What to watch

The document invites criticism, and it will get plenty. The harder test arrives the first time one of the five principles bumps up against a commercial decision. OpenAI's next major model release and any further defense-sector contracts will be the early indicators. The 2018 charter survived almost eight years before getting rewritten. Whether the 2026 version makes it half that long is the open question.

Tags: OpenAI, Sam Altman, AGI, artificial intelligence, AI policy, AI safety, AI governance, OpenAI charter, AI regulation
Liza Chan

AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.


