AI Career

OpenAI's Reasoning Chief Leaves to Do Research "Hard to Do" at the Company He Helped Build

Jerry Tworek, who led development of ChatGPT's thinking capabilities and early coding models, departs after nearly seven years.

Liza Chan, AI & Emerging Tech Correspondent
January 8, 2026 · 4 min read

Jerry Tworek, OpenAI's vice president of research and the person who built the team behind the o1 and o3 reasoning models, announced his departure this week. He'd been with the company since 2019, back when it still operated mostly as a nonprofit research lab.

His farewell post lands with a particular sting: "I am leaving to try and explore types of research that are hard to do at OpenAI."

Make of that phrasing what you will.

The résumé matters here

Some departures are just people moving on. This one's different. Tworek's fingerprints are on basically everything that made OpenAI OpenAI.

He ran the Codex research that became GitHub Copilot. He contributed to GPT-4's post-training. He led the team that built the o1 reasoning model, which topped benchmarks for complex logical tasks when it launched. Chinese media outlets have taken to calling him the "father of reasoning models," which is maybe a bit much, but not entirely wrong.

In his farewell note, Tworek listed the highlights: "scaling RL on robots before it was cool, training the first coding models in the world that started the LLM coding revolution, discovering chinchilla scaling before it was called chinchilla."

He also says he's still "a die hard ChatGPT reasoning model user." So there's that.

The pattern is getting hard to ignore

Tworek becomes roughly the twelfth significant departure from OpenAI since the start of 2025. And we're only eight days into the new year.

In September 2024, the company lost CTO Mira Murati, chief research officer Bob McGrew, and VP of research Barret Zoph, all within hours of each other. Then, over the summer of 2025, Meta's Superintelligence Lab poached at least seven OpenAI researchers, including Shengjia Zhao (who co-created ChatGPT and GPT-4) and Jiahui Yu (who led the Perception team).

Liam Fedus, former VP of research and head of post-training, left to start Periodic Labs. Tom Cunningham, an economic researcher, quit after reportedly growing frustrated with limits on publishing work that might paint AI's economic impacts negatively. Larry Summers stepped down from the board following some awkward archived emails.

Sam Altman is now one of only two remaining active members from OpenAI's original eleven-person founding team.

What "hard to do at OpenAI" probably means

Nobody said it directly. But the subtext isn't subtle.

A December report from Wired documented tension between OpenAI's research teams and its increasingly product-focused leadership. One former employee described the economic research team as becoming "a de facto advocacy arm" for the company. Another departing researcher, Miles Brundage, said in late 2024 that it had become "hard for me to publish on all the topics that are important to me."

OpenAI's chief strategy officer Jason Kwon responded to these concerns in an internal Slack message: "Because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes."

Translation: we're a company now, act like it.

The nonprofit-to-for-profit transition, the trillion-dollar IPO rumors, the deals worth hundreds of billions of dollars with Microsoft and chipmakers: resources that used to go toward exploratory research are being redirected to shipping GPT-5.2 and maintaining ChatGPT's market position.

Meanwhile

Lexica founder Sharif Shameem joined OpenAI on January 6th. So talent does flow both ways.

And the company's compensation strategy remains aggressive. Average stock compensation per employee reportedly sits around $1.5 million, about 34 times the figure at typical pre-IPO tech companies. A recent posting for "Head of Preparedness" offered $555,000 annually plus equity. Altman's pitch for that role: "This will be a stressful job, and you'll jump into the deep end pretty much immediately."

No word yet on where Tworek is headed. Given the pattern of recent departures, Meta and Anthropic would be the obvious guesses. Or he could go the Fedus route and start something new.

Either way, the researcher who helped teach ChatGPT to think won't be at OpenAI to see what happens next.

Tags: OpenAI, AI Research, Jerry Tworek, Reasoning Models, Tech Departures
Liza Chan

AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.
