OpenAI dropped GPT-5.2-Codex on December 18, calling it the company's most advanced model for software engineering. The pitch centers on cybersecurity, a claim with some backing: a security researcher using the model's predecessor recently found three previously unknown vulnerabilities in React.
That researcher, Andrew MacPherson at Privy (a Stripe company), was poking at a critical React flaw when GPT-5.1-Codex-Max helped surface additional issues. The bugs, disclosed December 11, included a source code exposure risk and a denial-of-service vector. OpenAI points to this as proof of concept for AI-assisted defensive security work.
On benchmarks, OpenAI reports 56.4% on SWE-Bench Pro and 64% on Terminal-Bench 2.0. Both are company-reported figures, and independent verification is pending. The model also adds native context compaction: it automatically condenses earlier parts of a session, so it can work through longer coding tasks without losing track of what it was doing. Windows environment support also got an upgrade.
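To make the idea concrete: context compaction, as a general technique, means folding older conversation turns into a short summary when the session approaches the model's context budget, so recent work stays intact. The sketch below is a minimal illustration of that pattern, not OpenAI's implementation; the token heuristic and `naive_summarize` stand-in are assumptions for demonstration.

```python
# Illustrative sketch of context compaction as a general technique.
# NOT OpenAI's implementation; the summarizer is a placeholder.

def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 1 token per 4 characters.
    return max(1, len(text) // 4)

def naive_summarize(messages: list[str]) -> str:
    # Placeholder: a real system would call a model to summarize.
    # Here we just keep the first line of each old message, truncated.
    return "SUMMARY: " + " | ".join(m.splitlines()[0][:60] for m in messages)

def compact_history(messages: list[str], budget: int) -> list[str]:
    """Fold the oldest messages into one summary so the total fits the budget."""
    total = sum(rough_token_count(m) for m in messages)
    if total <= budget:
        return messages  # everything still fits; nothing to compact
    # Walk backwards, keeping the most recent messages verbatim.
    kept: list[str] = []
    remaining = budget
    for m in reversed(messages):
        cost = rough_token_count(m)
        if cost <= remaining // 2:  # reserve half the budget for the summary
            kept.append(m)
            remaining -= cost
        else:
            break
    old = messages[: len(messages) - len(kept)]
    summary = naive_summarize(old) if old else ""
    return ([summary] if summary else []) + list(reversed(kept))
```

The design choice worth noting: the newest messages are preserved verbatim while only the oldest are lossy-compressed, which is what lets a long session continue without the model forgetting its current task.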
Paid ChatGPT users have access now. API rollout comes "in the coming weeks," with OpenAI also launching an invite-only pilot for vetted cybersecurity professionals who want less restrictive access for defensive work. The company says the model stays below "High" on its internal Preparedness Framework for cyber capability, though that threshold is self-defined.
The Bottom Line: OpenAI is betting that better AI coding tools mean better security tools, and the React vulnerabilities suggest there's something to it.