OpenAI rolled out GPT-5.5 to paying ChatGPT and Codex users Thursday, positioning the model for agent work over straight chat. The company's launch post leans on multi-step tasks: planning, using tools, and checking its own output.
The release comes seven weeks after GPT-5.4. Plus, Pro, Business, and Enterprise subscribers get GPT-5.5 today, with a pricier GPT-5.5 Pro variant reserved for Pro and above. API access is coming "very soon," per OpenAI.
On OpenAI's own numbers, GPT-5.5 hits 82.7% on Terminal-Bench 2.0, 58.6% on SWE-Bench Pro, and 78.7% on OSWorld-Verified. Independent testing hasn't confirmed those figures. GPT-5.5 Pro scores 90.1% on BrowseComp, which the company says tops Gemini 3.1 Pro's 85.9%.
"What is really special about this model is how much more it can do with less guidance," OpenAI President Greg Brockman told reporters, the kind of line you'd expect at a launch briefing. Pricing is more concrete: API usage will run $5 per million input tokens and $30 per million output, double GPT-5.4's rates.
In Codex, GPT-5.5 ships with a 400K context window and an optional Fast mode that runs 1.5x faster at 2.5x the cost. OpenAI says 85% of its own staff already use Codex weekly, and that nearly 200 early-access partners tested the model before release.
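The per-token math is easy to sketch. The snippet below uses the company-stated rates ($5 per million input tokens, $30 per million output); the token counts are hypothetical, and applying Codex's 2.5x Fast-mode multiplier to API rates is an assumption for illustration, since OpenAI quoted that ratio for Codex rather than the API.

```python
# Cost sketch at OpenAI's stated GPT-5.5 rates. Token counts are hypothetical;
# the 2.5x Fast-mode multiplier is quoted for Codex and applied here only
# as an illustration.
INPUT_RATE = 5.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 30.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int, fast: bool = False) -> float:
    """Estimated USD cost of one request at the company-stated rates."""
    base = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    return base * 2.5 if fast else base

# A hypothetical agent run: 300K tokens in, 20K out.
standard = request_cost(300_000, 20_000)          # $1.50 + $0.60 = $2.10
fast = request_cost(300_000, 20_000, fast=True)   # $5.25
```

At double GPT-5.4's rates, long agent runs that fill most of the 400K window get noticeably more expensive, which is presumably why the Fast mode is opt-in.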
Bottom Line
GPT-5.5 API pricing lands at $5 per million input tokens and $30 per million output, roughly double what GPT-5.4 charged.
Quick Facts
- Released April 23, 2026, seven weeks after GPT-5.4
- Available to Plus, Pro, Business, and Enterprise tiers in ChatGPT and Codex
- 400K-token context window in Codex at launch; 1M-token context promised for the API
- Self-reported benchmarks: 82.7% Terminal-Bench 2.0, 58.6% SWE-Bench Pro, 78.7% OSWorld-Verified
- API pricing (company-stated): $5/M input, $30/M output tokens