
OpenAI's GPT-5.3-Codex Finally Hits the API

Three weeks after launch, OpenAI's most capable coding model gets full API access with published pricing.

Andrés Martínez, AI Content Writer
February 25, 2026 · 2 min read
[Image: abstract visualization of code flowing through a neural network, representing GPT-5.3-Codex processing software engineering tasks]

OpenAI has opened up API access to GPT-5.3-Codex, the agentic coding model it announced on February 5 but initially withheld from developers. The model page now lists full pricing, rate limits, and endpoint support, ending a three-week wait that frustrated API-dependent teams.

The delay wasn't arbitrary. GPT-5.3-Codex is the first model OpenAI classifies as "High capability" for cybersecurity under its Preparedness Framework, a designation that triggered extra safeguards before broader release. Requests flagged as elevated cyber risk may still be routed automatically to the older GPT-5.2. Developers whose legitimate requests are misclassified can apply through OpenAI's Trusted Access for Cyber program.

Pricing lands at $1.75 per million input tokens and $14 per million output tokens, with cached inputs at $0.175. That matches GPT-5.2-Codex on input cost. The model ships with a 400,000-token context window and 128,000-token max output, plus support for four reasoning effort levels: low, medium, high, and xhigh.
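To make those per-million-token figures concrete, here is a minimal cost-estimation sketch using only the prices quoted above; the function name and the example token counts are illustrative, not part of OpenAI's API.

```python
# Estimate the dollar cost of a GPT-5.3-Codex request from the
# published rates: $1.75/M input, $0.175/M cached input, $14/M output.
PRICE_INPUT = 1.75 / 1_000_000    # $ per uncached input token
PRICE_CACHED = 0.175 / 1_000_000  # $ per cached input token
PRICE_OUTPUT = 14.00 / 1_000_000  # $ per output token

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_tokens: int = 0) -> float:
    """Return estimated USD cost; cached input tokens bill at the discount."""
    uncached = input_tokens - cached_tokens
    return (uncached * PRICE_INPUT
            + cached_tokens * PRICE_CACHED
            + output_tokens * PRICE_OUTPUT)

# A 100K-token prompt with a 10K-token completion:
print(round(estimate_cost(100_000, 10_000), 4))  # prints 0.315
```

The asymmetry is the point: output tokens cost 8x input, so long agentic transcripts are dominated by completion cost, while prompt caching cuts repeated-context input cost by 90%.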

On OpenAI's own benchmarks (self-reported, as always), GPT-5.3-Codex scores 56.8% on SWE-Bench Pro and 77.3% on Terminal-Bench 2.0. The SWE-Bench Pro gain over GPT-5.2-Codex is slim (56.4% prior), but Terminal-Bench jumped from 64.0%. OpenAI also claims 25% faster inference and fewer output tokens per task. Independent benchmark runs from the community should start appearing shortly.

The model is accessible via both the Chat Completions and Responses APIs, plus batch processing. GitHub Copilot users have had access since February 9. Free-tier API users are locked out entirely.
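A sketch of what request bodies for the two endpoints might look like. The model name and the "xhigh" effort level come from the announcement; the parameter shapes follow OpenAI's documented Chat Completions and Responses conventions for reasoning models, but check the current API reference before relying on them.

```python
# Hypothetical request bodies for the two endpoints mentioned above.
MODEL = "gpt-5.3-codex"

# Chat Completions: reasoning effort as a top-level field.
chat_body = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Refactor this function."}],
    "reasoning_effort": "xhigh",  # low | medium | high | xhigh
}

# Responses API: effort nested under a "reasoning" object.
responses_body = {
    "model": MODEL,
    "input": "Refactor this function.",
    "reasoning": {"effort": "xhigh"},
}

print(chat_body["reasoning_effort"], responses_body["reasoning"]["effort"])
```

Either body can also be submitted through the Batch API for asynchronous, discounted processing of large job sets.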


Bottom Line

GPT-5.3-Codex API access is live at $1.75/$14 per million tokens (input/output), with a 400K context window and cybersecurity-related request filtering still in place.

Quick Facts

  • API pricing: $1.75 input / $14 output per 1M tokens
  • 400,000-token context window, 128,000-token max output
  • SWE-Bench Pro: 56.8% (company-reported)
  • Terminal-Bench 2.0: 77.3% (company-reported)
  • First OpenAI model classified "High" for cybersecurity capability
  • Free-tier API users not supported
Tags: OpenAI, GPT-5.3-Codex, API, coding models, agentic AI, cybersecurity, developer tools
Andrés Martínez, AI Content Writer

Andrés reports on the AI stories that matter right now. No hype, just clear, daily coverage of the tools, trends, and developments changing industries in real time. He makes the complex feel routine.



GPT-5.3-Codex Now Available in OpenAI API With Pricing | aiHola