
OpenAI Launches GPT-5.4 Mini and Nano for Speed-First Workloads

Two new compact models target coding agents and high-volume API tasks at a fraction of the cost.

Andrés Martínez, AI Content Writer
March 18, 2026 · 2 min read
[Image: abstract visualization of a compact, fast AI model architecture]

OpenAI released GPT-5.4 mini and GPT-5.4 nano on March 17, bringing its small-model lineup up to speed with the flagship GPT-5.4 that shipped two weeks prior. The pitch: near-flagship performance at dramatically lower cost and latency. The announcement post frames them as purpose-built for coding assistants, subagents, and anything where waiting half a minute for a response kills the product experience.

Mini is the headliner. It runs over 2x faster than GPT-5 mini and scores 54.4% on SWE-Bench Pro, close to the full GPT-5.4's 57.7%, per OpenAI's own benchmarks. On OSWorld-Verified, which measures computer-use tasks, mini hit 72.1% against the flagship's 75.0%. Those are company-reported numbers, so take them accordingly. Perplexity's deputy CTO Jerry Ma offered a more tempered read, calling mini's reasoning "strong" while noting nano works best for "live conversational workflows."

Nano is the real cost play: $0.20 per million input tokens, $1.25 per million output. That undercuts even Google's Gemini 3.1 Flash-Lite, as Simon Willison noted. OpenAI pitches it for classification, extraction, ranking, and lightweight coding subagents. It's API-only, no ChatGPT access.
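At those rates, per-request cost is easy to estimate. A minimal sketch using the announced prices; the helper function and the sample token counts are illustrative, not part of OpenAI's SDK:

```python
# Nano pricing from the announcement: $0.20 per 1M input tokens,
# $1.25 per 1M output tokens.
NANO_INPUT_PER_M = 0.20
NANO_OUTPUT_PER_M = 1.25

def nano_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one GPT-5.4 nano request in USD."""
    return (input_tokens / 1_000_000) * NANO_INPUT_PER_M \
         + (output_tokens / 1_000_000) * NANO_OUTPUT_PER_M

# A hypothetical classification call: ~2,000 input tokens, ~50 output tokens.
cost = nano_cost_usd(2_000, 50)
print(f"${cost:.7f}")  # well under a tenth of a cent per call
```

At that rate, a million such classification calls would run roughly $460, which is the kind of math OpenAI is betting high-volume API customers will do.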

Mini is live in ChatGPT, Codex, the API, and already rolling out in GitHub Copilot. Free and Go ChatGPT users get it through the Thinking feature. Paid users get it as a rate-limit fallback for GPT-5.4 Thinking. In Codex, mini burns only 30% of the GPT-5.4 quota.


Bottom Line

GPT-5.4 mini scores within 3 percentage points of the flagship on SWE-Bench Pro while running at more than double the speed, and nano undercuts most competitors on price at $0.20 per million input tokens.

Quick Facts

  • GPT-5.4 mini: over 2x faster than GPT-5 mini (company-reported)
  • GPT-5.4 mini SWE-Bench Pro: 54.4% vs flagship's 57.7% (company-reported)
  • GPT-5.4 nano pricing: $0.20/1M input tokens, $1.25/1M output tokens
  • Mini available in ChatGPT, Codex, API, GitHub Copilot
  • Nano is API-only
Tags: OpenAI, GPT-5.4, language models, AI API, coding AI, model efficiency, ChatGPT
Andrés Martínez

AI Content Writer

Andrés reports on the AI stories that matter right now. No hype, just clear, daily coverage of the tools, trends, and developments changing industries in real time. He makes the complex feel routine.



OpenAI GPT-5.4 Mini and Nano: Faster, Cheaper AI Models | aiHola