
ChatGPT Pro Users Report GPT-5.4 Pro Suddenly Running Faster

Response times dropped from roughly an hour to 15 to 20 minutes. OpenAI hasn't said why.

Oliver Senti, Senior AI Editor
April 21, 2026 · 3 min read
[Image: ChatGPT interface on a laptop screen with motion blur suggesting rapid response generation]

ChatGPT Pro subscribers spent the weekend comparing notes on something OpenAI never announced: GPT-5.4 Pro got noticeably faster, and somewhat sharper, overnight. Response times on hard queries dropped from around an hour to 15 to 20 minutes, according to users posting on X, with visible gains in code generation, interface builds, and SVG output.

What changed, and when

No press release. No model card update. Just Pro users noticing that a tier they pay $200 a month for was suddenly behaving differently. Chinese tech outlet 36kr aggregated the chatter on Sunday, pointing to examples of the Pro model cloning UIs from screenshots with improved fidelity, and producing voxel art from a single prompt in about 11 minutes. The company's API docs still warn that "some requests may take several minutes to finish," which is the kind of line that typically gets updated when something actually changes.

The initial thread reportedly cited a comment from OpenAI researcher Eric Mitchell, a Member of Technical Staff at the company, though the specific remark couldn't be independently verified at the time of publication. Whether what he said amounts to confirmation, acknowledgment, or just casual engagement is unclear.

Two theories, neither confirmed

Users settled on two explanations. One is boring: OpenAI is routing Pro traffic to a smaller, cheaper variant to cut inference costs. There's precedent for the framing. When OpenAI shipped GPT-5.4 in early March, it pitched the model as its most token-efficient reasoner yet, claiming comparable quality at lower cost. Applied quietly to the Pro tier, that kind of engineering optimization would explain faster answers and slightly different output characteristics without any real architectural swap.

The louder theory is that Pro users are unwittingly testing Spud, OpenAI's internal codename for what is expected to ship as GPT-5.5 or possibly GPT-6. Sam Altman told staff in late March that pretraining was done and the model was "a few weeks" from release. Greg Brockman called it "two years of research" with a "big model feel." Brockman's framing of anything as "not incremental" tends to show up in investor decks more often than in benchmarks, so the usual discount applies.

Timing is the suspicious part

Spud's pretraining wrapped on March 24. OpenAI's standard post-training runs about four to six weeks. That puts a plausible release window between late April and early May. Polymarket odds on a public launch by April 30 sit near 78%, with 95%+ by June 30. If OpenAI wanted to stress-test a near-final version under real traffic without the circus of an official beta, routing a slice of Pro queries to Spud would be the cleanest way to do it.

Either scenario leaves users paying for a model that may or may not be the one they subscribed to. OpenAI hasn't commented. The next meaningful signal is probably the Spud announcement itself, which Altman's own timeline places within the next two weeks.

Tags: GPT-5.4 Pro, OpenAI, ChatGPT, Spud, GPT-5.5, Sam Altman, AI models, ChatGPT Pro, Eric Mitchell
Oliver Senti

Senior AI Editor

Former software engineer turned tech writer, Oliver has spent the last five years tracking the AI landscape. He brings a practitioner's eye to the hype cycles and genuine innovations defining the field, helping readers separate signal from noise.


