OpenAI is shutting down its self-serve fine-tuning platform in stages, with the final cutoff for new training jobs landing on January 6, 2027. The company posted the timeline to its deprecation page on May 7, 2026, telling developers that inference on existing fine-tuned models keeps running, but only until the underlying base models themselves get retired.
Three dates, narrowing access
The shutdown rolls out in three waves. May 7 was the soft start: organizations that had never run a fine-tuning job lost the ability to create new ones immediately. On July 2, 2026, the gate widens to include any organization that has not run inference on a fine-tuned model in the past 60 days. The January 6, 2027 deadline catches everyone else still actively training.
Existing fine-tuned models keep working for inference. The company's fine-tuning docs confirm that custom models stay queryable until their base models hit their own deprecation date. Which sounds reassuring until you check the deprecation list and notice that gpt-4.1-nano, one of the few snapshots that still support fine-tuning, is already scheduled for retirement on October 23, 2026. Your fine-tune dies when the base model does.
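Since a fine-tune's lifespan is tied to its base model, triage starts with knowing which base each custom model was trained on. Fine-tuned model IDs encode this in their name (the documented convention is ft:&lt;base_model&gt;:&lt;org&gt;:&lt;suffix&gt;:&lt;job_id&gt;); a minimal sketch of extracting it, using a hypothetical model ID:

```python
def base_model_of(fine_tuned_id: str) -> str:
    """Extract the base model from a fine-tuned model ID.

    Assumes the documented ft:<base_model>:<org>:<suffix>:<job_id>
    naming convention; raises if the ID is not a fine-tuned model.
    """
    parts = fine_tuned_id.split(":")
    if parts[0] != "ft" or len(parts) < 2:
        raise ValueError(f"not a fine-tuned model ID: {fine_tuned_id!r}")
    return parts[1]

# Hypothetical fine-tune of the gpt-4.1-nano snapshot
print(base_model_of("ft:gpt-4.1-nano-2025-04-14:acme::abc123"))
# → gpt-4.1-nano-2025-04-14
```

Cross-reference the result against the deprecation page: if the base snapshot appears there, your fine-tune inherits that shutdown date.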
Where OpenAI wants you to go instead
OpenAI's pitch is retrieval, prompt engineering, and managed customization through evals. Its optimization guide has been nudging developers in this direction for months, framing fine-tuning as a last resort after evals and prompting are exhausted. Now it's on its way to not being an option at all.
Fine-tuning was one of the few places developers could shape model behavior without routing through OpenAI's prompt layer or content filters. Pulling it back consolidates more of the customization stack inside OpenAI's own managed tools. Convenient for the platform. Less convenient if your product was built around the freedom.
What about already-trained models?
This is where the announcement gets vague. Inference continues "until the underlying base model is deprecated," which is a moving target. OpenAI has been retiring base models on rough six-to-twelve-month cycles. A developer on the OpenAI community forum asked the obvious question: if the base model gets killed, was the money and compute spent on the fine-tune wasted? The answer depends on how soon your snapshot hits the deprecation list.
The "self-serve" qualifier matters too. This is the public, do-it-yourself pipeline. Enterprise customers working directly with OpenAI may still have custom training paths available, though the company hasn't said much about that.
The pattern
Fine-tuning isn't the first feature OpenAI has wound down this year. The Assistants API gets retired August 26, 2026. DALL·E model snapshots end May 12, 2026. The Realtime API beta sunsets the same day. The company is converging on a smaller surface area, mostly built around the Responses API, managed retrieval, and agent tooling. Anything that doesn't fit gets a sunset date.
Next hard date: July 2, 2026, when the second tier of restrictions kicks in. Organizations that haven't run inference on a fine-tuned model in 60 days lose training access then.