Thinking Machines Lab dropped the waitlist on Tinker this week. The LLM fine-tuning service, which launched in private beta back in October, is now open to anyone willing to pay its usage-based, per-token rates.
The bigger news is what shipped alongside general availability. Tinker now supports Kimi K2 Thinking, Moonshot AI's trillion-parameter reasoning model built for extended chains of thought and heavy tool use. The company also added an OpenAI API-compatible interface, which means developers can swap Tinker into existing pipelines without rewriting integration code. Vision input arrived too, via Alibaba's Qwen3-VL models in 30B and 235B variants.
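The practical upshot of an OpenAI-compatible interface is that the request shape stays the same and only the base URL and credentials change. A minimal sketch of that swap, assuming a hypothetical endpoint URL and model name (check Tinker's docs for the real values):

```python
import json

# Hypothetical base URL -- the actual Tinker endpoint will differ.
TINKER_BASE_URL = "https://api.example-tinker-endpoint.com/v1"


def build_chat_request(base_url: str, api_key: str, model: str, messages: list) -> dict:
    """Assemble an OpenAI-style chat-completions request.

    Any service exposing an OpenAI-compatible interface accepts this same
    payload shape; pointing an existing pipeline at it is just a matter of
    changing base_url and api_key.
    """
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"model": model, "messages": messages}),
    }


req = build_chat_request(
    TINKER_BASE_URL,
    api_key="sk-...",                 # placeholder credential
    model="my-lora-finetune",         # hypothetical fine-tuned model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(req["url"])
```

The same dictionary could be handed to any HTTP client, or the official OpenAI SDK could be used directly by passing the custom `base_url` at client construction.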
Tinker's pitch hasn't changed: researchers write Python scripts on their laptops while Thinking Machines handles distributed GPU orchestration behind the scenes. The service uses LoRA adapters rather than full fine-tuning, trading some output quality for dramatically lower compute costs. Thinking Machines published research in September claiming their LoRA implementation matches full fine-tuning performance, though independent benchmarks haven't confirmed those results.
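The compute savings from LoRA come from freezing the base weight matrix and training only a low-rank update. A minimal NumPy sketch of the idea (illustrative dimensions, not Tinker's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of a d_out x d_in linear layer.
d_out, d_in, r = 512, 512, 8          # r is the LoRA rank
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors: only A and B are updated during fine-tuning.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))              # zero-init so training starts from W exactly

alpha = 16                            # LoRA scaling hyperparameter
W_eff = W + (alpha / r) * (B @ A)     # effective weight used at inference

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs {full_params} "
      f"({lora_params / full_params:.1%})")
```

With these dimensions the adapter trains about 3% of the layer's parameters, which is where the dramatically lower compute cost comes from; whether output quality matches full fine-tuning is exactly the claim in the September research.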
The GA launch comes amid reports that founder and CEO Mira Murati is seeking $5 billion in fresh funding at a $50 billion valuation. The company raised $2 billion at a roughly $10-12 billion valuation in its seed round earlier this year.
The Bottom Line: Tinker's OpenAI API compatibility is the real play here, making it trivial for developers to test specialized fine-tuned models against their existing OpenAI workflows.
QUICK FACTS
- GA launch: December 12, 2025
- Kimi K2 Thinking: 1 trillion parameters
- Pricing: usage-based, $0.03-$3.38 per million tokens depending on model and operation
- Funding: $2B seed round at $10-12B valuation (company-reported)
- Reported new fundraise: $5B at $50B valuation (unconfirmed)