Luma AI publicly released Uni-1 on Sunday, an image model that merges understanding and generation into a single autoregressive transformer. No diffusion, no separate reasoning pipeline. Text and image tokens share the same sequence, and the model can insert reasoning steps mid-generation, planning composition before rendering pixels.
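Luma hasn't published Uni-1's token format, so the following is purely an illustrative sketch of the unified-sequence idea described above: one autoregressive stream where text, reasoning, and image tokens interleave. All token names and values here are made up for illustration.

```python
# Illustrative only: Luma has not disclosed Uni-1's internals.
# The unified-sequence concept: one stream, mixed modalities.
sequence = [
    ("text",      "a red bicycle leaning on a brick wall"),    # user prompt
    ("reasoning", "place bicycle left-of-center; warm light"),  # planning step
    ("image",     [4312, 887, 1029]),                           # discrete image tokens (invented)
    ("reasoning", "wall texture too flat; add shadow"),         # plan again mid-render
    ("image",     [220, 9154, 33]),
]

# A single transformer predicts the next token regardless of modality,
# so "thinking" and "drawing" interleave in one forward pass -- no handoff
# between a language model and a separate generator.
for modality, payload in sequence:
    print(modality, payload)
```

The contrast with a diffusion pipeline is that nothing here leaves the sequence: planning tokens and pixel tokens are peers in the same context window.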
The pitch is straightforward: Uni-1 thinks, then draws. Competitors like DALL-E 3 and Imagen 3 bolt a language model onto a separate generator. Uni-1 skips that handoff. Luma reports it ranks first in Elo for overall quality, style and editing, and reference-based generation. For pure text-to-image, it places second behind Google's Nano Banana. On RISEBench, a logic-focused benchmark, it edges out both Nano Banana 2 and GPT Image 1.5, though these are company-reported results and independent confirmation is still pending.
Pricing is aggressive. At 2K resolution, Uni-1 costs roughly $0.09 per image via API, compared to $0.101 for Nano Banana 2 and $0.134 for Nano Banana Pro, per The Decoder's analysis. API access is rolling out gradually.
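Using The Decoder's per-image figures, the savings work out as follows (a quick sketch; prices are the ones quoted above, nothing else assumed):

```python
# Per-image API prices at 2K resolution, per The Decoder's analysis.
PRICES = {
    "Uni-1": 0.09,
    "Nano Banana 2": 0.101,
    "Nano Banana Pro": 0.134,
}

def savings_vs(model: str, baseline: str = "Uni-1") -> float:
    """Percent saved by choosing `baseline` over `model`."""
    return (1 - PRICES[baseline] / PRICES[model]) * 100

for model in ("Nano Banana 2", "Nano Banana Pro"):
    print(f"Uni-1 vs {model}: {savings_vs(model):.1f}% cheaper")
# Uni-1 vs Nano Banana 2: 10.9% cheaper
# Uni-1 vs Nano Banana Pro: 32.8% cheaper
```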
CEO Amit Jain called it "intelligence in pixels," which is the kind of line you'd expect. More concretely, the model supports multi-turn refinement, 76+ art styles, sketch-to-image conversion, and identity preservation across reference photos. Uni-1 also powers Luma Agents, the company's creative workflow platform already deployed with Publicis Groupe, Adidas, and Mazda. You can try Uni-1 free at lumalabs.ai right now.
Bottom Line
Uni-1 undercuts Google's Nano Banana models on price by roughly 11-33% while matching or beating them on reasoning benchmarks, though independent testing is still limited.
Quick Facts
- ~$0.09 per image at 2K resolution (API pricing)
- Ranks #1 in Elo for overall quality, style/editing, and reference-based generation (company-reported)
- #2 in text-to-image behind Google Nano Banana
- Deployed with Publicis Groupe, Adidas, Mazda
- Free to try at lumalabs.ai; API access rolling out