AI Hardware

Nvidia's DGX Station GB300 Finally Goes on Sale Through OEM Partners

A year after its announcement, Nvidia's desktop AI supercomputer ships through partners at roughly $97k.

Oliver Senti, Senior AI Editor
March 17, 2026 · 5 min read
[Image: Nvidia DGX Station desktop workstation with the GB300 Grace Blackwell Ultra Superchip]

Nvidia's DGX Station, the desktop workstation built around the GB300 Grace Blackwell Ultra Superchip, is finally available to order from OEM partners. The machine was first announced at GTC 2025 last March, and after a year of trade show prototypes and zero public pricing, ASUS, Dell, GIGABYTE, MSI, and Supermicro are now taking orders for their systems. HP is expected to follow later this year.

Nvidia itself hasn't disclosed an official price. The company is not selling a Founders Edition this time around, instead supplying the core motherboard (with the CPU and GPU already integrated) and letting partners build out the complete workstation. What we do know: an MSI XpertStation WS300 appeared on CDW in late February at $96,995.99. Whether cheaper configurations exist from other partners remains unclear.

What's actually inside

The GB300 Desktop Superchip fuses a 72-core Grace CPU (Arm Neoverse V2) with a Blackwell Ultra GPU via NVLink-C2C. Nvidia claims 1.8 TB/s of coherent bandwidth between the two processors, which it says is seven times PCIe Gen 6 speeds. The GPU side carries 252 GB of HBM3e running at 7.1 TB/s, while the Grace CPU brings 496 GB of LPDDR5X at 396 GB/s. Combined, you get 748 GB of unified memory, though Nvidia's own product page occasionally references 784 GB depending on which paragraph you're reading.

The headline performance figure is 20 petaflops of AI compute. That's FP4 with sparsity, so take it with the usual caveats about what "20 petaflops" means in practice versus what it means in a keynote slide. The system also supports Nvidia's NVFP4 precision format, which the company says delivers near-FP8 accuracy at a memory footprint 1.8x smaller than FP8's.
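For a rough sense of what that precision format means in capacity terms, here's a back-of-envelope sketch. The bytes-per-parameter figures are simplifying assumptions (weights only, no KV cache or runtime overhead):

```python
# Back-of-envelope: how large a model fits in the DGX Station's
# 748 GB unified pool at different weight precisions. Weights only;
# KV cache, activations, and runtime overhead shrink these in practice.

UNIFIED_GB = 252 + 496  # HBM3e + LPDDR5X = 748 GB

BYTES_PER_PARAM = {
    "FP16": 2.0,
    "FP8": 1.0,
    "NVFP4": 0.5,  # 4-bit weights; block-scale metadata adds a little,
                   # presumably why Nvidia quotes 1.8x rather than 2x vs FP8
}

for precision, bpp in BYTES_PER_PARAM.items():
    max_params_b = UNIFIED_GB / bpp  # in billions of parameters
    print(f"{precision:>5}: ~{max_params_b:,.0f}B parameters in {UNIFIED_GB} GB")

# FP16 ~374B, FP8 ~748B, NVFP4 ~1,496B. The "one trillion parameters"
# claim later in this article only pencils out at 4-bit precision.
```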

The "no GPU" problem

Here's the thing nobody in Nvidia's marketing emphasizes: the DGX Station has no video output. The GB300 Superchip is a compute-only part, and as ServeTheHome noted when it first saw the board, you need to install a separate GPU if you want to plug in a monitor. The system has PCIe 5.0 slots for exactly this purpose, and Nvidia says you can add an RTX PRO Blackwell Generation card for visualization alongside the compute GPU. But that's an extra cost on top of an already substantial price tag, and it means the base configuration is essentially a headless AI server that happens to sit under your desk.
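If you're ever unsure which card in the chassis (if any) can drive a display, NVML exposes that. Here's a minimal sketch using the nvidia-ml-py bindings, untested on this hardware and worth checking against the NVML docs:

```python
# Sketch: enumerate installed GPUs and flag which can drive a display.
# Uses nvidia-ml-py (pynvml); requires an Nvidia driver to be loaded.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        # Display mode reports whether the GPU has display output enabled.
        has_display = pynvml.nvmlDeviceGetDisplayMode(handle)
        state = "enabled" if has_display else "disabled"
        print(f"GPU {i}: {name} (display {state})")
finally:
    pynvml.nvmlShutdown()
```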

Who is this for?

Nvidia's pitch centers on what it calls "architectural continuity." Code you write on the DGX Station migrates directly to GB300 NVL72 data center racks (72 GPUs) without modification, because both run the same software stack. That is a genuinely useful property if you're a team prototyping models locally before deploying to cloud infrastructure. VentureBeat reported that early customers include Snowflake (testing its Arctic training framework), EPRI (weather forecasting for grid reliability), and Microsoft Research.

Nvidia also claims the machine can run models up to one trillion parameters. The company listed a grab bag of supported models including DeepSeek V3.2, OpenAI's gpt-oss-120b, Gemma 3, Qwen3, and Mistral Large 3. And then there's NemoClaw, a new open-source stack announced at GTC 2026 that bundles Nvidia's Nemotron models with a secure runtime for autonomous agents. Jensen Huang compared its broader platform, OpenClaw, to Mac and Windows, which is... ambitious framing for something that just launched.
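For context, running one of those listed models locally would look something like the minimal vLLM sketch below, assuming a vLLM build with Blackwell support and a checkpoint that fits in memory (the Hugging Face model ID shown is openai/gpt-oss-120b):

```python
# Minimal offline-inference sketch with vLLM. Assumes a vLLM build
# with Blackwell support and enough memory for the chosen checkpoint.
from vllm import LLM, SamplingParams

# gpt-oss-120b is one of the models Nvidia lists as supported;
# any Hugging Face model ID that fits in memory works the same way.
llm = LLM(model="openai/gpt-oss-120b")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain NVLink-C2C in one paragraph."], params)
for out in outputs:
    print(out.outputs[0].text)
```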

$97k in context

Is $97k a lot? For a workstation, absolutely. But the comparison Nvidia wants you to make is against cloud GPU costs for running trillion-parameter inference, where the monthly bills add up fast. A comment in the Tom's Hardware thread mentioned previous-generation DGX Stations cost around $150k for higher-spec builds, so the GB300 variant actually looks cheaper in that light, assuming the CDW price is representative.
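The break-even arithmetic depends entirely on what comparable cloud capacity costs you, and the hourly rates below are illustrative assumptions, not quotes from any provider:

```python
# Back-of-envelope: months until a ~$97k workstation beats renting
# comparable cloud GPU capacity. Hourly rates are ILLUSTRATIVE
# ASSUMPTIONS; actual cloud pricing varies widely by provider and SKU.
STATION_PRICE = 96_996

for hourly in (15, 30, 60):  # assumed $/hr for a large-memory GPU instance
    monthly = hourly * 730   # ~730 hours in a month, running 24/7
    months = STATION_PRICE / monthly
    print(f"${hourly}/hr -> ${monthly:,}/mo -> breakeven in ~{months:.1f} months")

# At an assumed $30/hr around the clock, the box pays for itself in
# about 4.4 months; at lighter utilization the breakeven stretches out.
```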

The 72-core Grace CPU is worth a moment of skepticism, though. As ServeTheHome bluntly put it, Grace does not compete with modern Intel or AMD server-class processors on raw CPU performance. The point of Grace is to move data to the GPU efficiently, not to be a general-purpose compute beast. If your workload needs strong CPU performance alongside the GPU, you might find the Arm cores limiting.

And the networking tells you something about Nvidia's actual expectations for these boxes. Each DGX Station comes with a ConnectX-8 SuperNIC supporting 800 Gb/s. That is not a feature you put on a desktop workstation for one person. Nvidia clearly expects customers to cluster multiple DGX Stations together, which means the real cost of a "DGX Station deployment" is probably two or four units, not one. The 800 Gb/s NIC also enables fast data transfers to and from data center infrastructure, making the local-to-cloud migration story more concrete.
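A quick sense of scale, at line rate only (real-world throughput with protocol overhead will be lower):

```python
# What 800 Gb/s means in practice: time to move the full 748 GB
# unified memory pool at line rate, ignoring protocol overhead.
pool_gb = 748
link_gbps = 800

seconds = pool_gb * 8 / link_gbps  # gigabytes to gigabits, then divide
print(f"{pool_gb} GB over {link_gbps} Gb/s: ~{seconds:.1f} s at line rate")
# ~7.5 s, which makes shuttling full checkpoints between a desk-side
# cluster and data center storage routine rather than an overnight job.
```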

What's missing

Nobody has published independent benchmarks yet. We don't know actual token generation speeds for specific models on this hardware. The vLLM team has published work on Blackwell optimization, but questions about weight-offloading performance over NVLink-C2C (252 GB of fast HBM3e backed by 496 GB of slower LPDDR5X) remain largely open. Is putting two-thirds of your memory pool behind a slower interface going to bottleneck large-model inference? Nobody seems to know yet.
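You can at least bound the question with a back-of-envelope roofline estimate. The sketch below assumes decode is purely weight-bandwidth-bound, that every weight byte is read once per token, and that reads from the two pools don't overlap; real inference engines overlap transfers and cache aggressively, so treat this as a pessimistic floor rather than a prediction:

```python
# Pessimistic roofline guess at decode speed for weights split across
# the two memory pools. ASSUMPTIONS: every weight byte read once per
# token, decode fully bandwidth-bound, no overlap between pool reads.

HBM_GB, HBM_TBPS = 252, 7.1        # fast GPU-side pool
LPDDR_GB, LPDDR_TBPS = 496, 0.396  # slow CPU-side pool, via NVLink-C2C

def tokens_per_sec(model_gb: float) -> float:
    fast = min(model_gb, HBM_GB)        # weights resident in HBM3e
    slow = max(model_gb - HBM_GB, 0.0)  # spillover into LPDDR5X
    sec_per_token = fast / (HBM_TBPS * 1000) + slow / (LPDDR_TBPS * 1000)
    return 1.0 / sec_per_token

for size_gb in (120, 250, 500):  # weight footprints in GB (e.g., NVFP4 models)
    print(f"{size_gb} GB of weights: ~{tokens_per_sec(size_gb):.1f} tok/s ceiling")

# 120 GB fits entirely in HBM3e: ~59 tok/s. 500 GB spills 248 GB into
# LPDDR5X and drops to ~1.5 tok/s -- hence the offloading question.
```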

Systems from Dell, MSI, and others are expected to ship within weeks. Partner pricing beyond the MSI CDW listing hasn't surfaced publicly, and Dell specifically declined to quote a price for its Pro Max GB300 variant even while happily disclosing the $4,757 price tag for its smaller GB10 model. Make of that what you will.

Tags: nvidia, dgx station, gb300, blackwell ultra, ai workstation, grace cpu, gtc 2026, ai hardware
Oliver Senti
Senior AI Editor

Former software engineer turned tech writer, Oliver has spent the last five years tracking the AI landscape. He brings a practitioner's eye to the hype cycles and genuine innovations defining the field, helping readers separate signal from noise.
