Elon Musk's xAI announced that Colossus 2 is now operational in Memphis, Tennessee, becoming the first AI training cluster to reach gigawatt-scale power consumption. The facility currently draws around 1 GW and is scheduled to expand to 1.5 GW by April.
What a gigawatt actually means here
The number gets thrown around a lot, but context helps. San Francisco's average electricity demand hovers around 670 MW, with peaks hitting 930 MW. A single xAI facility now exceeds the entire peak load of a major American city, which is either impressive or alarming depending on your perspective.
Musk posted on X that the cluster would hit one gigawatt by mid-January (not counting the original Colossus 1 facility) and scale to 1.5 GW by April. That timeline is aggressive, but xAI has a track record of moving faster than anyone expected. The original Colossus went from construction to 100,000 GPUs in 122 days. Most data centers take four years.
The current 1 GW capacity powers roughly 550,000 Nvidia Blackwell GPUs (GB200 and GB300 chips). The April expansion would push that count toward 850,000. For reference, xAI's Colossus 1 already ran about 230,000 GPUs, putting the combined Memphis footprint near 800,000 GPUs today, a scale without precedent for a single site.
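A quick back-of-envelope check shows the reported figures hang together. Dividing facility power by GPU count gives the implied all-in draw per GPU, including cooling, networking, and other overheads. This is a rough estimate from the article's numbers, not a chip specification:

```python
# Implied facility power per GPU from the reported figures.
# These are article-level estimates, not Nvidia or xAI specs.

FACILITY_WATTS = 1e9          # ~1 GW reported current draw
GPU_COUNT = 550_000           # reported Blackwell GPU count

watts_per_gpu = FACILITY_WATTS / GPU_COUNT          # ~1,818 W all-in

# Sanity check against the planned April expansion: 1.5 GW for ~850,000 GPUs.
expansion_watts_per_gpu = 1.5e9 / 850_000           # ~1,765 W all-in

print(f"current: {watts_per_gpu:.0f} W/GPU")
print(f"expanded: {expansion_watts_per_gpu:.0f} W/GPU")
```

The two ratios land within a few percent of each other, which suggests the expansion numbers were derived from the same per-GPU power budget.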
How they're powering this thing
Here's where it gets interesting. xAI isn't waiting for grid connections. According to SemiAnalysis, the company acquired a former Duke Energy power plant across the border in Southaven, Mississippi. Seven natural gas turbines from that site now feed Colossus 2 through medium-voltage lines.
The turbines come from a partnership with Solaris Energy Infrastructure, which has committed over 1.1 GW of capacity to xAI. Tesla Megapacks (168 units) provide backup power and help smooth out the dramatic power swings that AI training demands. It's essentially a private utility built specifically for one customer.
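The smoothing role is worth unpacking: synchronized AI training produces sharp load swings as tens of thousands of GPUs start and pause in lockstep, which gas turbines handle poorly. A toy model of the battery-buffering pattern the article attributes to the Megapacks, with entirely hypothetical load numbers:

```python
# Toy model of battery load-smoothing: turbines run at a flat setpoint
# while batteries absorb the swings of a synchronized training load.
# All values are illustrative, not xAI telemetry or Tesla's control logic.

def battery_flows(load_mw, setpoint_mw):
    """Per-interval battery contribution: positive = discharging to cover
    load above the turbine setpoint, negative = charging on the surplus."""
    return [load - setpoint_mw for load in load_mw]

# Hypothetical spiky profile: full-power compute phases alternating with
# lower-power synchronization/checkpoint phases (MW per interval).
load = [950, 400, 980, 370, 960, 390]

setpoint = sum(load) / len(load)     # turbines run flat at the average, 675 MW
flows = battery_flows(load, setpoint)

swing_without_battery = max(load) - min(load)   # 610 MW swing hits the turbines
swing_with_battery = 0                          # turbines now see a constant 675 MW
```

Over a window where charge and discharge balance, the batteries net out to zero energy while eliminating the ramp rate the generators must follow, which is why a battery fleet pairs naturally with slow-responding gas turbines.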
Environmental groups aren't thrilled. The Southern Environmental Law Center filed complaints after discovering xAI was running 35 turbines at the original Colossus site, exceeding the 15 permitted. The Mississippi expansion appears partly designed to sidestep Tennessee regulators.
The competitive picture
Tom's Hardware reports that OpenAI's Texas facility currently runs at 300 MW and won't reach gigawatt scale until mid-2026. Meta has secured nuclear power deals for 6 GW of future capacity, but that infrastructure doesn't exist yet. Anthropic's facilities remain smaller still.
Musk has said he wants xAI to have more compute than everyone else combined within five years. That's probably hyperbole, given what Microsoft, Google, and Amazon are spending. But for individual training clusters, xAI is now demonstrably ahead.
The facility cost remains unclear. Various reports cite figures from $12 billion to $18 billion for the GPU hardware alone, with xAI having closed a $20 billion Series E in early January. The company burns through more than $1 billion monthly, according to The Information.
All of this exists to train Grok, xAI's chatbot that competes with ChatGPT and Claude. Whether a gigawatt of compute translates into a proportionally better model is the question nobody can answer yet.
xAI plans to begin converting a third building (named "MACROHARDRR," because of course) into additional data center space in 2026, pushing total site capacity toward 2 GW.