
Amazon Plans $200 Billion in Capital Spending for 2026, Mostly on AI

Andy Jassy's shareholder letter defends a massive AI infrastructure bet as free cash flow drops to $11 billion.

Liza Chan, AI & Emerging Tech Correspondent
April 10, 2026 · 4 min read

Andy Jassy's 2025 shareholder letter, published April 9, is roughly 6,000 words long. About half of it is a pitch deck for why investors should be comfortable watching Amazon's free cash flow collapse from $38 billion to $11 billion in a single year. The answer, in short: AI infrastructure, and specifically, Amazon's own chips.

The $200 billion question

Amazon expects to spend approximately $200 billion in capital expenditures in 2026, with the bulk going to AWS data centers and AI capacity. That number, first disclosed during the company's fourth-quarter earnings call earlier this year, rattled investors enough to send Amazon stock down 10% in after-hours trading at the time.

Jassy's letter is the extended rebuttal. He points to committed customer demand, including OpenAI's pledge to spend over $100 billion on AWS as part of an expanded partnership. AWS added 3.9 gigawatts of new power capacity in 2025 and plans to double total capacity by the end of 2027.

"We're not investing approximately $200 billion in capex in 2026 on a hunch," Jassy writes, which is the kind of line you include when you know a lot of people think you might be.

The real story is Trainium

Buried in the middle of the letter is what reads like a declaration of war against Nvidia. Amazon's custom chip portfolio (Graviton, Trainium, and Nitro) has hit a $20 billion annual revenue run rate, growing at triple-digit percentages year over year. Jassy then floats a hypothetical: if Amazon sold these chips externally the way Nvidia does, the business would be worth roughly $50 billion annually.

That's a bold claim. The $20 billion figure includes Graviton (general compute) and Nitro (networking), not just AI chips. Lumping them together flatters the number. And the $50 billion hypothetical assumes Amazon could command Nvidia-like pricing in the open market, which is unproven at best.

Still, the trajectory is hard to ignore. Trainium2 offered about 30% better price-performance than comparable GPUs and has largely sold out. Trainium3, which started shipping in early 2026, is 30-40% more price-performant than its predecessor and nearly fully subscribed. Trainium4, still roughly 18 months from broad availability, already has substantial capacity reserved.
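Taken at face value, those generational gains compound. A minimal sketch, assuming "X% better price-performance" is a multiplicative factor and taking the midpoint of the quoted 30-40% range for Trainium3 (the normalization to a GPU baseline of 1.0 is my illustration, not Amazon's math):

```python
# Illustrative only: compounding the article's quoted price-performance gains.
# Assumes each "X% better" figure multiplies the previous generation's value.
gpu_baseline = 1.0               # comparable GPU price-performance, normalized
trainium2 = gpu_baseline * 1.30  # "about 30% better" than comparable GPUs
trainium3 = trainium2 * 1.35     # midpoint of the quoted 30-40% gain over Trainium2

print(f"Trainium2 vs GPU baseline: {trainium2:.2f}x")
print(f"Trainium3 vs GPU baseline: {trainium3:.2f}x")
```

Under these assumptions, Trainium3 lands at roughly 1.75x the price-performance of the GPU baseline, which is the kind of gap that makes the capex-savings claim at least arithmetically plausible.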

Jassy argues that at scale, Trainium will save Amazon "tens of billions of capex dollars per year" and add "several hundred basis points" of operating margin advantage over relying on third-party chips for inference. That comparison only works if inference continues to dominate AI workloads, which it does today but may not forever.

So does the math work?

AWS's AI revenue run rate is $15 billion, which Jassy says is growing roughly 260 times faster than AWS did at a comparable stage. That sounds astonishing until you remember early AWS was selling basic compute and storage to startups in 2006. Comparing adoption curves across different eras of cloud computing is more marketing than analysis.

The free cash flow picture is genuinely concerning. A $27 billion year-over-year drop, driven by $50.7 billion in additional property and equipment spending, is the kind of number that tests investor patience. Jassy's argument is that much of the 2026 capex will be monetized in 2027 and 2028, and that a substantial portion already has customer commitments behind it.
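The cash-flow arithmetic is simple enough to check against the figures in the article. A quick sketch (all values in billions of dollars; this is a back-of-the-envelope reconciliation, not Amazon's actual cash-flow statement):

```python
# Figures reported in the article, in billions of USD.
fcf_prior = 38.0   # free cash flow before the spending ramp
fcf_now = 11.0     # free cash flow after it
extra_ppe = 50.7   # additional property and equipment spending, year over year

drop = fcf_prior - fcf_now
print(f"Year-over-year FCF drop: ${drop:.0f}B")  # $27B

# The extra PP&E spend exceeds the FCF drop, implying operating cash
# flow grew enough to absorb part of the increased capital spending.
absorbed = extra_ppe - drop
print(f"Absorbed by operating cash flow growth: ~${absorbed:.1f}B")  # ~$23.7B
```

The gap between the $50.7 billion capex increase and the $27 billion FCF decline is the one genuinely encouraging number here: the underlying business generated enough additional cash to cover roughly $23.7 billion of the ramp.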

Two unnamed large customers reportedly asked to buy all of Amazon's available Graviton capacity for 2026. Amazon said no, but Jassy includes the anecdote as proof of demand. Whether that demand persists through a potential economic slowdown is a question the letter doesn't address.

What Jassy isn't saying

The letter spends almost no time on the competitive dynamics with Microsoft Azure and Google Cloud, both of which are making similar bets on custom silicon. Google's TPUs have been in production for years. Microsoft is rolling out its Maia chips. Jassy frames Trainium as following the Graviton playbook (Amazon's successful campaign to displace Intel in its data centers), but Nvidia is a far more formidable incumbent than Intel was in cloud computing.

There's also no mention of what happens if AI demand plateaus or shifts. The entire capital plan assumes accelerating adoption. AWS's overall revenue run rate of $142 billion, growing 24% year over year, is healthy but not the kind of growth that obviously justifies $200 billion in spending.

Amazon reports Q1 2026 earnings on May 1. That's when the $200 billion thesis gets its first real data point.

Tags: Amazon, AWS, Andy Jassy, Trainium, AI infrastructure, capital expenditure, Nvidia, cloud computing, custom chips
Liza Chan, AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.


