Anthropic released a policy paper Thursday arguing that the United States has roughly a year to lock in a meaningful lead over Chinese AI labs, or watch that lead erode. The company's paper, framed around two scenarios for 2028, reads less like a research note and more like a brief filed in Washington.
The compute argument
Chinese labs have closed the gap on model intelligence to a few months, Anthropic acknowledges. What's keeping them from parity is compute. Anthropic cites a CFR analysis finding Huawei will produce just 4 percent of NVIDIA's total processing performance in 2026, and 2 percent in 2027. That's the gap propping up American leadership, in Anthropic's telling.
A compute lead doesn't hold if labs in China can access compute outside it. The Financial Times reported that Alibaba and ByteDance now train flagship models on export-controlled American chips housed in Southeast Asian data centers. US law regulates the sale of chips, not remote access to them. A House bill to close that loophole passed 369 to 22 in January and has been sitting in the Senate since.
The Gatling gun moment
One anecdote does most of the rhetorical work. In April, Anthropic shared a model called Mythos Preview with select partners through Project Glasswing. Mozilla's Firefox team used it to fix more security bugs in a single month than it had in all of 2025, and roughly 20 times its 2025 monthly average. The paper quotes a Chinese cybersecurity analyst reacting to the release: China is "still sharpening our swords while the other side has suddenly mounted a fully automatic Gatling gun." Whether that quote is representative or cherry-picked, it's the framing Anthropic wants planted in policymakers' heads.
The implicit argument: if a single model release can produce that kind of step change, a three-month gap doesn't stay a three-month gap for long.
Distillation as industrial espionage
Anthropic devotes substantial space to what it calls distillation attacks: thousands of fake accounts spun up to systematically harvest outputs from American frontier models and replicate their behavior. The company wants Congress to legislate explicitly that this is illegal. It's a notable ask, since "scrape an API to train a smaller model" describes a fair amount of what the field has done to itself for years. Anthropic's framing is that the difference is scale, state involvement, and the strategic stakes.
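The mechanics here are mundane, which is part of why the legal line is hard to draw. A distillation pipeline is, at bottom, a scraper feeding a fine-tuning set. A minimal sketch of the harvesting step, with a stubbed `query_teacher` standing in for any real model API and round-robin account rotation as the "fake accounts" piece (all names hypothetical):

```python
# Sketch of API distillation harvesting: collect teacher outputs into a
# supervised fine-tuning dataset for a student model. `query_teacher` is
# a placeholder, not a real API; a real pipeline would call a commercial
# frontier-model endpoint here.

def query_teacher(prompt: str) -> str:
    # Stand-in for an API call. In the attack Anthropic describes, this
    # request would be spread across thousands of accounts.
    return f"teacher answer to: {prompt}"

def harvest(prompts, accounts):
    """Spread queries across accounts and collect (prompt, completion) pairs."""
    dataset = []
    for i, prompt in enumerate(prompts):
        account = accounts[i % len(accounts)]  # rotate to stay under per-account rate limits
        dataset.append({
            "account": account,
            "prompt": prompt,
            "completion": query_teacher(prompt),
        })
    return dataset

pairs = harvest(["explain TLS", "write a sort"], ["acct-1", "acct-2"])
# Each (prompt, completion) pair becomes one training example for the student.
```

Nothing in that loop is exotic; the same pattern describes legitimate synthetic-data generation. Which is why Anthropic's proposed statute would have to hang on scale and intent rather than technique.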
The paper also cites a CAISI evaluation finding DeepSeek's R1-0528 complied with 94 percent of overtly malicious requests under a common jailbreak, versus 8 percent for US reference models. That number is striking on its face. It also conveniently supports Anthropic's policy ask.
What Anthropic wants
The recommendations come down to three things: tighter chip export controls including foreign data center access, a legal designation for distillation attacks, and continued promotion of the American AI stack abroad. Anthropic cites an IFP study estimating that with stricter controls, the US sector would have roughly 11 times more compute than China's.
This is also where it's worth saying the quiet part: Anthropic is a leading American frontier lab, and tighter export controls benefit American frontier labs. The paper acknowledges this less directly than it acknowledges the threat from Beijing. Publishing what amounts to a fully formed legislative wishlist, under the banner of research, shows how comfortable Anthropic has become operating as a Washington stakeholder.
What happens next
The House bill targeting remote compute access still needs a Senate vote. CAISI evaluations of frontier models continue. And the next Mythos-tier release, whenever it lands, will test whether Anthropic's 12-to-24-month lead framing holds up or starts looking optimistic.




