The Department of War announced on May 1, 2026, that eight frontier AI companies have signed agreements to deploy their models on its classified networks, according to a Friday press release. SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, Amazon Web Services and Oracle will operate at Impact Level 6 and Impact Level 7, the security tiers that handle secret and highly restricted national security data. Anthropic, the first lab cleared for those networks, is missing from the list.
The "lawful purposes" line
Every company on the list accepted the Pentagon's standard wording: their models can be used for "any lawful operational use." That phrase is exactly what blew up Anthropic's relationship with the agency. The startup wanted explicit guardrails against autonomous weapons and domestic mass surveillance. The Pentagon refused and, in early March, hit Anthropic with a "supply chain risk" designation, a label normally reserved for foreign adversary suppliers. The company filed two lawsuits within days.
One detail surfaced in later reports: at least one of the signing companies negotiated language requiring human oversight on autonomous or semi-autonomous missions, plus consistency with constitutional rights. That sounds a lot like the red lines Anthropic drew. OpenAI has previously said it secured similar assurances. If both are accurate, the fight that got Anthropic blacklisted wasn't really about what the safeguards would be. It was about who got to set them.
What the Pentagon is buying
The agreements feed into GenAI.mil, the in-house generative AI platform that launched in December with Google Gemini. The department claims 1.3 million personnel have used it, generating tens of millions of prompts and deploying hundreds of thousands of agents. That number almost certainly mixes serious analyst work with someone asking a chatbot to summarize a PDF, but it's the figure officials are quoting.
Two names on the list are not the usual defense vendors. NVIDIA is there for chips and tooling. Reflection AI, an open-model startup backed by NVIDIA, has yet to ship anything at frontier scale. Pentagon CTO Emil Michael has been pushing open-weight models as an "American alternative" to Chinese open-source releases. That's the most plausible reason a pre-product startup is sitting on a list next to Microsoft and Google.
The Mythos contradiction
Anthropic is officially out, yet its newest model is being studied across the federal government. Mythos, tuned for finding and patching cyber vulnerabilities, has been accessed by the NSA for evaluation, Michael told CNBC's Squawk Box on Friday. He called it "a separate national security moment" from the supply chain dispute. Michael also confirmed the Pentagon is still running older Claude versions on classified systems because they are embedded in existing tools, though without any updates.
The White House, meanwhile, is drafting executive guidance that could let agencies route around the supply chain designation entirely, per Axios reporting on Wednesday. One source described the effort as a way to "save face and bring em back in." Anthropic CEO Dario Amodei met with chief of staff Susie Wiles and Treasury Secretary Scott Bessent in mid-April. Trump told CNBC last week that Anthropic was "shaping up."
What's actually next
Anthropic's two lawsuits remain active. A San Francisco federal judge granted a preliminary injunction against the broader federal ban in late March. The D.C. appeals court denied a similar bid against the Pentagon blacklist on April 8. The split keeps Claude usable across most of the government but blocked from defense contracts.
The next concrete date is the executive guidance the White House is previewing with industry this week. If it lands, the Pentagon may have signed eight new vendors only to find the ninth one walked back through a side door.




