Anthropic told the Pentagon no. With a Friday 5:01 p.m. deadline bearing down, CEO Dario Amodei published a public statement Thursday rejecting the Defense Department's "last and final offer" on contract terms for its Claude AI model. The sticking points: two usage restrictions Anthropic refuses to drop, covering mass domestic surveillance and fully autonomous weapons.
The contract language Anthropic received overnight "made virtually no progress," the company said. New compromise wording was "paired with legalese that would allow those safeguards to be disregarded at will."
How it got here
Defense Secretary Pete Hegseth summoned Amodei to the Pentagon on Tuesday and laid out three threats: cancel Anthropic's $200 million contract, designate the company a "supply chain risk" (a label typically reserved for foreign adversaries), or invoke the Defense Production Act to compel compliance. Anthropic has until Friday evening to accept "any lawful use" terms or face the consequences.
The roots trace back to Hegseth's January 9 AI strategy memo, which directed all Defense Department AI contracts to incorporate "any lawful use" language within 180 days, stripping company-specific guardrails. OpenAI, Google, and xAI all signed onto the standard for unclassified systems. xAI went further this week, agreeing to classified deployment under those terms too.
Anthropic is the odd one out. It also happens to be the only company whose model currently runs on classified networks, through a partnership with Palantir.
The contradiction Amodei won't let go
"Those latter two threats are inherently contradictory," Amodei wrote. "One labels us a security risk; the other labels Claude as essential to national security." It is a pointed observation, and one that legal experts seem to agree with. The Lawfare Institute's legal analysis describes the DPA threat as mapping "awkwardly onto a dispute about AI safety guardrails," given the statute was designed for steel mills and tank factories during the Korean War.
Pentagon Undersecretary Emil Michael pushed back in a CBS News interview Thursday, saying the military had offered to acknowledge existing federal laws restricting surveillance and existing Pentagon policies on autonomous weapons. "At some level, you have to trust your military to do the right thing," he said. It is the kind of appeal that sounds reasonable until you remember Anthropic's entire argument is that written guardrails matter more than trust.
Chief Pentagon Spokesman Sean Parnell was less diplomatic: "We will not let ANY company dictate the terms regarding how we make operational decisions."
Why go public?
The most interesting part of this standoff isn't the legal question. It's the publicity. Amodei's statement reads like it was written as much for Congress as for the Pentagon.
"A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow," Amodei wrote in an essay last month. That sentence does a lot of work. It frames the surveillance concern not as hypothetical corporate squeamishness but as a concrete threat to democratic governance, the sort of framing that might make a senator pause before rubber-stamping a DPA invocation.
Republican Sen. Thom Tillis of North Carolina seemed to take the bait. "Why in the hell are we having this discussion in public?" he told reporters Thursday, before adding that when a company resists a market opportunity out of fear of negative consequences, "you should listen to them."
Tillis isn't exactly an AI policy champion. But his comments suggest the public nature of this fight is creating exactly the kind of political pressure Anthropic probably wants.
What happens at 5:01
Amodei offered the Pentagon an off-ramp. Anthropic will keep its models available "on the terms we have proposed, for as long as they are needed," and will facilitate a smooth transition to another provider if the Pentagon decides to cut ties. No drama, no disruption to military operations.
The DPA path looks shaky. Legal scholars at the Institute for Law & AI note that if neither side backs down, litigation is the likely outcome, and the government's legal footing appears weak. The DPA itself is up for reauthorization in September, which gives Congress an opening to weigh in.
If the Pentagon does invoke the DPA, Anthropic would probably comply under protest and immediately seek a temporary restraining order. Courts can sometimes grant those within days.
The Friday deadline is real. What comes after it is far less certain.