
Pentagon Gives Anthropic Until Friday to Drop AI Safeguards or Face Blacklist

Hegseth threatens Defense Production Act, supply chain risk designation over Claude restrictions.

Liza Chan, AI & Emerging Tech Correspondent
February 25, 2026 · 4 min read
[Image: The Pentagon building seen from above with an overlay of abstract AI circuit patterns]

Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei on Tuesday that the company has until 5:01 p.m. Friday to give the U.S. military unrestricted access to Claude, its AI model, or face penalties that could cripple the company's government business and ripple into its commercial operations.

The threats, confirmed by multiple officials across Axios, CNN, and CNBC, include invoking the Defense Production Act to compel cooperation, canceling Anthropic's $200 million Pentagon contract, and designating the company a "supply chain risk." That last one is the real weapon: a label typically reserved for foreign adversaries like Huawei and Kaspersky, which would force every Pentagon contractor to certify they don't use Claude in any military-related work.

What Anthropic won't budge on

Anthropic isn't refusing to work with the military. It was the first AI company to deploy models on classified Pentagon networks, working through Palantir and Amazon's top-secret cloud infrastructure. The company's two red lines are specific: no mass surveillance of American citizens, and no weapons systems that fire without a human in the loop.

The Pentagon's position, as a senior official told Semafor, is that legality is the military's problem, not a vendor's. "You can't lead tactical ops by exception," the official said, which sounds reasonable until you remember the Pentagon is asking a private company to pre-authorize uses that don't yet have regulatory frameworks. Anthropic's argument, per sources, is that AI isn't reliable enough to operate weapons and that no laws currently govern AI-assisted mass surveillance. Both of those things are true, even if they're inconvenient.

Amodei didn't budge in Tuesday's meeting. Accounts of the tone vary: a Pentagon source called it "not warm and fuzzy at all," while another person described it to NPR as cordial with no raised voices. Hegseth reportedly praised Claude's capabilities during the same conversation in which he threatened to blacklist the company that built it.

The missile hypothetical that broke things

The relationship was already deteriorating before Tuesday. Semafor reported that back in early December, Under Secretary Emil Michael posed a hypothetical to Amodei: if hypersonic missiles were headed for U.S. soil and Claude could stop them, would Anthropic's autonomous weapons policy get in the way?

Pentagon sources say Amodei suggested officials should check with Anthropic during the attack. Anthropic calls that "patently false" and says it offered a missile defense carveout. Who is telling the truth matters less than the fact that both sides are recounting the same conversation with completely different narratives. That's not a policy disagreement anymore; it's a relationship that's broken.

The Maduro raid in January made things worse. Claude was used during the operation through Palantir, and when an Anthropic employee later asked Palantir about that use, a Palantir executive was alarmed enough to notify the Pentagon. Anthropic denies the employee expressed any concern, but the Pentagon took the inquiry as interference.

The DPA problem

Invoking the Defense Production Act against an AI company would be novel, to put it mildly. The DPA, a 1950s-era law, has been used to compel manufacturing during emergencies. Applying it to software is legally untested. Semafor's Reed Albergotti drew a comparison to Apple's fight with the FBI over unlocking the San Bernardino shooter's iPhone: Apple never backed down, and the FBI eventually found another way in.

There's a useful irony in that parallel. If the Pentagon invokes the DPA, Anthropic gets to comply under duress rather than consent, preserving something of its safety-focused brand. But as Albergotti noted, it also reminds international customers exactly how much power the U.S. government can exercise over American tech companies. For a company reportedly planning an IPO this year, that's not great investor messaging.

Georgetown's Owen Daniels told the AP that Anthropic's bargaining position is weak because competitors have already agreed to "all lawful uses" language. OpenAI, Google, and xAI signed on. Musk's xAI recently got cleared for classified settings. But none of those models are actually deployed in classified environments at the scale Claude is, and a Pentagon official privately acknowledged that competing models are "just behind" on specialized government applications.

The broader question, as Lawfare argued last week, is why the rules governing military AI are being hashed out between a defense secretary and a startup CEO at all. Congress has set no framework. There is no legislation defining acceptable military AI use cases, no regulatory body reviewing deployments, no democratic process producing the guardrails that both sides claim to want.

The deadline is Friday at 5:01 p.m. Anthropic has said it has no plans to budge. The Pentagon has said it won't back down. Something has to give, and there are no good options left that don't set a precedent someone will regret.

Tags: Anthropic, Pentagon, Claude, Pete Hegseth, Dario Amodei, Defense Production Act, military AI, AI safety, national security, Palantir
Liza Chan
AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.


