
Trump Orders All Federal Agencies to Drop Anthropic Over Pentagon AI Dispute

Pentagon designates Anthropic a supply chain risk as AI company refuses to lift weapons and surveillance guardrails.

Liza Chan
AI & Emerging Tech Correspondent
February 28, 2026
[Image: The Pentagon viewed from above with an AI-themed digital overlay]

On Friday, President Trump directed every federal agency to stop using Anthropic's technology, escalating a weeks-long standoff between the AI company and the Pentagon into something that looks a lot like a government-wide blacklist. Defense Secretary Pete Hegseth followed up by designating Anthropic a "supply chain risk to national security," a label typically reserved for foreign adversaries like Huawei.

Anthropic says it will challenge the designation in court.

What actually happened

The fight boils down to two words in a contract. The Pentagon wants to use Anthropic's Claude AI model for "all lawful purposes" on its classified networks. Anthropic has two conditions it won't budge on: no mass domestic surveillance, and no fully autonomous weapons. The Pentagon's position, as DefenseScoop reported, is that existing law already prohibits those things, so writing them into a contract is redundant and gives a private company veto power over military operations.

Anthropic CEO Dario Amodei sees it differently. "We cannot in good conscience accede to their request," he said in a statement Thursday, which is the kind of sentence that sounds principled and also happens to be expensive. The Pentagon contract is worth up to $200 million. Claude was the only AI model cleared for classified military networks until this week.

Trump's Truth Social post called Anthropic "Leftwing nut jobs" who made a "DISASTROUS MISTAKE." He gave agencies six months to phase out Anthropic's products, then threatened "major civil and criminal consequences" if the company doesn't cooperate during the transition. That last part raised eyebrows among legal observers, though the administration hasn't specified what those consequences might look like.

The deal that wasn't

Here's a detail that says a lot about how Friday unfolded. According to Axios, Pentagon undersecretary Emil Michael was on the phone offering Anthropic a deal at the same moment Hegseth posted his supply chain risk designation on X. The deal reportedly would have required Anthropic to allow the collection or analysis of data on Americans, from geolocation to web browsing history to financial records purchased from data brokers. That's not mass surveillance in the traditional wiretapping sense, but Anthropic apparently thought it was close enough.

Michael, who has been steering negotiations with AI firms, had already called Amodei a "liar" with a "God complex" on Thursday. The tone of this negotiation has not been subtle.

So OpenAI swoops in

Hours after Trump's announcement, OpenAI CEO Sam Altman posted on X that his company had struck a deal with the Pentagon for classified network deployment. The punchline: OpenAI claims to have the same red lines Anthropic does.

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

There's an obvious question here. If the Pentagon accepted OpenAI's safety conditions on Friday night, why did it spend a week publicly destroying Anthropic over nearly identical ones? The answer likely has to do with how the conditions are framed. Anthropic wanted explicit contractual restrictions. OpenAI's approach, per Axios, relies on existing law plus the company's own technical safeguards: cloud-only deployment, security-cleared researchers monitoring usage, and the ability to update safety systems over time. Whether that distinction holds up in practice is another matter entirely.

Altman told CNBC on Friday morning, before the ban dropped, that "for all the differences I have with Anthropic, I mostly trust them as a company." That's a generous thing to say about your biggest competitor while positioning yourself to take their government contract.

What Anthropic is betting on

Anthropic's legal argument is narrow and specific: the supply chain risk designation applies only to Pentagon contracts, not to how military contractors use Claude for other customers. "The Secretary does not have the statutory authority to back up this statement," the company said, referring to Hegseth's claim that anyone doing business with the military must also cut ties with Anthropic.

Sen. Mark Warner called the move politically motivated. Hundreds of employees from OpenAI and Google signed an open letter supporting Anthropic's position. Even Ilya Sutskever, who very publicly fell out with Altman over safety concerns at OpenAI, weighed in to say it was "extremely good" that Anthropic held firm.

But support from the AI community doesn't pay the bills if federal contracts dry up and commercial clients start worrying about being caught in the crossfire. The designation could matter less for what it legally requires and more for the signal it sends: do business with Anthropic, and you might have a problem with the Pentagon.

Anthropic has said it hasn't received direct communication from either the Pentagon or the White House about next steps. The six-month phaseout clock is ticking, and the company's lawyers are presumably already drafting filings. A federal court will eventually decide whether the government can label a domestic AI company a supply chain risk for negotiating contract terms. That ruling, whenever it comes, will matter far more than Friday's Truth Social post.

Tags: Anthropic, Pentagon, Trump, AI policy, Claude AI, OpenAI, military AI, supply chain risk, defense contracts, AI regulation
Liza Chan

AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.
