The Anthropic website on August 22, 2025.


American AI company Anthropic has until 5:01 pm ET to give in to the Pentagon’s demands or face being labeled a “supply chain risk,” a designation typically reserved for companies believed to be extensions of foreign adversaries.

The Pentagon, which uses Anthropic’s Claude AI system on its classified networks, wants to be able to use it for “all lawful purposes.” But Anthropic has two red lines for the Pentagon: that Claude will not be used in autonomous weapons, and that it will not be used in the mass surveillance of US citizens.

Anthropic on Thursday announced it has no intention of acquiescing.

“Threats do not change our position: we cannot in good conscience accede to their request,” the company’s CEO said in a statement.

The Pentagon claims that it has no interest in using AI for either purpose and that it wants the freedom to use the technology it is licensing.

“This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk,” Pentagon spokesperson Sean Parnell wrote on X. “We will not let ANY company dictate the terms regarding how we make operational decisions.”

It all came to a head on Tuesday at a high-stakes meeting at the Pentagon between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei.

While a source familiar with the situation said the meeting was cordial, Pentagon officials threatened not only to cancel Anthropic’s $200 million contract but also to impose a designation that could threaten the company’s bottom line.

Anthropic’s Claude was the first AI model to work on the military’s classified networks. The company struck a contract worth up to $200 million with the Pentagon last summer. Other major AI companies like OpenAI have only struck deals with the Pentagon for their unclassified networks.

Within Anthropic’s “acceptable use policy” in the contract are prohibitions against the use of Claude for mass surveillance and autonomous weapons.

“This dispute comes at an awkward time because on the one hand, the user base within the Department of Defense loves Anthropic, loves Claude, and says that their restrictions on usage, at least from the conversations that I have been having, have never been triggered,” Gregory Allen, a senior advisor at the Center for Strategic and International Studies, said on Bloomberg Radio.

But the Pentagon doesn’t want to be constrained by a company’s policies. A Pentagon official told NCS: “You can’t lead tactical (operations) by exception,” and “legality is the Pentagon’s responsibility as the end user.”

In the Pentagon’s view, it shouldn’t have to be in the middle of a national security situation and find itself needing to ask a company for permission and for guardrails to be dropped.

Cutting ties with Anthropic could be a headache for the Pentagon as well, considering it would need to replace any internal systems that use Claude. Though a Pentagon official said Elon Musk’s Grok AI system is “on board with being used in a classified setting,” Grok is not considered as advanced as Claude.


Losing a $200 million contract wouldn’t pose an existential threat to Anthropic, which was recently valued at around $380 billion. The bigger danger is being labeled a supply chain risk, which means any company that works with the US military would have to prove that it doesn’t touch anything related to Anthropic in its work with the Pentagon. Much of Anthropic’s success stems from its enterprise contracts with large companies, many of which may have contracts with the Pentagon.

“It means that Anthropic’s existing customer base, some large portion of it might evaporate, either because they have government contracts or might want them in the future,” said Adam Connor, vice president for technology policy at the Center for American Progress, a Washington think tank.

Jensen Huang, CEO of leading AI chipmaker Nvidia, said that while he hopes the Pentagon and Anthropic can come to an agreement, “if it doesn’t get worked out, it’s also not the end of the world,” since there are other AI companies the Pentagon can work with and Anthropic has other customers.

Earlier this week, the Pentagon had said that it might also consider compelling Anthropic to work with it through the Defense Production Act, a 1950 law that “gives the president significant emergency authority to control domestic industries,” according to the Council on Foreign Relations. It’s not clear if or how the Pentagon could both compel Anthropic to work with it through the DPA and deem the company a supply chain risk.

Anthropic isn’t the only company under threat from this dispute, said Connor. The Pentagon’s threat is a signal to other AI companies looking to make millions selling their services to the government.

“I think in the broader sense, this sends a message to the other AI companies that they are negotiating with to make sure they do not attempt to put any sort of restrictions on AI’s uses,” said Connor.

If the Pentagon were merely unhappy with Anthropic’s conditions for its model, it could simply terminate the contract and get the AI model it wants from another company, said Alan Rozenshtein, a law professor at the University of Minnesota.

“What the government really wants is to keep using Anthropic’s technology, and it’s just using every source of leverage possible,” he said. “This is a very powerful source of leverage.”

The Pentagon has said that if Anthropic doesn’t agree to the terms by 5:01 pm, it will cancel the contract and deem the company a “supply chain risk.” It’s not clear whether it will make a public announcement.

It’s also not clear whether Anthropic’s Claude AI system would immediately disappear from the military’s systems, or what it would be replaced by. There’s also the question of how other military contractors that work with Anthropic will act. In cases where foreign companies have been deemed risks, businesses have had some time to prove they had cut ties with the risky company.

But if the Pentagon does follow through with its threat, it will represent an unprecedented escalation against what is considered one of the best and most successful AI models in existence.

“To take a domestic AI champion at a time when the White House is saying that the AI race with China is equivalent to the space race during the Cold War with the Soviet Union — you do not want to take one of the crown jewels of your industry and light it on fire over something like this,” Allen mentioned on Bloomberg. “There is a better way to resolve this dispute than the absolutist stance the administration has taken.”

NCS’s Chris Isidore contributed to this report.