American AI firm Anthropic has until 5:01 pm ET to give in to the Pentagon’s demands or face being labeled a “supply chain risk,” a designation normally reserved for companies regarded as extensions of foreign adversaries.
The Pentagon, which uses Anthropic’s Claude AI system on its classified networks, wants to be able to use it for “all lawful purposes.” But Anthropic has two red lines for the Pentagon: that Claude will not be used in autonomous weapons, and that it will not be used in the mass surveillance of US citizens.
Anthropic on Thursday announced it has no intention of acquiescing.
“Threats do not change our position: we cannot in good conscience accede to their request,” the company’s CEO said in a statement.
The Pentagon claims that it has no interest in using AI for either purpose and that it needs the freedom to use the technology it is licensing.
“This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk,” Pentagon spokesperson Sean Parnell wrote on X. “We will not let ANY company dictate the terms regarding how we make operational decisions.”
It all came to a head on Tuesday at a high-stakes meeting at the Pentagon between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei.
While a source familiar with the situation said the meeting was cordial, Pentagon officials didn’t just threaten to cancel Anthropic’s $200 million contract; they also threatened a designation that could hurt the company’s bottom line.
Anthropic’s Claude was the first AI model to run on the military’s classified networks. The company struck a contract worth up to $200 million with the Pentagon last summer. Other major AI companies, like OpenAI, have only struck deals with the Pentagon covering their unclassified networks.
Within Anthropic’s “acceptable use policy” in the contract are prohibitions against the use of Claude in mass surveillance and autonomous weapons.
“This dispute comes at an awkward time because on the one hand, the user base within the Department of Defense loves Anthropic, loves Claude, and says that their restrictions on usage, at least from the conversations that I have been having, have never been triggered,” Gregory Allen, a senior adviser at the Center for Strategic and International Studies, said on Bloomberg Radio.
But the Pentagon doesn’t want to be constrained by a company’s policies. A Pentagon official told NCS: “You can’t lead tactical (operations) by exception,” and “legality is the Pentagon’s responsibility as the end user.”
In the Pentagon’s view, it doesn’t want to find itself in the middle of a national security situation, needing to ask a company for permission and for guardrails to be dropped.
Cutting ties with Anthropic could be a headache for the Pentagon as well, considering it would need to replace any internal systems that use Claude. Though a Pentagon official said Elon Musk’s Grok AI system is “on board with being used in a classified setting,” Grok is not viewed as being as advanced as Claude.

Losing a $200 million contract wouldn’t pose an existential threat for Anthropic, which was recently valued at around $380 billion. The bigger risk is being labeled a supply chain risk, which means any company that works with the US military must prove that it doesn’t touch anything related to Anthropic in its work with the Pentagon. Much of Anthropic’s success stems from its enterprise contracts with big companies – many of which may have contracts with the Pentagon.
“It means that Anthropic’s existing customer base, some large portion of it might evaporate, either because they have government contracts or might want them in the future,” said Adam Connor, vice president for technology policy at the Center for American Progress, a Washington think tank.
Jensen Huang, CEO of leading AI chipmaker Nvidia, said that while he hopes the Pentagon and Anthropic can come to an agreement, “if it doesn’t get worked out, it’s also not the end of the world” since there are other AI companies the Pentagon can work with, and Anthropic has other customers.
Earlier this week, the Pentagon said that it would also consider compelling Anthropic to work with it through the Defense Production Act, a 1950 law that “gives the president significant emergency authority to control domestic industries,” according to the Council on Foreign Relations. It’s not clear if or how the Pentagon would be able to both compel Anthropic to work with it through the DPA and deem it a supply chain risk.
Anthropic isn’t the only company under threat from this dispute, said Connor. The Pentagon’s threat is a signal to other AI companies looking to make millions selling their services to the government.
“I think in the broader sense, this sends a message to the other AI companies that they are negotiating with to make sure they do not attempt to put any sort of restrictions on AI’s uses,” stated Connor.
If the Pentagon were merely unhappy with Anthropic’s conditions for its model, it could simply terminate the contract and get the AI model it wants from another company, said Alan Rozenshtein, a law professor at the University of Minnesota.
“What the government really wants is to keep using Anthropic’s technology, and it’s just using every source of leverage possible,” he said. “This is a very powerful source of leverage.”
The Pentagon has said that if Anthropic doesn’t agree to the terms by 5:01 pm, it will cancel the contract and deem the company a “supply chain risk.” It’s not clear if it will make a public announcement.
It’s also not clear whether Anthropic’s Claude AI system would immediately disappear from the military’s systems – or what it would be replaced by. There’s also the question of how other military contractors that work with Anthropic will act. In cases where foreign companies have been deemed risks, firms were given some time to prove that they had cut ties with the risky company.
But if the Pentagon does follow through with its threat, it would represent an unprecedented escalation against what is considered one of the best and most successful AI models in existence.
“To take a domestic AI champion at a time when the White House is saying that the AI race with China is equivalent to the space race during the Cold War with the Soviet Union — you do not want to take one of the crown jewels of your industry and light it on fire over something like this,” Allen said on Bloomberg. “There is a better way to resolve this dispute than the absolutist stance the administration has taken.”
Chris Isidore contributed to this report.