By Hadas Gold, NCS
(NCS) — ChatGPT maker OpenAI has the same red lines as Anthropic when it comes to working with the Pentagon, an OpenAI spokesperson confirmed to NCS.
That means even if the Pentagon cancels its Anthropic contract in favor of OpenAI, it will have to contend with the same concerns over the use of AI in autonomous weapons and mass surveillance of US residents.
OpenAI CEO Sam Altman said in an interview with CNBC on Friday morning that it’s important for companies to work with the Pentagon, “as long as it is going to comply with legal protections” and “the few red lines” that OpenAI and many in the AI industry have when it comes to AI use in the military.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” Altman continued. “I’m not sure where this is going to go.”
The Pentagon declined to comment for this story.
A source familiar with the situation said Altman directly approached the Pentagon this week expressing concern about Hegseth declaring Anthropic a supply chain risk or using the Defense Production Act to compel Anthropic to work with the military.
Anthropic’s Claude was the first AI model to be used for work on the military’s classified systems. But the Pentagon has given the company until 5:01pm on Friday to agree to drop its internal guardrails and allow its system to be used for “all lawful use.” If Anthropic doesn’t agree, it will lose a $200 million contract with the Pentagon and could be designated a “supply chain risk,” the same label given to companies linked to foreign adversaries.
Anthropic has said it wants to work with the Pentagon, but that its worries about the use of AI in autonomous weapons and mass surveillance stem from concerns that the technology is still unreliable in those scenarios. Current laws and regulations don’t properly account for advances in AI, the company also said.
In a memo to OpenAI employees on Thursday obtained by NCS, Altman said “this is no longer just an issue between Anthropic and the DoW; this is an issue for the whole industry and it is important to clarify our stance.” He added that OpenAI has a proposal for the Pentagon that it believes will allow “our models to be deployed in classified environments and that fits with our principles,” and that could work for other AI labs as well.
“We believe this dispute isn’t about how AI will be used, but about control. We believe that a private US company cannot be more powerful than the democratically-elected US government, although companies can have lots of input and influence,” Altman continued in the memo, first reported by the Wall Street Journal.
“The way the current situation has gone risks our national security, and also risks the government resorting to actions which could risk American leadership in AI. We would like to try to help de-escalate things,” Altman wrote.
OpenAI is one of several AI companies to have signed deals with the Pentagon last summer to “develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Pentagon said at the time. But Anthropic’s Claude was the only model used on the military’s classified systems until recently. A Pentagon official told NCS this week that Elon Musk’s Grok is now “on board with being used in classified setting,” while the other companies, including OpenAI, were “close.”
Editor’s note: A comment previously provided by the Pentagon about the department’s work to expand its arsenal of AI capabilities, and its efforts to ensure the safety and security of AI models, has been removed from this story after the Pentagon said it was provided in error.
NCS’s Haley Britzky contributed to this report.
The-NCS-Wire ™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.