Anthropic rejects latest Pentagon offer: ‘we cannot in good conscience accede to their request.’


Anthropic is rejecting the Pentagon’s latest offer to change their contract, saying the modifications don’t address the company’s concerns that AI could be used for mass surveillance or in fully autonomous weapons.

The Pentagon and Anthropic are at odds over restrictions the company places on the use of Claude, the first AI system approved for use on the military’s classified network.

Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei on Tuesday that if Anthropic doesn’t permit its AI model to be used “for all lawful purposes,” the Pentagon would cancel Anthropic’s $200 million contract. In addition to the contract cancellation, Anthropic would be deemed a “supply chain risk,” a classification usually reserved for companies linked to foreign adversaries, Hegseth said.

Anthropic said in a statement that the Pentagon’s new language was framed as a compromise but “was paired with legalese that would allow those safeguards to be disregarded at will.”

In a lengthy blog post on Thursday, Amodei wrote: “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.”

Amodei said Anthropic understands that the Pentagon, “not private companies, makes military decisions.” But “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” He said mass surveillance and autonomous weapons are “outside the bounds of what today’s technology can safely and reliably do.”

Amodei said the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.”

The Pentagon did not immediately respond to a request for comment.
