By Hadas Gold, Devan Cole, NCS
(NCS) — A federal judge in California has indefinitely blocked the Pentagon’s effort to “punish” Anthropic by labeling it a supply chain risk and trying to sever government ties with the AI firm, ruling that these measures ran roughshod over its constitutional rights.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” US District Judge Rita Lin wrote in a stinging 43-page ruling.
Lin, an appointee of former President Joe Biden, said she would delay implementation of her ruling for one week to allow the federal government to appeal.
But in her ruling, she made clear she disapproved of the government’s actions, which she said violated the company’s First Amendment and due process rights.
The ruling is the latest court defeat for the Trump administration. Earlier this year the Supreme Court ruled many of the president’s sweeping tariffs were illegal. And just last week a federal judge struck down Pentagon press limits.
Anthropic applauded Lin’s ruling on Thursday.
“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” an Anthropic spokesperson said after Thursday’s ruling. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
The supply chain risk designation meant any company that works with the military would need to show it did not use an Anthropic product. The label, leveled by the Pentagon last month, had previously been used only for companies seen as linked to foreign adversaries.
Anthropic said the designation violated its First Amendment rights, tarnished its reputation and jeopardized hundreds of millions of dollars’ worth of contracts.
The Department of Defense’s quarrel with Anthropic began after the company refused to back down over contractual guardrails around the use of its Claude AI model in autonomous weapons and mass surveillance.
Secretary Pete Hegseth took the dramatic, unprecedented step of labeling it a supply chain risk in February, and Hegseth and President Donald Trump ordered federal agencies to stop using the product and sever ties with companies that do business with Anthropic.
But Lin said that was all in retaliation for the company sticking with its guardrails.
“These broad measures do not appear to be directed at the government’s stated national security interests,” she wrote. “The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press.’”
“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she added.
The Department of Defense wanted unfettered access to Claude for “all lawful purposes.” The department said it needed complete freedom to use the system, particularly in wartime.
“We can’t have a company that has a different policy preference that is baked into the model… pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection,” the Defense Department’s chief technology officer, Emil Michael, told CNBC earlier this month.
But Anthropic had two red lines: It did not want its AI systems used in autonomous weapons or domestic mass surveillance. Anthropic argued in its suit that the Pentagon was aware of its position on the Claude limitations and that its stance is protected speech.
A separate challenge by the company to other authorities Hegseth invoked to make the supply chain risk designation is still pending before a federal court in Washington, DC.
The-NCS-Wire
™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.