
Anthropic has filed a lawsuit against the Department of Defense and other federal agencies following the Trump administration’s formal designation of the company as a “supply-chain risk” late last week.
This marks the latest escalation in an ongoing dispute between the Pentagon and Anthropic over the government’s use of the company’s AI technology, a conflict with significant ramifications for AI governance and public-private relations.
Anthropic had sought contractual assurances that its AI model, Claude, would not be used by the government for domestic mass surveillance or autonomous weapons systems. The Pentagon, which has used Claude for tasks such as intelligence processing, demanded that these restrictions be removed from the existing contract and proposed a new agreement permitting the military to use Claude for “all lawful purposes.”
Anthropic declined to accept these conditions. Consequently, the Trump administration terminated the company’s government contracts and labeled it a supply-chain risk, a classification typically applied to firms with links to foreign adversaries. This designation bars defense contractors from using Anthropic’s technology in any work performed for the Department of War.
War Secretary Pete Hegseth stated the military would halt its use of Claude “immediately,” but also outlined a six-month transition period to avoid disrupting vital operations. Reports indicate the military has been using Claude to analyze intelligence and targeting information in its ongoing conflict with Iran. Hegseth also claimed the designation would force defense contractors to cut all commercial ties with Anthropic, a requirement most legal experts argue is not supported by the relevant statute. Anthropic has countered that the Pentagon’s formal designation applies solely to defense contract work and not to other unrelated commercial activities.
The lawsuit, filed on Monday in the U.S. District Court for the Northern District of California, describes the administration’s actions as “unprecedented and unlawful” and asserts they risk causing “irreparable harm” to Anthropic. The complaint states that government contracts are already being terminated and private agreements are now uncertain, jeopardizing “hundreds of millions of dollars” in the short term.
An Anthropic spokesperson stated: “Pursuing judicial review does not alter our enduring commitment to leveraging AI for national security, but this is a necessary measure to safeguard our business, our customers, and our partners.
“We will continue to explore every avenue for a resolution, including discussions with the government,” the spokesperson added. The Department of War said that, as a matter of standard policy, it does not comment on ongoing litigation.
The supply-chain risk designation requires defense vendors and contractors to certify that they are not using Anthropic’s models in their work for the Pentagon. In a social media post, Trump also instructed federal agencies to “immediately cease” all use of Anthropic’s technology, writing on Truth Social: “WE will decide the fate of our Country—NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”
Legal experts have raised doubts about the soundness of the supply-chain risk designation. In an article for the nonprofit publication Lawfare, attorneys Michael Endrias and Alan Z. Rozenshtein contended that the label “exceeds what the statute authorizes,” that “the required findings don’t hold up,” and that Hegseth’s public comments “may have doomed the government’s litigation posture before it even begins.”
“The government cannot claim a vendor poses an acute supply-chain threat requiring emergency exclusion while simultaneously asserting it is perfectly safe to continue using the vendor for six months,” they wrote, labeling the overall designation “political theater: a show of force that will not stick.”
Complicating the situation further, OpenAI secured its own agreement with the Department of War within hours of the collapse of the Anthropic-Pentagon negotiations. The deal appeared to offer OpenAI’s models without the specific contractual bans Anthropic had demanded, though OpenAI stated that additional contractual and technical safeguards would, in practice, impose the same usage restrictions Anthropic had sought.
The agreement drew immediate and severe criticism, with many observers questioning whether OpenAI’s contractual terms offered substantially different protections from those Anthropic had rejected. OpenAI later conceded the announcement appeared “sloppy and opportunistic” and said it was renegotiating certain terms.
Relations between the two rival firms have worsened since. In an internal memo reported by The Information, Anthropic CEO Dario Amodei referred to OpenAI staff as “gullible” and accused the company’s leadership of spreading “straight-up lies.” Amodei subsequently apologized for the message, explaining it was drafted shortly after the negotiations collapsed and did not represent his “careful or considered views.”
