The AI firm Anthropic said it could not accept the Pentagon’s “best and final” offer to resolve the dispute over how the U.S. military can use its AI models. The standoff grew increasingly acrimonious in the final hours before Friday’s deadline, by which Anthropic must meet the Pentagon’s demands or face measures that could effectively bar it from doing business with any company that also works with the U.S. military.
Pentagon officials have publicly cast doubt on the character of Anthropic CEO Dario Amodei, while employees at competing AI labs have signed open letters backing Anthropic’s position. According to Axios, OpenAI CEO Sam Altman told his employees in a memo on Thursday that OpenAI would advocate for the same restrictions on autonomous weapons and mass surveillance as Anthropic when negotiating to expand the use of ChatGPT, currently available to the military for non-sensitive purposes, into more classified domains.
The conflict between Anthropic and the Pentagon now threatens to escalate into an industry-wide revolt among workers at AI companies over how the systems they are building are used by the military. On Thursday, more than 100 Google employees reportedly sent a letter to Jeff Dean, the company’s chief scientist, requesting similar limits on how Google’s Gemini AI models are employed by the U.S. military.
On Thursday, Amodei published a statement explaining why the company believes there should be restrictions on using its AI technology for autonomous weapons and mass surveillance. These are the two areas where Anthropic currently limits the military’s use of its models, both in its contract terms and through safeguards built directly into its Claude models. The Pentagon wants these limitations removed and for Anthropic to agree that the U.S. military can use its models “for any lawful purpose.”
Frontier AI systems are “not reliable enough to power fully autonomous weapons,” and without proper supervision, they “cannot be counted on to exercise the critical judgment that our highly trained, professional troops demonstrate every day,” Amodei wrote in his statement. Regarding surveillance, he argued that powerful AI can now combine individually harmless public data, such as location records, browsing history, and social connections, to create a comprehensive profile of any American citizen’s life on a large scale.
Emil Michael, the U.S. undersecretary of war, responded in social media posts by calling Amodei “a liar” with a “God complex,” accusing the CEO of wanting “to personally control the U.S. military.” In a separate post, Michael described Anthropic’s Claude Constitution, an internal document outlining the values and principles the company builds into its AI, as a corporate scheme to “impose their corporate laws on Americans.”
The Pentagon has demanded that Anthropic remove the contract limitations it opposes by 5:01 p.m. on Friday or face having its $200 million contract with the U.S. military canceled. In a more extreme measure, Anthropic could be labeled “a supply-chain risk,” which would effectively prevent any company doing business with the military from using Anthropic’s technology.
This kind of step is typically reserved for companies tied to foreign adversaries, such as Chinese firms or the Russian cybersecurity firm Kaspersky.
“Using it against a domestic company because they’re not willing to compromise on some principles of this kind is truly quite escalatory and unprecedented,” said Seán Ó hÉigeartaigh, the executive director of the Centre for the Study of Existential Risk at the University of Cambridge.
The Department of War has also threatened to invoke the Cold War-era Defense Production Act, using the law to force Anthropic to hand over an unrestricted version of Claude on the grounds that the government deems it crucial to national security. If the Pentagon takes this route, it will be using powers intended only for emergencies to resolve a contract dispute during peacetime. There is some precedent for this: The Biden administration also invoked the DPA in 2023 to compel frontier AI labs to provide information about the safety of their AI models. But forcing a company to produce a product, rather than just provide information, comes closer to nationalizing a leading technology company.
“If they are effectively being coerced into allowing their technology to be used in ways that even they themselves say are unreliable in high-stakes life-and-death situations like on the battlefield,” Ó hÉigeartaigh said, “that sets a very dangerous precedent.”
The Department of War has publicly stated that it has no intention of conducting mass surveillance or removing humans from weapons-targeting decisions, but the dispute may hinge on how each side defines “autonomous” or “surveillance” in practice. Representatives for the department did not immediately respond to a request for comment.
An Anthropic spokesperson said the company was continuing “to engage in good faith” with the Department of War, but that the contract language received overnight had made “virtually no progress” on the core issues. The new language, “framed as a compromise,” was “paired with legal jargon that would allow those safeguards to be ignored at will,” the spokesperson said. Amodei has called the threats from the Department of War “inherently contradictory,” since “one labels us a security risk; the other labels Claude as essential to national security.”
Anthropic has received praise from some quarters for its determination to stand firm. Harvard law professor Lawrence Lessig commended the company’s statement as “a beautiful act of integrity and principle” and said it was “incredibly rare for our time.”
Rivals OpenAI and xAI have agreed to Pentagon contracts that allow their models to be used for all lawful purposes, with xAI going further by also agreeing to deploy its systems in some classified settings. But more than 330 current employees at rival labs DeepMind and OpenAI have published an open letter in support of Anthropic, urging their own leadership to follow the company’s example. “They’re trying to divide each company by instilling the fear that the other will give in,” the letter read. “That strategy only works if none of us know where the others stand.” The signatories, both named and anonymous, included senior research scientists from both companies.
Ó hÉigeartaigh said that the outcome of the dispute could have far-reaching consequences beyond Anthropic itself. “If the Pentagon wins this,” he said, “it will set precedents that won’t be beneficial for the independence of these companies or their ability to uphold ethical standards.”
