
President Donald Trump announced on Friday that he will bar Anthropic from doing business with the federal government after the artificial intelligence firm declined to negotiate on how the U.S. military may use its technology.
However, he is also giving the Pentagon a six-month window to wind down its use of Anthropic's systems, since the company is among the few AI firms authorized for work in classified environments.
In a statement, Trump labeled Anthropic as “woke” and “leftwing,” asserting that its refusal to meet the Defense Department’s requirements is putting service members at risk and compromising national security.
“Consequently, I am ordering EVERY Federal Agency in the United States Government to HALT ALL use of Anthropic’s technology immediately,” he stated. “We do not require it, we do not desire it, and we will not engage with them in the future! A six-month transition period will be granted to Agencies such as the Department of War that are currently utilizing Anthropic’s products at different tiers.”
Trump further warned that if Anthropic does not comply, he will employ “the complete authority of the Presidency to ensure their obedience.”
The San Francisco-based startup had refused to allow its Claude models to be used for mass domestic surveillance or autonomous weapons, while the Defense Department insisted on the right to use the technology for any lawful purpose.
Defense Secretary Pete Hegseth had threatened to cancel Anthropic’s $200 million military contract or classify the company as a supply-chain risk.
On Friday, he confirmed that he is officially classifying the firm as a “Supply-Chain Risk to National Security.” This designation bars Pentagon contractors from using Anthropic’s technology, placing the AI company in a category typically reserved for entities linked to foreign rivals like China and Russia.
Hegseth noted that the Defense Department’s half-year transition period will facilitate “a smooth shift to a superior and more patriotic service provider.”
Earlier, he had also floated invoking the Defense Production Act to compel Anthropic to provide an unrestricted version of Claude, citing national security concerns.
“These pressures do not alter our stance: We cannot, in good faith, agree to their demands,” Anthropic CEO Dario Amodei wrote in a letter on Thursday.
Emil Michael, the Pentagon’s undersecretary for research and engineering, reacted by accusing the CEO of seeking “to personally command the U.S. military” in social media posts.
The Defense Department has publicly said it does not plan to conduct mass surveillance or remove humans from the loop in targeting decisions, yet the dispute may ultimately hinge on how each side defines terms like “autonomous” and “surveillance” in practice.
Anthropic had been the only AI company approved for classified work until Elon Musk's xAI agreed to let the Pentagon use its AI for any lawful application. Other firms work in unclassified settings but are negotiating with the Defense Department over classified projects.
Even so, the Pentagon faces resistance elsewhere in Silicon Valley as defense officials work to reduce their reliance on Anthropic.
According to reports, OpenAI CEO Sam Altman informed staff in a Thursday memo that his company would advocate for the same restrictions on autonomous weapons and mass surveillance that Anthropic maintains.
Also on Thursday, more than 100 Google employees sent a letter to the company's chief scientist, Jeff Dean, requesting comparable limits on the U.S. military's use of its Gemini AI models, according to media reports.
