According to a source present at the meeting and a summary of the meeting viewed by , Sam Altman informed OpenAI employees during a Friday afternoon all-hands gathering that discussions are underway for a potential agreement with the U.S. Department of War to utilize the startup’s AI models and tools. The contract has not yet been finalized.
The meeting concluded a week marked by a public fallout between Secretary of War Pete Hegseth and OpenAI competitor Anthropic, which resulted in the apparent termination of Anthropic’s contracts with the Pentagon and the federal government at large.
Altman stated that the government is willing to let OpenAI develop its own “safety stack”—a layered system of technical, policy, and human controls that sit between a powerful AI model and its real-world application—and that if the model refuses to perform a task, the government will not compel OpenAI to make it do so.
OpenAI would maintain control over the implementation of technical safeguards, the deployment of models (including which ones and where), and would restrict deployment to cloud environments rather than “edge systems.” (In a military context, edge systems could include aircraft and drones.) As a significant concession, Altman told employees that the government has agreed to include OpenAI’s specified “red lines” in the contract, such as prohibiting AI use for powering autonomous weapons, domestic mass surveillance, and critical decision-making.
OpenAI and the Department of War did not immediately respond to requests for comment.
According to the source, Sasha Baker, OpenAI’s head of national security policy, and Katrina Mulligan, who leads national security for OpenAI’s government division, also spoke at the all-hands meeting. One of these officials attributed the breakdown in the government’s relationship with Anthropic to actions by Anthropic CEO and cofounder Dario Amodei, including publishing blog posts that “upset the department” and offended Department of War leadership.
Anthropic, a company founded by former OpenAI employees who left over safety concerns, had been the only major commercial AI developer with models approved for use at the Pentagon, through a partnership with Palantir. However, Anthropic’s management and the Pentagon had been locked in a multi-day dispute over limitations Anthropic sought to impose on its technology’s use. These limitations are largely the same as the ones Altman said the Pentagon would accept for OpenAI’s technology.
Anthropic refused Pentagon demands to remove safeguards on its Claude model that restrict uses such as domestic mass surveillance or fully autonomous weapons, even as defense officials insisted that AI models must be available for “all lawful purposes.” The Pentagon, including Hegseth, had warned Anthropic it could lose a contract worth up to $200 million if it did not comply. Altman has previously stated that OpenAI shares Anthropic’s “red lines” on limiting certain military uses of AI, underscoring that even as OpenAI negotiates with the U.S. government, it faces the same core tension now playing out publicly between Anthropic and the Pentagon.
The OpenAI all-hands meeting occurred shortly after President Trump announced that the federal government will stop working with Anthropic, a dramatic escalation of the government’s conflict with the company over its AI models.
“I am directing every federal agency in the United States government to immediately cease all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” Trump said in a post on Truth Social. He noted that the Department of War and other agencies using Anthropic’s Claude models will have a six-month phase-out period.
According to the source, staff were told at the OpenAI all-hands that concerns about foreign surveillance were the most challenging aspect of the deal for leadership, with significant worry that AI-driven surveillance could threaten democracy. However, company leaders also acknowledged that governments engage in international espionage against adversaries, and that national-security officials argue they “can’t do their jobs” without international surveillance capabilities. They referenced threat intelligence reports indicating that China is already using AI models to target dissidents overseas.
