Trump’s Iran Strike and the New Breed of AI Wars Mean Bombs Can Drop Faster Than the Speed of Thought

AI has made its way into military command centers, and experts say it’s here to stay.

Even though President Donald Trump ordered federal agencies and military contractors to stop working with Anthropic, the U.S. military is said to have used the company’s AI system, Claude, in its strike on Iran.

Now, several experts are voicing worries about AI’s use in combat operations. “The AI system is suggesting targets, and in some respects, that’s faster than the speed at which humans can think,” said Dr. Craig Jones, author of The War Lawyers: U.S., Israel and the Spaces of Targeting, a book exploring military lawyers’ roles in contemporary warfare.

In an interview, Jones, a Newcastle University lecturer specializing in war and conflict, said that AI has significantly sped up the “kill chain,” the process that runs from identifying a target to destroying it. He noted that the U.S.-Israeli strikes on Iran, which killed Ayatollah Ali Khamenei, likely wouldn’t have occurred without AI.

“Doing it that way would have been impossible or nearly so,” Jones said. “The speed, scale, and scope of the strikes are, I believe, powered by AI.”

The Pentagon has turned to AI firms to speed up and improve war planning, including through a 2024 partnership with Anthropic that fell apart last week amid disputes over how Claude, the company’s AI model, could be used. OpenAI, meanwhile, has its own deal with the Pentagon, and Elon Musk’s xAI has agreed to let its AI system, Grok, be used in classified systems. The U.S. Army also uses software from the data-mining company Palantir for AI-driven insights in decision-making.

AI on the battlefield

Jones explained that the U.S. Air Force has used “speed of thought” as a standard for decision-making speed for years. He noted that during WWII and the Vietnam War, the time from gathering intelligence—like aerial surveillance—to launching a bombing raid could take as long as six months. AI has drastically shortened that timeline.

The primary function of AI tools in military command centers is to rapidly process enormous volumes of data. “We’re talking terabytes upon terabytes of data,” Jones said, “from aerial photos and human intelligence to online data and mobile phone tracking, literally everything.”

Dr. Amir Husain, co-author of Hyperwar: Conflict and Competition in the AI Century, said that AI is being used to shorten the U.S. military’s OODA loop, a decision-making framework whose name stands for observe, orient, decide, act. He said AI already plays a major role in the observation phase (interpreting satellite and electronic data), in tactical decision-making, and in the “act” phase, particularly with autonomous drones that have to operate without human input when signals are jammed. Some of these drones are counterparts to Iran’s own autonomous Shahed drones.
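For readers who want a concrete picture of the framework Husain describes, here is a minimal, purely illustrative sketch of an OODA-style loop in Python. Every name in it, from the simulated sensor to the decision threshold, is a hypothetical stand-in invented for this example; it does not model any real military system.

```python
import random
import time
from dataclasses import dataclass

# A toy OODA (observe, orient, decide, act) cycle. All names and logic
# here are invented for illustration; no real system is being modeled.

@dataclass
class Observation:
    sensor_id: str
    signal_strength: float  # made-up confidence score between 0.0 and 1.0

def observe() -> Observation:
    """Observe: pull a raw reading from a simulated sensor."""
    return Observation(sensor_id="sensor-1", signal_strength=random.random())

def orient(obs: Observation) -> float:
    """Orient: turn the raw reading into an assessed score."""
    return obs.signal_strength  # passed through unchanged in this sketch

def decide(score: float) -> str:
    """Decide: pick an action based on the assessed score."""
    return "flag for review" if score > 0.9 else "keep monitoring"

def act(action: str) -> None:
    """Act: carry out the chosen action (here, just log it)."""
    print(f"action: {action}")

if __name__ == "__main__":
    # The time one full pass takes is the cycle AI is said to compress.
    for _ in range(5):
        act(decide(orient(observe())))
        time.sleep(0.1)
```

The point of the sketch is the shape of the loop, not its contents: shortening any single stage, as Husain says AI now does, shortens the whole cycle.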

AI has also been used in other conflicts. Israel is said to have used AI to target Hamas during the Israel-Hamas war. And autonomous systems are on the front lines of the Russia-Ukraine war, with both sides deploying some form of autonomous technology.

Multiplying risks

But Jones highlighted several concerns about AI-powered warfare. “The issue with adding AI is that it increases the potential for error—by a huge margin, I’d say,” he said.

Jones acknowledged that human error exists regardless of AI, pointing to the 2003 U.S. invasion of Iraq—which was based on faulty intelligence—as an example. But he warned that AI could make such mistakes worse because of the massive amount of data it processes.

AI-driven warfare also raises a host of ethical issues, most notably around accountability, which the Geneva Conventions and the laws of armed conflict already mandate, according to Husain. As AI blurs the distinction between machine and human decision-making, he said, the international community must ensure that humans are held responsible for all battlefield actions.

“The laws of armed conflict demand that we hold a person responsible,” Husain said. “No matter how much automation is used in combat, a human must be accountable.”