The US strikes on Iran ordered by President Donald Trump mark the large-scale arrival of a new era of warfare assisted by artificial intelligence.
Captain Timothy Hawkins, a Central Command spokesperson, told me last night that the AI tools the US military is using in Iran operations don’t make targeting decisions and don’t replace humans. But they do help “make smarter decisions faster.”
That’s been the driving ambition of the US military, which has spent years looking at how to develop and deploy AI to the battlefield, as I found researching my forthcoming book, Project Maven: A Marine Colonel, His Team and the Dawn of AI Warfare.
For the past few years, US military exercises have sometimes centered on a specific scenario known as “a thousand decisions,” in which commanders practice filtering a range of data to identify 1,000 objects as friend or foe within an hour.
Within the first 24 hours of the war with Iran, the US fired on 1,000 targets, an enormous number of strikes for so short a span. Admiral Brad Cooper, the head of Central Command, has described it as nearly “double the scale” of America’s “shock and awe” assault on Iraq in 2003.

Critics, such as Stop Killer Robots, a coalition of 270 human-rights groups, argue that AI-enabled decision-support systems reduce the separation between recommending and executing a strike to a “dangerously thin” line.
Hawkins said the military’s use of AI assistance follows a rigorous process aligned with US policy, military doctrine and the law. Artificial intelligence helps analysts whittle down what they need to focus on, generating so-called points of interest and helping personnel make “smart” decisions in the Iran operations, he told me. AI is also helping to pull data within systems and organize information to provide clarity.
Among the AI tech used in the Iran campaign is Maven Smart System, a digital mission control platform produced by Palantir Technologies Inc., according to people familiar with the US operations, who spoke on condition of anonymity in order to discuss sensitive information. The system emerged from Project Maven, an effort the Pentagon launched in 2017 to develop AI for the battlefield.
Among the large language models installed on the system is Anthropic’s Claude AI tool, according to the people, who said it has become central to US operations against Iran and to accelerating Maven’s development.
Claude is also at the center of a row that pits Anthropic against the Department of Defense over limits on the software. Today, I reported that the Pentagon made good on its threat to deem Anthropic a supply-chain risk, a designation typically reserved for US adversaries.
The decision, which Anthropic has previously said it would challenge in court, threatens to disrupt both the company and the military, leaving the future of America’s AI warfare experiment uncertain.
Source: Bloomberg