U.S. military is using AI to help plan Iran air attacks, sources say, as lawmakers call for oversight

As the U.S. military expands its use of AI tools to pinpoint targets for airstrikes in Iran, members of Congress are calling for guardrails and greater oversight of the technology’s use in war.
Two people with knowledge of the matter, who requested anonymity to discuss sensitive information, confirmed the military is using AI systems from data analytics company Palantir to identify potential targets in the ongoing attacks. The use of Palantir’s software, which relies in part on Anthropic’s Claude AI systems, comes as Defense Secretary Pete Hegseth aims to put artificial intelligence at the heart of America’s combat operations — and as he has clashed with Anthropic leadership over limitations on the use of AI.
Yet, as AI assumes a wider role on the battlefield, lawmakers are demanding greater focus on the protections that should govern its use and increased transparency about how much control is ceded to the technology.
“We need a full, impartial review to determine if AI has already harmed or jeopardized lives in the war with Iran,” Rep. Jill Tokuda, D-Hawaii, a member of the House Armed Services Committee, told NBC News in response to questions about the use and reliability of AI in military contexts. “Human judgment must remain at the center of life-or-death decisions.”
The Defense Department and leading AI companies such as OpenAI and Anthropic have publicly stated that current AI systems should not be able to kill without human signoff. But concerns remain that relying on AI for analysis or decision-making can lead to mistakes in military operations.
The Pentagon’s chief spokesperson, Sean Parnell, said in a post on X on Feb. 26 that the military did not “want to use AI to develop autonomous weapons that operate without human involvement.”
The Defense Department did not respond to questions about how the military balances its use of AI to reduce human workloads while verifying analysis and targeting suggestions are accurate.
Lawmakers and independent experts who spoke to NBC News raised alarm over the military’s use of such tools, calling for clear safeguards to ensure humans remain involved in life-or-death decisions on the battlefield.
“AI tools aren’t 100% reliable — they can fail in subtle ways and yet operators continue to over-trust them,” said Rep. Sara Jacobs, D-Calif., a member of the House Armed Services Committee.
“We have a responsibility to enforce strict guardrails on the military’s use of AI and guarantee a human is in the loop in every decision to use lethal force, because the cost of getting it wrong could be devastating for civilians and the service members carrying out these missions,” she said.
Anthropic’s Claude has become a crucial component of Palantir’s Maven intelligence analysis program, which was also used in the U.S. operation to capture Venezuelan President Nicolás Maduro. News of Claude’s role in recent military actions was first reported by The Wall Street Journal and The Washington Post.
But that role has been complicated by Anthropic’s clash with Hegseth after the company sought to prevent the military from using its AI for domestic surveillance and autonomous deadly weapons. Last week, the Defense Department labeled Anthropic a threat to national security, a move that threatens to remove it from military use in the coming months. Anthropic filed a lawsuit to fight that designation.
Anthropic declined to comment. Palantir did not respond to a request for comment.
In a video posted to X on Wednesday, Adm. Brad Cooper, leader of U.S. Central Command, acknowledged that AI had become a key tool in helping the U.S. choose targets in Iran.
“Our warfighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react,” he said.
“Humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that used to take hours and sometimes even days into seconds.”
The Trump administration has publicly embraced using the technology both for the military and throughout the government.
Rep. Pat Harrigan, R-N.C., said that AI has already become crucial for rapidly processing military intelligence, including in Iran.
“AI is a tool that helps our warfighters process enormous amounts of data faster than any human could alone, and what we saw in Operation Epic Fury, over 2,000 targets struck with remarkable precision, is a testament to how these capabilities can be used responsibly and effectively,” Harrigan, who also serves on the House Armed Services Committee, told NBC News in a statement.
“But no AI system replaces the judgment, the training, and the experience of the American warfighter. The human in the loop is not a formality, it is a requirement, and nothing in how our military operates suggests otherwise,” he said.
While no lawmakers contacted by NBC News said that AI should be completely removed from military use, some said that more oversight is needed.
Sen. Elissa Slotkin, D-Mich., a member of the Senate Armed Services Committee, said that the Defense Department had not done enough to clarify how well humans are vetting AI-assisted or generated military intelligence.
“It’s really up to the humans, and in this case the Secretary of Defense, to ensure that there’s human redundancy for the foreseeable future, and that is what we just don’t have confidence in,” she said.
Sen. Mark Warner, D-Va., the top Democrat on the Senate Intelligence Committee, said that he is concerned about the military’s use of AI to assist with identifying targets and that there are unanswered questions about how the new technology is being used. “This has to be addressed,” he told NBC News.
OpenAI and Anthropic, both of which have worked with the U.S. military, have said that even their most advanced systems are error prone, and the world’s top AI researchers admit they don’t fully understand how leading AI systems work.
In an interview with NBC last month, Anthropic CEO Dario Amodei said: “I can’t tell you there’s a 100% chance that even the systems we build are perfectly reliable.”
A major OpenAI study published in September found that all major AI chatbots, which rely on systems called large language models, “hallucinate” or periodically fabricate answers.
Sen. Kirsten Gillibrand, D-N.Y., called for clearer rules on how the military can use AI.
“The Trump administration has already proven that it is willing to subvert American law to prosecute an unpopular war,” she told NBC News. “There is little reason to trust that the DOD will be any more responsible with its use of AI without explicit safeguards.”
Mark Beall, head of government affairs at the AI Policy Network, a Washington, D.C., think tank, and the director of AI strategy and policy at the Pentagon from 2018 to 2020, said that while AI could streamline the process of deciding where to strike, it was clear humans still need to thoroughly vet targets.
“There’s a lot of steps before the trigger gets pulled. AI systems are being deployed very effectively to accelerate existing workflows and allow commanders and analysts and planners to have better and faster decision making capabilities,” he added. “But when it comes to actually deploying weapon systems, this technology is not ready yet.”
“These systems will get really, really good, and as other adversaries start using them, there will be more pressure to shorten the review of AI outputs in order to operate at useful and effective speeds,” Beall said. “We have to figure out how to solve this reliability problem before we get there. No matter what you think about lethal autonomous weapons, making them safe and effective is in the interest of the entire world.”
Heidy Khlaaf, the chief scientist at the AI Now Institute, a nonprofit that advocates for ethical use of the technology, said she was concerned that reliance on AI to rapidly process information for life-or-death decisions could be a way for militaries to avoid accountability for mistakes.
“It’s very dangerous that ‘speed’ is somehow being sold to us as strategic here, when it’s really a cover for indiscriminate targeting when you consider how inaccurate these models are,” Khlaaf said.