How the Pentagon Is Moving Toward Greater AI Integration in US Warfare

“The Pentagon wants AI systems usable for all lawful purposes.”
The Wall Street Journal reported that Anthropic's artificial intelligence tool, Claude, was used in the US military operation that led to the capture of former Venezuelan President Nicolas Maduro, highlighting the growing role of AI models within the Pentagon.
Axios, citing two sources familiar with the matter, reported that the US military used the same model in the operation, raising sensitive questions about the limits of AI use in military actions.
The use of Claude in this context raised questions about the terms of the contract between Anthropic and the Department of War, estimated at hundreds of millions of dollars.
US officials defended the legality of the procedure, stating that the AI was used exclusively for information processing and not for the direct execution of military actions.
However, civil rights organizations and technology ethics experts demanded greater transparency regarding the specific role played by the tool.
Military Partnership
Tensions are escalating between the US Department of War and Anthropic, one of the world's leading AI companies, in a dispute that extends beyond financial matters to encompass the limits of using technology in warfare.
The Pentagon is therefore pressuring major AI companies to allow the use of their technologies for all lawful purposes, even sensitive ones.
In contrast, Anthropic maintains restrictions on its Claude AI model, asserting that it does not permit its technology to be used in developing autonomous weapons or mass surveillance systems. Pentagon officials consider this stance impractical given their growing operational needs.
Claude's involvement in these operations stems from a partnership between Anthropic and Palantir, a company relied upon by US defense and security agencies for data analysis and operations management.
A senior US administration official stated that the Pentagon is reassessing its partnership with Anthropic following the controversy surrounding the use of Claude in the operation.
Defense officials, on the other hand, want more freedom. They are seeking the ability to use the technology for all lawful purposes.
American press reports indicate that the Pentagon may cancel a contract worth approximately $200 million if the two parties fail to reach an understanding that allows for greater flexibility in the use of the AI model.
The review follows a two-year agreement Anthropic signed with the Pentagon last year. That deal included prototype versions of Claude designed specifically for government use.
It marked a key step for the company, which had previously focused mostly on commercial and enterprise customers.
Anthropic has pushed to block certain uses of its AI, including large-scale surveillance of U.S. citizens and weapons systems that could operate without a human making the final decision.
Because of that gap, the situation has grown more serious: the Department of War is now considering whether to label Anthropic a supply chain risk.
If that happens, the impact could go beyond the company’s current contract. Businesses that work with both the Pentagon and Anthropic might be forced to end their partnerships with the AI firm.
Pentagon spokesman Sean Parnell confirmed that the relationship is under review. He said defense partners must be willing to support military readiness and national security needs.

Additional Restrictions
Claude's role in the Venezuela operation went beyond mere preparation and training; it served as a real-time analytical brain.
Its technical tasks included processing satellite imagery, deciphering intelligence, and refining targeting strategies, enabling precise strikes and minimizing risks to American personnel.
Claude did not directly control the weapons; rather, it mapped out the intelligence roadmap, while the Pentagon relied on other systems to manage autonomous platforms.
Reports indicate that no Americans were killed during the operation to capture Maduro, while Cuba and Venezuela announced the deaths of dozens of their soldiers and security personnel.
Meanwhile, the Pentagon is pushing for broader application rights that would allow it to employ artificial intelligence models in any scenario, provided the law is followed. This clashes with the desire of some companies to impose additional restrictions beyond the legal minimum.
This operation has placed Anthropic and its CEO, Dario Amodei, in a very difficult position.
The company, which has made safety central to its identity, now finds its technology embroiled in an unprecedented military operation that resulted in fatalities.
According to the report, the company is focused on ensuring its technology is not used for mass surveillance of Americans or to operate fully autonomous weapons.
The tension between the company's ethical principles and the U.S. Department of War's military ambitions has created a potentially costly rift.
Anthropic has launched an internal investigation into the use of its software in the operation, alarming the Pentagon, which fears that an ethics-driven revolt at the company could hinder future operations.
As a result, U.S. officials are now seriously considering canceling a $200 million contract with Anthropic and reconsidering the strategic partnership due to the company's restrictions, which the military might perceive as an obstacle on the battlefield.
For his part, Colonel Omar Melhem explained to Al-Estiklal that “the controversy surrounding Claude will not end with the Venezuela operation, as the model's ability to generate complete technical files (Word, PDF, Excel) within a chat environment makes it a comprehensive production platform for field reports.”
“Although Anthropic emphasizes its commitment to policies that prevent violence, the technological reality indicates that Claude has already become the new digital brain upon which superpowers rely to manage their most sensitive files, surpassing models focused on public consumption and entertainment,” he added.
“The Anthropic-Pentagon clash isn’t a small contract dispute. It’s a glimpse into the messy future of AI ethics in real-world conflict,” he said.

AI Weaponization
Washington is seeking to expand its reliance on AI in military tasks, including document analysis and the operation of autonomous systems, raising concerns about the dangers of weaponizing AI and using it for surveillance.
The U.S. is ushering in a new era of military dominance through a comprehensive strategy aimed at transforming the US military into the world's leading digital fighting force.
This vision goes beyond limited technical testing, placing AI at the very core of combat and intelligence operations, under the direct guidance of President Donald Trump.
This approach, announced by Secretary of War Pete Hegseth, reflects Washington's determination to treat the development of algorithms as an urgent national security matter that cannot be postponed.
In the midst of a frenzied technological arms race, this move represents a clear declaration that artificial intelligence is no longer merely a supplementary tool, but has become the new standard of sovereign power in the 21st century.
The US drive toward militarizing AI under the banner of a wartime approach presents the world with a new security reality, one in which the lines between civilian software and lethal weapons are blurring, all beyond the reach of public scrutiny or even traditional legal accountability.
For his part, Anthropic CEO Dario Amodei warned of the dangers of employing AI in autonomous combat operations or internal surveillance, and called for stricter controls on such uses.
Washington has used artificial intelligence to guide its strikes in Iraq and Syria in recent years.
“Israel” has also used autonomous drones in Gaza and has employed AI extensively to enhance its target identification capabilities there.

The tech landscape is witnessing escalating competition among major AI companies to enter the US defense market.
Reuters previously revealed that the Pentagon is pressuring major AI companies to make their AI tools available on classified networks without the numerous restrictions that companies impose on users.
OpenAI, Google and xAI recently reached deals for military users to access their models without many of the safeguards that apply to ordinary users.
OpenAI also partnered with two defense technology firms selected by the Pentagon to develop voice-activated software for controlling swarms of autonomous drones.
Sources
- Pentagon Used Anthropic’s Claude in Maduro Venezuela Raid
- Pentagon threatens to cut off Anthropic in AI safeguards dispute
- Pentagon Reconsiders Anthropic Deal After Dispute Over Surveillance and AI Weapons
- Militarisation of Artificial Intelligence Implications of civilian AI firms entering defence contracts