Anthropic’s AI model Claude was reportedly used in a US military operation in Venezuela, even as tensions grow between the company and the Pentagon over how its technology should be deployed.
Quick Summary – TLDR:
- Claude AI was reportedly used in a US military operation targeting Nicolás Maduro in Venezuela.
- The Pentagon is pressuring Anthropic to allow its AI models to be used for all lawful purposes, including weapons development.
- Anthropic insists on limits around fully autonomous weapons and mass domestic surveillance.
- The Pentagon is considering ending its contract, which is valued at up to 200 million dollars.
What Happened?
Claude, the artificial intelligence model developed by Anthropic, was reportedly used during a US military operation aimed at capturing Venezuelan leader Nicolás Maduro. At the same time, the Pentagon is considering cutting ties with the company because it refuses to remove certain safeguards on how its AI tools can be used.
The dispute centers on whether AI systems like Claude should be allowed in sensitive military activities without restrictions.
Reported Use of Claude in Venezuela Operation
According to reporting from The Wall Street Journal, Claude was deployed during a US military raid in Venezuela through Anthropic’s partnership with Palantir Technologies. The operation reportedly involved air strikes across Caracas and resulted in dozens of deaths, according to Venezuela’s defense ministry.
It remains unclear exactly how Claude was used. The AI model is capable of processing large documents, analyzing intelligence data, and even assisting with autonomous drone operations. However, Anthropic declined to comment on whether Claude was involved in that specific mission.
An Anthropic spokesperson said the company had not discussed the use of Claude for specific operations with the Department of War and that any use must comply with its Usage Policy.
Pentagon Pushes for Fewer AI Restrictions
At the heart of the dispute is the Pentagon’s demand that leading AI firms allow their models to be used for “all lawful purposes.” That includes applications in:
- Weapons development
- Intelligence collection
- Battlefield operations
Anthropic has refused to remove two core limitations from its policies. The company prohibits the use of its AI systems for fully autonomous weapons and for mass surveillance of Americans.
A senior administration official told Axios that the Pentagon is frustrated after months of negotiations. The official said everything is on the table, including scaling back or ending the partnership altogether, though replacing Claude would not be easy.
Anthropic was the first frontier AI company to deploy its model on classified Pentagon networks, under a contract signed last year worth up to 200 million dollars.
Other AI Companies Take a Different Approach
While Anthropic has maintained strict guardrails, other major AI firms appear more flexible.
OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok are already being used in unclassified military settings, and the Pentagon is now negotiating with those companies to expand into classified networks. According to officials, at least one of the companies has agreed to the “all lawful purposes” standard, while the others are showing more flexibility than Anthropic.
In January, the Pentagon also announced a partnership with xAI, owned by Elon Musk. The US military increasingly uses artificial intelligence in targeting and operational planning, similar to how other nations have integrated AI into defense systems.
Internal and External Pressure
The tension reflects a broader debate about the role of artificial intelligence in warfare. Critics warn that AI-driven targeting systems could lead to mistakes, especially when human oversight is limited.
Anthropic CEO Dario Amodei has publicly called for stronger regulation of advanced AI systems. He has expressed concern about the risks of autonomous lethal operations and surveillance inside the United States.
According to sources familiar with the company, some engineers inside Anthropic are uneasy about deepening ties with the Pentagon. At the same time, the company maintains that it remains committed to supporting US national security within the boundaries of its policies.
An Anthropic spokesperson said the company is “committed to using frontier AI in support of U.S. national security” and emphasized that its discussions with the government have focused on clearly defined usage policy questions.
SQ Magazine Takeaway
I see this as a turning point for the AI industry. This is no longer just about technology. It is about power, control, and responsibility. If AI companies remove all limits for military use, we enter dangerous territory. But if they refuse entirely, governments may look elsewhere. Anthropic is trying to walk a fine line between national security and ethical responsibility. Whether that stance holds could shape the future of AI in warfare.