OpenAI has reached a new agreement with the US Department of War to deploy its AI systems in classified defense environments, as tensions grow between the Pentagon and rival AI firm Anthropic.
Quick Summary – TLDR:
- OpenAI signed a classified AI deployment deal with the US Department of War
- The contract includes three strict red lines on surveillance, weapons, and automated decisions
- President Donald Trump directed the government to stop working with Anthropic, which the Pentagon labeled a supply chain risk
- OpenAI says it does not support labeling Anthropic a risk and wants de-escalation
What Happened?
OpenAI announced it has secured a deal with the US Department of War, previously known as the Department of Defense, to deploy advanced AI systems within classified networks. The agreement comes just days after President Donald Trump directed the government to stop working with rival AI startup Anthropic, which the Pentagon labeled a supply chain risk.
OpenAI CEO Sam Altman confirmed the agreement and emphasized that the company’s safety principles are built directly into the contract.
Yesterday we reached an agreement with the Department of War for deploying advanced AI systems in classified environments, which we requested they make available to all AI companies. We think our deployment has more guardrails than any previous agreement for classified AI…
— OpenAI (@OpenAI) February 28, 2026
OpenAI’s Three Red Lines
At the center of the agreement are what OpenAI calls its three main red lines. The company says its technology cannot be used for:
- Mass domestic surveillance.
- Directing autonomous weapons systems.
- High-stakes automated decisions, such as social credit systems.
OpenAI said these restrictions are enforced through a layered safety system. Unlike some other AI labs that rely mainly on usage policies, OpenAI says it retains control over its safety stack and deploys its models through a cloud-only setup. This structure prevents edge deployments that could enable fully autonomous lethal systems.
The company also confirmed that cleared OpenAI engineers and safety researchers will remain involved in deployments to ensure compliance.
Contract Safeguards and Legal Limits
The agreement allows the Department of War to use the AI system “for all lawful purposes” but makes clear that it cannot independently direct autonomous weapons in cases where human control is required by law or policy. The contract references existing laws, including the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act.
OpenAI stressed that its systems cannot be used for unconstrained monitoring of US persons and cannot support domestic law enforcement beyond what existing laws allow.
The company also stated that if the government violates the agreement, it has the right to terminate the contract. “We don’t expect that to happen,” OpenAI said.
The Anthropic Dispute
The deal arrives amid a public clash between the Pentagon and Anthropic. Founded by former OpenAI research head Dario Amodei, Anthropic previously secured contracts worth up to $200 million to support intelligence and cyber operations. Its AI model Claude has been deployed across national security systems.
However, Anthropic refused to allow unrestricted military use of its AI tools, particularly for autonomous weapons and surveillance. The Pentagon responded by labeling the company a supply chain risk, and President Trump criticized the firm, calling it a “radical left, woke” company.
Anthropic has said it will challenge the designation in court.
Interestingly, OpenAI made clear that it does not support labeling Anthropic as a supply chain risk. “We have made our position on this clear to the government,” the company said.
Push for Industry Wide Terms
OpenAI also requested that the Department of War make the same contractual terms available to all AI companies. Altman said the company wants to see tensions de-escalate and move toward broader collaboration between the US government and AI labs.
“We believe strongly in democracy,” OpenAI said, adding that responsible collaboration is the only sustainable path forward as AI becomes central to national security.
SQ Magazine Takeaway
From my perspective, this is a defining moment for AI in national defense. OpenAI is walking a careful line. It wants to support the US military while protecting its brand and values. By publicly drawing red lines and building layered safeguards into its contract, the company is trying to prove that powerful AI can serve national security without crossing ethical boundaries.
At the same time, the standoff with Anthropic shows how messy this space is becoming. Governments want flexibility. AI labs want guardrails. The balance between security and safety will shape the future of this technology.
One thing is clear. AI is no longer on the sidelines of defense policy. It is now at the center of it.