The US Treasury Department and the Federal Housing Finance Agency have moved to end their use of Anthropic products after President Donald Trump ordered federal agencies to stop working with the company.
Quick Summary (TL;DR):
- Treasury says it is ending all use of Anthropic products, including Claude, effective immediately.
- FHFA, plus mortgage giants Fannie Mae and Freddie Mac, are also terminating Anthropic use.
- The State Department is switching its internal chatbot StateChat to OpenAI, using GPT-4.1 for now.
- The decision could force teams to rework projects and raises new questions about how the government will judge AI vendors.
What Happened?
President Trump ordered federal agencies to stop using Anthropic technology, triggering quick moves by major departments to cut ties. Treasury confirmed it is terminating Anthropic use, while FHFA leadership said the same for its agency and the mortgage firms it oversees. Separately, the State Department is moving its internal chatbot away from Anthropic and onto OpenAI.
"🚨BREAKING: US TREASURY JUST TERMINATED ALL USE OF ANTHROPIC PRODUCTS … ITS SO OVER" — NIK (@ns123abc), March 2, 2026, pic.twitter.com/PNIy7oYdT2
Treasury moves first, ends Claude use immediately
Treasury Secretary Scott Bessent announced Monday, in a post on X, that the department is terminating all use of Anthropic products, including Claude.
Treasury did not immediately detail which internal programs were using Anthropic models, but the decision is expected to create real friction for teams that have already built workflows around them. When a model is embedded in tools, scripts, and day-to-day processes, replacing it is rarely quick. It can mean retesting outputs, rewriting prompts, and rebuilding safeguards that were tuned to a specific system.
FHFA follows, and mortgage agencies also cut ties
Shortly after Treasury’s move, William Pulte, director of the Federal Housing Finance Agency, said his agency and the mortgage giants it oversees, Fannie Mae and Freddie Mac, are terminating all use of Anthropic products.
That matters because FHFA sits on top of some of the most important plumbing in US housing finance. Even limited AI usage inside these organizations can touch sensitive areas like risk work, compliance support, document handling, and internal productivity tools. Cutting off one vendor means shifting quickly to alternatives or pausing projects.
State Department switches StateChat to OpenAI
While Treasury and FHFA focused on termination, the State Department is taking a replacement step. According to a memo reported by Reuters, the department is switching the model behind its in house chatbot StateChat from Anthropic to OpenAI.
The memo says: “For now, StateChat will use GPT-4.1 from OpenAI,” and adds that more information will come later.
This is one of the clearest signals yet that the federal government is not just pulling away from Anthropic, but also actively redirecting work to OpenAI in certain areas.
Pentagon pressure and a wider federal phase out
The Trump order appears to be part of a broader dispute over how the Pentagon can deploy advanced AI and what rules should apply. Reuters reported that the Pentagon would declare Anthropic a supply-chain risk, a label that can effectively freeze a vendor out of sensitive government work.
Trump also said there would be a six-month phase-out for the Defense Department and other agencies that use Anthropic products. That timeline suggests some systems may be too embedded to switch overnight, especially in defense environments where approvals and testing can be slow by design.
Adding to the shakeup, OpenAI announced a deal to deploy technology in the Defense Department’s classified network. Put together, the moves point to a moment where vendor trust and security posture are becoming just as important as model performance.
A reminder of how fast federal AI policy can shift
This reversal is especially notable because Claude was previously made broadly available across all three branches of the federal government under a General Services Administration OneGov agreement, according to earlier reporting. Even if each agency used it differently, the new stop order shows how quickly access can change when the White House gets involved.
For federal employees, the practical impact is straightforward: projects that relied on Anthropic models may need to be rebuilt. For the AI industry, the message is sharper: winning government work can also mean losing it suddenly.
SQ Magazine Takeaway
I think this is a loud warning to every AI company chasing government contracts. Performance alone is not enough. If Washington decides you are a risk, you can get pushed out fast, even if your tools were already widely used. And honestly, the part that should worry teams inside government is the whiplash. When tools get turned off overnight, the people doing real work are the ones left cleaning up the mess.