OpenAI has lost access to Anthropic’s Claude API after the rival AI company accused it of violating its terms of service while preparing its next-generation model, GPT-5.

Quick Summary (TLDR):

  • Anthropic revoked OpenAI’s access to the Claude API, citing a terms of service violation.
  • OpenAI staff allegedly used Claude to benchmark and test GPT-5 performance.
  • Benchmarking is industry standard, OpenAI argued, but Anthropic maintained its rules were breached.
  • This move comes amid growing AI rivalry and past API restrictions targeting competitors.

What Happened?

Anthropic confirmed it cut off OpenAI’s access to Claude earlier this week after discovering that OpenAI’s technical team had been using Claude’s coding tools to support GPT-5 development. According to Anthropic, this violated its commercial terms, which forbid using Claude to build competing AI products, train rival models, or reverse engineer its services.

OpenAI Tested Claude in Internal Development Tools

Multiple sources told WIRED that OpenAI was integrating Claude into its internal development systems through special API access, rather than the standard chat interface. This allowed the company to evaluate Claude’s performance in coding, creative writing, and safety scenarios, including sensitive categories like CSAM, self-harm, and defamation.
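For context, programmatic access of this kind typically goes through Anthropic’s Messages API rather than the claude.ai chat interface. The snippet below is a minimal sketch using the official anthropic Python SDK; the model name and prompt are illustrative assumptions and are not details from the reporting on OpenAI’s internal tooling.

```python
# Minimal sketch of programmatic Claude access via Anthropic's Messages API.
# Assumes the ANTHROPIC_API_KEY environment variable is set; the model id and
# prompt below are placeholders, not anything described in the article.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model alias; substitute whichever model your account exposes
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a singly linked list."}
    ],
)

# The response is a list of content blocks; print the text of the first one.
print(response.content[0].text)
```

An internal evaluation harness would wrap calls like this in a loop over test prompts and score the outputs, but any such detail beyond the basic API call is speculation on our part.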

The data reportedly helped OpenAI benchmark its own models and adjust GPT-5 accordingly. OpenAI acknowledged the practice but described it as industry standard.

“We respect Anthropic’s decision to cut off our API access, but it’s disappointing considering our API remains available to them,” said Hannah Wong, OpenAI’s chief communications officer.

Anthropic Defends the Ban but May Allow Benchmarking Access

Anthropic’s spokesperson Christopher Nulty emphasized that while benchmarking is allowed in principle, OpenAI’s use went beyond standard testing and violated the company’s commercial protections. He also suggested that OpenAI may still receive limited API access for safety and benchmarking purposes, though Anthropic has not clarified how this will work under the new restrictions.

History of API Restrictions in the Tech Industry

This is not the first time Anthropic has revoked access to its AI models. In June, it cut off Windsurf, a coding startup rumored to be a future OpenAI acquisition. Tech firms have long used API restrictions to protect their ecosystems: Facebook previously blocked Vine’s access to its APIs, and last month Salesforce restricted competitors’ access to Slack data.

The move reflects the growing tension in the AI race, where top companies compete fiercely over talent, model performance, and market dominance. With GPT-5 on the horizon, protecting proprietary tools like Claude has become a key battleground.

SQ Magazine Takeaway

I think this clash highlights just how cutthroat the AI industry has become. OpenAI’s “industry standard” defense may well be accurate, but when your rival believes you are using their tech to leapfrog them, things are bound to get messy. This feels like the opening shots of a bigger AI cold war, where companies guard their APIs like gold. For developers, the bigger takeaway is that access to popular AI tools can vanish overnight if corporate rivalries heat up.

Barry Elad


