Anthropic has accused three major Chinese AI labs of extracting millions of interactions from its Claude model to train competing systems.
Quick Summary – TLDR:
- Anthropic says DeepSeek, Moonshot AI, and MiniMax used more than 24,000 fake accounts to generate over 16 million Claude exchanges.
- The firms allegedly used a technique called distillation to improve their own AI models.
- Anthropic links the activity to concerns around export controls and advanced AI chip access.
- Experts warn the practice could raise national security and competitive risks.
What Happened?
Anthropic claims that DeepSeek, Moonshot AI, and MiniMax conducted large-scale campaigns to extract capabilities from its Claude AI model. The company says the labs generated more than 16 million exchanges using roughly 24,000 fraudulent accounts, violating its terms of service and regional access restrictions.
The activity allegedly relied on distillation, a common AI training method that can also be misused to replicate a rival model’s strengths at a fraction of the cost and time.
"We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models."
— Anthropic (@AnthropicAI) February 23, 2026
Anthropic Details the Alleged Campaigns
San Francisco-based Anthropic said the three labs targeted Claude’s most differentiated capabilities, including agentic reasoning, tool use, and coding. According to the company, each campaign followed a coordinated playbook, using proxy services and large networks of fake accounts to avoid detection.
The scale varied by company:
- DeepSeek generated more than 150,000 exchanges focused on reasoning tasks, reinforcement-learning grading systems, and censorship-safe alternatives to politically sensitive prompts.
- Moonshot AI conducted more than 3.4 million exchanges targeting agentic reasoning, coding, data analysis, computer use agents, and computer vision.
- MiniMax accounted for more than 13 million exchanges centered on agentic coding and tool orchestration.
Anthropic said it detected MiniMax’s campaign while it was still active. When the company launched a new Claude model, MiniMax allegedly redirected nearly half its traffic within 24 hours to capture capabilities from the latest release.
Why Is Distillation Controversial?
Distillation is widely used across the AI industry. Frontier labs often distill their own large models into smaller and more affordable versions. However, Anthropic argues that when competitors use distillation to extract capabilities from another company’s system, it becomes a form of intellectual property exploitation.
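Mechanically, classic distillation trains a small “student” model to match the softened output distribution of a larger “teacher.” The PyTorch sketch below is purely illustrative: the network sizes, temperature, and random inputs are placeholder assumptions, not details from Anthropic’s report.

```python
# Minimal sketch of knowledge distillation: a small "student" network is
# trained to match the softened output distribution of a frozen "teacher".
# All sizes, the temperature, and the data are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution for the student

for step in range(100):
    x = torch.randn(32, 128)  # stand-in for real training inputs

    with torch.no_grad():  # the teacher is frozen; only the student learns
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between softened distributions; scaling by T^2 keeps
    # gradient magnitudes comparable across temperature settings.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the API-based extraction Anthropic describes, an outside lab would not have access to Claude’s internal logits at all; it would instead train on the prompt and response text generated through its accounts, which is why the sheer volume of exchanges matters.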
OpenAI recently sent a memo to lawmakers accusing DeepSeek of similar behavior. DeepSeek previously drew attention after releasing its open-source R1 reasoning model, which approached the performance of leading US models at significantly lower cost. The company is expected to launch DeepSeek V4 soon, a model reportedly strong at coding tasks.
Anthropic says the scale of extraction requires access to advanced AI chips, placing the issue squarely inside the debate over US export controls. The Trump administration recently allowed US firms such as Nvidia to export advanced AI chips like the H200 to China. Critics argue that easing restrictions could strengthen China’s AI capabilities at a sensitive moment in global competition.
“Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” per Anthropic’s blog.
National Security Concerns
Anthropic says the risks extend beyond competition. The company warns that models built through illicit distillation may not retain important safeguards designed to prevent misuse.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch he believes the allegations confirm long-standing suspicions.
Anthropic said it is strengthening detection systems, sharing intelligence with industry partners, and tightening account verification. Still, the company called for coordinated action across AI labs, cloud providers, and policymakers.
SQ Magazine Takeaway
I think this story shows just how intense the global AI race has become. If these allegations are accurate, distillation is no longer just a training shortcut. It is a geopolitical flashpoint. The debate over export controls, chip sales, and AI safety is no longer theoretical. It is unfolding in real time, and companies on both sides are moving fast. The next few months could shape who leads the future of AI.