Anthropic Alleges Large-Scale Claude Distillation by Chinese AI Labs
Anthropic says three Chinese AI labs generated over 16 million Claude exchanges using 24,000 fraudulent accounts.
Anthropic has accused three Chinese artificial intelligence labs of conducting coordinated distillation campaigns to extract capabilities from its Claude models.
The company claims the activity violated its terms of service and regional access rules. The allegations surface as debates intensify over semiconductor export controls and the competitive balance in global AI development.
Industrial-Scale Distillation Campaigns
In an official blog post, Anthropic said it identified large-scale campaigns by DeepSeek, Moonshot AI, and MiniMax. According to the company, the labs generated more than 16 million exchanges with Claude through approximately 24,000 fraudulent accounts.
Anthropic said the groups used distillation, a training technique in which a smaller model learns from the outputs of a stronger one. While the method is commonly used internally by AI companies, Anthropic alleged that the competitors applied it to replicate Claude's agentic reasoning, tool use, and coding strengths.
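The core mechanic of distillation can be sketched in a few lines. The toy below is purely illustrative (a linear "teacher" and "student" in NumPy, with hypothetical shapes and hyperparameters), not any lab's actual pipeline: the student never sees ground-truth labels, only the teacher's softened output distributions, which it learns to imitate.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives softer targets."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    """Mean KL divergence KL(p || q) across a batch of distributions."""
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))

# Hypothetical "teacher": a fixed linear map over 8 input features.
W_teacher = rng.normal(size=(8, 4))

# "Student": starts from scratch and sees no real labels -- only the
# teacher's outputs, which is the essence of distillation.
W_student = np.zeros((8, 4))

T = 2.0                                  # distillation temperature
X = rng.normal(size=(512, 8))            # queries sent to the teacher
P_teacher = softmax(X @ W_teacher, T)    # soft targets harvested from outputs

kl_before = kl(P_teacher, softmax(X @ W_student, T))
lr = 0.5
for _ in range(300):
    P_student = softmax(X @ W_student, T)
    # Gradient of cross-entropy H(P_teacher, P_student) w.r.t. W_student
    # (the 1/T temperature factor is absorbed into the learning rate).
    W_student -= lr * (X.T @ (P_student - P_teacher)) / len(X)

kl_after = kl(P_teacher, softmax(X @ W_student, T))
print(f"KL(teacher || student): {kl_before:.4f} -> {kl_after:.4f}")
```

As the loop runs, the student's output distribution tracks the teacher's ever more closely, which is why access to a stronger model's outputs alone can transfer capability.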
The company attributed the campaigns through IP correlations, metadata, and infrastructure indicators. It also warned that illicitly distilled systems may not retain built-in safeguards designed to prevent misuse in areas such as cyber operations or biological threats.
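One of the attribution signals mentioned above, IP correlation across accounts, can be illustrated with a simple clustering sketch. This is a hypothetical approach, not Anthropic's disclosed method: accounts that reuse the same IP addresses are merged into connected components with a union-find structure, and unusually large clusters are flagged as likely coordinated.

```python
from collections import defaultdict

def find(parent, a):
    """Find the cluster root of account a, with path compression."""
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def union(parent, a, b):
    """Merge the clusters containing accounts a and b."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def correlate_accounts(logins, min_cluster=3):
    """logins: list of (account, ip) pairs. Returns clusters of accounts
    linked by shared IPs, keeping only clusters of min_cluster or more."""
    accounts = {acct for acct, _ in logins}
    parent = {a: a for a in accounts}
    by_ip = defaultdict(list)
    for acct, ip in logins:
        by_ip[ip].append(acct)
    # Accounts seen on the same IP are linked into one cluster.
    for accts in by_ip.values():
        for other in accts[1:]:
            union(parent, accts[0], other)
    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(parent, a)].add(a)
    return [c for c in clusters.values() if len(c) >= min_cluster]

# Example: three accounts share infrastructure; one is unrelated noise.
logins = [("u1", "10.0.0.1"), ("u2", "10.0.0.1"),
          ("u2", "10.0.0.2"), ("u3", "10.0.0.2"),
          ("u4", "192.168.5.9")]
print(correlate_accounts(logins))  # one cluster linking u1, u2, and u3
```

Real attribution would combine many such indicators (metadata, request patterns, infrastructure fingerprints), but shared-infrastructure clustering conveys the basic idea of linking thousands of nominally separate accounts.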
Highlighting Export Control and Policy Debate
Reporting by TechCrunch placed the accusations within the broader US-China AI rivalry. The outlet relayed Anthropic's claim that roughly 24,000 fake accounts were used to conduct millions of Claude interactions targeting advanced capabilities.
The report noted that distillation has become central to policy discussions about chip exports. US authorities recently allowed companies such as Nvidia to resume exporting advanced AI chips, including the H200, to China. Critics argue that expanded chip access could increase computing capacity for large-scale model training and distillation.
TechCrunch also cited cybersecurity expert Dmitri Alperovitch, who said the allegations reinforce concerns about intellectual property extraction in frontier AI development.
Why This Allegation Matters
The dispute highlights a structural vulnerability in frontier AI development: if a rival can replicate a model's capabilities through its outputs alone, a company's lead can erode even under strict export laws. Anthropic argues that large-scale distillation depends on access to high-performance chips, linking the issue directly to semiconductor policy.
The allegations also raise safety considerations. Anthropic says its systems include safeguards against misuse by state and non-state actors. If models are rebuilt through output extraction, those protections may not transfer intact. As generative AI models grow more capable, enforcement of access controls and monitoring of coordinated account activity may become a defining issue in global AI governance.
A Broader Test for AI Governance
Anthropic says it is strengthening detection systems, verification processes, and intelligence sharing with industry partners. The company emphasized that coordinated action across AI labs, cloud providers, and policymakers will be required to address large-scale distillation risks.
The case adds new complexity to debates over export controls, model security, and the protection of advanced AI capabilities in an increasingly competitive global landscape.