Anthropic has accused several Chinese artificial-intelligence developers, including DeepSeek, Moonshot, and MiniMax, of using large-scale distillation to improve their models by drawing on the capabilities of its Claude AI.
According to Anthropic, 24,000 fake accounts were involved, through which 16 million requests were made. Distillation is a machine-learning technique in which a less powerful model is trained on the outputs of a more powerful one.
Although the technique itself is legal, Anthropic argues that its use by the Chinese companies violates American export restrictions and its licensing terms.
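To make the technique concrete, here is a minimal toy sketch of distillation (all models and numbers below are invented for illustration; none of this is Anthropic's or the accused labs' actual code): a small "student" model is trained to reproduce the soft outputs of a larger "teacher".

```python
# Toy distillation sketch: train a tiny student on a teacher's soft outputs.
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def teacher(x):
    # Stand-in for the powerful model: returns soft class probabilities.
    return softmax(np.stack([x, -x], axis=1))

# Query the teacher and record its outputs (the "requests" in the article).
X = rng.uniform(-3.0, 3.0, size=1000)
soft_labels = teacher(X)

# Student: a tiny two-class logistic model trained on those soft labels.
w, b = rng.normal(size=2), np.zeros(2)
for _ in range(500):
    probs = softmax(np.outer(X, w) + b)
    grad = (probs - soft_labels) / len(X)   # cross-entropy gradient
    w -= 0.5 * (X @ grad)
    b -= 0.5 * grad.sum(axis=0)

# After training, the student mimics the teacher's decisions on this range.
agreement = (softmax(np.outer(X, w) + b).argmax(axis=1)
             == soft_labels.argmax(axis=1)).mean()
print(f"student/teacher agreement: {agreement:.2f}")
```

The point of the sketch is only that the student never sees ground-truth data, just the teacher's responses, which is why access to a model's API is enough to attempt it.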
Representatives of Anthropic emphasized that “foreign labs that illegally conduct distillation of American models may circumvent protective measures by transferring model capabilities into their military and intelligence systems.” Other American companies have made similar accusations before: OpenAI, for example, accused DeepSeek of using comparable methods, but Anthropic has provided more detailed data.
Methods of Operation
According to Anthropic, the companies employed networks of thousands of fake accounts, referred to as “hydra clusters,” to spread traffic across APIs and cloud services. The requests were high-frequency and narrowly focused on specific capabilities, a pattern more characteristic of model training than of ordinary use. DeepSeek, for example, made over 150,000 requests concentrated on logical reasoning and “safe” rewrites of politically sensitive queries.
Moonshot, the developer of the Kimi model, sent over 3.4 million requests focused on agentic reasoning, programming, and computer vision. MiniMax led in request volume, with more than 13 million requests aimed at agentic programming. After the release of a new version of Claude, the company redirected almost half of its traffic within 24 hours to “capture” the new capabilities.
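The traffic signature described above, very high volume combined with a narrow topical focus, can be sketched as a simple heuristic. The thresholds, function names, and topic labels below are invented for illustration; Anthropic has not published its actual detection logic.

```python
# Illustrative heuristic: flag accounts whose traffic is both high-volume
# and narrowly specialized, the pattern the article attributes to training.
from collections import Counter
from math import log2

def topic_entropy(topics):
    # Shannon entropy of an account's request topics; low entropy = narrow focus.
    counts = Counter(topics)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def is_suspicious(requests_per_day, topics, rate_limit=5000, entropy_limit=1.0):
    # Both thresholds are made-up placeholders, not real limits.
    return requests_per_day > rate_limit and topic_entropy(topics) < entropy_limit

# An ordinary user: modest volume, varied topics.
print(is_suspicious(40, ["email", "travel", "code", "recipes"]))    # False
# A training-style account: huge volume, almost all "reasoning" queries.
print(is_suspicious(150_000, ["reasoning"] * 98 + ["other"] * 2))   # True
```

A real system would look at many more signals (timing, prompt structure, account linkage), but volume and specialization are the two the article singles out.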
Response Measures
Anthropic said it intends to strengthen its defenses to make such attacks harder to carry out and easier to detect. The company is deploying classifiers and analytical systems to spot patterns in API traffic, sharing technical indicators with other AI labs, and tightening account verification.
It is also developing product- and model-level mechanisms to reduce the usefulness of its outputs for illicit distillation, without degrading the experience for legitimate users.