Anthropic Accuses Three Chinese AI Labs of Distillation Attacks on Claude

Key Points

  • Anthropic claims DeepSeek, Moonshot and MiniMax conducted large‑scale “distillation attacks” on Claude.
  • The campaigns involved roughly 24,000 fraudulent accounts and over 16 million exchanges.
  • Anthropic linked the activity to the three firms using IP address data, request metadata and infrastructure clues.
  • The company plans to upgrade its systems to make such attacks harder to execute and easier to detect.
  • OpenAI previously reported similar distillation concerns and banned suspected accounts.
  • Anthropic is simultaneously dealing with a lawsuit from music publishers over training data.

Anthropic Raises Alarm Over Distillation Attacks

Anthropic, the creator of the Claude chatbot, has publicly accused three Chinese artificial‑intelligence companies—DeepSeek, Moonshot and MiniMax—of running what it describes as “industrial‑scale campaigns” to illicitly extract Claude’s capabilities. The company characterizes these activities as “distillation attacks,” in which a less capable model is trained on the responses of a more powerful one, inheriting its capabilities without the underlying training data or compute.
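To make the concept concrete, the core of distillation is simply collecting a teacher model's responses and using them as supervised training data for a student model. The sketch below is purely illustrative: the function names and the stand-in `teacher` are hypothetical, and no real API or vendor behavior is implied.

```python
# Illustrative sketch of model distillation (all names are hypothetical).
# A "student" model is trained on a "teacher" model's outputs instead of
# on original labeled data.

def teacher(prompt: str) -> str:
    """Stand-in for a more capable model's response (e.g. an API call)."""
    return f"answer to: {prompt}"

def build_distillation_dataset(prompts):
    """Collect (prompt, teacher_response) pairs as training examples.

    In the alleged attacks, fraudulent accounts would generate millions of
    such exchanges, and the resulting pairs would be used to fine-tune a
    smaller model so it mimics the teacher's behavior.
    """
    return [(p, teacher(p)) for p in prompts]

# Each collected pair becomes one supervised training example for the student.
dataset = build_distillation_dataset(["What is 2+2?", "Name a prime number."])
```

At the scale Anthropic describes (over 16 million exchanges), a dataset built this way could meaningfully shortcut a model's development, which is why providers monitor for high-volume, automated query patterns.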

According to Anthropic’s statement, the three firms used approximately 24,000 fraudulent accounts to generate more than 16 million exchanges with Claude. By leveraging Claude’s outputs, the companies could shortcut the development of their own AI models, potentially bypassing safeguards built into Claude.

Anthropic said it linked each campaign to the specific firms with “high confidence” by analyzing IP address correlations, request metadata and other infrastructure indicators. The company also consulted other industry players who have observed similar behavior.

The allegation follows a similar claim OpenAI made last year, when it reported rival firms distilling its models and responded by banning suspected accounts. Anthropic indicated it will upgrade its systems to make distillation attacks more difficult to execute and easier to identify.

While Anthropic points to these alleged abuses, it also faces a separate lawsuit from music publishers who allege the company used illegal copies of songs to train Claude. Anthropic's statement does not address the lawsuit.

Source: engadget.com