Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports

Anthropic accuses three Chinese AI companies of creating more than 24,000 fake accounts using its Claude AI model to improve their own models.

The labs — DeepSeek, Moonshot AI, and MiniMax — are alleged to have generated more than 16 million exchanges with Claude through those accounts using a technique called “distillation.” Anthropic said the labs “targeted Claude’s most differentiated abilities: active thinking, tool use, and programming.”

The accusations come amid debate over how stringent export controls on advanced AI chips should be, a policy aimed at curbing AI development in China.

Distillation is a common training technique that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy another lab’s homework. OpenAI sent a memo to House lawmakers earlier this month accusing DeepSeek of using distillation to mimic its products.
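In its classic form, distillation trains a small “student” model to match the softened output distribution of a larger “teacher” model. The sketch below is a toy illustration of that loss, not a reconstruction of what any of these labs actually ran (the attacks described here harvest API responses rather than raw logits); all function names and numbers are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, optionally softened by a temperature > 1."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions.

    Training the student to minimize this pushes its outputs toward
    the teacher's, which is how capability is 'copied' via distillation.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that already matches the teacher incurs zero loss;
# one that disagrees incurs a positive loss it can train to reduce.
teacher = [2.0, 1.0, 0.1]
assert abs(distillation_loss(teacher, teacher)) < 1e-9
assert distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0.1
```

In practice, API-based distillation of a closed model substitutes the provider’s returned text (or token probabilities, where exposed) for the teacher logits above, which is why providers can detect it as anomalous query traffic.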

DeepSeek first made waves a year ago when it released R1, an open-source reasoning model that roughly matched U.S. frontier labs’ performance at a fraction of the cost. DeepSeek is expected to soon release its latest model, DeepSeek V4, which can reportedly outperform Anthropic’s Claude and OpenAI’s ChatGPT at coding.

Each attack varied in scope. Anthropic tracked more than 150,000 DeepSeek exchanges that appear aimed at improving the model’s underlying reasoning and alignment, specifically around producing censored, safe responses to policy-sensitive queries.

Moonshot AI generated over 3.4 million exchanges targeting logical reasoning, tool use, coding and data analysis, computer-agent development, and computer vision. Last month, the company released its new open-source Kimi K2.5 model and coding agent.

MiniMax’s 13 million exchanges targeted agentic coding and tool use and orchestration. Anthropic said it was able to observe MiniMax in action as the company redirected nearly half of its traffic to the latest Claude model when it launched, in order to pull in its new capabilities.

Anthropic says it will continue to invest in defenses that make distillation attacks more difficult to carry out and easier to identify, but is calling for a “coordinated response across the AI industry, cloud providers, and policymakers.”

The distillation attacks come at a time when US chip exports to China are still hotly debated. Last month, the Trump administration officially allowed US companies like Nvidia to export advanced AI chips (such as the H200) to China. Critics argue that this easing of export controls increases China’s AI computing capacity at a critical moment in the global race for AI dominance.

Anthropic says the scale of extraction performed by DeepSeek, MiniMax and Moonshot “requires access to advanced chips.”

“Distillation attacks thus reinforce the rationale for export controls: restricting access to chips limits live model training and the volume of illicit distillation,” according to the Anthropic blog.

Dmitry Alperovitch, head of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch he was not surprised to see these attacks.

“It has been clear for some time that part of the reason for the rapid progress of Chinese AI models has been theft by distillation of US frontier models,” Alperovitch said. “Now we know this as fact. This should give us even more compelling reasons to refuse to sell any AI chips to any of these companies, which would benefit them even more.”

Anthropic also said that distillation not only threatens to undermine US AI dominance, but may also create national security risks.

“Anthropic and other U.S. companies are building systems that prevent state and non-state actors from using artificial intelligence, for example, to develop biological weapons or carry out malicious cyber activities,” Anthropic’s blog post said. “Models created through illicit distillation are unlikely to retain those safeguards, meaning dangerous capabilities can proliferate with many protections completely removed.”

Anthropic noted that authoritarian governments are deploying frontier AI for things like “offensive cyber operations, disinformation campaigns, and mass surveillance,” a risk that doubles if these models are open source.

TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.
