The Pentagon is moving to classify Anthropic as a supply chain risk


In a post on Truth Social, President Trump directed federal agencies to stop using all Anthropic products following the company’s public dispute with the Department of Defense. The president allowed a six-month phase-out period for departments already using the products, but stressed that Anthropic was no longer welcome as a federal contractor.

“We don’t need them, we don’t want them, and we will never deal with them again,” the president wrote in the post.

Notably, the president’s post did not mention any plans to classify Anthropic as a supply chain risk, as previously reported. However, a subsequent post from Defense Secretary Pete Hegseth made good on that threat.

“In conjunction with the President’s directive to the federal government to cease all use of Anthropic technology, I am directing the War Department to designate Anthropic as a national security supply chain risk,” Secretary Hegseth wrote. “As of now, no contractor, supplier, or partner doing business with the U.S. military may conduct any business activity with Anthropic.”

The Pentagon dispute centered on Anthropic’s refusal to allow its AI models to be used to power mass domestic surveillance or fully autonomous weapons, which Secretary Hegseth found unduly restrictive.

CEO Dario Amodei reiterated his position in a public post on Thursday, declining to concede either point.

“We strongly prefer to continue serving the Department and our warfighters – with the two required safeguards in place,” Amodei wrote at the time. “If the Department chooses to retire Anthropic, we will enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”


OpenAI has come out in support of Anthropic’s decision. According to the BBC, CEO Sam Altman sent a memo to employees on Thursday saying he shares the same “red lines” and that any OpenAI defense contracts would likewise reject “unlawful or inappropriate uses of our deployments, such as domestic surveillance and autonomous offensive weapons.”

Ilya Sutskever, the OpenAI co-founder who publicly broke with Altman in November 2023 and has since co-founded his own AI company, also weighed in on Friday, writing on X: “It is very good that Anthropic did not back down, and it is important that OpenAI took a similar stance.

“In the future, there will be more challenging situations of this kind, and it will be crucial for the leaders involved to rise to the occasion and for fierce competitors to put aside their differences. It’s good to see that happen today.”

Anthropic, OpenAI, and Google were awarded contracts by the US Department of Defense last July. While some Google employees have expressed support for Anthropic, Google and its parent company Alphabet have yet to comment.
