**Anthropic Won’t Budge as Pentagon Escalates Dispute**
Anthropic has until Friday evening to either give the US military unrestricted access to its AI model or face consequences, according to Axios.
Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei at a meeting Tuesday morning that the Pentagon will either declare Anthropic a “supply chain risk” — a designation typically reserved for foreign adversaries — or invoke the Defense Production Act (DPA) to force the company to design a version of the model to suit the military’s needs.
The Defense Production Act gives the president the power to force companies to prioritize or expand production for national defense purposes. It was most recently used during the COVID-19 pandemic to compel companies like General Motors and 3M to produce ventilators and masks, respectively.
Anthropic has long stated that it does not want its technology to be used for mass surveillance of Americans or for fully autonomous weapons — and refuses to concede on those points.
Pentagon officials say the military’s use of the technology should be subject to U.S. law and constitutional restrictions, not the use policies of private contractors.
Invoking the DPA in a dispute over AI usage guardrails would represent a significant expansion of the law’s modern use. It would also fit a broader pattern of aggressive assertions of executive power that has intensified in recent years, according to Dean Ball, a senior fellow at the Foundation for American Innovation and a former senior advisor for artificial intelligence policy in the Trump White House.
“It’s basically going to be the government saying, ‘If you disagree with us politically, we’re going to try to put you out of business,’” Ball said.
The dispute is unfolding against a backdrop of ideological friction, with some in the administration, including AI czar David Sacks, publicly criticizing Anthropic’s safety policies as “woke.”
“Any rational, responsible investor or corporate manager would look at this and think the United States is no longer a stable place to do business,” Ball said. “This attacks the core of what makes America an important center for global trade. We have always had a stable and predictable legal system.”
It’s a high-stakes standoff, and Anthropic may not be the one to blink first. According to Reuters, Anthropic does not plan to ease its usage restrictions.
Anthropic is the only frontier AI lab with classified DoD access, according to several reports. The Department of Defense does not currently have a fallback up and running, although the Pentagon has reached an agreement to use xAI’s Grok in classified systems.
This lack of redundancy may help explain the Pentagon’s aggressive stance, Ball said.
“If Anthropic canceled the contract tomorrow, it would be a serious problem for the Department of Defense,” he told TechCrunch, noting that the agency appears unable to fulfill a national security memorandum, issued late in the Biden administration, directing federal agencies to avoid relying on a single AI system for classified work.
He continued: “The Department of Defense does not have any backups. This is a single-vendor situation. They can’t fix this overnight.”
TechCrunch has reached out to Anthropic and the Department of Defense for comment.
