Anthropic filed two sworn statements in a federal court in California late Friday afternoon, responding to the Pentagon’s assertion that the AI company poses an “unacceptable risk to national security” and arguing that the government’s case rests on technical misunderstandings and claims that were not actually raised during the months of negotiations leading up to the dispute.
The declarations were filed alongside a summary of Anthropic's response in its lawsuit against the Department of Defense, and come ahead of a hearing next Tuesday, March 24, before Judge Rita Lin in San Francisco.
The dispute dates back to late February, when President Trump and Defense Secretary Pete Hegseth publicly announced they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.
The two people who submitted the declarations are Sarah Heck, Head of Policy at Anthropic, and Thiago Ramasamy, Head of Public Sector at Anthropic.
Heck is a former National Security Council official who worked in the White House during the Obama administration before moving to Stripe and then Anthropic, where she runs the company's government relations and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and Defense Undersecretary Emil Michael.
In her declaration, Heck identifies what she calls a central falsehood in the government's filings: that Anthropic demanded some kind of veto over military operations. She says this claim is simply not true. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee mention that the company wanted this type of role," she wrote.
She also says the Pentagon's concerns about the technology being disabled or altered in the middle of a military operation were never raised during negotiations. Instead, she says, they appeared for the first time in the government's court filings, giving Anthropic no opportunity to respond.
Another detail in Heck's declaration that is sure to draw attention: on March 4, the day after the Pentagon formally finalized its supply chain risk designation against Anthropic, Undersecretary Michael emailed Amodei to say the two sides were "very close" on the two issues the government now cites as evidence that Anthropic poses a national security threat, namely its positions on autonomous weapons and on mass surveillance of Americans.
The email, which Heck attached as an exhibit to her declaration, is worth reading alongside what Michael said publicly in the days that followed. On March 5, Amodei published a statement saying the company had had "productive conversations" with the Pentagon. The next day, Michael posted on X that "there are no active negotiations between the War Department and Anthropic." A week later, he told CNBC there was "no chance" of the talks resuming.
Heck's point seems to be: if Anthropic's position on these two issues is what makes the company a national security threat, why did a senior Pentagon official say the two sides were very close on exactly those issues after the designation was finalized? (She stops short of saying the government used the designation as a bargaining chip, but the timeline she lays out leaves the question open.)
Ramasamy brings a different kind of experience to the issue. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government clients, including classified environments. At Anthropic, he is credited with building the team that brought Claude models to national security and defense settings, including a $200 million contract with the Pentagon announced last summer.
His declaration takes on the government's claim that Anthropic could theoretically interfere in military operations by disabling the technology or changing how it operates, which Ramasamy says is not technically possible. By his account, once Claude is deployed inside an air-gapped system secured by the government and operated by an outside contractor, Anthropic has no access to it: there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. He calls any kind of "operational veto" a fantasy, explaining that changing the model would require explicit approval from the Pentagon and procedures to implement it.
He says the company cannot even see what government users type into the system, let alone extract that data.
Ramasamy also questions the government's claim that Anthropic's employment of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance checks, the same background check process required to access classified information, adding in his statement that, "to my knowledge," Anthropic is the only AI company whose cleared employees have built AI models designed to work in classified environments.
Anthropic’s lawsuit says the supply chain risk designation — the first ever applied to a U.S. company — amounts to government retaliation for the company’s publicly stated views on AI safety, in violation of the First Amendment.
The government, in a 40-page filing earlier this week, rejected that framing entirely, arguing that Anthropic's refusal to allow all legitimate military uses of its technology was a business decision, not protected speech, and that the designation was a direct response to national security concerns rather than punishment for the company's views.
