🚀 Discover this insightful post from TechCrunch 📖
📂 **Category**: AI, Government & Policy, Anthropic, Dario Amodei, Donald Trump, OpenAI, Pete Hegseth, Sam Altman
💡 **What You’ll Learn**:
Sam Altman, CEO of OpenAI, announced late Friday that his company has reached an agreement that will allow the Department of Defense to use its AI models in the department’s classified network.
The announcement comes on the heels of a high-profile standoff between the department – also known under the Trump administration as the War Department – and OpenAI's rival Anthropic. The Pentagon has pushed AI companies, including Anthropic, to allow their models to be used "for all lawful purposes," while Anthropic has sought to draw a red line around mass domestic surveillance and fully autonomous weapons.
In a lengthy statement issued Thursday, Anthropic CEO Dario Amodei said the company "has never raised any objections to specific military operations or attempted to limit the use of our technology in a tailored way," but added that "in a narrow set of cases, we think AI is capable of undermining democratic values, rather than defending them."
More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic’s position.
After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized “leftist jobs at Anthropic” in a social media post that also directed federal agencies to stop using the company’s products after a six-month phase-out period.
In a separate post, Defense Secretary Pete Hegseth claimed that Anthropic was trying to "seize veto power over operational decisions of the United States Army." Hegseth also said he was designating Anthropic a supply chain risk: "As of now, no contractor, supplier, or partner that does business with the U.S. military may conduct any business activity with Anthropic."
On Friday, Anthropic said it had “yet to receive direct communication from the War Department or the White House regarding the status of our negotiations,” but insisted it would “challenge any supply chain risk designation in court.”
Surprisingly, Altman claimed in a post on X that OpenAI’s new defense contract includes safeguards that address the same issues that have become a flashpoint for Anthropic.
"Two of our most important safety principles are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems," Altman said. "The Department of War has agreed to these principles, which are reflected in law and policy and enshrined in our agreement."
OpenAI will "build in technical safeguards to ensure that our models behave as they should, which is what the Department of War also wanted," Altman said, and will deploy engineers with the Pentagon "to help with our models and to ensure their integrity."
"We are asking the Department of War to offer these same terms to all AI companies, which we believe everyone should be willing to accept," Altman added. "We have expressed our strong desire to see this matter settled outside of legal and governmental action and resolved through reasonable agreements."
Fortune’s Sharon Goldman reports that Altman told OpenAI employees in an all-hands meeting that the government would allow the company to build its own “security stack” to prevent abuse, and that “if a model refuses to do a task, the government won’t force OpenAI to make it do that task.”
Altman’s post came shortly before news emerged that the US and Israeli governments had begun bombing Iran, with Trump calling for the overthrow of the Iranian government.
🕒 **Posted on**: February 28, 2026
