OpenAI is sharing more details about its agreement with the Pentagon


By CEO Sam Altman’s own admission, OpenAI’s deal with the Department of Defense was “definitely rushed,” and “the optics don’t look good.”

After negotiations between Anthropic and the Pentagon broke down on Friday, President Donald Trump directed federal agencies to stop using Anthropic’s technology after a six-month transition period, and Defense Secretary Pete Hegseth said he had classified the AI company as a supply chain risk.

Then, OpenAI quickly announced that it had reached its own agreement to deploy models in classified environments. With Anthropic saying it draws red lines around using its technology for fully autonomous weapons or mass domestic surveillance, and Altman saying OpenAI has the same red lines, there were some obvious questions: Has OpenAI been honest about its safeguards? Why was it able to reach an agreement while Anthropic could not?

So while OpenAI executives defended the agreement on social media, the company also published a blog post outlining its approach.

The post cited three areas in which it said OpenAI’s models could not be used: mass domestic surveillance, autonomous weapons systems, and “high-risk automated decisions (e.g. systems like Social Credit).”

The company said that unlike other AI companies that have “lowered or eliminated their safety barriers and relied primarily on usage policies as their primary safeguards in national security deployments,” the OpenAI agreement protects its red lines “with a broader, multi-layered approach.”

“We retain full discretion over our security stack, deploy via the cloud, have authorized OpenAI personnel on hand, and have strong contractual protections,” the blog post said. “This is all in addition to the strong protections found in US law.”


“We don’t know why Anthropic couldn’t reach this deal, and we hope they and other labs will consider it,” the company added.

After the post went live, Techdirt’s Mike Masnick argued that the deal “fully allows for domestic surveillance,” since it says the collection of private data would comply with Executive Order 12333 (along with a number of other laws). Masnick described this as the same framework under which “the NSA conceals its internal surveillance by wiretapping communications *outside the United States* even if they contain information from/about US persons.”

In a LinkedIn post, Katrina Mulligan, head of national security partnerships at OpenAI, said much of the debate over contract language assumes that “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage-policy provision in a single contract with the War Department.”

“That’s not how any of this works,” Mulligan said, adding, “The architecture of the deployment matters more than the language of the contract […] By limiting our deployment to the cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational devices.”

Altman also fielded questions about the deal, including the obvious one: why do it at all?

“We really wanted to de-escalate things, and we thought the deal offered was a good one,” Altman said. “If we are right and this de-escalates things between the Department of Defense and industry, we will look like geniuses, a company that took on a lot of pain to do things that help the industry. If not, we will continue to be described as […] ‘rash and incautious.’”

