AP Report: Hegseth warns Anthropic against allowing the military to use the company’s AI technology as it sees fit


WASHINGTON (AP) — Defense Secretary Pete Hegseth gave Anthropic's CEO a Friday deadline to open the company's artificial intelligence technology to unrestricted military use or risk losing its government contract, according to a person familiar with their meeting.

Hegseth met Tuesday with Anthropic CEO Dario Amodei, whose company makes the chatbot Claude and is the last of its peers not to supply its technology to a new U.S. military intranet.


Besides canceling the contract, Pentagon officials warned they could designate Anthropic as a supply chain risk or use the Defense Production Act to give the military more authority to use Anthropic's products even if the company doesn't approve of how they are used, according to the person.

The tone of the meeting was cordial, but Amodei did not budge on the two areas he identified as lines Anthropic would not cross — fully independent military targeting operations and domestic surveillance of American citizens, said the person, who was not authorized to speak publicly about the meeting and spoke on condition of anonymity.

The Defense Department did not immediately comment.

Amodei has repeatedly made clear his ethical concerns about unsupervised government use of artificial intelligence, including the dangers of fully autonomous armed drones and AI-assisted mass surveillance that could track dissent.


A defense official, who was not authorized to comment publicly and spoke on condition of anonymity, confirmed the meeting between Hegseth and Amodei.

It highlights the debate over the role of artificial intelligence in national security and concerns about how the technology will be used in high-risk situations involving lethal force, sensitive information, or government surveillance. It also comes as Hegseth has pledged to root out what he calls a “woke culture” in the armed forces.

“Powerful AI that searches across billions of conversations from millions of people can measure public sentiment, detect pockets of disloyalty that form, and eliminate them before they grow,” Amodei wrote in an article last month.

Anthropic is the only AI company certified for classified military networks

The Pentagon announced last summer that it would award defense contracts to four AI companies: Anthropic, Google, OpenAI, and Elon Musk's xAI. The value of each contract is up to $200 million.

Anthropic was the first AI company to gain approval for classified military networks, working with partners like Palantir. The other three companies, for now, operate only in unclassified environments.


By early this year, Hegseth was highlighting only two of them: xAI and Google.

In a speech in January at Musk’s spaceflight company SpaceX, in South Texas, the defense secretary said he was ignoring any AI models that “won’t let you fight wars.”

Hegseth said his vision for military AI systems means they operate “without ideological constraints that limit lawful military applications,” before adding that “the Pentagon’s AI will not be woke.”

In January, Hegseth said that Musk’s AI chatbot Grok would join the Pentagon’s network, called GenAI.mil. The announcement came days after Grok — embedded in X, the social media network owned by Musk — came under global scrutiny for creating highly sexualized fake photos of people without their consent.

OpenAI announced in early February that it would also join the Army’s secure AI platform, enabling service members to use a customized version of ChatGPT for unclassified tasks.

Anthropic calls itself more safety-conscious

Anthropic has long positioned itself as the most responsible and safety-conscious among leading AI companies, ever since its founders left OpenAI to form the startup in 2021.

The Pentagon’s ultimatum puts those intentions to the test, according to Owen Daniels, associate director for analysis and a fellow at Georgetown University’s Center for Security and Emerging Technology.

“Anthropic’s peers, including Meta, Google and xAI, were willing to comply with the administration’s policy on using models for all legal applications,” Daniels said. “So the company’s bargaining power here is limited, and it risks losing leverage as the administration pushes to embrace AI.”

In the AI frenzy that followed the launch of ChatGPT, Anthropic closely aligned itself with President Joe Biden’s administration, volunteering to subject its AI systems to third-party auditing to protect against national security risks.

Amodei, the CEO, has warned of the potentially catastrophic risks of AI while rejecting the “doomer” label. “We are closer to real risk in 2026 than we were in 2023,” he said in a January article, but those risks should be managed “in a realistic and practical way.”

Anthropic has been at odds with the Trump administration

This wouldn’t be the first time Anthropic’s call for stricter AI safeguards has put it at odds with the Trump administration. Anthropic has publicly criticized Trump’s proposals to ease export controls to allow sales of some Nvidia AI chips to China, even as the company remains a close Nvidia partner.

The Trump administration and Anthropic have also been on opposite sides of the pressure campaign to regulate AI in US states.

David Sacks, Trump’s senior adviser on artificial intelligence, accused Anthropic in October of “running a sophisticated regulatory capture strategy based on fear-mongering.”


Anthropic hired a number of former Biden administration officials shortly after Trump returned to the White House, but it has also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump’s first term, to its board of directors.

The Pentagon-Anthropic standoff is reminiscent of the uproar several years ago when some tech workers objected to their companies’ participation in Project Maven, the Pentagon’s drone surveillance program. Some workers resigned over the project and Google withdrew, but the Pentagon’s reliance on drone surveillance has only increased.

Likewise, Daniels said, “The use of AI in military contexts is already a reality and is not going away.”

Amos Toh, a senior adviser in the Liberty and National Security Program at New York University’s Brennan Center for Justice, said the Pentagon’s accelerating adoption of AI shows the need for more oversight or regulation by Congress, especially if AI is used to surveil Americans.

“The law is not keeping up with how quickly technology is evolving,” Toh wrote in a post on Bluesky. “But that doesn’t mean the Defense Department has a blank check.”

O’Brien reported from Providence, Rhode Island.
