Trump orders federal agencies to stop using Anthropic's technology amid dispute over the safety of artificial intelligence


WASHINGTON (AP) — The Trump administration on Friday ordered all U.S. agencies to stop using Anthropic’s artificial intelligence technology and imposed other major sanctions, culminating an unusually public fight between the government and the company over AI safeguards.

Defense Secretary Pete Hegseth said he was designating Anthropic as a supply chain risk, a move that could prevent U.S. military vendors from working with the company.

Read more: AP Report: Hegseth warns Anthropic to let the military use the company's AI technology as it sees fit

Hegseth’s comments, delivered in a social media post, came shortly after a deadline the Pentagon set for Anthropic to allow unrestricted military use of its AI technology or face consequences — and about 24 hours after CEO Dario Amodei said his company “cannot in good conscience” accede to the Defense Department’s demands.

President Donald Trump derided the company as left-wing and said Anthropic had made a mistake in trying to strong-arm the Pentagon. Trump wrote on Truth Social that most agencies should immediately stop using Anthropic's AI, but he gave the Pentagon a six-month period to phase out the technology already embedded in military platforms.

The dispute over the defense contract centered on the role of artificial intelligence in national security. Anthropic said it had requested limited assurances from the Pentagon that Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private conversations turned public, the company said in a statement Thursday that the new contract language "framed as a compromise was coupled with language that would allow those safeguards to be ignored at will."

Anthropic, the maker of the chatbot Claude, could afford to lose the contract. But Hegseth's ultimatum this week posed broader risks at the height of the company's meteoric rise from a little-known computer science research lab in San Francisco to one of the world's most valuable startups. Military officials had warned Anthropic earlier in the week that they might designate it a "supply chain risk," a label typically reserved for foreign adversaries, which could derail the company's important partnerships with other companies.

Trump also said Anthropic could face "significant civil and criminal consequences" if it does not cooperate during the phase-out period. Anthropic did not immediately respond to a request for comment on the new developments.

The president's decision came hours after senior Trump appointees from the Pentagon and State Department took to social media to criticize Anthropic and its reluctance to acquiesce to the administration's demands.

Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said the sanctions on Anthropic “combined with inflammatory rhetoric attacking that company, raise serious concerns about whether national security decisions are driven by careful analysis or political considerations.”

The dispute caught AI developers in Silicon Valley by surprise, where a growing number of workers at Anthropic's biggest rivals, OpenAI and Google, have expressed support for Amodei's position in open letters and other forums.

The move is likely to benefit Elon Musk’s competitive chatbot Grok, which the Pentagon plans to give access to secret military networks, and could serve as a warning to two other competitors, Google and OpenAI, which also have contracts to supply the military with their own AI tools.

Musk sided with the Trump administration on Friday in a post on his social media platform.

But one of Amodei's fiercest rivals, OpenAI CEO Sam Altman, sided with Anthropic and questioned the Pentagon's "threatening" move in an interview with CNBC, suggesting that OpenAI and much of the AI field share the same red lines. Amodei worked at OpenAI before he and other OpenAI leaders left to form Anthropic in 2021.

“For all the differences between me and Anthropic, for the most part I trust them as a company, and I think they really care about safety,” Altman told CNBC.

"Painting a bullseye on Anthropic makes hot headlines, but everyone loses in the end," retired Air Force Gen. Jack Shanahan wrote on social media.

Shanahan said Claude is already widely used across government, including in classified settings, and that the red lines set by Anthropic are "reasonable." He said the large language models that power chatbots like Claude "are not ready for prime-time use in national security settings," especially in fully autonomous weapons.

“They’re not trying to play nice here,” he wrote Thursday on LinkedIn.

O’Brien reported from Providence, Rhode Island.
