Anthropic responds after US military calls it a ‘supply chain risk’

From WIRED · Business / Artificial Intelligence

US Defense Secretary Pete Hegseth directed the Pentagon to classify Anthropic as a “supply chain risk” on Friday, sending shockwaves through Silicon Valley and leaving many companies scrambling to understand whether they can continue using one of the industry’s most popular AI models.

“As of now, no contractor, supplier, or partner that does business with the U.S. military may conduct any business activity with Anthropic,” Hegseth wrote in a social media post.

This designation comes after weeks of tense negotiations between the Pentagon and Anthropic over how the US military will use the startup’s artificial intelligence models. In a blog post this week, Anthropic said its contracts with the Pentagon should not allow its technology to be used for mass domestic surveillance of Americans or fully autonomous weapons. The Pentagon asked Anthropic to agree to allow the US military to apply its AI to “all legitimate uses” without specific exceptions.

Supply chain risk classification allows the Pentagon to restrict or exclude certain vendors from defense contracts if they are deemed to pose security vulnerabilities, such as risks related to foreign ownership, control or influence. Its purpose is to protect sensitive military systems and data from potential compromise.

Anthropic responded in another blog post on Friday evening, saying it would “challenge any supply chain risk classification in court,” and that such a classification “would set a dangerous precedent for any U.S. company negotiating with the government.”

Anthropic added that it has not received any direct communication from the Department of Defense or the White House regarding negotiations over the use of its AI models.

“Secretary Hegseth implied that this designation would prevent anyone doing business with the military from doing business with Anthropic. The Secretary does not have the legal authority to support this statement,” the company wrote.

The Pentagon declined to comment.

“This is the most shocking, hurtful and overreaching thing I have ever seen done by the United States government,” says Dean Ball, a senior fellow at the Foundation for American Innovation and a former senior advisor for AI policy at the White House. “We just imposed sanctions on an American company. If you’re an American, you should think about whether or not you should live here 10 years from now.”

People across Silicon Valley took to social media to express similar shock and dismay. “The people running this are reckless and vindictive,” said Paul Graham, founder of startup accelerator Y Combinator. “I think that’s enough to explain their behavior.”

“Bringing one of our leading AI companies to its knees is just about the worst own goal we could make,” Boaz Barak, a researcher at OpenAI, said in a post. “I very much hope that calmer minds will prevail and that this announcement will be reversed.”

Meanwhile, OpenAI CEO Sam Altman announced Friday night that the company has reached an agreement with the Department of Defense to deploy its AI models in classified environments, apparently with carve-outs. “Two of our most important safety principles are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems,” Altman said. “The Department of Defense agrees to these principles, reflects them in law and policy, and has written them into our agreement.”

Confused customers

In its blog post on Friday, Anthropic said the supply chain risk classification authority, 10 USC 3252, applies only to the DoD’s direct contracts with suppliers, and does not cover how contractors use the company’s Claude AI software to serve other customers.

Three federal contracting experts say it’s impossible at this point to determine which Anthropic customers, if any, would now have to cut ties with the company. Hegseth’s announcement “is not grounded in any law that we can identify at the moment,” says Alex Major, a partner at the law firm McCarter & English, which works with technology companies.


Posted: February 28, 2026

