AI Safety Meets the War Machine

Anthropic last year became the first major AI company to be cleared by the US government for covert use, including military applications, and the news received little attention. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI company objects to participating in some deadly operations. The so-called War Department has classified Anthropic as a “supply chain risk,” a scarlet letter typically reserved for companies that do business with countries under scrutiny by federal agencies, such as China. The designation means the Pentagon will not do business with companies that use Anthropic’s AI in their defense work.

In a statement to WIRED, the Pentagon’s chief spokesman, Sean Parnell, confirmed that Anthropic was in the hot seat. “Our nation demands that our partners be ready to help our warfighters win any battle,” he said. “At the end of the day, this is about our forces and the safety of the American people.” That’s a message to other companies, too: OpenAI, xAI, and Google, which currently have Department of Defense contracts for unclassified work, are jumping through the hoops required to get high-level clearances of their own.

There’s a lot to unpack here. For one thing, there’s a question of whether Anthropic is being punished for complaining that its AI model, Claude, was used as part of the raid to oust Venezuelan President Nicolás Maduro (this has been reported; the company denies it). There’s also the fact that Anthropic publicly supports regulation of AI, an unusual position in the industry and one at odds with the administration’s policies. But there is a bigger and more troubling question: Will government demands for military use make AI itself less safe?

Researchers and executives alike believe that artificial intelligence is the most powerful technology ever invented. Almost all of the current AI companies were founded on the premise that it is possible to achieve artificial general intelligence, or even superintelligence, in a way that prevents harm at scale. Elon Musk, founder of xAI, has long been a proponent of reining in AI; he co-founded OpenAI because he feared the technology was too dangerous to be left in the hands of profit-seeking companies.

Anthropic has carved out a position as the most safety-conscious of the major AI companies. Its mission is to integrate guardrails so deeply into its models that bad actors cannot exploit AI’s darkest potential. Isaac Asimov said it first and best in his laws of robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Even when AI becomes smarter than any human on Earth, a possibility in which AI leaders fervently believe, those guardrails must hold.

So it seems paradoxical that leading AI labs are striving to get their products into cutting-edge military and intelligence operations. As the first major lab under a classified contract, Anthropic offers the government “a custom set of Claude Gov models designed exclusively for US national security customers.” Anthropic says it did so without violating its own safety standards, which include a ban on using Claude to produce or design weapons. CEO Dario Amodei has specifically said he doesn’t want Claude involved in autonomous weapons or government surveillance. But that stance may not fly with the current administration. The Defense Department’s CTO, Emil Michael (a former Uber executive), told reporters this week that the government will not tolerate an AI company limiting how the military uses AI in its weapons. “If there’s a swarm of drones coming at a military base, what are your options for shooting them down? If human reaction time isn’t fast enough… how are you going to do it?” he asked rhetorically. So much for the First Law of Robotics.

There is a good argument that effective national security requires the best technology from the most innovative companies. While some technology companies backed away from working with the Pentagon even a few years ago, in 2026 they are generally flag-waving would-be military contractors. I have yet to hear an AI executive talk about their models being tied to lethal force, but Palantir CEO Alex Karp is not shy about it, saying with obvious pride: “Our product is sometimes used to kill people.”
