The trap that Anthropic created for itself


On Friday afternoon, while this interview was underway, a news alert flashed across my screen: The Trump administration was cutting ties with Anthropic, the San Francisco AI company founded by Dario Amodei and other former OpenAI researchers who left over safety concerns. Defense Secretary Pete Hegseth had invoked the National Security Act, a law designed to counter foreign supply-chain threats, to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s technology to be used for mass surveillance of American citizens or in autonomous armed drones that can select and kill targets without human intervention.

It was a remarkable sequence of events. Anthropic now stands to lose a contract worth up to $200 million, and to be barred from working with other defense contractors, after President Trump posted on Truth Social directing every federal agency to “immediately stop using Anthropic’s technology.” (Anthropic has since said it will challenge the Pentagon in court, calling the supply-chain risk classification legally unsound and “never before publicly applied to a US company.”)

Max Tegmark has spent the better part of a decade warning that the race to build ever more powerful artificial intelligence systems is outpacing the world’s ability to control them. The Swedish-American physicist and MIT professor founded the Future of Life Institute in 2014. In 2023, he helped organize an open letter — ultimately signed by more than 33,000 people, including Elon Musk — calling for a pause in the development of advanced artificial intelligence.

His assessment of the crisis is blunt: The company, like its competitors, has sown the seeds of its own predicament. Tegmark’s argument begins not with the Pentagon but with a decision made years ago, a choice shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind, and others have long promised to govern themselves responsibly. Earlier this week, Anthropic abandoned the core tenet of its safety pledge: its promise not to release increasingly powerful AI systems until the company is confident they won’t cause harm.

Now, in the absence of rules, there is little to protect these players, Tegmark says. Here’s more from that interview, edited for length and clarity. You can hear the full conversation this coming week on TechCrunch’s StrictlyVC Download podcast.

When you saw this news about Anthropic, what was your first reaction?

The road to hell is paved with good intentions. It’s interesting to think about the last decade, when people were so excited about how we could create AI to cure cancer, grow prosperity in America and make America strong. And here we are now where the US government is angry at this company for not wanting to use AI for domestic mass surveillance of Americans, and also for not wanting to have killer robots that can independently decide – without any human input at all – who gets killed.


Anthropic has staked its entire identity on being a safety-first AI company, yet it has been collaborating with defense and intelligence agencies [dating back to at least 2024]. Do you think this is contradictory at all?

It’s contradictory. If I can be a little sarcastic about this: yes, Anthropic has been very good at marketing itself as a safety company. But if you look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind, and xAI have all talked a great deal about how much they care about safety, yet none of them has supported binding safety regulation of the kind we have in other industries. All four companies have now broken their promises. In the beginning we had Google with its big slogan, “Don’t be evil.” Then they dropped it. Then they dropped a long-standing commitment not to use AI in ways that cause harm, so that they could sell AI for surveillance and weapons purposes. OpenAI just dropped the word safety from its mission statement. xAI shut down its entire safety team. And now, earlier this week, Anthropic abandoned its most important safety commitment, a promise not to release powerful AI systems until it’s sure they won’t do harm.

How did companies that made such high-profile safety commitments end up in this position?

All of these companies, especially OpenAI and Google DeepMind but to some extent Anthropic as well, have consistently lobbied against regulation of AI, saying: “Just trust us, we will regulate ourselves.” They succeeded in applying pressure. So we now have less regulation on AI systems in America than we do on sandwiches. If you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix the problem. But if you say, “Don’t worry, I’m not going to sell sandwiches; I’m going to sell AI girlfriends, of the kind that have been linked to suicides, to 11-year-olds, and then I’m going to release something called superintelligence that might overthrow the US government, but I have a good feeling about it,” the inspector will say, “Okay, go ahead, just don’t sell sandwiches.”

We regulate food safety, but we do not regulate artificial intelligence.

And I think all of these companies share the blame. Because if they had taken all the promises they’d made about being safe and good, gotten together, gone to the government, and said, “Please take our voluntary commitments and turn them into American law that binds even our most negligent competitors,” we wouldn’t be here. Instead, we are in a complete regulatory vacuum. We know what happens when corporations operate with impunity: you get thalidomide, you get tobacco companies pushing cigarettes to kids, you get asbestos causing lung cancer. So it is ironic that their resistance to laws stating what is and is not acceptable to do with AI has now come back to haunt them.

There’s no law right now that prevents AI from being built to kill Americans, so nothing stops the government from suddenly demanding it. If the companies themselves had come out earlier and said, “We want this law,” they would not be in this predicament. They really shot themselves in the foot.

The corporate counterargument is always the race with China: if American companies won’t do it, Beijing will. Does that argument hold up?

Let’s analyze it. The most common talking point among lobbyists for AI companies (who are now better funded and more numerous than lobbyists from the fossil fuel industry, the pharmaceutical industry, and the military-industrial complex combined) is to answer any suggested regulation with: “But China.” So let’s look at that. China is moving to ban AI girlfriends outright. Not just age limits; they are looking to ban all forms of anthropomorphic AI. Why? Not because they want to please America, but because they believe it corrupts Chinese youth and makes China weak. The same products are presumably making American youth vulnerable as well.

And when people say we have to race to build superintelligence so we can beat China, when in reality we don’t know how to control superintelligence, so the default outcome is humanity losing control of Earth to alien machines, guess what? The Chinese Communist Party really likes control. Does anyone think Xi Jinping would tolerate a Chinese AI company building something that overthrows the Chinese government? Not a chance. And obviously it would be very bad for the US government, too, if it were overthrown in a coup by the first American company to build superintelligence. This is a threat to national security.

That’s a compelling framing: superintelligence as a national security threat, not an asset. Do you think this view is widely accepted in Washington?

I think if people in the national security community listened to Dario Amodei describe his vision — he gave a famous speech where he said we would soon have a country of geniuses in a data center — they might start thinking: Wait, did Dario just use the word “country”? Maybe I should put this country of data center geniuses on the same list of threats I’m monitoring, because that sounds like a threat to the US government. I believe that very soon, enough people in the US national security community will realize that uncontrollable superintelligence is a threat, not a tool. This is quite similar to the Cold War. There was a race for dominance – economic and military – against the Soviet Union. We Americans won that race without participating in the second race, which was to see who could create the most nuclear craters in the other superpower. People realized that this was just suicide. Nobody wins. The same logic applies here.

What does all this mean for the pace of AI development more broadly? How close do you think we are to the systems you describe?

Six years ago, almost every AI expert predicted we were decades away from having AI capable of mastering human-level language and knowledge — maybe by 2040, maybe by 2050. They were all wrong, because we have it now. We’ve seen AI advance very quickly from high school level to college level to doctoral level to professor level in some fields. Last year, AI won a gold medal at the International Mathematical Olympiad, matching the performance of the best human contestants. I wrote a paper with Yoshua Bengio, Dan Hendrycks, and other leading AI researchers just a few months ago, giving a rigorous definition of artificial general intelligence. By that measure, GPT-4 was 27% of the way there and GPT-5 is 57% of the way there. So we’re not there yet, but jumping from 27% to 57% that quickly suggests it may not take long.

When I lectured to my students at MIT yesterday, I told them that even if it takes four years, that means by the time they graduate, they may no longer be able to get a job. It is certainly not too early to start preparing for this.

Anthropic has now been blacklisted. I’m curious what happens next: will the other AI giants side with Anthropic and say, “We won’t do this either”? Or will someone like xAI raise a hand and say, “Anthropic doesn’t want this contract; we’ll take it”? [Editor’s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]

Last night, Sam Altman came out and said he stands with Anthropic and has the same red lines. I admire him for having the courage to say that. Google, even as we started this interview, had said nothing. If they remain quiet, I think it will be very embarrassing for them as a company, and a lot of their employees will feel the same way. We haven’t heard anything from xAI either. So it will be interesting to see. This is the moment where everyone has to show their true colors.

Is there a version of this where the result is actually good?

Yes, which is why, in a strange way, I’m actually optimistic. There is such an obvious alternative here. If we just started treating AI companies like any other company, ending their de facto impunity, they would obviously have to do something like a clinical trial before launching something that powerful, and prove to independent experts that they know how to control it. Then we’d get a golden age with all the good things of AI, without the existential angst. This is not the path we are on now. But it could be.
