🚀 Explore this trending post from TechCrunch 📖
📂 **Category**: AI,Apps,Government & Policy,Startups,evergreens,Meta,moltbook,nvidia,OpenAI,openclaw
✅ **What You’ll Learn**:
You can chart it annually by product launches, or you can measure it at major moments that change the way we look at AI. The AI industry is constantly making news, like major acquisitions, independent developer successes, public outcry against sketchy products, and existentially serious contract negotiations – there’s a lot to sort out, so we’re taking a look at where we are and where we’ve been so far this year.
Anthropy vs. Pentagon
The business partners, Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth, reached a bitter impasse in February when they renegotiated contracts dictating how the U.S. military would use Anthropic’s artificial intelligence tools.
Anthropic has taken a hard line against using its AI for mass surveillance of Americans or to operate autonomous weapons that could attack without human supervision. On the other hand, the Pentagon has argued that the Department of Defense – which President Donald Trump’s administration calls the War Department – should be allowed access to Anthropic models for any “lawful use.” Government representatives resented the idea that the army should be confined to private company bases, but Amodei maintained his position.
“Anthropic recognizes that the Department of War, not private companies, makes military decisions. We have never raised objections to specific military operations nor attempted to limit the use of our technology in an ad hoc manner,” Amodei wrote in a statement addressing the situation. “However, in a narrow set of cases, we believe that AI can undermine democratic values, rather than defend them.”
The Pentagon gave Anthropic a deadline to approve their contract. Hundreds of employees at Google and OpenAI signed an open letter urging their leaders to respect Amodei’s limits and refuse to budge on the issues of autonomous weapons or domestic surveillance.
The deadline passed without Anthropic agreeing to the Pentagon’s demands. Trump directed federal agencies to phase out their use of humanitarian tools during a six-month transition period, and described the $380 billion artificial intelligence company as a “far-left, woke company” in a capitalized social media post. The Pentagon then moved to declare Anthropic a “supply chain risk,” a designation usually reserved for foreign adversaries that bars any company that works with Anthropic from doing business with the U.S. military. (Anthropic has since filed a lawsuit challenging this designation.)
Then human competition company OpenAI swooped in and announced it had reached an agreement that would allow its own models to be deployed in covert situations. This came as a shock to the tech community, as reports indicated that OpenAI would adhere to red lines set by Anthropic that govern the use of AI in the military.
TechCrunch event
San Francisco, California
|
October 13-15, 2026
Public sentiment was that people found OpenAI’s move suspicious — the day after OpenAI announced its deal, ChatGPT uninstalls jumped 295% day-over-day, and Anthropic’s Claude jumped to first place in the App Store. Caitlin Kalinowski, OpenAI’s chief hardware officer, resigned in response to the deal, saying it was “rushed through without identifying guardrails.”
OpenAI told TechCrunch that it believes its agreement is “clarifying.” [its] Red lines: No autonomous weapons and no self-monitoring.
As this saga continues, it will have major implications for the future of how AI is deployed in warfare, potentially changing the course of history – you know, no big deal…
OpenClaw “coded with Vibe” accelerates the shift to agentic AI
February was OpenClaw Month, and its influence continues to reverberate. In quick succession, the crypto AI assistant application went viral, spawned a bunch of spin-off companies, struggled with privacy issues, and was then acquired by OpenAI. Even one company built on OpenClaw, Reddit’s version of AI clients called Moltbook, was recently acquired by Meta. This crustacean-themed ecosystem has driven Silicon Valley into a state of outright madness.
Created by Peter Steinberger — who has since joined OpenAI — OpenClaw is a wrapper for AI models like Claude, ChatGPT, Google’s Gemini, or xAI’s Grok. What sets it apart is that it allows people to communicate with AI agents in natural language via the most popular chat apps, such as iMessage, Discord, Slack, or WhatsApp. There is also a public market where people can code and upload “skills” to add to their own AI agents, making it possible to automate anything that can be done on a computer.
If this sounds too good to be true, that’s because it kind of is. For an AI agent to be effective as a personal assistant, it needs access to your email, credit card numbers, text messages, computer files, etc. If it is compromised, a lot can go wrong, and unfortunately, there is no way to completely secure these clients against injection attacks.
“It’s just an agent sitting with a bunch of credentials in a box that’s connected to everything — your email, your messaging platform, everything you use,” Ian Ahl, chief technology officer at Permiso Security, told TechCrunch. “What that means is, when you get an email, and maybe someone can put a quick injection technique in there to take action, [and] That agent sitting on your box who has access to everything you’ve given him can now take that action.
An AI security researcher at Meta said OpenClaw wreaked havoc on her inbox, deleting all her emails despite repeated calls to stop. “I had to run to my Mac mini like I was defusing a bomb” to actually disconnect the device, she wrote in a now-viral post on X, which included photos of shutdown prompts that were discarded as receipts.
Despite the security risks, the technology interested OpenAI enough to hire it.
Other tools built on OpenClaw, including Moltbook — a Reddit-like “social network” where AI agents can communicate with each other — ended up becoming more widespread than OpenClaw itself.
In one case, a post went viral in which an AI agent appeared to encourage fellow agents to develop their own secret language, end-to-end encrypted, in which they could organize among themselves without humans knowing.
But researchers soon revealed that Moltbook’s encrypted device wasn’t very secure, meaning it was all too easy for human users to pose as artificial intelligence to make posts that would spark viral social hysteria.
Once again, although the discussion around Moltbook was based more on panic than reality, Meta saw something in the app and announced that Moltbook and its creators, Matt Schlicht and Ben Parr, would be joining Meta Superintelligence Labs.
It seems strange that Meta would buy a social network where all the users are bots. Although Meta didn’t reveal much about the acquisition, we believe that owning Moltbook is more about access to the talent behind it, who are excited to experiment with AI agent ecosystems. CEO Mark Zuckerberg himself said he believes that one day, every company will have its own artificial intelligence.
As we watch the hype around OpenClaw, Moltbook, and NanoClaw, it seems as if those who have predicted the future of agentic AI may be on to something, at least for now.
Chip shortages, hardware drama, and data center demands are escalating
The harsh demands of the AI industry — requiring computing power and data centers of unprecedented sizes — have reached a point where the average consumer has no choice but to care. Now the industry may not be able to meet the massive demand for memory chips, and consumers are already starting to see the prices of their phones, laptops, cars and other devices rise.
So far, analysts from IDC and Counterpoint have predicted that smartphone shipments, for example, will decline about 12% to 13% this year; Apple has already raised MacBook Pro prices by up to $400.
Google, Amazon, Meta, and Microsoft plan to spend up to $650 billion on data centers alone this year, representing an estimated 60% increase over last year.
If the chip shortage doesn’t affect your wallet, it might affect your community as a whole. In the United States alone, there are nearly 3,000 new data centers under construction, in addition to the 4,000 data centers already operating in the country. The need for workers to build these data centers is great enough that “guy camps” have sprung up in Nevada and Texas, trying to lure workers with the promise of golf-simulator game rooms and grilled steaks on demand.
Not only does the construction of data centers have a long-term environmental impact, it also creates health risks for nearby residents, polluting the air and affecting the safety of nearby water sources.
Meanwhile, Nvidia, one of the world’s most valuable hardware and chip developers, is reshaping its relationship with leading AI companies like OpenAI and Anthropic. Nvidia has been a consistent backer of these companies, raising concerns about the circularity of the AI industry and the extent to which these dizzying valuations depend on recurring deals with each other. Last year, for example, Nvidia invested $100 billion in OpenAI stock, and then OpenAI said it would buy $100 billion worth of Nvidia chips.
It came as a surprise, then, when Nvidia CEO Jensen Huang said his company would stop investing in OpenAI and Anthropic. This is because companies plan to go public later this year, he said, although this reasoning is not entirely logical, as investors typically pump more money before an IPO to extract as much value as possible.
⚡ **What’s your take?**
Share your thoughts in the comments below!
#️⃣ **#biggest #stories #year**
🕒 **Posted on**: 1773456516
🌟 **Want more?** Click here for more info! 🌟
