Everything you need to know about the viral personal AI assistant Clawdbot (now Moltbot)

💥 Explore this must-read post from TechCrunch 📖

📂 **Category**: AI,agentic ai,clawdbot,personal assistants

📌 **What You’ll Learn**:

The latest wave of AI excitement has brought us an unexpected mascot: the lobster. Clawdbot, an AI personal assistant, went viral within weeks of its launch, and will retain its crustacean theme despite having to change its name to Moltbot after a legal challenge from Anthropic. But before you jump on the bandwagon, here’s what you need to know.

True to its tagline, Moltbot (formerly Clawdbot) is “AI that actually does things” — whether that’s managing your calendar, messaging through your favorite apps, or checking you in for flights. This promise has attracted thousands of users willing to handle the technical setup required, although it began as a scrappy personal project created by a developer for his own use.

That man is Peter Steinberger, an Austrian developer and founder known online as @steipete and actively blogging about his work. After stepping away from his previous project, PSPDFkit, Steinberger felt empty and barely touched his computer for three years, as he explained in his blog. But he eventually found his spark again, which led to Multibot.

While Moltbot is now more of a solo project, the publicly available version still draws from Clawd, Peter’s “tough assistant,” now called Molty, a tool he designed to help him “manage his digital life” and “explore what human-AI collaboration could be.”

For Steinberger, this meant diving deeper into the momentum around AI that had reignited his building spark. A self-confessed “cludoholic,” he initially named his project after Claude, Anthropic’s lead AI product. He revealed on X that Anthropic then forced him to change the trademark for copyright reasons. TechCrunch has reached out to Anthropic for comment. But the “lobster spirit” of the project remains unchanged.

For its early adopters, Moltbot represents the forefront of how useful AI assistants can be. Those who were already excited at the prospect of using AI to quickly create websites and apps are now even more eager to have their personal AI assistant perform tasks for them. And just like Steinberger, they are keen to manipulate it.

This explains how Moltbot has amassed over 44,200 stars on GitHub so quickly. Moltbot has received so much viral attention that it has moved the markets. Cloudflare stock rose 14% in premarket trading Tuesday, as social media buzz around an AI agent reignited investor enthusiasm for Cloudflare’s infrastructure, which developers use to run Moltbot natively on their devices.

TechCrunch event

San Francisco
|
October 13-15, 2026

However, there is still a long way to go before moving out of the early adoption zone, and this is probably for the best. Installing Moltbot requires you to be familiar with the technology, and this also includes awareness of the inherent security risks that come with it.

On the one hand, Moltbot is designed with safety in mind: it’s open source, meaning anyone can inspect its code for vulnerabilities, and it runs on your computer or server, not in the cloud. But on the other hand, its very premise is inherently risky. As entrepreneur and investor Rahul Sood pointed out to X, “doing things” means “the ability to execute random commands on your computer.”

What keeps Sood up at night is “instant content injection” – where a malicious person can send you a WhatsApp message that can prompt Moltbot to take unintended actions on your computer without your intervention or knowledge.

This risk can be partially mitigated through careful preparation. Since Moltbot supports multiple AI models, users may want to make setup choices based on their resistance to these types of attacks. But the only way to completely prevent this is to run Moltbot in a silo.

This may be obvious to experienced developers tinkering with a weeks-old project, but some have become more vocal in warning users attracted by the hype: Things can go wrong quickly if they handle it as carelessly as ChatGPT.

Steinberger himself was reminded of the presence of malicious actors when he “messed up” his project’s renaming process. He complained on [him] As the owner of the coin, it is a scam.” He then posted that the GitHub issue had been fixed, but warned that the legitimate X account was @moltbot, “not any of the 20 fraudulent variations of it.”

This doesn’t necessarily mean you should stay away from Moltbot at this point if you want to test it out. But if you’ve never heard of a VPS — a virtual private server, which is basically a remote computer that you rent to run software — you might want to wait your turn. (This is where you might want to run Moltbot currently. “Not your laptop with SSH keys, API credentials, and a password manager,” Sood warned.)

Currently, running Moltbot safely means running it on a separate computer with temporary accounts, which defeats the purpose of having a useful AI assistant. Fixing this trade-off between security and facilities may require solutions beyond Steinberger’s control.

However, by building a tool to solve his own problem, Steinberger showed the developer community what AI agents can actually achieve, and how autonomous AI can finally become truly useful rather than just impressive.

💬 **What’s your take?**
Share your thoughts in the comments below!

#️⃣ **#viral #personal #assistant #Clawdbot #Moltbot**

🕒 **Posted on**: 1769561535

🌟 **Want more?** Click here for more info! 🌟

By

Leave a Reply

Your email address will not be published. Required fields are marked *