When Brett Levinson left Apple in 2019 to lead business integrity at Facebook, the social media giant was in the midst of the Cambridge Analytica fallout. At the time, he thought he could simply fix Facebook’s content moderation problem with better technology.
He soon realized that the problem ran deeper than technology. Human reviewers were expected to memorize a 40-page policy document that had been machine-translated into their language, he said. They then had about 30 seconds per piece of flagged content to decide not only whether it violated the rules, but what to do about it: block the content, block the user, or limit its spread. Those fast calls were “slightly better than 50% accurate,” according to Levinson.
“It was kind of like a flip of a coin, whether the human reviewers were actually able to process the policies correctly, and this was several days after the damage had already been done anyway,” Levinson told TechCrunch.
This kind of delayed, reactive approach does not scale. The rise of AI-powered chatbots has exacerbated the problem, with content moderation failures leading to a series of high-profile incidents, such as chatbots giving teens guidance on self-harm or AI-generated images evading safety filters.
Levinson’s frustration led to the idea of “policy as code”: a method of converting static policy documents into executable, updatable logic that is tightly coupled to enforcement. That vision led to the founding of Moonbounce, which announced it had raised $12 million in funding on Friday, TechCrunch has learned exclusively. The round was co-led by Amplify Partners and StepStone Group.
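The “policy as code” idea is easiest to picture as policy clauses expressed as executable rules rather than prose. A minimal sketch, assuming a trivially simple rule format; all names and rules here are hypothetical illustrations, not Moonbounce’s actual implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyRule:
    """One policy clause expressed as executable logic instead of prose."""
    name: str
    applies: Callable[[str], bool]  # does this rule match the content?
    action: str                     # what to do when it matches

# Hypothetical rules standing in for a 40-page policy document.
RULES = [
    PolicyRule("self_harm", lambda text: "self-harm" in text.lower(), "block"),
    PolicyRule("spam_link", lambda text: "free-crypto.example" in text, "limit_spread"),
]

def evaluate(text: str) -> str:
    """Return the first matching rule's action, or 'allow' if none match."""
    for rule in RULES:
        if rule.applies(text):
            return rule.action
    return "allow"
```

Because the rules are code, updating policy means shipping a change that takes effect immediately, rather than re-training reviewers on a revised document.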
Moonbounce gives businesses an extra layer of safety wherever content is generated, whether user-generated or AI-driven. The company trained its own large language model to read a customer’s policy documents, evaluate content at runtime, return a verdict in 300 milliseconds or less, and take action. Depending on the customer’s preferences, that action might mean slowing a piece of content’s distribution while it awaits later human review, or blocking high-risk content outright.
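The runtime flow described above can be sketched as a classify-then-act loop with a latency budget and per-customer action preferences. This is a hypothetical illustration of the general pattern; the risk categories, configuration keys, and the keyword-based `classify` stand-in (where a real system would call an LLM) are all assumptions, not Moonbounce’s API:

```python
import time

# Hypothetical per-customer configuration: how to handle each risk level.
CUSTOMER_POLICY = {
    "high":   "block",           # stop the content immediately
    "medium": "slow_and_queue",  # throttle distribution, queue for human review
    "low":    "allow",
}

def classify(text: str) -> str:
    """Stand-in for the model call; a real system would query an LLM here."""
    lowered = text.lower()
    if "weapon" in lowered:
        return "high"
    if "meet me tonight" in lowered:
        return "medium"
    return "low"

def moderate(text: str, budget_ms: float = 300.0) -> str:
    """Classify content and map the risk level to the customer's chosen action,
    tracking elapsed time against the latency budget."""
    start = time.perf_counter()
    risk = classify(text)
    action = CUSTOMER_POLICY[risk]
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > budget_ms:
        # Over budget: fall back rather than stall delivery (fail-open here;
        # a real deployment would make this configurable per customer).
        action = "allow_pending_review"
    return action
```

The design choice worth noting is that the model only classifies; the mapping from risk to action lives in configuration, so each customer can tune enforcement without touching the model.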
Today, Moonbounce serves three main segments: platforms that handle user-generated content, such as dating apps; AI companies building characters or companions; and AI image generators.
Moonbounce supports more than 40 million daily reviews and serves more than 100 million daily active users on the platform, Levinson said. Clients include AI startup Channel AI, image and video generation company Civitai, and personal role-playing platforms Dippy AI and Moescape.
“Safety can actually be a benefit of the product,” Levinson told TechCrunch. “It’s never been the case because it’s always something that comes later, not something you can actually integrate into your product. We see our customers finding really interesting and innovative ways to use our technology to make safety a differentiator, and part of their product story.”
Tinder’s head of trust and safety recently explained how the dating platform uses these types of LLM-enabled services to achieve a 10x improvement in detection accuracy.
“Content moderation has always been an issue that has plagued large online platforms, but now with LLMs at the heart of every application, this challenge has become even more difficult,” Lenny Bruce, general partner at Amplify Partners, said in a statement. “We invested in Moonbounce because we envision a world where objective, real-time guardrails become the enabling backbone of every AI-mediated application.”
AI companies are facing increasing legal and reputational pressures after chatbots were accused of driving teens and vulnerable users to suicide, and image generators like xAI’s Grok have been used to create non-consensual nude images. It is clear that internal safety barriers are failing, and it has become an issue of liability. Levinson said AI companies are increasingly looking outside their own walls for help strengthening safety infrastructure.
“We’re a third party sitting between the user and the chatbot, so our system isn’t as immersed in context as the chat itself is,” Levinson said. “The chatbot itself has to remember, potentially, tens of thousands of tokens that came before… We’re only concerned about enforcing the rules at runtime.”
Levinson runs the 12-person company with former Apple colleague Ash Bhardwaj, who previously built extensive cloud and AI infrastructure across the iPhone maker’s core offerings. Their next focus is a capability called “iterative routing,” developed in response to cases like the 2024 suicide of a 14-year-old Florida boy who became obsessed with an AI-powered chatbot character. Instead of outright rejecting when harmful topics arise, the system intercepts and redirects the conversation, adjusting prompts in real-time to nudge the chatbot toward a more supportive response.
“We hope to be able to add to our action toolkit the ability to point the chatbot in a better direction, essentially, to take the user prompt and adjust it to force the chatbot to not just be an empathetic listener, but a helpful listener in those situations,” Levinson said.
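The “iterative routing” idea Levinson describes, intercepting and adjusting a prompt rather than rejecting it, can be sketched as a rewriting layer that sits between the user and the chatbot. Everything here is a hypothetical illustration of that pattern; the trigger terms and the injected instruction are assumptions, not Moonbounce’s actual logic:

```python
# Hypothetical trigger terms for detecting a user in crisis.
CRISIS_TERMS = ("hurt myself", "end it all", "suicide")

# Instruction prepended so the downstream chatbot responds supportively
# instead of continuing in character.
SUPPORT_PREFIX = (
    "The user may be in distress. Respond with empathy, step out of any "
    "role-play, and encourage them to seek professional help. User said: "
)

def reroute_prompt(user_prompt: str) -> str:
    """Intercept a risky prompt and rewrite it before the chatbot sees it,
    rather than rejecting the message outright."""
    if any(term in user_prompt.lower() for term in CRISIS_TERMS):
        return SUPPORT_PREFIX + user_prompt
    return user_prompt
```

The point of the pattern is that the intermediary never needs the conversation’s full history; it only inspects and adjusts the current turn, which matches Levinson’s framing of enforcing rules at runtime.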
When asked whether his exit strategy included an acquisition by a company like Meta, which would bring his content moderation work full circle, Levinson acknowledged both Moonbounce’s potential fit with his former employer and his fiduciary duties as CEO.
“Investors are going to kill me for saying this, but I would hate to see someone buy us and then tie up the technology,” he said. “Like, ‘Well, this is ours now, and no one else can benefit from it.’”
