Campbell Brown has spent her career chasing accurate information, first as a prominent TV journalist, then as Facebook's first and only head of news. Now, as she watches artificial intelligence reshape how people consume information, she sees history threatening to repeat itself. And this time, she's not waiting for someone else to fix it.
Her company, Forum AI, which she recently discussed with TechCrunch's Tim Fernholz at a StrictlyVC evening in San Francisco, evaluates how foundation models perform on what it calls "high-stakes topics": geopolitics, mental health, finance, and employment. These are areas where, as she puts it, "there are no clear yes or no answers, where they are ambiguous, nuanced, and complex."
The idea is to find the world's leading experts, have them design standards, and then train AI judges to evaluate models at scale. For Forum AI's geopolitical work, Brown has recruited Niall Ferguson, Fareed Zakaria, former Secretary of State Antony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity policy in the Biden administration. The goal is for the AI judges to reach roughly 90% agreement with these human experts, a threshold Forum AI says it has been able to hit.
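The evaluation approach described above boils down to measuring how often an automated judge matches an expert panel's verdicts. The sketch below is purely illustrative and is not Forum AI's actual pipeline; the function name, the "pass"/"fail" rubric labels, and the sample verdicts are all assumptions made up for this example.

```python
# Illustrative sketch (not Forum AI's real system): compare an AI judge's
# verdicts against expert-panel consensus and check the agreement rate
# against a threshold like the ~90% figure the article cites.

def agreement_rate(judge_labels, expert_labels):
    """Fraction of items where the AI judge matches the expert consensus."""
    if len(judge_labels) != len(expert_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(j == e for j, e in zip(judge_labels, expert_labels))
    return matches / len(judge_labels)

# Hypothetical verdicts on ten high-stakes prompts, graded "pass" or "fail"
# against an expert-designed rubric.
expert = ["pass", "fail", "pass", "pass", "fail",
          "pass", "pass", "fail", "pass", "pass"]
judge  = ["pass", "fail", "pass", "fail", "fail",
          "pass", "pass", "fail", "pass", "pass"]

rate = agreement_rate(judge, expert)
meets_threshold = rate >= 0.90
print(f"judge-expert agreement: {rate:.0%}")  # → 90%
```

In practice a raw match rate like this is only a starting point; evaluation work of this kind typically also uses chance-corrected agreement statistics (e.g. Cohen's kappa) when labels are imbalanced.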
Brown traces the origins of Forum AI, founded 17 months ago in New York, to a specific moment. "I was at Meta when ChatGPT was first released publicly, and I remember shortly after realizing that this was going to be the path through which all information would flow. And it's not very good," she recalled. The implications for her children made the moment feel almost existential. "My kids are going to be really stupid if we don't figure out how to fix this," she recalled thinking.
What frustrated her most was that accuracy didn't seem to be anyone's priority. The leading AI companies are "very focused on programming and math," she said, while news and information are harder to get right. But harder doesn't mean optional, she said.
In fact, when Forum AI started evaluating the leading models, the results weren't exactly encouraging. She cited Gemini drawing on Chinese Communist Party sources for "stories unrelated to China," and pointed to left-leaning political bias in nearly all models. Subtler failures abound too, she said, including missing context, missing perspectives, and straw-man arguments that go unflagged. "There is a long way to go," she said. "But I also think there are some very easy solutions that will dramatically improve outcomes."
Brown spent years at Facebook watching what happens when a platform optimizes for the wrong thing. "We failed at a lot of the things we tried," she told Fernholz. The fact-checking program she built there no longer exists. The lesson, even if social media companies won't admit it, is that optimizing for engagement has been bad for society and has left many people less informed.
Her hope is that artificial intelligence can break that cycle. "Right now things could go either way," she said. Companies can give users what they want, or they can give people what is real and what is honest. She acknowledged that the ideal version of that, AI that steers people toward the truth, might sound naive. But she believes business may be an unexpected ally here. Companies that use AI to make credit, lending, insurance, and hiring decisions care about accuracy, and they will want to get those decisions right.
That enterprise demand is what Forum AI is betting its business on, though turning compliance attention into consistent revenue remains a challenge, especially since much of the current market is satisfied with checkbox audits and boilerplate criteria that Brown considers inadequate.
She called the compliance scene a "joke." When New York City passed its first law requiring bias audits of AI hiring tools, she said, the comptroller found that more than half of employers were in violation without it ever being detected. A real assessment requires domain expertise to work through not only known scenarios but also edge cases that "can cause you problems that people don't think about," she said. That work takes time. "Smart generalists won't cut it."
Brown, whose company raised $3 million last fall in a round led by Lerer Hippeau, is in a unique position to describe the disconnect between the AI industry's self-image and the reality for most users. "You hear from leaders of big tech companies that this technology is going to change the world, it's going to put you out of work, it's going to cure cancer," she said. "But for the average person who uses a chatbot to ask basic questions, they still get a lot of wrong answers."
Trust in AI is at very low levels, she noted, and she believes the skepticism is often justified. "The conversation is kind of happening in Silicon Valley about one thing, and there's a completely different conversation happening among consumers."