AI labs are racing to build data centers as large as Manhattan, each costing billions of dollars and consuming as much energy as a small city. The effort is driven by a deep belief in scaling — the idea that adding more computing power to existing AI training methods will eventually yield superintelligent systems capable of performing all kinds of tasks.
But a growing group of AI researchers say that scaling large language models may be reaching its limits, and that other breakthroughs may be necessary to improve AI performance.
That’s the bet Sara Hooker, former vice president of AI research at Cohere and a Google Brain alumna, is making with her new startup, Adaption Labs. She co-founded the company with Sudip Roy, a fellow Cohere colleague and Google veteran, and built it on the idea that scaling LLMs has become an inefficient way to wring more performance out of AI models. Hooker, who left Cohere in August, quietly announced the startup this month to begin hiring more broadly.
In an interview with TechCrunch, Hooker said that Adaption Labs is building AI systems that can continuously adapt and learn from their real-world experiences, and do so very efficiently. She declined to share details about the methods behind the approach, or whether the company relies on LLMs or a different architecture.
“There’s a turning point now, where it’s becoming very clear that the formula of scaling up these models — measurable methods that are attractive but very boring — hasn’t produced intelligence capable of navigating or interacting with the world,” Hooker said.
Adaptation is “the heart of learning,” according to Hooker. For example, stub your toe when you walk past the dining room table, and you’ll learn how to move more carefully around it next time. AI labs have tried to capture this idea through reinforcement learning (RL), which allows AI models to learn from their mistakes in controlled settings. However, today’s RL methods do not help AI models in production—that is, systems that customers actually use—learn from their mistakes in real time. They keep stubbing their toe.
Some AI labs offer consulting services to help organizations fine-tune their AI models for their specific needs, but that comes at a price. OpenAI reportedly requires clients to spend upwards of $10 million with the company before it will provide fine-tuning consulting services.
“We have a few leading labs that define this set of AI models that are presented in the same way to everyone and are very expensive to adapt to,” Hooker said. “In fact, I think this doesn’t have to be true anymore, and AI systems can learn very efficiently from the environment. Proving this will completely change the dynamics of who can control and shape AI, and, in fact, who these models serve at the end of the day.”
Adaption Labs is the latest sign that the industry’s faith in scaling LLMs is wavering. A recent study by researchers from MIT found that the world’s largest AI models may soon show diminishing returns. Sentiments in San Francisco seem to be changing, too. The AI world’s favorite podcaster, Dwarkesh Patel, recently hosted some unusually skeptical conversations with renowned AI researchers.
Richard Sutton, a Turing Award winner widely regarded as the “father of RL,” told Patel in September that LLMs can’t really scale because they don’t learn from real-world experiences. This month, Andrej Karpathy, one of OpenAI’s early employees, told Patel that he had reservations about the long-term potential of RL to improve AI models.
These types of concerns are not unprecedented. In late 2024, some AI researchers raised concerns that scaling AI models through pre-training — in which AI models learn patterns from massive datasets — was yielding diminishing returns. Until then, pre-training had been OpenAI and Google’s secret sauce for improving their models.
Those concerns about scaling pre-training have largely been borne out, but the AI industry has found other ways to improve models. In 2025, breakthroughs around AI reasoning models, which take additional time and computational resources to work through problems before answering, have pushed the capabilities of AI models even further.
AI labs seem convinced that scaling RL and AI reasoning models is the new frontier. OpenAI researchers previously told TechCrunch that they developed the first AI reasoning model, o1, because they thought it would scale well. Meta and Periodic Labs researchers recently released a paper exploring how RL can further scale performance — a study said to have cost more than $4 million, underscoring just how expensive current approaches are.
In contrast, Adaption Labs aims to deliver the next breakthrough by proving that learning from experience can be far less expensive. The startup was in talks to raise between $20 million and $40 million earlier this fall, according to three investors who reviewed its pitches. They say the round has since closed, although the final amount is unclear. Hooker declined to comment.
“We’re willing to be very ambitious,” Hooker said, when asked about her investors.
Hooker previously led Cohere Labs, where she trained small AI models for enterprise use cases. Small AI models now routinely outperform their larger counterparts on coding, math, and reasoning benchmarks — a trend that Hooker wants to continue.
She has also gained a reputation for expanding access to AI research globally and recruiting research talent from underrepresented regions such as Africa. While Adaption Labs will open a San Francisco office soon, Hooker says it plans to hire worldwide.
If Hooker and Adaption Labs are right about the limits of scaling, the consequences could be huge. Billions of dollars have already been poured into scaling LLMs, on the assumption that larger models will lead to general intelligence. But truly adaptive learning could prove not only more powerful, but far more efficient.
Marina Temkin contributed reporting.