Source: TechCrunch
Nvidia announced new AI infrastructure and models on Monday as it works to build the underlying technology for physical AI, including robots and autonomous vehicles that can perceive and interact with the real world.
The semiconductor giant announced Alpamayo-R1, an open reasoning vision language model for autonomous driving research, at the NeurIPS AI conference in San Diego, California. The company claims this is the first working prototype of a vision language model focused on autonomous driving. Vision language models can process text and images together, allowing vehicles to “see” their surroundings and make decisions based on what they see.
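The core idea of a vision language model, fusing an image input and a text prompt into a single decision, can be sketched in a few lines. The snippet below is a deliberately tiny illustration, not Alpamayo-R1’s actual architecture: the encoders, the 16-dimensional embeddings, the concatenation fusion step, and the action labels are all invented for demonstration. In a real system, each of these stand-ins is a large trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(image: np.ndarray) -> np.ndarray:
    # Stand-in image encoder: collapse the frame into a fixed-size feature vector.
    return image.reshape(-1)[:16].astype(float)

def encode_text(prompt: str) -> np.ndarray:
    # Stand-in text encoder: hash characters into a 16-dim bag-of-characters vector.
    vec = np.zeros(16)
    for ch in prompt:
        vec[ord(ch) % 16] += 1.0
    return vec

def decide(image: np.ndarray, prompt: str) -> str:
    # Fuse the two modalities (here, simple concatenation) and score actions
    # with a stand-in "policy head" (a fixed random projection).
    fused = np.concatenate([encode_image(image), encode_text(prompt)])
    weights = rng.normal(size=(3, fused.size))
    scores = weights @ fused
    actions = ["brake", "continue", "turn"]
    return actions[int(np.argmax(scores))]

camera_frame = np.zeros((8, 8, 3))  # dummy camera input
action = decide(camera_frame, "pedestrian ahead at crosswalk")
print(action)  # one of: brake, continue, turn
```

The point of the sketch is the data flow: both modalities are mapped into a shared vector space, combined, and scored against a set of driving actions, which is the pattern that lets a model reason jointly over what the camera sees and what language-level context says.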
The new model is built on Nvidia’s Cosmos Reason, a reasoning model that thinks through a decision before responding. Nvidia initially released the Cosmos model family in January 2025 and added further models in August.
Technology like Alpamayo-R1 is critical for companies working toward Level 4 autonomous driving, meaning full autonomy within a specific area and under specific conditions, Nvidia said in a blog post.
Nvidia hopes this kind of reasoning model will give self-driving vehicles the “common sense” to handle complex driving decisions the way humans do.
This new model is available on GitHub and Hugging Face.
Along with the new vision model, Nvidia has also uploaded new step-by-step guides, inference resources, and post-training workflows to GitHub — collectively called the Cosmos Cookbook — to help developers better use and train Cosmos models for their specific use cases. The guide covers data processing, synthetic data generation, and model evaluation.
These announcements come as the company pushes full speed toward physical AI as a new market for its advanced GPUs.
Nvidia co-founder and CEO Jensen Huang has repeatedly said that the next wave of AI is physical AI. Bill Dally, Nvidia’s chief scientist, echoed this sentiment in a conversation with TechCrunch over the summer, emphasizing physical AI in robots.
“I think robots will eventually be a big player in the world, and we want to make the brains of basically all robots,” Dally said at the time. “To achieve this, we need to start developing the underlying technologies.”
Posted on December 1, 2025
