Nvidia launches powerful new architecture for its Rubin chip


Today at the Consumer Electronics Show, Nvidia CEO Jensen Huang officially launched the company’s new Rubin computing architecture, which he described as the state-of-the-art in AI hardware. The new architecture is currently in production and is expected to ramp up further in the second half of the year.

“Vera Rubin is designed to address this fundamental challenge we face: the amount of computation needed for artificial intelligence is increasing exponentially,” Huang told the audience. “Today, I can tell you that Vera Rubin is in full production.”

The Rubin architecture, first announced in 2024, is the latest product of Nvidia’s ongoing hardware development cycle, which has helped make Nvidia the world’s most valuable company. Rubin replaces the Blackwell architecture, which in turn replaced the Hopper and Lovelace architectures.

Rubin chips are already slated for use by nearly every major cloud service provider, including through Nvidia’s high-profile partnerships with Anthropic, OpenAI, and Amazon Web Services. Rubin systems will also power HPE’s Blue Lion supercomputer and the upcoming Doudna supercomputer at Lawrence Berkeley National Laboratory.

Named after the astronomer Vera Florence Cooper Rubin, the Rubin architecture consists of six separate chips designed to work in concert. The Rubin GPU sits at the center, but the architecture also addresses growing bottlenecks in storage and interconnect through new improvements to BlueField and NVLink, respectively. It also includes a new Vera CPU, designed for reasoning workloads.

Explaining the benefits of the new storage layer, Dion Harris, senior director of AI infrastructure solutions at Nvidia, pointed to the growing cache-related memory requirements of modern AI systems.

“When you start enabling new types of workflows, like agentic AI or long-running tasks, that puts a lot of pressure and demand on your KV cache,” Harris told reporters on a press call, referring to the memory that AI models use to hold context from earlier tokens. “So we’ve introduced a new layer of storage that connects externally to the compute device, allowing you to scale your storage more efficiently.”
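The KV cache Harris describes is the per-token key/value store a transformer keeps during generation: every token of context adds vectors at every layer, so memory grows linearly with context length. A minimal sketch of that growth (the class, layer count, and dimensions below are illustrative assumptions, not Nvidia’s implementation):

```python
# Minimal sketch of a transformer KV cache: each token of context appends
# one key and one value vector per layer, so memory grows linearly with
# context length -- the pressure long-running, agentic workloads create.

class KVCache:
    def __init__(self, num_layers: int, head_dim: int):
        self.num_layers = num_layers
        self.head_dim = head_dim
        # One (keys, values) sequence pair per layer.
        self.keys = [[] for _ in range(num_layers)]
        self.values = [[] for _ in range(num_layers)]

    def append(self, layer: int, k: list[float], v: list[float]) -> None:
        """Store the key/value vectors computed for the newest token."""
        self.keys[layer].append(k)
        self.values[layer].append(v)

    def size_floats(self) -> int:
        """Total floats held -- grows with every token of context."""
        return sum(len(seq) * self.head_dim
                   for side in (self.keys, self.values) for seq in side)

# A hypothetical 32-layer model with 128-dim heads, after 1,000 tokens:
cache = KVCache(num_layers=32, head_dim=128)
for _ in range(1000):
    for layer in range(32):
        cache.append(layer, [0.0] * 128, [0.0] * 128)

print(cache.size_floats())  # 32 layers * 1000 tokens * 128 dims * 2 = 8,192,000
```

At realistic scale (more layers, multiple heads, 16-bit values, million-token contexts) that linear growth quickly exceeds on-device memory, which is why an externally connected storage tier for the cache is attractive.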


As expected, the new architecture also marks a significant step up in speed and power efficiency. According to Nvidia’s own benchmarks, Rubin trains models three and a half times faster than the previous Blackwell architecture and runs inference five times faster, reaching 50 petaflops. The new platform also delivers eight times more inference compute per watt.

Rubin’s new capabilities arrive amid intense competition to build AI infrastructure, which has seen AI labs and cloud providers scrambling to acquire Nvidia chips as well as the facilities needed to run them. On an October 2025 earnings call, Huang estimated that between $3 trillion and $4 trillion would be spent on AI infrastructure over the next five years.


🕒 **Posted on**: January 5, 2026
