
Nvidia unveils Rubin chip to power next generation of AI systems


Jan 7, 2026, 13:00

Nvidia (NVDA) on Monday introduced its new Rubin computing architecture at CES. Chief Executive Jensen Huang said the platform represents the company’s most advanced AI hardware to date.


Huang said the Rubin architecture is already in full production. He added that manufacturing and deployment are expected to accelerate in the second half of the year.


The architecture was first announced in 2024. It marks the latest step in Nvidia’s rapid hardware development cycle, which has driven the company’s growth in AI computing.


Rubin is set to replace Nvidia’s Blackwell architecture, which in turn succeeded the Hopper and Ada Lovelace platforms.


The new chips are slated for deployment by most major cloud providers, and Nvidia has confirmed partnerships with Anthropic, OpenAI, and Amazon Web Services.


Rubin-based systems will also power large research installations. These include HPE’s Blue Lion supercomputer and the planned Doudna system at Lawrence Berkeley National Laboratory.


The architecture is named after astronomer Vera Florence Cooper Rubin. It consists of six distinct chips designed to operate together as a single system.


At the center of the platform is the Rubin GPU. The architecture also introduces upgrades to BlueField networking and NVLink interconnect technology to address data movement bottlenecks.


A new Vera CPU is included in the platform. Nvidia said the processor is optimized for agentic reasoning workloads.


Company executives said modern AI systems are placing growing pressure on memory and storage. Nvidia introduced a new external storage tier to support larger and more flexible memory pools.


Dion Harris, senior director of AI infrastructure solutions, said cache-related memory demands are increasing as AI workflows grow more complex. He said the new storage design allows systems to scale more efficiently.


Nvidia reported major gains in performance and energy efficiency. Internal tests show Rubin delivers three and a half times the training speed of Blackwell.


Inference performance is measured at up to five times that of the previous generation, with peak performance reaching 50 petaflops.


The company also said the platform provides eight times more inference compute per watt. Nvidia positioned the launch amid intense global competition to build AI infrastructure.


Huang has previously estimated that global spending on AI infrastructure could reach $3 trillion to $4 trillion over the next five years.

