
Analysis | Opinion

What's next for Nvidia?

AI

Chris Wood

Jan 13, 2026

18:30

- By Chris Wood, Chief Investment Strategist at RiskHedge


We are going to stop calling Nvidia a “chip company” because it’s evolved into so much more.


Today, Nvidia is an “AI systems company.”


Its premier AI system is the GB300 NVL72. This is a game-changing product. I think it—along with the GB200 NVL72, which launched a few months earlier—is arguably the most significant product launch in Nvidia’s history.


Most investors also appear to be overlooking this product, because it marks the shift from 8-GPU “server scale” systems to 72-GPU “rack scale” systems, a change with far-reaching implications across the AI infrastructure ecosystem.


A server is basically a single metal computer box—like a beefed-up PC with no display screen or keyboard—packed with components like GPUs, CPUs, and memory chips. A rack is a six- to seven-foot-tall metal cabinet that holds multiple servers stacked vertically like pizza boxes.


“Server scale” refers to building systems by focusing on what’s inside one server box. For example, Nvidia’s older setups, like the DGX H100 AI computer, house eight GPUs linked together to work as one large GPU. If you want more power, you connect multiple servers together with standard external networking.


“Rack scale” means building the entire rack (the whole tall shelf) as one giant, unified AI system. Nvidia’s new GB300 NVL72 packs 72 Blackwell Ultra B300 GPUs (plus 36 Grace CPUs, high-bandwidth memory, interconnects, etc.) into a single rack, linked with lightning-fast networking to act as one giant GPU.


The key difference is that server scale is modular and slower (relatively speaking), while rack scale is integrated and faster.


This shift to rack-scale systems is game-changing.

For Nvidia, this moment is a major turning point: it’s evolving from selling what amount to AI “kits” to selling turnkey, integrated AI machines. This lets Nvidia:


  • Control more of the design (like networking, cooling, and software).


  • Charge premium prices (a single GB200 NVL72 rack sells for about $3 million and the newer GB300 system sells for about $3.5 million to $4.5 million depending on the configuration).


  • Push the boundaries of performance when training and running massive AI models, with 4X faster training, 30X faster inference, and 50X higher AI factory output per megawatt.


For big tech customers like Microsoft and Alphabet—as well as “neocloud” providers like CoreWeave (CRWV) and AI “startups” like OpenAI building data centers—it’s a big win for speed and efficiency.


It’s also a much simpler setup because you don’t have to fiddle with tons of cables between servers. And it lets you operate a smaller footprint or get more out of your current footprint because one rack can do what used to take many racks.


And when it comes to Nvidia’s hardware capabilities, you haven’t seen anything yet.


Nvidia is shipping GB200 NVL72 systems at scale and just started shipping GB300 NVL72 systems. Customers with the cash and need for industrial-scale AI are gobbling them up as fast as they can… because if they don’t, they’ll get left in the dust.


In the company’s fiscal Q3 ‘26 (July 28, 2025, through October 26, 2025) “data center” revenue—which closely approximates AI revenue—grew 66% year over year to $51.2 billion. That’s an annual run rate of $204.8 billion. But as big as that number is, it’s just the beginning.
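The run-rate arithmetic above is simple to verify: take the quarter’s figure and annualize it. A minimal sketch, using the revenue figure and growth rate cited in the article (all other variable names are mine):

```python
# Back-of-the-envelope check of the figures cited above.
# All values in billions of USD, taken from the article.
q3_data_center_rev = 51.2        # fiscal Q3 '26 data center revenue
yoy_growth = 0.66                # 66% year-over-year growth

# Annualizing one quarter: multiply by 4.
annual_run_rate = q3_data_center_rev * 4

# Implied year-ago quarter, given 66% growth.
year_ago_quarter = q3_data_center_rev / (1 + yoy_growth)

print(f"Annual run rate: ${annual_run_rate:.1f}B")      # → $204.8B
print(f"Implied year-ago quarter: ${year_ago_quarter:.1f}B")
```

A run rate simply extrapolates one quarter forward; it assumes no growth, which is why the article treats $204.8 billion as a floor rather than a forecast.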


Nvidia CEO Jensen Huang recently said he’s chasing a $3 trillion to $4 trillion AI infrastructure opportunity over the next five years… and that he has “visibility” into $500 billion in data center revenue for the six quarters through the end of calendar year 2026.


Huang wasn’t saying Nvidia would capture that entire $500 billion chunk over the next six quarters, but my projections aren’t far off that mark.


I like to lean to the conservative side when making financial forecasts to bake in some cushion. Even so, my model has Nvidia generating about $370 billion in data center revenue over the next six quarters.


And for the three calendar years of 2026 through 2028, I have Nvidia producing data center revenue of about $270 billion, $400 billion, and $600 billion, respectively.


Nvidia currently trades at a price-to-sales multiple of about 30. That sounds extremely rich, but it isn’t when you consider how fast the company is growing.


If I instead apply a more conservative price-to-sales multiple of 17 (to reflect the fact that growth will slow over time) to my $600 billion data center revenue forecast for calendar year 2028, we get $10.2 trillion.
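The valuation math reduces to one multiplication. A quick sketch using the article’s assumed multiple and revenue forecast (variable names are mine):

```python
# Sketch of the article's valuation math.
# Both inputs are the author's assumptions, not reported figures.
ps_multiple = 17          # assumed forward price-to-sales multiple
dc_revenue_2028 = 600     # forecast 2028 data center revenue, $B

# Market cap ≈ price-to-sales multiple × revenue.
implied_market_cap = ps_multiple * dc_revenue_2028   # in $B

print(f"Implied market cap: ${implied_market_cap / 1000:.1f}T")  # → $10.2T
```

Note the sensitivity: every one-point change in the multiple moves the implied market cap by $600 billion, which is why the choice of 17 versus 30 matters so much.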


Nvidia could be a $10 trillion market cap company by the end of 2028.


That number is so big, it’s hard to wrap your head around. But I think it’s completely reasonable.


The stock won’t go up in a straight line from here, and we could certainly see a correction of around 40% (or more) within the next few years. But that will only be a buying opportunity if things stay on track from an operational and demand perspective.

Chris Wood is Chief Investment Strategist at RiskHedge. To get more ideas like this from him, check out his substack Grow or Die.
