How Nvidia becomes the first $10 trillion company

Analysis | Opinion

Chris Wood

Feb 21, 2026, 16:00

Summary


  • AI spending boom: Big Tech is pouring $650B+ into AI by 2026, and Nvidia captures a huge share.


  • From chips to AI machines: Nvidia now sells full rack-scale AI systems, boosting pricing power and performance.


  • Path to $10T: If revenue hits ~$550B by 2028 and valuation holds, Nvidia could exceed a $10 trillion market cap.


- By Chris Wood, Chief Investment Strategist at RiskHedge


With Nvidia (NVDA) due to report earnings next week, my partner in crime here at Grow or Die, Chris Reilly, thought it would be a good idea for me to provide my thoughts on where the company is headed.


Put simply: I think Nvidia will become the first $10 trillion market cap company in history, no matter what the company says on its earnings call next week. That’s more than double the current market cap of $4.5 trillion. And I think it will get there within three years. That’s a lot sooner than most investors, even staunch Nvidia bulls, believe is possible.


I urge you to hear me out. I’m following the lead of compounding fundamentals.


Today, I’ll give you my simple three-step mental model to how I reached my conclusion.


First, I should note that my team and I first recommended Nvidia in one of my former advisories back in April 2013 at a split-adjusted $0.29 per share. And we’ve been long the stock in one of my current advisories since September 2020 at a split-adjusted $13.02. So I’ve been following the company and accurately forecasting its progress and performance for a long time.


Step 1: Follow the money


Big tech’s capital expenditures (or “capex”) are driving the whole AI infrastructure buildout boom. So this is the money we want to follow.


The big four US-based hyperscalers—Amazon (AMZN), Alphabet (GOOGL), Microsoft (MSFT), and Meta Platforms (META)—are spending hundreds of billions of dollars a year building AI data centers.


In 2024, for example, these four companies spent about $250 billion on capex, most of it directed to AI infrastructure.


In 2025, that figure surged to about $400 billion.


For 2026, these same four companies have guided to about $650 billion in capex. Add in Oracle (ORCL) and CoreWeave (CRWV) and we’re talking well over $700 billion.


This isn’t a speculative projection from optimistic analysts. It’s the committed capital plans of the world’s richest companies.


And Nvidia is their #1 supplier, capturing about half of total AI capex today.


Meanwhile, there’s a reason this AI investment will continue unabated for the foreseeable future: These companies simply can’t afford to fall behind in the AI arms race.


When you believe—as they do—that the ultimate prize is a “digital-god-scale” platform worth trillions of dollars over decades, you’re highly motivated to keep investing as much and as fast as possible.


Step 2: Follow the business


It’s important to stop thinking about Nvidia as a “chip company” because it’s evolved into so much more.


Today, Nvidia is an “AI systems company.”


Its premier AI system is the GB300 NVL72. This is a game-changing product. It—along with the GB200 NVL72—is the most significant product launch in Nvidia’s history, because it marks the shift from 8-GPU “server scale” systems to 72-GPU “rack scale” systems.


A server is basically a single metal computer box—like a beefed-up PC with no display screen or keyboard—packed with components like GPUs, CPUs, and memory chips. A rack is a six- to seven-foot-tall metal cabinet that holds multiple servers stacked vertically like pizza boxes.


“Server scale” refers to building systems by focusing on what’s inside one server box. For example, Nvidia’s older setups—like the DGX H100 AI computer—house eight GPUs linked together to work as one large GPU. If you want more power, you connect multiple servers together with standard external networking.


“Rack scale” means building the entire rack (the whole tall shelf) as one giant, unified AI system. Nvidia’s new GB300 NVL72 packs 72 Blackwell Ultra B300 GPUs (plus 36 Grace CPUs, high-bandwidth memory, interconnects, etc.) into a single rack, linked with lightning-fast networking to act as one giant GPU.


The key difference is that server scale is modular and slower (relatively speaking), while rack scale is integrated and faster.


This shift to rack-scale systems is game-changing for Nvidia because it’s evolving from selling what are more like AI “kits” to turnkey integrated AI machines. This lets Nvidia:


  • Control more of the design (like networking, cooling, and software).


  • Charge premium prices (a single GB200 NVL72 rack sells for about $3 million and the newer GB300 system sells for about $3.5 million to $4.5 million depending on the configuration).


  • Push the boundaries on performance when it comes to training and running massive AI models with about 4X faster training, 30X faster inference, and 50X higher AI factory output per megawatt.


For big tech customers like Microsoft and Alphabet—as well as “neocloud” providers like CoreWeave and AI “startups” like OpenAI building data centers—it’s a big win for speed and efficiency.


It’s also a much simpler setup because you don’t have to fiddle with tons of cables between servers. And it lets you operate a much smaller footprint or get much more out of your current footprint because one rack can do what used to take many racks.


Nvidia started shipping GB300 NVL72 systems at scale late last year. And customers with the cash and need for industrial-scale AI are gobbling them up as fast as they can; because if they don’t, they’ll get left in the dust.


Towards the end of this year, Nvidia will release its next-generation technology, Vera Rubin. These AI systems will have twice as many GPUs as Blackwell Ultra and offer more than 3X the performance.


Then comes Rubin Ultra in late 2027, which will ramp the number of GPUs to nearly 600 and offer about 13X the performance of Vera Rubin.


And then in 2028 comes the Feynman architecture. Details here are sparse, but it will be at least a 10X performance jump from Rubin Ultra.


These timelines could (and probably will) get pushed out a bit. But we’re still talking about Nvidia releasing a new generation of AI system architecture about every 12 to 18 months.


That’s insane.


During that whole time the total cost of compute is expected to plummet. So you’re getting exponentially more performance at a lower and lower cost.


What we’ll be able to do with that much computing power is difficult to comprehend.


Put simply, you ain’t seen nothin’ yet when it comes to Nvidia’s hardware capabilities.


Yes, the custom silicon being developed by big tech companies like Google and Amazon will have its place for application-specific AI workloads.


And yes, AMD’s first rack-scale system, “Helios,” which is optimized for AI inference and due to start volume shipping later this year, will attract a lot of attention and sales.


But Nvidia will still sell all the AI systems it can possibly make for many years to come.


Step 3: Follow the (simple) math


Nvidia reported $51.2 billion in “data center” revenue—which closely approximates AI revenue—for its most recent quarter. That reflected year-over-year growth of 66.8%. And it’s over $200 billion a year annualized.


(Again, that’s just from AI. Total revenue for the quarter rang in at $57 billion.)


But as big as those numbers are, it’s just the beginning.


A few months ago Nvidia CEO Jensen Huang said he’s chasing a $3 trillion to $4 trillion AI infrastructure opportunity over the next five years… and that he has “visibility” into $500 billion in data center revenue for the six quarters through the end of calendar year 2026.


Huang wasn’t saying Nvidia would capture that entire $500 billion chunk over those six quarters, but my projections aren’t far off that mark.


I like to lean to the conservative side when making financial forecasts to bake in some cushion. Even so, for the three calendar years of 2026 through 2028, my model has Nvidia generating about $270 billion, $390 billion, and $550 billion, respectively.


Nvidia currently trades at a price-to-sales multiple of about 24. That sounds rich, but it’s not, considering the company’s rapid growth. And it’s right around the average for the past few years.


If I take a slightly more conservative multiple of 19 (to reflect the fact that the growth rate will slow a bit over time) and multiply it by my $550 billion data center revenue forecast for calendar year 2028, we get a value of $10.45 trillion.


In other words, this simple math gets us to a $10 trillion plus market cap by the end of 2028.
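For readers who want to check the arithmetic, the back-of-the-envelope math above can be sketched in a few lines of Python. The revenue forecasts and the 19x multiple are the author’s assumptions from this article, not market data:

```python
# Illustrative market-cap math: implied value = revenue forecast x price-to-sales
# multiple. Figures below are the author's forecasts, not reported results.

def implied_market_cap(revenue_usd_bn: float, price_to_sales: float) -> float:
    """Market cap (in $ trillions) implied by a revenue forecast in $ billions
    and a price-to-sales multiple."""
    return revenue_usd_bn * price_to_sales / 1_000  # $bn -> $tn

# Forecast data center revenue for calendar years 2026-2028 ($bn)
forecasts = {2026: 270, 2027: 390, 2028: 550}
ps_multiple = 19  # below the current ~24x, to reflect slowing growth

for year, rev in forecasts.items():
    print(f"{year}: ${implied_market_cap(rev, ps_multiple):.2f} trillion")
# The 2028 case works out to 550 * 19 / 1,000 = $10.45 trillion
```

The same one-liner works for any scenario you want to stress-test, e.g. a lower multiple or a revenue miss.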


That number is so big, it’s hard to wrap your head around. But I think it’s completely reasonable.


The stock won’t go up in a straight line from here, and we could certainly see a correction of around 40% (or more) within the next few years. But that will only be a buying opportunity if things stay on track from an operational and demand perspective.


One final note: Lead tech analyst at I/O Fund, Beth Kindig, who I think is the best Nvidia analyst out there, makes a compelling case for Nvidia reaching a $20 trillion market cap by 2030. If you want to go really deep on Nvidia, I highly recommend following her work.


_____________________________


Chris Wood is Chief Investment Strategist at RiskHedge. To get more ideas like this from him, check out his substack Grow or Die.


Nvidia (NVDA) has a Disruption Score of 4. Click here to learn how we calculate the Disruption Score.  


Nvidia is also part of the Disruption Aristocrats, our quarterly list of the world’s top disruptive stocks.
