
Nvidia can sell H200 chips into China again
Disruption snapshot
Nvidia can sell H200 chips into China again. It also plans a China-ready inference chip. That reopens a market that once made up 13% of revenue.
Winners: Nvidia and Chinese cloud giants like Tencent, Alibaba, and ByteDance. Losers: AMD and Chinese chipmakers that hoped export limits would speed local replacement.
Watch China inference deployments, not just training chip sales. A key signal is whether the new China-ready chip launches by May and scales across hyperscalers by Q3.
Nvidia (NVDA) has a Disruption Score of 4.
Nvidia's H200 AI chips are back in focus after the company (NVDA) pulled off something most investors thought was off the table.
After months of back and forth between Washington and Beijing, it now has a path to sell high-end AI chips into China again.
That includes the powerful H200, plus a new China-ready inference chip already in the pipeline. Orders are coming in. Production is ramping. And a market that once made up 13% of Nvidia’s revenue is back in play.
Nvidia now has approval from both sides to restart business with major Chinese tech players like ByteDance, Tencent, Alibaba, and DeepSeek. The U.S. signed off on limited export licenses earlier this year. Beijing has now cleared domestic sales. The logjam that froze one of the most important AI markets is finally breaking open.
At the same time, Nvidia is getting ready to launch a China-ready inference chip as soon as May. This chip is built to compete in inference, where demand is exploding as AI models move from training to real-world use.
Put those pieces together and you get something bigger than a simple sales rebound.
This is Nvidia tightening its grip on the global AI supply chain right when competition is heating up.
The disruption behind the news: China chose Nvidia over domestic substitution, at least for now.
This is the collapse of the “China will go it alone” narrative in high-end AI compute.
Domestic chips from Huawei and others still lag in performance, ecosystem, and software compatibility.
Nvidia’s CUDA stack remains the default operating system for AI.
That lock-in just got reinforced, even as China pushes broader ambitions through its new five-year plan centered on AI and quantum.
I think this resets the competitive clock by at least 18 to 24 months. That’s how long it will take Chinese alternatives to catch up again, if they even can. In AI, time is everything. Missing one model generation means losing entire markets.
The inference chip may matter even more. Training chips get headlines, but inference is where the money is. It's the phase where AI actually gets used. Chatbots. Copilots. Autonomous systems. If Nvidia controls inference in China, it controls recurring demand, not just one-time infrastructure buys.
And Nvidia is threading the regulatory needle with precision. The H200 gets limited approvals. The new inference chip is designed to be compliant from day one. No waiting. No redesign cycles. That means faster deployment and fewer political risks for Chinese buyers.
CEO Jensen Huang said production is ramping immediately. That suggests Nvidia has been stockpiling or pre-planning capacity during the freeze. If shipments scale fast, we could see billions in deferred revenue snap back within two quarters.
A training GPU might run at about 30% to 40% utilization over its lifecycle. That’s because training is bursty and project-based. Inference chips can run at 60% to 80% or more once deployed in production. They handle live workloads like chatbots and apps that run all the time.
If a Chinese hyperscaler spends $1B on hardware, shifting even half of that toward inference can effectively double the lifetime revenue Nvidia captures per chip. That comes from higher usage and software lock-in through CUDA.
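The utilization arithmetic above can be sketched in a few lines. All figures here are illustrative assumptions drawn from the ranges quoted in this article, not Nvidia disclosures; the utilization uplift alone gets you about 1.5x, with the rest of the "double" coming from the recurring software revenue and CUDA lock-in mentioned above.

```python
# Illustrative sketch of the utilization arithmetic in this article.
# Numbers are assumptions taken from the article's quoted ranges,
# not Nvidia disclosures.

SPEND = 1_000_000_000   # hypothetical hyperscaler hardware budget, USD
TRAIN_UTIL = 0.35       # midpoint of the 30-40% training utilization range
INFER_UTIL = 0.70       # midpoint of the 60-80% inference utilization range

def lifetime_usage(spend: float, inference_share: float) -> float:
    """Utilization-weighted usage for a given inference share of spend."""
    training = spend * (1 - inference_share) * TRAIN_UTIL
    inference = spend * inference_share * INFER_UTIL
    return training + inference

baseline = lifetime_usage(SPEND, 0.0)  # all-training baseline
shifted = lifetime_usage(SPEND, 0.5)   # half of spend moves to inference

print(f"usage uplift: {shifted / baseline:.2f}x")  # -> usage uplift: 1.50x
# The remaining gap to "double" is the CUDA software lock-in the
# article attributes the rest of the per-chip revenue capture to.
```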
For investors, this matters because it changes how you should think about Nvidia. The company isn't just selling hardware; it's building something closer to a usage-driven platform, even though its financial model still looks like a hardware business.
What to watch next
Watch inference revenue, not training headlines.
Watch how fast Chinese hyperscalers deploy the new inference chip at scale.
Watch how U.S. regulators react to a non-downgraded chip entering China.
It really comes down to timing.
If the new chip lands in May and scales by Q3, Nvidia could lock in another multiyear run before anyone else has time to react.
If even 30% of China’s AI spending shifts toward inference-heavy workloads, Nvidia benefits. Those chips run more consistently, which means stronger margins.
For businesses, that likely means AI services in China get cheaper and faster sooner than people expect.
For competitors like AMD and domestic Chinese chipmakers, it’s a setback. Nvidia just pulled further ahead.
And for regulators, there’s an uncomfortable takeaway. Partial restrictions don’t split the market. They end up strengthening the company that can design around them. Nvidia H200 chips are part of that story, showing how product strategy can outmaneuver geopolitics and widen the gap even further.
Nvidia (NVDA) has a Disruption Score of 4. Click here to learn how we calculate the Disruption Score.
Nvidia is also part of the Disruption Aristocrats, our quarterly list of the world’s top disruptive stocks.