
Baidu pushes new AI chips to cut reliance on Nvidia
AI
Leon Wilfan
Nov 13, 2025
11:00
On Thursday, Baidu introduced two in-house processors in Beijing. We see them as a cheaper path to high performance for Chinese builders of AI systems. The move shows a deeper push to own more of the compute stack and cut reliance on foreign chips. It also puts Baidu on a clearer roadmap toward larger local production of AI hardware.
Baidu kicks off a bold new direction in AI chips
Baidu’s new M100 and M300 chips are its most direct push yet into the local chip market, which still struggles to access advanced processors. The M100 focuses on inference and is set for early 2026. The M300 covers both training and inference and targets early 2027. Both chips aim to give developers a steadier supply and lower costs. GPU shortages remain common, so predictable access matters.
Baidu also unveiled two supernode systems. These systems link hundreds of processors to boost total throughput, the amount of work done at once. The Tianchi 256 system integrates 256 P800 units and arrives in the first half of next year. A 512-chip version follows in the second half. These systems compete with domestic offerings from Huawei, which is also pushing large-scale compute hardware. Baidu also rolled out a new version of its Ernie model to improve text, image, and video tasks.
Tech giants increasingly want to produce AI chips in-house
We see this moment as another marker in the broader semiconductor cycle. Compute is moving closer to the companies that need it most. Hardware control matters because model builders want reliable access without premium pricing or delays. These new processors help Baidu pull more of its stack in-house. We expect this trend to speed up across the region.
The focus on full-system design stands out. Linking hundreds of processors through supernodes points to a future where large-scale compute becomes modular and easier to deploy. That shift could change how enterprises handle training and inference. It may make advanced workloads more accessible. For markets, this means rising competition in accelerator-class hardware. It also opens more paths for model developers to scale without depending on a single vendor.
Can Nvidia maintain its global dominance in AI chips?
For investors, the question is which U.S.-listed companies could gain—or lose—as this wave of hardware adoption builds. Nvidia (NVDA) remains the main winner of global demand for accelerated compute. Local alternatives, though, may pressure its China revenue. We do not see an immediate threat to Nvidia’s global growth. Still, we expect continued softness in its China shipments as domestic systems mature.
Baidu (BIDU) stands to gain from deeper vertical integration. This shift can raise margins and help lock in customers who want stable compute. Companies tied to cloud infrastructure—such as Amazon (AMZN) and Microsoft (MSFT)—could face long-term competition in regions where localized stacks take hold. On the server side, Dell (DELL) and Super Micro (SMCI) may feel pressure if more buyers shift to region-specific systems built around new processors.
In our view this is early-stage, but directionally important. Each hardware cycle creates room for new winners and reduces dependence on older suppliers. We expect Baidu’s push to nudge the market toward more diverse compute ecosystems. The companies that adapt fastest to new AI hardware trends will hold the advantage.
Nvidia (NVDA) has a Disruption Score of 5 and is part of the Disruption Aristocrats.
Baidu (BIDU) has a Disruption Score of 2. Click here to learn more about our scoring system.