Google and Meta vs Nvidia

Topic: AI
Ticker: GOOGL, META, NVDA
Author: Leon Wilfan

Dec 18, 2025

Google teams with Meta to challenge Nvidia with refined TPUs

Google (GOOGL) is taking another swing at Nvidia (NVDA).


The company is developing an internal initiative—known as TorchTPU—designed to make its Tensor Processing Units run PyTorch smoothly and natively.


The goal is straightforward: lower the software friction that has kept most AI developers locked into Nvidia’s ecosystem, even when they’re open to using alternative chips.


That friction has become increasingly expensive.


TPUs have grown into a meaningful contributor to Google Cloud revenue as demand for AI compute has surged.


But hardware alone hasn’t been the bottleneck. The real constraint has been software—specifically, the fact that most developers build on PyTorch, not on Google’s preferred internal stack.


Until now, Google has been asking customers to adapt to how Google works.


TorchTPU flips that assumption. Instead of pushing developers toward JAX and XLA, Google is trying to meet them where they already are.


The initiative focuses on making TPUs fully compatible with PyTorch and easier to deploy without custom engineering. Google is also considering open-sourcing parts of the effort to accelerate adoption.


That level of commitment marks a shift from earlier, half-hearted attempts to support PyTorch on TPUs. This time, the project has real resources behind it, driven by rising demand from customers who want alternatives to Nvidia but don’t want to rewrite their software to get there.
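
For a concrete sense of that friction, here is a minimal sketch of how PyTorch reaches a TPU today, through the existing torch_xla bridge. The model and shapes are illustrative, and TorchTPU's actual interface hasn't been published:

```python
import torch
import torch_xla.core.xla_model as xm  # the current PyTorch-to-TPU bridge

# Resolve the TPU device explicitly, extra plumbing that a truly
# native integration would make unnecessary.
device = xm.xla_device()

model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)

loss = model(x).sum()
loss.backward()

# torch_xla builds the computation lazily; this call flushes the
# accumulated XLA graph to the TPU for execution.
xm.mark_step()
```

None of that ceremony exists in a stock CUDA script, and closing that gap is precisely what TorchTPU is for.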


To understand why this matters, you have to understand Nvidia’s real moat.


Nvidia’s dominance isn’t just about faster chips.


It’s about CUDA, the software layer that has been optimized for years alongside frameworks like PyTorch.


For many teams, PyTorch and Nvidia hardware are effectively inseparable. Models, tooling, workflows, and institutional knowledge have all grown up together.


Switching chips is easy on a slide deck. Switching software stacks is not.
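
A rough illustration of why, using standard PyTorch calls: every line below is ordinary in production code today, and every line quietly assumes Nvidia hardware.

```python
import torch

# The default device string baked into countless scripts, configs,
# and checkpoints assumes an Nvidia GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Mixed-precision training via the CUDA-specific gradient scaler.
scaler = torch.cuda.amp.GradScaler()

# A tuning knob for cuDNN, Nvidia's own kernel library.
torch.backends.cudnn.benchmark = True

model = torch.nn.Linear(128, 10).to(device)
```

Multiply those assumptions across years of scripts, CI pipelines, and institutional habit, and "just try a different chip" stops being a one-line change.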


Google’s historical approach widened that gap. Internally, the company built TPUs around JAX, with performance handled through XLA. That worked well for Google’s own research teams. It worked far less well for external developers who wanted their PyTorch models to run without translation layers and performance penalties.
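
For contrast, here is a toy sketch of the idiom Google built around: pure functions transformed by JAX and compiled through XLA. Nothing below is exotic, but none of it is PyTorch, and porting a real model means rewriting it, not retargeting it.

```python
import jax
import jax.numpy as jnp

def loss_fn(w, x, y):
    # A toy linear model expressed as a pure function, the JAX way.
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

# jit hands the function to XLA, which compiles it for the TPU;
# grad derives the backward pass automatically.
grad_fn = jax.jit(jax.grad(loss_fn))

w = jnp.zeros((128, 10))
x = jnp.ones((32, 128))
y = jnp.zeros((32, 10))
gradients = grad_fn(w, x, y)
```

Elegant if you already live in that world. A rewrite if you don't.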


The result was a paradox. Google had competitive hardware, but customers still defaulted to Nvidia because that’s where their code lived.


TorchTPU is an attempt to resolve that mismatch.


The effort has also drawn Meta into the picture.


Meta (META), which created PyTorch and remains one of its biggest backers, has been working with Google on the project, and discussions have included expanded access to TPUs. That collaboration signals something important: even companies deeply invested in Nvidia are looking for credible alternatives, as long as the software barrier comes down.


Google’s timing isn’t accidental. In 2022, TPU oversight shifted to Google Cloud, opening the chips up to broader external use just as AI demand exploded.


More recently, Google began selling TPUs directly into customer data centers, not just through its cloud. Earlier this month, the company named Amin Vahdat as head of AI infrastructure, reporting directly to Sundar Pichai.


The organizational pieces are lining up behind the same objective.


What TorchTPU ultimately represents is a recognition that AI infrastructure competition has moved up the stack.


Faster silicon still matters, but developer convenience matters more.


If PyTorch runs cleanly, predictably, and efficiently on TPUs, Nvidia’s grip weakens at the margin.


This isn’t about dethroning Nvidia overnight. It’s about removing the software excuse that has kept developers from trying anything else.
