
News
SK Hynix promises more AI memory chips
AI
Leon Wilfan
Feb 23, 2026
17:30
Disruption snapshot
High bandwidth memory has become the key bottleneck in AI systems. SK Hynix is expanding production after selling out years ahead, tightening control over AI buildouts.
Winners: Dominant HBM suppliers with locked-in design wins. Losers: Data center operators and chip designers that can’t easily switch memory partners.
Watch SK Hynix capital spending. A major capex increase would confirm management expects multi-year AI demand, not a short-lived surge.
Memory continues to be one of AI's biggest bottlenecks.
SK Hynix sits at the center of it as one of the prime suppliers of memory.
SK Group Chairman Chey Tae-won said in Washington that SK Hynix will ramp artificial intelligence memory production to meet surging global demand.
High bandwidth memory, or HBM, is the centerpiece of AI right now. He called it a monster chip. He is right.
HBM is the high-speed memory that sits next to AI accelerators and keeps them fed with data. Memory is one of the four AI bottlenecks. Without it, expensive AI chips stall. With it, data centers hum. SK Hynix has already sold out its planned 2026 HBM output. So has Micron. Analysts now peg SK Hynix's 2026 operating profit at $70 billion on average, up from $50 billion just months ago, with some estimates topping $100 billion. The stock has risen more than fourfold in a year.
US tech giants like Microsoft and Meta plan to pour roughly $650 billion into AI infrastructure this year. That money is flowing straight into the hands of three companies that dominate advanced memory. SK Hynix sits at the top of that stack.
The disruption behind the news: This is about who controls the choke point of the AI economy.
For years, Nvidia owned the AI narrative.
Now memory is asserting leverage.
AI accelerators cannot function at scale without HBM.
Every new data center rack, every new model training run, every generative AI product launch depends on it. When supply is sold out two years ahead, pricing power shifts hard to the supplier.
HBM is not commodity DRAM. It is complex, stacked, and tightly integrated with leading-edge processors. That creates switching costs. Hyperscalers cannot easily redesign systems around a different memory architecture. Once SK Hynix wins a design slot next to a specific accelerator, it tends to stick for multiple product cycles. That locks in revenue visibility most semiconductor companies only dream about.
The numbers matter. If even a fraction of that projected $650 billion AI spend flows into memory subsystems, we are talking tens of billions in annual revenue tied directly to AI deployment curves. With operating profit estimates hitting $70 billion and potentially $100 billion in 2026, SK Hynix is morphing from a cyclical chipmaker into an AI infrastructure toll collector.
Chey also floated building power plants near data centers. That is not random. Energy is the next bottleneck. AI training clusters consume enormous amounts of electricity. That is also why Japan is investing $36 billion in U.S. energy projects. Memory, compute, cooling, and power are converging into one integrated industrial stack. Companies that secure energy and component supply together will win.
What to watch next
First, watch capital spending in 2026.
If SK Hynix materially increases capex, it signals confidence that AI demand is not a bubble but a multi-year buildout.
Second, watch pricing trends for HBM.
If prices hold or rise despite new capacity, the supply constraint is structural, not temporary.
Third, track energy strategy.
If SK Group actually builds power assets near AI hubs, it signals vertical integration that competitors may struggle to match.
Over the next 6 to 24 months, the fight will not be about who has the smartest model. It will be about who can secure memory and power at scale. That determines who ships products and who waits in line.
The AI revolution will not be limited by imagination but by hardware, and SK Hynix is positioning itself to own that limit.