Why we own SK Hynix and why NVIDIA’s ecosystem risk matters
For the last few years the market narrative around AI hardware has revolved around one company: NVIDIA. It has executed extraordinarily well, built a software advantage that is difficult to replicate, and continues to post numbers that defy historical comparisons. Yet even while acknowledging NVIDIA’s excellence, we have always debated a key long term risk. The hyperscalers are now building their own accelerators, and that changes the balance of power in AI compute. The largest buyers of AI hardware have both the incentive and the resources to reduce their dependency on a single vendor with premium pricing, and that shift is now clearly underway.
Recent reporting on Google’s next generation TPUs and custom ASIC programs is an early sign that this dynamic is beginning to play out. Google is pushing hard to match, and in some cases surpass, NVIDIA’s performance for specific workloads.
One useful way to think about the emerging competition between GPUs and custom ASICs is through the analogy of a decathlete. NVIDIA’s GPUs are like world class decathletes: exceptionally versatile, capable of performing at a high level across a wide range of events and unmatched when the workload is broad and constantly shifting. The hyperscalers are now building the equivalent of dedicated long jumpers or high jumpers. TPUs and custom ASICs will not attempt every event, but in the specific disciplines they are built for they can outperform the decathlete with greater efficiency and lower cost. This is why the largest buyers of compute are increasingly willing to develop their own silicon. They do not need a chip that wins every event, only one that wins the events that matter most to their internal workloads.
Amazon and Microsoft are building their own silicon teams. Meta is investing aggressively in internal AI chips. None of this is an indictment of NVIDIA in the short term and we are not calling the top of the cycle. However, it does highlight why our portfolio construction has always included a way to participate in the growth of AI compute without concentrating risk behind one architecture.
This is where SK Hynix fits in. When we bought SK Hynix, the thesis was simple and remains intact today. Every successive generation of AI accelerator, regardless of who designs it, consumes greater amounts of high bandwidth memory (HBM). HBM has become the enabler of AI performance. If model sizes keep growing and compute requirements keep increasing, the demand for HBM scales with them. It is structurally tied to the direction of the industry rather than the success of any single chip design.
HBM demand is the common denominator
One of the most important insights in modern AI hardware is that memory bandwidth has become as critical as compute throughput. Training and inference are limited by how fast data can be fed into the GPU or accelerator. This is why HBM has moved from a niche product to the centre of the AI supply chain. Each new generation of accelerator requires:
- higher bandwidth per package
- larger HBM stacks
- more advanced process technology in both DRAM and packaging
- tighter thermal and power design integration
This is not about incremental change. HBM3, HBM3E and the emerging HBM4 roadmap show step changes in both density and bandwidth. If NVIDIA continues to win, HBM consumption rises. If the hyperscalers build their own ASICs, HBM consumption still rises. Even if the industry fragments into multiple specialised architectures, the fundamental requirement for high bandwidth memory remains.
SK Hynix has emerged as the leader in HBM, with strong share and a credible path to maintain that position. The company has proven its ability to execute on advanced packaging, through-silicon via (TSV) stacking and yield improvements. These capabilities take years to build. They cannot be replicated quickly by competitors. As a result, SK Hynix sits in a position where it participates in the structural growth of AI compute but does not carry the architectural risk of betting on any single chip vendor.
Valuation discipline matters
A second part of our thesis is valuation. At the time of purchase, SK Hynix traded at roughly five times enterprise value to EBIT, while NVIDIA traded at around thirty times. Both companies are tied to AI compute, but one is priced for generational perfection while the other is priced like a cyclical memory manufacturer (our thesis is that HBM is not commoditised in the way NAND and standard DRAM are). In our view, that disconnect created an opportunity to gain exposure to the most economically scarce part of the AI bill of materials with a far lower embedded risk profile.
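The scale of that disconnect is easiest to see by inverting each multiple into the EBIT yield it implies on enterprise value. The multiples below are the ones cited above; the helper function itself is just an illustrative sketch, not part of our valuation process.

```python
def ebit_yield(ev_to_ebit: float) -> float:
    """Invert an EV/EBIT multiple into the implied EBIT yield on enterprise value."""
    return 1.0 / ev_to_ebit

# Approximate multiples at the time of purchase, as cited in the text.
sk_hynix_multiple = 5.0
nvidia_multiple = 30.0

print(f"SK Hynix implied EBIT yield: {ebit_yield(sk_hynix_multiple):.1%}")
print(f"NVIDIA implied EBIT yield:   {ebit_yield(nvidia_multiple):.1%}")
```

On those figures, a buyer of SK Hynix was paying for roughly a 20% EBIT yield against roughly 3.3% for NVIDIA, which is the gap the market must close through either NVIDIA's continued hypergrowth or a re-rating of the memory leader.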
We are not suggesting that SK Hynix will match NVIDIA’s economics. NVIDIA has best in class margins, a software moat and a dominant ecosystem. But from a portfolio perspective, owning SK Hynix alongside or instead of NVIDIA reduces concentration risk while still capturing a key structural tailwind. It also gives us a buffer if hyperscalers accelerate their internal silicon efforts. If Google shifts more workloads to TPUs, if Meta deploys its custom inference chips at scale, or if Amazon leans harder into Trainium and Inferentia, SK Hynix continues to benefit because all of those chips rely on HBM.
This is the critical distinction. NVIDIA’s long term risk is architectural substitution. SK Hynix’s long term driver is memory intensity. These are not the same thing.
Positioning for the long term
As AI compute evolves, we expect more heterogeneity in the accelerator landscape. The hyperscalers care about cost performance, power efficiency and workload specificity. Custom silicon addresses these needs directly. This trend does not unwind overnight, but it is real and increasingly visible. Our concern with NVIDIA has always been that if one company captures too much economic value in a rapidly scaling market, customers eventually push back. The hyperscalers have both the capability and the motivation to do so.
By contrast, SK Hynix benefits from the fact that memory scaling is unavoidable. The company trades at a fraction of NVIDIA’s multiple, has leadership in a product with genuine scarcity value and sits in a part of the stack that gains regardless of which brand of accelerator ultimately wins.
This positioning helps us participate in the AI thematic while protecting our portfolio from single vendor risk. It aligns with our broader philosophy of owning high quality, strategically important businesses at valuations that allow for both upside and resilience.