Quarterly Outlook
Jacob Falkencrone
Global Head of Investment Strategy
When most investors think of AI hardware, they still think "Nvidia or nothing." That is understandable. Nvidia's graphics processing units (GPUs) power a large share of today's AI training, and the company has become a symbol of the whole theme.
Yet in the background, Alphabet is rebuilding its own plumbing. It designs tensor processing units (TPUs) for AI workloads and Axion central processing units (CPUs) for general cloud computing. These chips run inside Google’s data centres and, crucially, are rented out through Google Cloud.
For long-term investors, including anyone who read our earlier piece on Warren Buffett’s Alphabet bet, the real question is simple. Are we looking at a challenger to Nvidia’s crown, or at a platform using chips to quietly mine value from the whole AI boom?
At heart, AI is a large pile of linear algebra. Models juggle matrices of numbers. To do that quickly, you need specialised hardware.
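That core operation, multiplying matrices of numbers, can be sketched in a few lines of plain Python. The function below is illustrative only: accelerators such as GPUs and TPUs exist precisely to run this kind of multiply-and-add loop across millions of values in parallel.

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows.

    This is the core linear-algebra operation that AI accelerators
    (GPUs, TPUs) perform at enormous scale and in parallel.
    """
    inner, cols = len(b), len(b[0])
    assert all(len(row) == inner for row in a), "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(len(a))]

# A tiny 2x2 example; a modern AI model performs billions of these
# multiply-adds for every query it answers.
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

A chip that can do these sums faster, or with less electricity, directly lowers the cost of every AI query served.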
GPUs were first built to draw computer graphics. Their strength is handling many small tasks in parallel, which turned out to be perfect for modern AI. Nvidia wrapped those chips in a software platform called CUDA and built a huge ecosystem around them. That combination made Nvidia the default choice for AI labs, cloud providers and start-ups.
TPUs are a different species. A tensor processing unit is a custom AI accelerator chip (an application-specific integrated circuit, or ASIC) that Google designed specifically for neural networks. Instead of doing a bit of everything, TPUs focus on the core matrix operations that dominate AI training and inference. That allows high throughput and energy efficiency for the right workloads.
There are two practical differences that matter for investors.
First, flexibility. Nvidia GPUs are general-purpose accelerators. You can buy them from many vendors, plug them into many systems and run many frameworks. TPUs are tightly integrated with Google’s own software tools such as TensorFlow and JAX, and they live almost entirely inside Google Cloud.
Second, control. Nvidia sells chips. Alphabet owns the entire stack: chips, data centres and cloud services. When a customer chooses TPUs over third-party GPUs inside Google Cloud, more of each AI dollar remains with Alphabet. That is less visible than a GPU headline, but it is very visible to margins.
Alphabet is now several generations into its TPU programme. TPU v5p underpins its AI “hypercomputer” offering for model training. Newer generations, including the Ironwood TPU aimed at inference, focus on serving huge volumes of AI queries quickly and cheaply for customers and for Google’s own products.
Alongside this, Google has built Axion, its first custom Arm-based data centre CPU. The company claims Axion can significantly outperform traditional x86 server chips on both performance and power efficiency. Together, Axion plus TPUs let Alphabet tune whole data centres around its own silicon rather than being fully dependent on Nvidia, Intel or AMD.
Economics tell the real story. Google Cloud reports that earlier TPU v5e instances deliver up to roughly four times better AI performance per dollar than comparable inference solutions. Partner case studies on the newer TPU v6e, including Introl's analysis, suggest that moving suitable workloads from Nvidia GPU-based setups to TPUs has cut inference costs by about 50% to 65% in some deployments.
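The scale of saving those case studies describe is easy to sanity-check with back-of-envelope arithmetic. The baseline figure below is a purely illustrative assumption, not an actual Google Cloud price; only the 50% to 65% reduction range comes from the case studies cited above.

```python
# Illustrative only: a hypothetical baseline cost per one million
# inference queries on a GPU-based setup, and the 50-65% cost
# reduction range that the TPU case studies suggest.
gpu_cost_per_million = 1000.0          # assumed baseline, USD
savings_range = (0.50, 0.65)           # reported reduction range

for saving in savings_range:
    tpu_cost = gpu_cost_per_million * (1 - saving)
    print(f"At a {saving:.0%} reduction: {tpu_cost:.0f} USD per million queries")
```

At cloud scale, where queries run into the billions, a halving of unit cost compounds directly into cloud margin, which is why these percentages matter more than any single benchmark.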
That does not mean Nvidia is finished. Its chips remain the most widely supported option across clouds and on-premise sites. For many enterprises, especially those that value flexibility or use software stacks not yet tuned for TPUs, Nvidia still looks like the safe default.
What is changing is bargaining power. When hyperscalers such as Alphabet, Amazon and Microsoft design their own accelerators, they no longer have to accept whatever pricing and supply Nvidia offers. They can shift workloads between in-house chips and third-party GPUs, play vendors against each other and pass some savings on to customers to win cloud market share.
Alphabet’s advantage is that its chips are already wrapped in products that hundreds of millions of people use daily, from Search and Maps to YouTube and Photos, and now in the Gemini 3 AI suite. That makes its silicon a tool to deepen an existing moat, not a new business it has to sell from scratch.
Berkshire Hathaway’s latest 13F filing showed something unusual. After months of net selling, it revealed a new position in Alphabet of nearly 18 million shares, worth around USD 4.3 billion, while trimming long-held Apple exposure. Alphabet now sits among Berkshire’s larger equity positions.
Buffett has often said that not buying Google years ago was a mistake. Today’s purchase looks like a partial repair job, but also a forward-looking one. Instead of picking a single chip design, Berkshire is backing a platform that:
earns from search and advertising,
rents cloud capacity to other AI players, and
increasingly runs all of that on its own chips.
In our earlier article on Buffett’s Alphabet bet, we argued that this is really an AI infrastructure decision. The GPU versus TPU story simply makes the implicit bet more visible. Alphabet’s custom silicon helps it keep more value from AI inside the house, which is the kind of quiet structural edge that tends to appeal to Buffett-style investors.
No neat story is risk-free, and this one has several moving parts.
On the technical side, TPUs must keep pace with fast-changing models. If the industry leans more heavily into tools and frameworks optimised for Nvidia’s ecosystem, TPUs could remain a powerful but niche option. Developer mindshare is hard to win and easy to lose.
On the demand side, AI spending is still cyclical. Both Nvidia and Alphabet are pouring billions into data centres. If enterprise AI projects slow, or regulators push back on some use cases, overcapacity could weigh on both chip utilisation and cloud margins.
On the regulatory side, Alphabet remains under pressure around competition and data use. Stronger antitrust rules or stricter privacy standards might limit how tightly it can bundle AI across its products, which in turn would affect how quickly it can monetise any TPU cost advantage.
For Nvidia, the long-term risk is not just rival chips, but key customers like Alphabet, Amazon, Microsoft and others steadily internalising more of the value chain.
For long-term investors trying to make sense of this, a few practical rules can help.
Think of Nvidia as the reference name for general AI accelerators and Alphabet as a broad AI platform with an internal chip edge.
Watch metrics such as Google Cloud revenue growth, capital spending and any disclosures on TPU and Axion adoption, not just headline Gemini announcements.
Treat bold claims on “4x cheaper” or “fastest ever” hardware with caution. Focus instead on real-world customer case studies and margin trends.
Use diversification so your portfolio is not hostage to one version of the AI future or a single chip winner.
The first wave of the AI story taught investors to watch Nvidia. Its GPUs supplied the raw power that turned machine learning from a lab curiosity into an everyday tool, and the stock price followed. The next wave asks a deeper question. Who controls the plumbing that turns that power into long-term profit?
Alphabet’s TPUs and Axion CPUs are one answer. They pull more of the AI stack inside Google’s walls, improve its cost base and give Google Cloud a sharper pricing tool. That is why markets rewarded Alphabet so strongly in recent days and why Berkshire’s new stake speaks louder than most AI speeches.
For long-term investors, the real edge may lie less in guessing the winning chip and more in backing the platforms that quietly decide how those chips get used.
This material is marketing content and should not be regarded as investment advice. Trading financial instruments carries risks and historic performance is not a guarantee of future results.
The instrument(s) referenced in this content may be issued by a partner, from whom Saxo receives promotional fees, payment or retrocessions. While Saxo may receive compensation from these partnerships, all content is created with the aim of providing clients with valuable information and options.