Charu Chanana
Chief Investment Strategist
Amazon’s Anthropic deal shows AI funding is increasingly about chips, cloud and power, not just software.
ASML and TSMC suggest the AI build-out is still very much alive, and scarce capacity still holds pricing power.
Nvidia still matters, but custom chips and long contracts are making the “picks and shovels” trade more specialised.
A chatbot writes the poem, answers the email, and gets all the applause. The bill, however, often lands somewhere else. That is what makes Amazon’s expanded Anthropic partnership so interesting. On 20 April 2026, Amazon announced it would invest up to USD 25 billion in Anthropic, while Anthropic committed to spend more than USD 100 billion over the next decade on Amazon Web Services technology. This is less a venture funding headline than a long-term capacity reservation dressed in startup clothing.
The most revealing part of the Amazon-Anthropic announcement is not the cheque. It is the plumbing. Anthropic said it would secure up to 5 gigawatts of current and future Trainium chip capacity, expand inference, meaning the running of models after they are trained, into Asia and Europe, and keep deepening its use of Amazon’s cloud stack. Amazon also said more than 100,000 customers already run Claude models on Amazon Web Services. That begins to sound less like backing a promising artificial intelligence lab and more like signing up a large industrial customer for the next decade.
That matters because AI is becoming capital-intensive in a very old-fashioned way. Software still matters, of course. But once demand scales, the winners are often the firms that own scarce inputs, not the ones with the flashiest demo. In AI, those scarce inputs are compute, advanced chips, networking, cooling, and the data-centre capacity needed to keep the whole system running. In other words, the chatbot may charm the user, but the rack gets paid.
The recent updates from ASML and TSMC make that harder to ignore. ASML lifted its 2026 revenue outlook as customers pressed ahead with expansion plans tied to artificial intelligence demand. A day later, TSMC raised its full-year growth outlook and pointed to capital spending landing at the high end of its range. That is not the tone of an industry stepping back for breath. It is the tone of builders asking for more concrete, more steel and a bigger site.
That is the more useful investor lens. Artificial intelligence is no longer only a software story. It is increasingly an industrial build-out, complete with bottlenecks, lead times and supply constraints. The visible part is still the chatbot, the assistant and the model demo. The less visible part is the factory, the foundry, the chip rack and the power bill. When demand runs ahead of supply, the companies that control key equipment and production capacity can end up with steadier economics than the businesses closer to the user.
Nvidia still sits at the centre of this world. Its graphics processing units, or GPUs, remain the benchmark for training advanced models. But the market is also shifting towards inference, which is the running of models after they have been trained. That part of the market places a bigger premium on speed, efficiency and cost. It opens the door to alternatives such as Google’s tensor processing units, or TPUs, Amazon’s Trainium chips and other custom designs built for narrower, more specific tasks. The artificial intelligence race is not moving away from infrastructure. It is moving deeper into it.
This is where the old “just buy the shovel sellers” line starts to get a bit lazy. The shovels are no longer generic. Broadcom has signed a deal through 2031 to develop Google’s custom artificial intelligence chips and separately agreed to provide Anthropic access to about 3.5 gigawatts of artificial intelligence computing capacity from 2027. Google is also exploring additional chip designs with Marvell, including a memory processing unit and a new tensor processing unit aimed at running models more efficiently.
That changes the investor map. The earlier version of the AI trade was simple: Nvidia sells powerful chips, everyone queues up, end of story. The newer version is more crowded, more specialised and more strategic. Some customers want to reduce dependence on outside suppliers. Some want custom chips that lower costs for specific tasks. Some want tighter bundles that combine chips, software and cloud infrastructure into one sticky package. The profit pool may still sit with the picks and shovels, but now the tools are bespoke, the contracts are longer and switching costs may matter as much as the silicon itself. Slightly less poetic, perhaps, but much more useful.
There are still a few obvious traps. First, capacity can stay scarce until it suddenly does not. If hyperscalers start trimming capital expenditure, or if enterprise demand proves slower than expected, today’s shortages can become tomorrow’s overbuild. Second, custom chips are not magically safer than standard ones. They still face delays, software headaches and design missteps. The fact that Google has been working to make TPUs easier to use with popular developer tools is a useful reminder that hardware alone is not enough. Third, the whole chain remains concentrated. If a small number of foundries, toolmakers and cloud platforms control the bottlenecks, any disruption can travel far and fast.
Separate the model winners from the infrastructure toll collectors. The overlap is real, but it is not always neat.
Watch hyperscaler earnings for comments on capacity, utilisation and inference, not just headline artificial intelligence revenue.
Treat custom chips as a sign of specialisation, not proof that Nvidia is finished.
Follow the full bundle: chips, cloud, software tools, and access to power and data-centre space.
The simplest way to read this moment is also the most useful. AI still looks like magic on the screen, but it increasingly behaves like heavy industry underneath. The Amazon-Anthropic deal suggests the real contest is not just who makes the smartest model. It is also who can secure the compute, book the capacity, and keep the infrastructure busy for years. That does not make the chatbot unimportant. It just means the economics may end up favouring the firms that own the rails behind it. In this phase of the AI race, the cleverest answer may still come from a model, but the fattest invoice may come from the machine room.
This material is marketing content and should not be regarded as investment advice. Trading financial instruments carries risks and historic performance is not a guarantee of future results.
The instrument(s) referenced in this content may be issued by a partner, from whom Saxo receives promotional fees, payment or retrocessions. While Saxo may receive compensation from these partnerships, all content is created with the aim of providing clients with valuable information and options.