The Economics of Data Centres

The AI boom is fueling a historic data centre surge. Discover the key economics, costs, and investment risks shaping the global economy.
Damien Klassen

Nucleus Wealth

The explosion of artificial intelligence has triggered one of the largest capital expenditure booms in history. Trillions of dollars are being poured into building and upgrading data centres, which now sit at the heart of the global economy. Understanding their underlying economics—cost structures, pressure points, and investment implications—is becoming essential for investors and policymakers alike.

From Web Servers to AI Factories

Data centres today look nothing like those of a decade ago. Older facilities were essentially warehouses of independent computers linked by Ethernet, designed for tasks like website hosting or database access.

Modern AI workloads are far more demanding. Training models requires massive datasets to be loaded across thousands of interconnected machines with ultra-high bandwidth. Graphics Processing Units (GPUs) now dominate, often tightly networked with high-speed memory and positioned in extreme proximity to minimise latency.

Nvidia has emerged as a pivotal player in this shift, providing not just chips but the architecture that effectively functions as the “operating system” of AI data centres. The result is facilities that are denser, more power-hungry, and vastly more expensive to build.

Anatomy of Costs

[Chart: AI Cloud Capital Cost of Ownership]

The cost of equipping a modern AI data centre is staggering. Older Nvidia H100 servers ran about $200,000 each. The new GB200 servers are closer to $3 million apiece—before adding another $700,000 in networking, storage, software, and installation.

Despite these costs, the economics can be highly profitable. Running the latest GPUs costs around $2–3 per hour, but demand is so high that capacity can often be sold for $6–15 per hour. Scarcity of servers has turned compute into one of the most lucrative services in technology.
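
To make those numbers concrete, here is a rough back-of-the-envelope sketch in Python. The capital cost, running cost and rental rate come from the figures above; the 72-GPU rack configuration, the 80% utilisation and the assumption that the $2–3 per hour excludes capital recovery are my own, so treat the payback figure as illustrative only.

```python
# Back-of-the-envelope GPU rental economics (illustrative only).
# From the article: ~$3.0m GB200-class server + ~$0.7m networking/storage/software/install,
# running cost ~$2-3 per GPU-hour, rental rate ~$6-15 per GPU-hour.
# My assumptions: 72 GPUs per rack, 80% utilisation, running cost excludes capital recovery.

capex = 3_700_000            # all-in cost per rack, USD
gpus_per_rack = 72           # assumed GB200 NVL72-style configuration
opex_per_gpu_hour = 2.50     # midpoint of the $2-3 range, USD
rent_per_gpu_hour = 8.00     # within the $6-15 range, USD
utilisation = 0.80           # assumed

hours_per_year = 365 * 24
sold_gpu_hours = gpus_per_rack * hours_per_year * utilisation
margin_per_gpu_hour = rent_per_gpu_hour - opex_per_gpu_hour
annual_gross_margin = sold_gpu_hours * margin_per_gpu_hour

print(f"Gross margin per GPU-hour:    ${margin_per_gpu_hour:.2f}")
print(f"Annual gross margin per rack: ${annual_gross_margin:,.0f}")
print(f"Simple payback period:        {capex / annual_gross_margin:.1f} years")
```

On those assumptions, a rack earns back its capital in well under two years, and the payback shrinks further towards the top of the rental range. That is what server scarcity looks like in the numbers.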

Importantly, costs evolve quickly. The price of training an AI model like DeepSeek fell from $4.5 million in July to a projected $2.5 million by December of the same year. Leaders who fail to keep up with falling costs risk losing their advantage rapidly.
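
To put a number on how fast that is, here is the implied rate of decline; the five-month gap and the compounding assumption are mine, not from the source.

```python
# Implied rate of decline in training cost, using the DeepSeek figures above:
# ~$4.5m in July falling to a projected ~$2.5m by December (about five months later).
start_cost, end_cost, months = 4.5e6, 2.5e6, 5

monthly_factor = (end_cost / start_cost) ** (1 / months)
annual_decline = 1 - monthly_factor ** 12

print(f"Implied monthly cost decline: {1 - monthly_factor:.0%}")  # ~11% per month
print(f"Implied annualised decline:   {annual_decline:.0%}")      # ~76% per year
```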

Who Operates These Facilities?

The ecosystem of operators falls into three broad tiers:

  • Hyperscalers (Google, Microsoft, Amazon): These giants own entire data centres, benefit from scale, and often negotiate better deals with suppliers.

  • Neocloud firms (e.g., CoreWeave): They rent bare-metal racks from others, incurring costs roughly 10% higher than hyperscalers.

  • Smaller resellers: They rent capacity and resell at a premium, benefiting from current scarcity but facing an uncertain future when supply catches up.

The market is somewhat circular—hyperscalers themselves occasionally rent from smaller providers to meet spikes in demand.

The Power Problem

[Chart: Electricity Demand for Data Centres]

While electricity costs make up a small share of running expenses, power availability is the single biggest constraint on expansion. AI data centres consume several times more electricity than older facilities, and global demand could double or quadruple by 2030.

[Chart: Demand for Data Centres by Country]

Bottlenecks are emerging across the supply chain. Lead times for critical components like transformers and backup generators have stretched from months to years, creating scarcity pricing even for relatively low-tech equipment. In places like Japan, new data centres are projected to account for half of all new electricity demand over the next five years.

I doubt data centres will go completely "off-grid": the redundancy costs are too high, and staying connected lets data centres help with load balancing on the network. However, hybrid power strategies, where grid connections are supplemented by on-site batteries, solar, or gas plants, are likely to become standard.

Finally, chips will become much more power efficient. However, efficiency gains are unlikely to reduce overall demand; instead, we expect improvements simply to enable even more computing.

Following the Money

[Chart: Growing Capex Demands of Data Centres]

Estimates for global data centre spending range between $3.7 trillion and $8 trillion. In the high-end scenario:

  • ~60% goes to IT equipment, mostly (Nvidia) GPUs.

  • ~7% to power.

  • ~33% to the data centre infrastructure.

This distribution highlights the dominance of semiconductors.

Power is critical to secure, but it is a relatively minor cost. When power amounts to only a few cents per GPU-hour, its price can double and the end user will barely notice the change in overall costs. Data centres are not price-sensitive to power.
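
A quick illustration of that insensitivity, using assumed figures that are not from the article (roughly 1 kW per GPU including cooling and overhead, electricity at around US$0.08 per kWh, and an all-in cost of $2.50 per GPU-hour):

```python
# How sensitive is the cost per GPU-hour to the electricity price?
# Illustrative assumptions (not from the article):
#   - ~1 kW per GPU including cooling and facility overhead
#   - electricity at ~US$0.08 per kWh
#   - all-in compute cost ~US$2.50 per GPU-hour

kw_per_gpu = 1.0
power_price = 0.08           # USD per kWh
all_in_cost = 2.50           # USD per GPU-hour

power_cost = kw_per_gpu * power_price              # ~$0.08 per GPU-hour
cost_if_power_doubles = all_in_cost + power_cost   # doubling adds another ~$0.08

print(f"Power cost per GPU-hour:      ${power_cost:.2f}")
print(f"All-in cost per GPU-hour:     ${all_in_cost:.2f}")
print(f"All-in cost if power doubles: ${cost_if_power_doubles:.2f} "
      f"(+{power_cost / all_in_cost:.0%})")
```

Even a doubling of the electricity price moves the all-in cost by only a few percent. Availability, not price, is the binding constraint.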

A Circular Reference?

In the 2000 tech boom, we saw loss-making companies raising money from the stock market to pass on to other loss-making internet companies. When the ability to raise capital went away, industry revenues suddenly plummeted. A Ponzi-like structure.

We need to be wary of that here. The latest Oracle news is clearly in that category:

  • Oracle announces a $300b order from OpenAI over the next five years.
  • OpenAI is expected to clock a little over $10b in sales in 2025. Clearly it needs to raise a lot of capital to fund that order.
  • OpenAI is a loss-making, currently not-for-profit entity trying to change its status in order to raise that capital.
  • Oracle has placed large orders with Nvidia in order to be able to fulfil the contract.
  • Nvidia is looking at funding OpenAI.

Now, not all orders are like this. But we need to watch the market to work out how much of the total is. The data is messy and moving fast. My estimate of compute spend by buyer:

  • Hyperscalers’ own workloads: 35–45%. Microsoft, Google, Amazon, and others are building and running their own models.
  • Startups and GPU clouds: 20–30%. Think Anthropic, OpenAI, xAI, Cohere, Mistral. Plus GPU cloud providers like CoreWeave and Lambda that resell capacity.

The rest is “real economy” demand: roughly 25–35%. This is where productivity gains and P&L impact get tangible.

Breakdown of that “real economy” slice:

  • Financial services is the largest, probably more than 10%. Banks and insurers are spending on trading, risk, underwriting, and claims. They don’t want to be left behind. Many expect models to outperform expensive human processes.
  • Healthcare, biotech, and pharma are probably ~10%: drug discovery, imaging, and bio foundation models.
  • Manufacturing, robotics, and vision (including autos and autonomy): ~5–10%. Clear productivity gains drive spend.
  • Media, gaming, and consumer internet: similar range, ~5–10%. Content and tooling adoption is rising.
  • Government, defense, and public sector: ~5% or less today, but with upside. Military investment is active. The Ukraine–Russia war is a testbed. Breakthroughs could catalyse a multi-country spend ramp.

Net effect: roughly two-thirds of demand today comes from hyperscalers’ internal use and AI startups, while only one-third comes from “real economy” applications like finance, healthcare, or manufacturing.
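
That two-thirds versus one-third split is simply the midpoints of the ranges above added together; a quick tally (my arithmetic, nothing new):

```python
# Midpoints of the estimated compute-spend shares listed above.
buyers = {
    "Hyperscalers' own workloads": (0.35, 0.45),
    "Startups and GPU clouds":     (0.20, 0.30),
    "'Real economy' demand":       (0.25, 0.35),
}

midpoints = {name: sum(rng) / 2 for name, rng in buyers.items()}
hyperscalers_plus_startups = (midpoints["Hyperscalers' own workloads"]
                              + midpoints["Startups and GPU clouds"])

for name, share in midpoints.items():
    print(f"{name}: ~{share:.0%}")
print(f"Hyperscalers + AI startups: ~{hyperscalers_plus_startups:.0%} (roughly two-thirds)")
```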

Investment Opportunities

The investment universe around data centres is broad, spanning:

  • Builders (construction and real estate firms) – a leading indicator of boom conditions. These are classic boom stocks: if you are playing in this space, be very wary of any signs that the bust has begun.

  • Energisers (power and cooling suppliers like Schneider and ABB). Similar characteristics to the builders, but because these are mostly service companies, the boom/bust won't be as extreme.

  • Technology developers (Nvidia, TSMC, Broadcom), the “picks and shovels” players. These are very profitable. The risk is not survival. It is the growth rate normalising when the boom cools.

  • Operators (hyperscalers and property trusts like Equinix). Scale is a real advantage here. Scarcity boosts the economics today; when the cycle turns, the large companies will be a little less profitable and the small ones will go broke.

  • AI architects (OpenAI, Anthropic), currently the least profitable despite being at the centre of the hype.

Two big questions drive valuation. First, what is a reasonable normalised earnings base when the boom stabilises? Second, could demand still double before that happens? Both are plausible scenarios. Keep a close eye on capital availability and cost.

One complication: much of the funding is now outside listed markets. During the dot‑com era, most activity showed up in public disclosures. Today, large private rounds and off‑market builds blur the picture. Visibility is lower. Customer pipelines and deployment metrics are harder to triangulate, forcing more estimation than we would like.

Where We Stand

Investment stance is a trade‑off. You can try to sidestep the cycle peak and re‑enter later. Or you can stay selectively invested and manage risk as it evolves. We are opting for the latter.

The peak is unknowable in advance. Numbers look stretched in places, but not at classic bubble extremes. Policy momentum points to lower rates. Capex is soft in other sectors. That macroeconomic backdrop can extend the cycle, similar to prior tech buildouts.

Where is most of the value and IP today? Primarily in the hardware and component layers, and in critical energy and cooling. Operators also earn well, with both scarcity and scale benefits. AI application firms remain capital-intensive and loss‑making. If we see significant capital recycling from AI architects back into upstream suppliers, our caution will rise. Until then, the runway can extend.

The topic in this article was originally featured in the weekly podcast "Nucleus Investment Insights", which is available for streaming in podcast format or as a YouTube video.

........
The information on this blog contains general information and does not take into account your personal objectives, financial situation or needs. Past performance is not an indication of future performance. Damien Klassen is an Authorised Representative of Nucleus Advice Pty Limited, Australian Financial Services Licensee 515796, and Nucleus Wealth is a Corporate Authorised Representative of Nucleus Advice Pty Ltd.

Damien Klassen
Head of Investment
Nucleus Wealth

Damien runs asset allocation and global stock portfolios for Nucleus Super, Nucleus Ethical and Nucleus Wealth. His 25+ year career includes Global Quant at Schroders, Strategy at Wilson HTM, and co-founding Aegis.
