AI: compressing decades of diffusion into single-digit years
The narrative of technological progress is accelerating. We've established that the PC era was a 20-year cycle dictated by the cost and friction of physical hardware. The cloud era then solved the capex problem and built the instant distribution rails.
Now, we look at artificial intelligence, which is poised to leverage these two precursors to compress the technology adoption curve from decades into single-digit years, a timeframe that demands attention from investors.
This compression is why the current, seemingly absurd, levels of capital expenditure in AI infrastructure may be completely rational.
The unprecedented adoption rate
Early data is already confirming this rapid-fire adoption. Studies comparing the pace of generative AI adoption to past technologies reveal a stark difference:
- PC era: it took roughly three years after the launch of the IBM PC (1981) to reach a work adoption rate of ~25%.
- Internet era: three years after the internet was opened to commercial traffic, adoption stood at ~30%.
- Generative AI: within just two years of its mass-market launch (late 2022), generative AI adoption reached ~40% among US adults and is outpacing both the PC and the internet on a comparative timeline.
The initial hurdle of physical capital is gone, allowing AI to focus immediately on driving software-led productivity gains.
Why AI can outrun its predecessors
AI is a productivity wave with the scale of the PC era but the deployment model of the cloud. This combination produces an adoption curve that climbs the S-curve rapidly, avoiding the long, flat tail seen with PCs in the 1980s.
Four structural advantages allow AI to diffuse faster than any previous general-purpose technology:
1. Zero physical friction
The biggest bottleneck of the PC era was the installation of hardware and software. AI is deployed through APIs such as Azure OpenAI and Amazon Bedrock: a few lines of code that integrate advanced intelligence into existing enterprise workflows and applications, with no physical device to install anywhere in the world.
2. The cloud distribution rails are already in place
The massive network of data centres, connectivity and standardised software models built during the cloud era provides the instant arteries for AI. Every knowledge worker already has a device (PC/mobile) and an internet connection; the US$15,000 computer cost barrier of 1985 has been solved. An AI agent simply needs access to the existing cloud platform to start performing tasks.
3. Agentic automation (work on behalf of the user)
Unlike the PC, which simply provided a tool for a worker to use, AI agents can perform tasks on behalf of the worker without the need for complex, hands-on training. Whether it's drafting code, summarising documents or managing a sales funnel, the AI is layered directly into the workflow, providing immediate and measurable productivity uplift.
4. Centralised capital investment
In 2025 alone, Microsoft, Amazon, Google, and Meta will collectively spend more than $250 billion on data centres, chips, and power systems, with the majority now explicitly tied to AI workloads. This is centralised, patient capital building what will effectively become the global AI utility grid, while the software and agents that run on it are adopted in a completely decentralised way by enterprises and developers worldwide.
Even larger ambitions are emerging beyond the hyperscalers. OpenAI, working with governments, sovereign funds, and industrial partners, has outlined multi-year investment plans that approach $1.4 trillion in cumulative capital expenditure over the next five to ten years. These headline figures routinely trigger accusations of a bubble, yet the confusion almost always comes from mixing two fundamentally different concepts.
The $600–700 billion number cited in many forecasts is projected steady-state annual revenue at the application and agent layer by the early 2030s: the SaaS subscriptions, API calls, reasoning services, and vertical workflows that enterprises will actually pay for. This is high-margin, recurring cash flow.
The $1.4 trillion number is the cumulative asset base required in the infrastructure layer: the chips, data centres, power plants, and grid upgrades needed to supply the compute. This is long-lived capital stock, depreciated over five to thirty years depending on the asset class.
This distinction changes the investment equation. From an industrial standpoint, investing $1.4 trillion in cumulative assets to unlock a potential $630 billion annual revenue stream implies an asset turnover profile that is economically viable, provided the demand materialises. It moves the numbers from the realm of "impossible" to "plausible industrial logic."
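The "plausible industrial logic" above can be checked with back-of-envelope arithmetic. A minimal sketch using the article's own round numbers ($1.4 trillion cumulative capex, $630 billion steady-state revenue); the utility comparison in the comments is an illustrative assumption, not a claim from the source:

```python
# Back-of-envelope check of the infrastructure-vs-revenue arithmetic.
# Figures are the article's round numbers; nothing here is a forecast.

cumulative_capex = 1.4e12        # cumulative AI infrastructure asset base (US$)
steady_state_revenue = 630e9     # projected annual application/agent revenue (US$)

# Asset turnover: annual revenue generated per dollar of asset base.
asset_turnover = steady_state_revenue / cumulative_capex
print(f"Asset turnover: {asset_turnover:.2f}x")

# For context (illustrative assumption): capital-intensive utilities often
# run roughly 0.2-0.5x asset turnover, so ~0.45x sits in a familiar range.

# Naive payback: years of gross revenue to recoup the asset base,
# ignoring margins, depreciation schedules and the demand ramp.
payback_years = cumulative_capex / steady_state_revenue
print(f"Naive payback: {payback_years:.1f} years of gross revenue")
```

On these inputs the turnover is roughly 0.45x and the naive payback a little over two years of gross revenue, which is what moves the numbers from "impossible" to "plausible", conditional on the demand actually materialising.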
One final nuance separates the two spending patterns. The hyperscalers’ $250 billion-plus annual run rate remains partly diversified across their broader cloud portfolios: general compute, storage, and enterprise SaaS. OpenAI-style roadmaps, by contrast, are almost pure-play AI infrastructure whose payback depends nearly entirely on future model usage and agentic workloads.
In short, today’s large capital commitments are the price of compressing two decades of economic value creation into less than one. Once the infrastructure and application layers are properly distinguished, the expenditures shift from looking like reckless spending to a calculated industrial wager. It remains a risk, but one grounded in asset mechanics rather than pure speculation.
Sizing the opportunity: a plausible trajectory
To forecast the economic impact of AI, we often look to the PC era as a baseline, though this comparison requires significant nuance. It is not strictly "apples-to-apples". In 1980, the software industry was nascent, meaning "IT investment" was almost exclusively hardware assets. Today, software is a dominant economic force, and the line between general "Cloud" spending and specific "AI" spending is increasingly fungible.
However, even with these distinctions, the historical pattern offers a framework for the scale of the ramp.
- The broad market (output): Total worldwide AI-related market revenue is already significant, estimated at roughly 0.25-0.30% of global GDP in 2025 (~US$300-350 billion, of which more than $250 billion is explicitly tied to AI workloads).
- The core investment (input): Crucially, the pure infrastructure capex driving this growth, namely the hyperscaler spending that fuels exponential model advances, is still starting from a very low base of 0.25% of global GDP in 2025 ($250 billion).
This disparity is the signal. We have a highly concentrated, comparatively small investment engine (0.25%) that is already powering a large, established economic layer. Given the unprecedented speed of adoption, we can project a compression of the economic ramp. We are not just repeating the 20-year PC cycle; we are likely to see that growth curve compressed into a fraction of the time, fuelled by an investment base that has significant room to run before it even matches the starting intensity of the PC era.
| Year | AI spend (% global GDP) | Commentary |
| --- | --- | --- |
| 2025 | 0.25-0.30% | Current total AI-related market (software, services, hardware) |
| 2030 | 2.0-2.5% | Rapid market expansion driven by enterprise adoption and model integration |
| 2040 | 4.0-5.0% | AI can exceed the mature contribution of the PC era, due to faster diffusion and labour replacement |
The rationale for capex
Seen through the lens of a compressed technology adoption cycle, today’s hyperscaler capex boom looks far more rational than it might at first glance.
If AI ultimately unlocks a 15 percent productivity uplift across the US$35 trillion global knowledge worker base, the resulting economic value approaches US$5.25 trillion annually.
Assuming vendors capture around 12 percent of that value through software, APIs and cloud consumption, this implies roughly US$630 billion in long run annual revenue. Against that backdrop, a US$250 billion annual investment in AI infrastructure becomes proportionate, not excessive. Today the application/agent layer generates only ~$80–120 billion annually, less than 20% of that future steady-state run-rate.
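The chain of estimates above reduces to simple arithmetic. A minimal sketch using the article's own inputs (the 15% uplift and 12% capture rate are the article's stated assumptions):

```python
# Reproduce the value-capture arithmetic from the text.

knowledge_worker_base = 35e12   # global knowledge worker economic base (US$/yr)
productivity_uplift = 0.15      # assumed uplift from AI (article's assumption)
vendor_capture_rate = 0.12      # share captured via software/APIs/cloud (article's assumption)

economic_value = knowledge_worker_base * productivity_uplift
vendor_revenue = economic_value * vendor_capture_rate

print(f"Annual economic value: ${economic_value / 1e12:.2f} trillion")
print(f"Long-run vendor revenue: ${vendor_revenue / 1e9:.0f} billion")

# Context from the text: ~$250B annual infrastructure capex versus the
# projected revenue stream, and today's ~$80-120B application layer
# against the future steady state.
annual_capex = 250e9
current_app_layer_high = 120e9
print(f"Capex / projected revenue: {annual_capex / vendor_revenue:.0%}")
print(f"Current app layer / steady state: "
      f"{current_app_layer_high / vendor_revenue:.0%} at most")
```

The sketch confirms the internal consistency of the figures: $5.25 trillion of annual value, $630 billion of vendor revenue, and a current application layer below 20% of that future run-rate.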
Historical precedent supports this pattern. Every major productivity revolution, including railways, electrification and cloud computing, required large, front loaded capital investment long before the full economic value was realised. AI follows the same logic, but with an important difference: it is being deployed on top of existing cloud rails and endpoint devices. This dramatically reduces physical friction and accelerates the speed at which productivity gains can be captured.
This is why the spending appears large. The opportunity is large. If AI truly combines the scale of the PC era with the deployment speed of the cloud era, then a meaningful portion of the value creation that historically took decades could be pulled forward into a single business cycle.
Generative AI is already diffusing faster than either the PC or the internet, largely because the hyperscalers’ centralised capex is absorbing the cost and complexity that previously limited global technology rollout. That investment engine is what enables a plausible trajectory in which AI reaches the PC era’s contribution to global GDP, potentially in half the time.
Tomorrow, we shift from speed to scale, introducing a three step economic framework to more precisely quantify the trillions in global knowledge worker productivity that AI is positioned to unlock.