Big Tech’s $90B+ CapEx Surge: A Deep Bet on AI Infrastructure

Alphabet’s massive $90B+ capex is the clearest sign yet that the AI race has moved from model demos to hard infrastructure: chips, data centres and global compute supply. The stakes are now physical. Whoever owns the pipes and the power owns the future of AI.


In the most recent quarter, tech giants such as Alphabet Inc., Meta Platforms, Microsoft Corporation and Amazon.com, Inc. collectively deployed more than US $90 billion in capital expenditures, a clear signal that their focus has shifted from software features to physical infrastructure built for artificial intelligence.

Why the investment is so large

Alphabet’s leadership reaffirmed that its 2025 capital-spending plan will reach approximately US $75 billion, much of it dedicated to expanding data centres, building next-generation AI chips and scaling services such as Google’s Gemini model.

Meanwhile, analysts at Citigroup Inc. now forecast that total AI-related infrastructure investment by the tech giants will exceed US $2.8 trillion through 2029.

What it means for Big Tech

  1. AI is no longer just software — the frontier is now in hardware, networked data centres, and massive scale-out.
  2. Compute bottlenecks matter — as model sizes and inference demands grow, firms are locking in data-centre capacity, custom chips and global deployment nodes before competitors can catch up.
  3. Economics are shifting — free cash flow remains strong at Alphabet despite enormous capex, suggesting confidence in long-term returns even if near-term margins compress.

The strategic implications

For market watchers, the capex number is a double-edged sword. On one hand, such large scale-out places these firms several steps ahead in the AI arms race: whoever owns the infrastructure may define the pace at which models get trained, deployed and monetised.

On the other hand, investors are scrutinising the timing, returns and sustainability of that bet. Overcapacity, regulatory setbacks or disappointing model performance could erode those advantages.

Looking ahead

As more of the AI value chain becomes physical (chips → servers → data centres → global nodes), the winners will likely be those who:

  • Control cost per inference and training run
  • Deploy globally to meet latency/regulation needs
  • Monetize AI “as a service” at scale
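The first of these levers can be made concrete with a back-of-envelope sketch. Every figure below (amortized annual cost, throughput, utilization rate) is a hypothetical assumption chosen for illustration, not a reported number from any of the companies above:

```python
# Hypothetical back-of-envelope: amortized infrastructure cost per inference.
# All inputs are illustrative assumptions, not reported figures.

def cost_per_inference(annual_infra_cost_usd: float,
                       inferences_per_second: float,
                       utilization: float = 0.6) -> float:
    """Amortized cost in USD of serving a single inference."""
    seconds_per_year = 365 * 24 * 3600
    annual_inferences = inferences_per_second * utilization * seconds_per_year
    return annual_infra_cost_usd / annual_inferences

# Assumed scenario: $1B/year amortized cost, 1M inferences/sec, 60% utilization
cost = cost_per_inference(1e9, 1e6)
print(f"${cost:.6f} per inference")  # → $0.000053 per inference
```

Even under these rough assumptions, the per-inference cost lands at fractions of a cent, and it falls further as utilization rises, which is one reason locking in cheap, fully utilized capacity early is a durable advantage.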

With the threshold for participation rising, only a few players appear prepared to make the leap.