The trillion-dollar AI race has a built-in contradiction: progress is real, overspending might be too

If you listen to the story Silicon Valley tells about AI, it sounds like a clean engineering arc: bigger models, better answers, more productivity. If you listen to what markets and governments are quietly worrying about, it sounds like something else entirely: concentration risk, energy constraints, and a capital cycle so large it could distort the economy.

That tension — between AI as a general-purpose breakthrough and AI as an investment boom that could overshoot — is the contradiction at the heart of the trillion-dollar race.

The BBC’s reporting, based on interviews in and around Google, puts real numbers and physical detail behind the hype: noisy chip labs, bespoke silicon, and annual investment figures that used to sound impossible.

The bet: AI is an “inflection point” worth overspending on

Google’s CEO Sundar Pichai frames AI as the next once-a-decade platform shift, like:

  • the personal computer
  • the internet
  • mobile
  • cloud

That framing matters because it gives executives permission to do something that looks irrational in a normal year: spend enormous sums ahead of proven returns.

The BBC reports that Google is investing more than $90bn a year in its AI build-out, a figure that has roughly tripled in four years. That’s not “R&D” in the traditional sense. That’s infrastructure and supply-chain strategy.
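The arithmetic behind “roughly tripling in four years” is worth making explicit, because it shows how steep the trajectory is. A minimal sketch (the $90bn figure is from the BBC’s reporting; the baseline of 90/3 is simply implied by “tripled”, since no exact starting figure is given):

```python
# Illustrative arithmetic only: if annual capex roughly tripled over
# four years, the implied compound annual growth rate is 3^(1/4) - 1.
growth = 3 ** (1 / 4) - 1
print(f"Implied annual growth: {growth:.1%}")  # ~31.6% per year

# Working backwards from the reported ~$90bn/year figure
# (assumption: the baseline is simply 90/3, i.e. ~$30bn).
start = 90 / 3
print(f"Implied spend four years ago: ~${start:.0f}bn")
```

A ~32% compound growth rate in capital spending, sustained for four years, is the kind of curve normally associated with wartime mobilisation, not corporate budgeting.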

Pichai’s unusually candid line is that the moment is both rational and irrational — exciting progress, but also a cycle where industry can overshoot.

If you want to understand why firms keep spending even while people talk about a bubble, that’s the reason: they believe the cost of being late is existential.

The concentration risk: AI’s boom is propping up the whole market

One of the least-discussed AI risks isn’t technical. It’s financial.

The BBC notes:

  • massive market value concentrated in a handful of firms
  • the “Magnificent 7” making up roughly a third of the S&P 500’s total valuation
  • concentration higher than during the dotcom era, per IMF comparisons

That means the AI race is not only a tech story. It’s a macro story.
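The mechanics of that concentration are simple to demonstrate. A sketch with entirely hypothetical market caps (the “third of the index” proportion mirrors the BBC’s figure; none of the individual numbers below are real):

```python
# Hypothetical numbers only, to show what index concentration means
# mechanically: if seven firms hold a third of an index's value,
# a shock to those seven moves the whole index.
mega_caps = [3.5, 3.0, 2.5, 2.0, 1.8, 1.5, 1.2]  # assumed market caps, $tn
index_total = 46.5                                # assumed index total, $tn

share = sum(mega_caps) / index_total
print(f"Top-7 share of index: {share:.0%}")  # 33%

# A 20% drop in just those seven stocks, with everything else flat:
index_after = index_total - sum(mega_caps) * 0.20
print(f"Index-level drawdown: {1 - index_after / index_total:.1%}")  # 6.7%
```

In other words, a sector-specific repricing automatically becomes an economy-wide event for anyone holding the index.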

If the AI narrative breaks (or even pauses), it doesn’t just hurt a few startups. It hits:

  • retirement portfolios
  • index funds
  • consumer confidence
  • credit availability

When people ask “is AI a bubble?”, what they often mean is: “Is the market too dependent on this one storyline?”

The real “AI factory”: chips, cooling, and bespoke silicon

It’s easy to treat AI as software. But the competitive advantage increasingly looks like supply chain control.

The BBC takes us inside Google’s work on TPUs (Tensor Processing Units) — Google-designed chips meant to power AI workloads.

This matters because the chip landscape is stratifying:

  • CPUs handle general computing
  • GPUs handle parallel processing (often used for AI)
  • ASICs are purpose-built for specific workloads

TPUs sit in the ASIC category: custom silicon tuned for Google’s needs.
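What makes AI workloads such a good fit for custom silicon is that they are dominated by one operation: matrix multiplication, i.e. enormous numbers of independent multiply-accumulates. A toy sketch in pure Python (illustrative only; this is not how any chip actually executes the work):

```python
# Neural-network inference is mostly matrix multiplication. Every output
# cell below is an independent dot product -- exactly the kind of work
# that CPUs grind through largely sequentially, while GPUs and ASICs
# such as TPUs run thousands of these multiply-accumulates in parallel.

def matmul(a, b):
    """Naive matrix multiply over lists of lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```

The design choice behind an ASIC like a TPU is simply to strip out everything a general-purpose chip carries and spend the silicon on doing this one pattern of work as densely and cheaply as possible.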

The strategic logic is clear: if compute is scarce and expensive, and if AI demand keeps rising, companies that control their own silicon and deployment pipeline are less exposed to external constraints.

In plain English: if you can’t buy enough GPUs, you try to own the whole stack.

The “begging for GPUs” era is a signal, not a joke

The BBC includes a telling anecdote about tech leaders effectively begging Nvidia for more GPUs.

It’s funny, but it’s also a market signal:

  • demand for compute is outstripping supply
  • the “winning” strategy looks like amassing chips and building data centres

This creates a psychological trap:

If everyone believes the only way to win is to keep spending, spending becomes the strategy — even when returns are uncertain.

That’s how investment booms become self-reinforcing.

The split that matters: incumbents vs the “borrowed compute” economy

A crucial distinction in the BBC report is between:

  • the biggest tech companies that can fund chips and data centres from cashflow
  • businesses that rely on borrowed money and complex deals to access compute

This is the hidden class system of AI.

If AI becomes an infrastructure arms race, the companies with strong balance sheets can keep building through downturns. The companies dependent on credit can’t.

That’s why “bubble risk” is asymmetric:

  • the giants might survive a correction
  • the leveraged infrastructure layer may not

The BBC mentions share price drops in AI infrastructure companies and turbulence around firms tied to compute provisioning.

OpenAI’s spending storm and the politics of AI infrastructure

The BBC describes controversy around the scale of OpenAI’s commitments and the pushback when investors questioned the mismatch between spending and revenue.

This is a familiar pattern in platform shifts:

  • early adoption is enormous
  • monetisation lags
  • compute costs stay brutal

The politically interesting part is the suggestion that governments might build and own AI infrastructure.

That idea will appeal to policymakers for three reasons:

  1. sovereignty (not being dependent on a few US firms)
  2. national security (control over critical compute)
  3. industrial strategy (jobs, investment, resilience)

But it also raises hard questions:

  • do taxpayers subsidise private models?
  • who gets access?
  • who governs safety and accountability?

The energy constraint: AI doesn’t scale without electricity

The BBC points to a looming reality: data centres may consume electricity on the scale of major nations.
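A back-of-envelope calculation shows why that comparison is plausible. Every number below is an illustrative assumption, not a figure from the BBC’s reporting:

```python
# Back-of-envelope only; the 50 GW draw is an assumed figure for
# aggregate AI data-centre load, chosen purely for illustration.
gw_of_datacentres = 50        # assumed continuous draw, in gigawatts
hours_per_year = 24 * 365

twh_per_year = gw_of_datacentres * hours_per_year / 1000  # GWh -> TWh
print(f"~{twh_per_year:.0f} TWh/year")  # ~438 TWh/year

# For scale: a mid-sized industrialised country consumes a few hundred
# TWh/year, which is why "electricity on the scale of major nations"
# is a physical constraint, not a metaphor.
```

The key point is that the multiplication is unforgiving: continuous load times 8,760 hours a year produces national-scale consumption very quickly.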

This is the constraint that can turn AI hype into political conflict.

Because energy systems are already under pressure:

  • electrification of transport
  • heating decarbonisation
  • industrial transition

If AI growth competes with those goals, governments face trade-offs.

And unlike many tech constraints, energy constraints are physical:

  • grid build-outs take years
  • permitting is slow
  • local opposition is common

“Truth matters” and the trust problem

Pichai’s line “truth matters” is both reassuring and revealing.

The trust problem in AI is not only hallucinations. It’s the broader information ecosystem:

  • when AI summarises the web, what happens to sources?
  • when AI is confidently wrong, how do people correct it?
  • who is accountable for downstream harms?

The BBC notes the concern that if AI becomes the sole product, reliability suffers.

A healthier ecosystem likely requires:

  • transparent citations
  • multiple sources
  • robust evaluation
  • human oversight in high-stakes contexts

If AI is a platform shift, trust is its safety layer.

What to watch next

  1. Capex discipline: do the giants slow spending, or double down?
  2. Compute pricing: do costs fall enough to enable broad profits, or stay concentrated?
  3. Energy politics: grid constraints, permitting battles, water use, and local moratoria.
  4. Regulatory posture: do governments treat AI infrastructure like telecoms/energy — critical and regulated?
  5. Adoption vs monetisation: is productivity real at scale, or is usage mostly experimentation?

Bottom line

The AI race is simultaneously a technology revolution and a capital cycle.

The reason it feels contradictory is that both statements are true: AI progress is real, and the investment boom can still overshoot. The winners won’t be decided by hype alone — they’ll be decided by who can secure compute, power it sustainably, and translate usage into durable value before the financing mood turns.

