Meta’s $135bn AI spending plan: what it’s really buying (and the bubble risk)

Summary: Meta says it could spend up to $135bn this year—nearly double last year’s AI-related spend—mostly on infrastructure that powers artificial intelligence. This is not just a “bigger budget” story. It’s a strategic land grab for compute, talent, and distribution at a moment when leaders across tech and finance are openly debating whether the AI boom is an economic bubble.

The key question isn’t whether AI will matter (it will). The question is whether Meta can translate giant capex into durable product advantage and profit—without repeating past cycles where enthusiasm outran returns.

Why this story is bigger than “capex goes up”

The easy version of this story is “Meta will spend more on AI.” The more important version is: Meta is trying to buy its way into a leadership position in the next interface layer—AI-powered recommendations, assistants, and agents—before the market structure settles.

That’s why it’s worth separating what’s confirmed (numbers, statements) from what’s implied (strategy and expected outcomes).

What Meta actually said (the concrete facts)

From the reporting:

  • Meta expects to spend up to $135bn (£97bn) this year, mostly on AI infrastructure.
  • That compares with roughly $72bn last year.
  • Over the last three years, Meta has spent about $140bn chasing the AI boom.
  • Zuckerberg said he expects 2026 to be the year AI “dramatically changes the way we work.”
  • Meta’s expenses have been rising faster than revenue (pressure on margins).
  • Zuckerberg hinted that AI will compress work that used to require big teams.
  • Meta has already laid off hundreds of workers (notably in Reality Labs).

Those points frame the story: Meta is doubling down on the belief that AI is shifting from a feature to an operating layer for both products and internal work.

Where the money actually goes (and why it’s so expensive)

When a company says “AI infrastructure,” it usually means a stack of things that are power-hungry and capital-intensive.

One simple way to think about it: Meta isn’t buying “AI.” It’s buying throughput—the ability to train bigger models faster, and run inference at scale for billions of daily interactions.

That requires:

1) Compute hardware

  • GPU/accelerator clusters to train and run models
  • high memory bandwidth, fast interconnects, storage

2) Data centres

  • physical buildings, racks, redundancy
  • power delivery (often long-term power contracts)
  • cooling systems (a major engineering constraint)

3) Networking

Training large models requires thousands of chips acting like one computer. That demands:

  • high-speed fabrics
  • low latency
  • careful topology and reliability

4) Tooling and model operations

  • data pipelines
  • safety/evaluation harnesses
  • deployment and monitoring

This is why AI capex has a different shape from a "normal" software investment: you can't just hire engineers. You also have to buy electricity, silicon, and real estate.
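The electricity-plus-silicon point can be made concrete with a back-of-envelope cost model. All figures below (GPU count, unit price, power draw, electricity price) are hypothetical placeholders for illustration, not Meta's actual numbers:

```python
# Back-of-envelope AI infrastructure cost model.
# Every input figure here is a hypothetical placeholder, not a real Meta number.

def annual_cluster_cost(
    num_gpus: int,
    gpu_price_usd: float,        # purchase price per accelerator
    amortization_years: float,   # hardware depreciation window
    watts_per_gpu: float,        # effective draw incl. cooling overhead
    usd_per_kwh: float,          # blended electricity price
) -> dict:
    """Rough annual cost of owning and powering a GPU cluster."""
    hardware = num_gpus * gpu_price_usd / amortization_years
    hours_per_year = 24 * 365
    energy_kwh = num_gpus * watts_per_gpu / 1000 * hours_per_year
    electricity = energy_kwh * usd_per_kwh
    return {
        "hardware_usd_per_year": hardware,
        "electricity_usd_per_year": electricity,
        "total_usd_per_year": hardware + electricity,
    }

# Hypothetical: 100k accelerators at $30k each, 4-year depreciation,
# 1.2 kW effective draw per chip, $0.08/kWh.
costs = annual_cluster_cost(100_000, 30_000, 4, 1_200, 0.08)
print(f"hardware:    ${costs['hardware_usd_per_year'] / 1e9:.2f}bn/yr")
print(f"electricity: ${costs['electricity_usd_per_year'] / 1e9:.2f}bn/yr")
```

Even with made-up inputs, the shape of the result is instructive: amortized hardware dominates, but electricity alone runs to hundreds of millions of dollars a year, which is why long-term power contracts show up in these deals.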

Meta’s strategic bet: AI + distribution is a moat

Meta is one of the few companies with global consumer distribution across multiple surfaces:

  • Facebook
  • Instagram
  • WhatsApp
  • (and adjacent efforts in hardware/AR)

If AI becomes a primary interface for how people discover content, communicate, and create media, distribution matters.

Meta’s implicit strategy is:

  1. invest aggressively to build model capability and capacity
  2. deploy it across the surfaces where people already spend time
  3. turn those improvements into:
    • better engagement
    • better ad performance
    • new products (assistants, agents, creative tools)

Even small improvements in ad targeting efficiency or creative generation can compound, because Meta’s ad business is so large.
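The compounding claim is easy to sanity-check with arithmetic. The revenue base below is a hypothetical round number, not Meta's reported figure:

```python
# Why small ad-efficiency gains matter at scale.
# AD_REVENUE is a hypothetical round-number base, not a reported figure.

AD_REVENUE = 160e9  # assumed annual ad revenue, USD

for lift in (0.5, 1.0, 2.0):  # percent improvement in targeting efficiency
    extra = AD_REVENUE * lift / 100
    print(f"{lift:.1f}% lift -> ~${extra / 1e9:.1f}bn extra per year")
```

On a base that large, even a half-percent improvement is worth hundreds of millions of dollars annually, which is the core logic behind spending billions to get it.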

The “AI dramatically changes work” claim: what it could mean

Zuckerberg’s comments about projects shrinking from “big teams” to “a single, very talented person” signal a very specific direction: AI as a productivity multiplier inside the company.

In practice, that could look like:

  • software engineers using AI to write, refactor, test, and document code faster
  • product managers using AI to synthesize feedback, generate experiments, draft specs
  • marketers generating variants and iterating quickly

But there’s a catch: productivity tools are uneven. People who learn to use them well get much more value. That aligns with Zuckerberg’s comment about a “big delta” between people who do it well and those who don’t.

Why layoffs show up in the same conversation

When executives talk about productivity compression, layoffs are the shadow topic.

It doesn’t necessarily mean “AI replaces everyone.” More often it means:

  • fewer people needed for routine tasks
  • teams are expected to ship more with less
  • organisations re-rank which roles are strategic

Reality Labs layoffs in particular hint that Meta is shifting budget away from longer-horizon bets (metaverse hardware) toward nearer-term AI infrastructure and AI product integration.

Bubble risk: why smart people keep saying the quiet part out loud

The article notes multiple leaders raising bubble concerns, comparing the moment to the dot-com era.

This is an important nuance: “bubble” doesn’t mean “AI is fake.” It usually means:

  • too much capital is chasing too few clearly profitable applications
  • many companies will not survive the shakeout
  • infrastructure winners and distribution winners capture most value

Cisco’s CEO is quoted warning that winners will emerge but there will be “carnage along the way.” That’s a realistic description of technology transitions.

One more dot-com lesson: during the bubble, companies built real infrastructure (fibre, data centres, networks). Much of the early equity value evaporated—but the infrastructure remained and later enabled the modern internet economy. Today’s AI buildout could follow the same pattern: painful shakeout for some firms, but long-lived capacity that becomes foundational.

Meta’s risk profile: four ways this can go wrong

1) Capex without durable product differentiation

If competitors match capabilities quickly, the spend becomes table stakes—expensive, but not differentiating.

2) Underestimating operating costs

Buying hardware is only the start. Model training and inference burn:

  • electricity
  • networking capacity
  • engineering time for evaluation and safety

If operating costs scale faster than revenue lift, the “AI advantage” becomes a margin drag.

3) Margin pressure and investor patience

Meta can afford large spend as long as its core ad engine remains strong. But if macro conditions or engagement shift, investors will reprice the risk.

4) Regulatory and trust issues

AI-driven ranking and generation raise concerns about:

  • misinformation amplification
  • deepfakes and fraud
  • content moderation errors
  • privacy boundaries in messaging apps

If AI features create more harm than value, regulators may tighten constraints, reducing upside.

What success looks like (signals worth watching)

If you want to judge whether Meta’s AI spending is working, ignore press releases and look for measurable signals.

A useful framing: Meta needs AI to improve either revenue per user, cost per unit of output, or ideally both. If you can’t see those showing up over time, the investment thesis weakens.

1) Product improvements that stick

  • better recommendations that increase time spent without increasing complaints
  • creative tools that genuinely reduce friction for advertisers and creators

2) Business performance

  • ad pricing and conversion quality
  • cost per outcome for advertisers
  • whether revenue growth accelerates relative to expense growth

3) Model capability and deployment pace

  • how quickly new models are deployed across apps
  • whether “agents” become useful in normal workflows (not just demos)

4) Safety and trust

  • how well Meta contains abuse (scams, impersonation, synthetic media)
  • transparency about AI-generated content
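The "revenue per user vs cost per output" framing above can be turned into a simple check against reported numbers. The annual series below are hypothetical illustrations, not Meta's actual financials:

```python
# Quick thesis check: is revenue growing faster than expenses?
# Both series are hypothetical illustrative numbers, not Meta's financials.

def growth(series):
    """Year-over-year growth rates for a sequence of annual figures."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

revenue  = [117e9, 135e9, 165e9]  # assumed annual revenue, USD
expenses = [88e9, 95e9, 125e9]    # assumed annual total expenses, USD

rev_g, exp_g = growth(revenue), growth(expenses)
for year, (r, e) in enumerate(zip(rev_g, exp_g), start=1):
    verdict = "margin expanding" if r > e else "margin compressing"
    print(f"year {year}: revenue +{r:.0%}, expenses +{e:.0%} -> {verdict}")
```

The point of the exercise: the absolute size of the AI budget matters less than whether expense growth keeps outpacing revenue growth year after year. That ratio, not the headline capex number, is what investors will actually reprice on.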

A practical reader’s guide: what to believe and what to treat as marketing

AI announcements often mix solid engineering realities with narrative framing. A useful checklist:

  • If it’s about chips, power, and data centres, it’s real and measurable.
  • If it’s about agents changing work, ask what workflows are actually improved today.
  • If it’s about cost savings, ask whether savings show up in margins or just fund more growth.

Bottom line

Meta is spending like a company that believes AI is the next platform shift—and that the right move is to secure compute and deploy AI everywhere its users already are.

The scale is the story: Meta is choosing to compete on infrastructure and distribution, not just on clever prompts. That’s the kind of commitment that can create a moat—or a very expensive mistake.

The upside is real: better products, better ads, new assistants and creative tools. The downside is also real: margin compression, a crowded AI field, and the risk that regulation and trust problems blunt the returns.

This is what a platform transition looks like in real time: huge infrastructure investment, loud skepticism, and a race to prove that the spend turns into durable advantage.

If Meta can show sustained improvements in ad performance and product stickiness while keeping trust and safety under control, the capex will look like foresight. If not, it risks becoming a high-profile example of how easy it is to overspend in a hype cycle.

