Smaller data centres, closer to users: why ‘edge’ compute is back

Summary: While tech giants continue to build enormous “AI factory” data centres, a counter-trend is gaining attention: smaller data centres closer to users (“edge” compute), on-device AI, and even reusing waste heat for buildings. The argument is not that hyperscale data centres vanish overnight, but that the default architecture of computing may shift from “everything in the cloud” toward a mix of cloud + local.

This matters because data centres are now a major economic and environmental story, not just an IT detail.

The big claim: ‘small is the new big’

The BBC report describes growing interest in:

  • small data centres near populations (lower latency)
  • local “edge” deployments
  • using waste heat (e.g., heating a pool or a home)

At the same time:

  • massive new data centre builds continue worldwide

So we’re in a transition phase: both models are expanding, for different reasons.

Why hyperscale data centres grew in the first place

Centralised data centres win on:

  • economies of scale
  • professional operations
  • easier redundancy planning
  • consolidated security teams

And they enable:

  • streaming
  • cloud apps
  • online banking
  • AI training and inference

They’re not going away quickly.

What’s changing: AI workloads are diversifying

The BBC notes a shift:

  • from generic “one model for everything” toward bespoke enterprise AI tools
  • toward smaller models that can run locally

This matters because:

  • smaller models need less compute
  • local models reduce data movement
  • privacy can improve when data stays on-device

As the report notes, premium devices already do some AI on-device (Apple Intelligence, Copilot+ PCs).
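
To make the on-device idea concrete, here is a minimal sketch of fully local inference in Python using Hugging Face’s transformers pipeline. The model name is just an illustrative compact model; the point is that, after a one-time download, nothing leaves the device.

```python
# Minimal sketch: a small language model running entirely on local hardware.
# Assumes the `transformers` and `torch` packages are installed; "distilgpt2"
# is just an illustrative compact model that fits in local memory.
from transformers import pipeline

# The model is downloaded once; after that, inference needs no network,
# so prompts and outputs never leave the device.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("Edge computing matters because", max_new_tokens=40)
print(result[0]["generated_text"])
```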

Edge compute: the latency argument

If you’re doing:

  • real-time video analytics
  • AR/VR
  • industrial automation
  • autonomous systems

then latency matters. Processing closer to users can:

  • reduce delay
  • reduce bandwidth needs
  • improve resilience

Edge isn’t about replacing the cloud; it’s about not sending everything to the cloud.
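
How much does distance actually cost? A back-of-envelope sketch: light in optical fibre covers roughly 200 km per millisecond, and every request makes a round trip. The distances below are assumptions for illustration, and real latency (queuing, routing, server processing) only adds to these floors.

```python
# Back-of-envelope propagation delay. Light in optical fibre covers roughly
# 200 km per millisecond, and a request must travel out and back.
C_FIBRE_KM_PER_MS = 200

def round_trip_ms(distance_km: float) -> float:
    """Physical lower bound on round-trip time; ignores queuing,
    routing, and server processing, which only add delay."""
    return 2 * distance_km / C_FIBRE_KM_PER_MS

# Illustrative distances (assumptions, not measurements):
print(f"Metro edge site, ~50 km:    {round_trip_ms(50):.1f} ms floor")
print(f"National cloud, ~1,000 km:  {round_trip_ms(1000):.1f} ms floor")
print(f"Overseas region, ~8,000 km: {round_trip_ms(8000):.1f} ms floor")
```

For context, a 60 fps video pipeline has about 16.7 ms of total budget per frame, so an 80 ms network floor alone rules out the distant region for that workload.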

Waste heat: the “physics dividend”

Computing produces heat.

In a centralised data centre, that heat is often treated as a problem.

In a distributed model, heat can be a feature:

  • warm buildings
  • reduce heating costs

But it requires:

  • building integration
  • reliable operations
  • safety compliance

It’s not plug-and-play, but it’s a compelling idea.
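
Some rough arithmetic shows why: essentially every watt a server draws ends up as heat. A minimal sketch, with assumed figures for rack power, heat-recovery efficiency, and household heating demand:

```python
# Nearly all electrical power drawn by IT equipment becomes heat, so a
# rack's power draw is also, approximately, its heat output.
rack_power_kw = 10.0        # assumed draw for one dense compute rack
capture_efficiency = 0.7    # assumed fraction of heat usefully recovered
home_heat_demand_kw = 7.0   # assumed peak winter heating load for one home

usable_heat_kw = rack_power_kw * capture_efficiency
homes_heated = usable_heat_kw / home_heat_demand_kw

print(f"A {rack_power_kw:.0f} kW rack yields ~{usable_heat_kw:.0f} kW of usable heat")
print(f"That is roughly {homes_heated:.1f} home's worth of winter heating")
```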

The security trade-off

The BBC includes the counter-argument:

  • many small sites could be harder to secure

And the counter-counterargument:

  • large centres are big points of failure
  • smaller sites reduce blast radius

The truth is:

  • both architectures need strong security
  • centralisation concentrates risk
  • distribution multiplies attack surface

Policy and engineering must match the architecture.

Environmental pressure is forcing the conversation

Data centres consume:

  • large amounts of energy
  • significant water (in many cooling designs)

As demand rises, environmental constraints push toward:

  • efficiency
  • right-sizing models
  • local processing when appropriate

The “best” architecture may be the one that avoids unnecessary compute.
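
One way to make “avoid unnecessary compute” concrete is the common rule of thumb that transformer inference costs roughly 2 FLOPs per parameter per generated token. The model sizes and response length below are assumptions for illustration:

```python
# Rule of thumb: transformer inference costs ~2 FLOPs per parameter per
# generated token. Model sizes and token count are illustrative assumptions.
def inference_flops(params: float, tokens: int) -> float:
    return 2 * params * tokens

TOKENS = 500  # one assumed response

small = inference_flops(3e9, TOKENS)   # a ~3B-parameter on-device model
large = inference_flops(70e9, TOKENS)  # a ~70B-parameter hosted model

print(f"Small model: {small:.1e} FLOPs per response")
print(f"Large model: {large:.1e} FLOPs per response")
print(f"Right-sizing saves a factor of ~{large / small:.0f} when the small model is good enough")
```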

What to watch

  1. Smaller, specialised models becoming mainstream.
  2. On-device AI moving from premium to mid-range hardware.
  3. Edge build-outs near cities and industrial zones.
  4. Heat reuse projects scaling beyond niche pilots.
  5. Regulation and planning: grid capacity, zoning, sustainability rules.

Bottom line

We’re not seeing the end of big data centres. We’re seeing the beginning of a more hybrid computing world.

The long-run direction is likely: more compute moves closer to where data is generated, because that's faster, often more private, and potentially less wasteful.

