AI anti-shoplifting tech: from CCTV to watchlists on the high street

Retailers are increasingly turning to “smart” surveillance to deal with a very old problem: theft. The newest wave goes beyond standard CCTV, using tools that can flag faces, bodies, or behaviour patterns in real time.

A BBC report filmed by Jim Connolly shows how quickly this kind of AI-driven anti-shoplifting tech is moving from big chains into everyday places like an independent Post Office. It also shows why the pushback is growing just as fast: these systems don’t just watch — they can sort people into risk categories.

Why the tech is spreading now

Shoplifting has always been part of retail, but the incentives around it have shifted. Stores are operating with tighter staffing, more self-checkouts, and higher volumes moving through smaller teams. That creates a practical gap: fewer human eyes on the floor, but more opportunity for loss.

So vendors are pitching a tempting proposition: keep staffing roughly flat while “multiplying” vigilance using software.

The BBC piece notes that some major retailers and independent stores have introduced a mix of:

  • AI body scans
  • CCTV systems with automated alerts
  • facial recognition equipment

On paper, the systems are simple: instead of asking staff to watch a wall of screens, the computer watches and pings a staff member when it thinks something looks suspicious.

In practice, “suspicious” can mean several different things depending on the product:

  • a face the system thinks matches a previous incident
  • a body the system classifies as “known” or “unknown”
  • movement patterns that resemble prior thefts

That’s a broad net. And broad nets catch more fish — and more bycatch.

What “AI body scans” and facial recognition actually do

A useful way to think about these tools is that they turn video into searchable data.

Traditional CCTV is mostly passive: it records footage that someone might review later. AI-enabled CCTV is active: it tries to label what it sees as it happens.

Facial recognition (the obvious one)

Facial recognition attempts to create a “faceprint” from camera footage and compare it to a stored list. If there’s a close match, the system can alert a worker, lock a door, notify security, or simply log the event.
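In outline, the matching step is a similarity comparison between a numeric "faceprint" and each stored entry. The sketch below is illustrative only: real systems use learned embeddings and tuned thresholds, and every name and value here (the vector format, `MATCH_THRESHOLD`, the watchlist shape) is an assumption, not a description of any vendor's product.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two feature vectors, 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Illustrative threshold: higher means fewer false alerts but more missed matches.
MATCH_THRESHOLD = 0.92

def check_against_watchlist(faceprint, watchlist):
    """Return the closest watchlist entry if it clears the threshold, else None."""
    best = max(
        watchlist,
        key=lambda entry: cosine_similarity(faceprint, entry["vector"]),
        default=None,
    )
    if best and cosine_similarity(faceprint, best["vector"]) >= MATCH_THRESHOLD:
        return best
    return None

watchlist = [{"id": "incident-042", "vector": [0.1, 0.9, 0.3]}]
print(check_against_watchlist([0.1, 0.9, 0.3], watchlist))  # close match: flagged
print(check_against_watchlist([0.9, 0.1, 0.2], watchlist))  # no match: None
```

Note the design consequence: everything downstream (the alert, the door lock, the logged event) hinges on one tunable threshold, which is exactly where false positives enter.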

From the store’s point of view, this is attractive because it promises consistency: the same person who stole last week can be spotted at the entrance today.

But it also creates a sharp question: where does the reference list come from, and how does someone get off it?

AI body scans (less intuitive, but often more common)

The BBC report mentions AI body scans alongside facial recognition. In many deployments, “body scanning” doesn’t mean a sci‑fi full-body scanner. It often means a system that detects and tracks people based on body shape, posture, clothing silhouette, or movement.

Why would a retailer use this?

  • Body-based identification can work even when the face is partially obscured.
  • It can track a person across multiple camera angles.
  • It can label “behaviour” (lingering, moving quickly, returning to a shelf) as patterns.
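The "lingering" label in the list above can be made concrete with a small sketch: a tracked person becomes a sequence of timestamped zone observations, and a dwell longer than some cutoff gets flagged. The zone names and the 60-second threshold are assumptions for illustration, not values from the BBC report or any real deployment.

```python
# Illustrative dwell-time detector. A track is a list of
# (timestamp_seconds, zone) observations for one tracked person.
LINGER_SECONDS = 60

def label_lingering(track):
    """Return zones where the track stayed longer than LINGER_SECONDS."""
    flags = []
    start_time, current_zone = track[0]
    for timestamp, zone in track[1:]:
        if zone != current_zone:
            # Person moved: restart the dwell clock in the new zone.
            start_time, current_zone = timestamp, zone
        elif timestamp - start_time >= LINGER_SECONDS:
            flags.append(current_zone)
            start_time = timestamp  # avoid re-flagging the same dwell

    return flags

track = [(0, "entrance"), (10, "aisle-3"), (30, "aisle-3"),
         (80, "aisle-3"), (90, "checkout")]
print(label_lingering(track))  # ['aisle-3']: 70 seconds in one zone
```

Notice that nothing here identifies anyone: a purely behavioural rule like this can still put a person under extra scrutiny, which is the civil-liberties concern in the next paragraph.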

This is the part that makes civil liberties advocates nervous: you may not need to be identified by name to be treated as “someone we should watch.”

The quiet power of watchlists

Civil liberty campaigners told the BBC that the public are being put on “secret watchlists and electronically blacklisted” from their high streets.

That language matters, because it describes something bigger than a single shop deciding to ban a customer.

A watchlist becomes more consequential when it has these features:

  1. It persists over time. A moment of suspicion can follow you to future visits.

  2. It travels between locations. A flag from one shop can influence how you’re treated in another.

  3. It is hard to contest. If the system never tells you that you were flagged, you can’t challenge it.
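The three features above can be read straight off a hypothetical watchlist record. This data model is entirely illustrative (the field names and values are assumptions), but it shows how each problem is structural: the record persists, it can be shared, and nothing in it notifies the person flagged.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class WatchlistEntry:
    # All fields are hypothetical, for illustration only.
    subject_ref: str            # internal identifier, not necessarily a name
    reason: str                 # often an unverified suspicion, not a conviction
    created: date               # feature 1: the flag persists over time
    shared_with: list = field(default_factory=list)  # feature 2: travels between sites
    subject_notified: bool = False  # feature 3: hard to contest if never told

entry = WatchlistEntry("track-7f3a", "flagged by camera alert", date(2024, 5, 1))
entry.shared_with.append("partner-store-12")
print(entry.subject_notified)  # False: no notification, so no route to appeal
```

The point of the sketch is that contestability is not a policy afterthought; unless the schema itself records notification and an appeal route, the default is silence.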

Even without a formal “ban,” a watchlist can shape outcomes:

  • staff approach you differently
  • you’re watched more closely
  • you’re denied entry
  • security is called earlier than it otherwise would be

The risk is not only false positives — it’s that false positives become sticky.

What the law says vs what people experience

The BBC report says the government’s position is that commercial facial recognition is legal, provided its use complies with strict data protection laws and is transparent.

That single sentence contains the real battleground.

A retailer can do something that is technically legal and still trigger backlash if customers feel the rules are one-sided.

Surveillance tech changes the emotional contract of shopping. People accept a certain level of loss-prevention (cameras, staff, tags). But when the system begins to categorise visitors — potentially without them knowing — the relationship shifts from “store protects its goods” to “store is evaluating me.”

Transparency is harder than putting up a sign

“Transparency” sounds like an easy box to tick: add a notice at the door. But meaningful transparency would require answers to questions like:

  • Are you using facial recognition, or only standard CCTV?
  • What data do you store, and for how long?
  • Do you share the data with any other sites or partners?
  • How does someone appeal or correct a mistaken flag?

For most customers, the default is ignorance: they only learn a system exists when something goes wrong.

The operational trade-offs retailers don’t advertise

Retailers adopt these systems for cost and coverage, but they inherit risks that don’t fit neatly into a budget spreadsheet.

1) False positives create real-world harm

If the system flags an innocent person, the “harm” is not abstract. It can be embarrassment, intimidation, exclusion, or escalation.

It also has a feedback effect: once someone is treated like a suspect, any nervous behaviour can look more “suspicious,” reinforcing the system’s initial error.

2) Staff become enforcers of a black box

When a system pings an alert, staff are pushed into a decision point: act on it, or ignore it.

If they act and it’s wrong, the human interaction is the thing people remember — not the algorithm. If they ignore it and a theft happens, management may ask why the alert was dismissed.

So even if the tool is “advisory,” it becomes coercive inside the workplace.

3) The tech invites mission creep

A system installed for shoplifting might later be repurposed for:

  • identifying repeat refund attempts
  • enforcing bans for anti-social behaviour
  • tracking staff performance

Mission creep is not always malicious. It’s often just the logic of investment: “We already paid for this system; what else can it do?”

How the public conversation is likely to evolve

What comes next is less about the hardware and more about governance.

In the short term, we’ll probably see a pattern:

  • more deployments (especially as vendors package systems for smaller businesses)
  • more campaigns demanding clear rules and disclosure
  • more friction as customers learn that “smart surveillance” exists in everyday locations

The highest-leverage policy questions will be practical rather than philosophical:

  • Who sets the standards for accuracy?
  • Who audits the watchlists?
  • How does someone learn they were flagged?
  • What is the process for removal?

Without answers, retailers may find that a tool meant to prevent loss creates a different kind of cost: reputational damage and customer distrust.

Bottom line

AI anti-shoplifting systems promise to replace missing staff time with automated vigilance, and that’s why they’re spreading from big retailers into local shops. But when surveillance turns into categorisation — watchlists, blacklisting, and opaque “risk” labels — the technology stops being a quiet security measure and becomes a public trust problem.

