Non-human identities are a breach engine: why tokens and service accounts keep getting exposed

Non-human identities—API keys, tokens, service accounts, workload identities—are now one of the easiest ways into modern cloud environments. Not because attackers suddenly became geniuses, but because organizations increasingly run on machine-to-machine trust, and that trust is often overbroad, long-lived, and poorly monitored.

Recent analyses point to a familiar pattern at enormous scale: thousands of container images and repositories accidentally exposing secrets that quietly grant access to production systems. The problem isn’t just that developers sometimes make mistakes. The problem is that default tooling and incentives make it easy to ship secrets and hard to prove you didn’t.

This is an “invisible breach” story. Many compromises won’t start with an exploit or a loud malware event. They’ll start with a token that authenticates cleanly—so everything looks normal—until you realize the wrong principal has been using it.

What “non-human identities” are (in plain terms)

A non-human identity (NHI) is any credential that lets software authenticate as a trusted actor:

  • cloud access keys and session tokens
  • service accounts and workload identities
  • CI/CD credentials used by build pipelines
  • tokens for SaaS tools (GitHub, GitLab, Slack, monitoring platforms)
  • API keys for third-party services (payment providers, email providers, AI model APIs)

The important difference from human logins is that NHIs typically:

  • run continuously
  • are embedded in code or config
  • and often don’t use MFA

That combination makes them attractive targets.

If an attacker obtains a working token, they don’t have to “break in.” They authenticate.

Why this is getting worse now

Three trends are pushing NHIs from “important” to “dominant risk”:

1) Software supply chains are bigger than ever

Modern apps are assembled from:

  • containers
  • open-source dependencies
  • infrastructure-as-code
  • dozens of SaaS integrations

Every integration adds another credential.

2) Automation is everywhere

Organizations want:

  • faster deploys
  • self-service infrastructure
  • ephemeral environments

Automation is good—but it is powered by identities that have privileges.

3) Credentials last longer than the people who created them

Humans change roles and leave.

But a token in a repo or a container can:

  • persist for months or years
  • be copied into new builds
  • and remain valid long after anyone remembers it exists

So the attack surface grows silently.

How secrets leak in real life (it’s not always “someone committed a key”)

The stereotype is a developer committing AWS_SECRET_ACCESS_KEY to GitHub.

That still happens. But a lot of leakage is less obvious:

  • tokens baked into container layers
  • config files copied into images during build
  • debug logs containing secrets
  • “temporary” keys shared in chat and later pasted into code
  • CI variables printed by misconfigured pipelines

And container images are particularly dangerous because:

  • they get mirrored
  • they get cached
  • they get copied between teams

Even if you delete the key from the repo, the key can remain in old image layers.
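As a rough sketch of why deleted keys survive in images, the snippet below scans a saved layer tarball for secret-shaped strings. The two regex rules are illustrative only (a real scanner such as a commercial or open-source secret detector uses far larger rule sets plus entropy checks), and the demo tarball it builds is synthetic:

```python
import io
import re
import tarfile

# Illustrative patterns only: the AWS access key ID shape, and a generic
# "secret-ish name assigned a long opaque value" heuristic.
SECRET_PATTERNS = [
    re.compile(rb"AKIA[0-9A-Z]{16}"),
    re.compile(rb"(?i)(secret|token|password)\s*[=:]\s*['\"]?[A-Za-z0-9/+]{20,}"),
]

def scan_layer(tar_bytes: bytes) -> list[tuple[str, bytes]]:
    """Scan every regular file inside a layer tarball for secret-like strings."""
    hits = []
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            data = tar.extractfile(member).read()
            for pattern in SECRET_PATTERNS:
                for match in pattern.finditer(data):
                    hits.append((member.name, match.group(0)))
    return hits

# Build a tiny demo "layer" containing a leaked key (fake value).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"AWS_ACCESS_KEY_ID=AKIAABCDEFGHIJKLMNOP\n"
    info = tarfile.TarInfo(name="app/.env")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

findings = scan_layer(buf.getvalue())
print(findings)  # the key is still in the layer, repo history aside
```

The point of scanning layers rather than source: the repo can be clean while every cached or mirrored image still carries the key.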

Why leaked tokens are more dangerous than many exploits

Exploits are noisy. They often trigger alerts.

Leaked tokens are quiet. They often look like normal usage:

  • successful authentication
  • correct API calls
  • legitimate endpoints

That changes the defender’s problem.

Instead of detecting “an attacker,” you have to detect:

  • an unexpected principal using valid credentials
  • from unusual locations
  • at unusual times
  • doing unusual actions

This is why NHIs are a detection gap for many organizations.

The privilege problem: tokens are often too powerful

A lot of secrets are created as “get it working” shortcuts:

  • broad cloud permissions
  • admin-level API access
  • long-lived keys

And once the system works, people don’t want to touch it.

This creates a dangerous asymmetry:

  • a human account might have MFA and monitoring
  • the machine identity might have broad access and little scrutiny

When the machine identity leaks, the blast radius can be bigger.

What a good NHI strategy looks like (concrete practices)

This is solvable, but only if you treat NHIs as first-class security assets.

1) Prefer short-lived credentials

Where possible:

  • use temporary session credentials
  • rotate tokens frequently
  • avoid “never expires” keys

Short-lived tokens reduce the payoff of leaks.

2) Replace static keys with workload identity where you can

In modern cloud setups, you can often authenticate workloads via:

  • instance identity
  • OIDC federation
  • managed identity

This reduces the need to store static keys.
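To make the OIDC federation idea concrete, here is a sketch of the claim check a cloud provider performs before exchanging a CI token for cloud credentials. It assumes the token's signature has already been verified against the provider's JWKS; the issuer URL, audience, and repository subject below are illustrative values:

```python
# Claim check for federated CI identity. Signature verification is assumed
# to have happened already; these values are examples, not real endpoints.
TRUSTED_ISSUER = "https://token.actions.githubusercontent.com"
EXPECTED_AUDIENCE = "sts.example-cloud.com"
ALLOWED_SUBJECTS = {"repo:example-org/example-app:ref:refs/heads/main"}

def claims_allow_deploy(claims: dict) -> bool:
    """Only a specific repo/branch, attested by a trusted issuer, may deploy."""
    return (
        claims.get("iss") == TRUSTED_ISSUER
        and claims.get("aud") == EXPECTED_AUDIENCE
        and claims.get("sub") in ALLOWED_SUBJECTS
    )

good = {"iss": TRUSTED_ISSUER, "aud": EXPECTED_AUDIENCE,
        "sub": "repo:example-org/example-app:ref:refs/heads/main"}
bad = dict(good, sub="repo:attacker/fork:ref:refs/heads/main")
print(claims_allow_deploy(good))  # True
print(claims_allow_deploy(bad))   # False
```

The key property: there is no static secret to steal; trust flows from a short-lived, provider-signed assertion about *which workload* is asking.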

3) Separate environments strictly

A common failure is using the same token across:

  • dev
  • staging
  • production

Tokens should be environment-scoped.

If a dev image leaks, it shouldn’t unlock prod.
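Environment scoping can be as simple as tagging every token with the environment it was minted for and refusing cross-environment use. A tiny sketch (token values and environment names are made up):

```python
# Hypothetical token metadata; in practice this lives in your secrets
# manager or IAM system, not in application code.
TOKEN_REGISTRY = {
    "tok-dev-123":  {"env": "dev"},
    "tok-prod-789": {"env": "production"},
}

def authorize(token: str, target_env: str) -> bool:
    """Reject any token used outside the environment it was minted for."""
    meta = TOKEN_REGISTRY.get(token)
    return meta is not None and meta["env"] == target_env

print(authorize("tok-dev-123", "production"))   # False: leaked dev token is useless in prod
print(authorize("tok-prod-789", "production"))  # True
```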

4) Inventory and ownership

Every meaningful token should have:

  • an owner
  • a purpose
  • an expected usage pattern

If a token has no owner, it is technical debt waiting to become an incident.
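A starting point for inventory is just structured records plus a query for orphans. The records below are invented; in practice they would come from a secrets-manager or cloud IAM export:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TokenRecord:
    name: str
    owner: Optional[str]           # team or person accountable for it
    purpose: Optional[str]         # why it exists
    expected_usage: Optional[str]  # when/where it should appear in logs

# Hypothetical inventory entries.
inventory = [
    TokenRecord("ci-deploy-key", "platform-team", "prod deploys", "weekdays, CI runners"),
    TokenRecord("legacy-backup-key", None, None, None),
]

def orphans(records):
    """Tokens with no owner or purpose: debt waiting to become an incident."""
    return [r.name for r in records if not (r.owner and r.purpose)]

print(orphans(inventory))
```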

5) Monitor NHI behavior like you monitor human behavior

Good signals include:

  • impossible travel / unusual geographies
  • unusual API call sequences
  • spikes in data access
  • new permissions granted
  • new tokens created

The goal is not perfect detection; it’s early detection.

6) Treat CI/CD as a high-risk identity factory

CI systems frequently hold:

  • deployment keys
  • signing keys
  • cloud credentials

Lock them down:

  • least privilege
  • separated runners
  • secret masking and prevention of log leaks
  • strict approvals for production deploy steps
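Secret masking, in particular, is mechanically simple: the CI system knows which values it injected, so it can scrub them from every log line before storage. A sketch with invented secret values:

```python
import re

# Values the CI system injected into this job; both are fake examples.
KNOWN_SECRETS = ["s3cr3t-deploy-token", "AKIAABCDEFGHIJKLMNOP"]

def mask(line: str) -> str:
    """Replace known secret values in a log line before it is stored."""
    for secret in KNOWN_SECRETS:
        line = line.replace(secret, "***")
    # Belt and braces: also mask anything shaped like an AWS access key ID,
    # in case a secret reached the logs that CI didn't know it injected.
    return re.sub(r"AKIA[0-9A-Z]{16}", "***", line)

masked = mask("curl -H 'Authorization: Bearer s3cr3t-deploy-token' https://api.example.com")
print(masked)
```

Masking is damage control, not prevention: the secret still exists, but it stops logs from becoming a second leak path.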

Where teams usually fail (and how to avoid it)

“We rotate keys sometimes” isn’t a plan

If rotation is manual and painful, it won’t happen under pressure.

Make rotation routine and automated.

Security tools without enforcement become “monitoring theater”

Scanning repositories for secrets is useful, but it’s not enough.

You also need:

  • rapid revocation
  • alerts on usage
  • and prevention in build pipelines

The container layer trap

If secrets ever entered a container build context, assume they may exist in:

  • old images
  • cached layers
  • CI artifacts

The fix is not only “delete the repo key.” It’s:

  • rotate the secret
  • rebuild and republish images
  • invalidate caches where possible

What to watch next

If you want to track whether organizations are improving on NHIs, look for:

  • adoption of short-lived identity (OIDC/workload identity)
  • widespread token rotation programs
  • stronger CI/CD boundary controls
  • incident reports that acknowledge “valid credentials used” as a primary cause

Also watch the tooling side: the best tools will shift from “detect exposed strings” to “reduce the number of static secrets that exist at all.”

The economics: why attackers love token hunting

Token hunting scales.

An attacker who steals one working credential can often:

  • access multiple systems (cloud + source control + CI)
  • reuse the same technique across many organizations
  • and sell access in marketplaces if they don’t want to operate it themselves

For defenders, this means the threat isn’t just “a hacker targeting us.” It’s “a machine economy that profits from any reusable credential.”

That’s why prevention is more valuable here than response. If the key never existed in static form, it can’t be harvested later.

Concrete detection ideas (what to alert on)

If you’re building detections for NHIs, focus on behavior changes rather than magic keywords.

High-signal examples:

  • A service account used from a new country/ASN it never used before.
  • A token that normally only calls one API suddenly enumerates resources or downloads large volumes.
  • A CI identity performing actions outside the normal release window.
  • Secrets used from interactive user endpoints when they were intended only for server workloads.

Even basic anomaly alerts can catch the “quiet credential theft” pattern early.
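The anomaly ideas above can be sketched as a per-principal baseline plus a deviation check. The audit events here are simulated, and real detections would add time windows and volume thresholds:

```python
from collections import defaultdict

# Simulated NHI audit events: (principal, country, api_call).
history = [
    ("svc-reports", "US", "GetReport"),
    ("svc-reports", "US", "GetReport"),
    ("svc-reports", "US", "GetReport"),
]

# Baseline: which countries and API calls each principal normally uses.
baseline = defaultdict(lambda: {"countries": set(), "apis": set()})
for principal, country, api in history:
    baseline[principal]["countries"].add(country)
    baseline[principal]["apis"].add(api)

def alerts_for(event):
    """Flag behavior changes: a new country or a never-before-seen API call."""
    principal, country, api = event
    seen = baseline[principal]
    out = []
    if country not in seen["countries"]:
        out.append(f"{principal}: new country {country}")
    if api not in seen["apis"]:
        out.append(f"{principal}: new API call {api}")
    return out

print(alerts_for(("svc-reports", "US", "GetReport")))       # [] — normal usage
print(alerts_for(("svc-reports", "RU", "ListAllBuckets")))  # two alerts fire
```

Even this crude baseline catches the signature of credential theft: valid authentication, wrong behavior.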

Concrete hardening ideas (low effort, high leverage)

These are practical changes most teams can make without a massive redesign:

  • Reduce token scope: split one broad token into several narrowly scoped tokens.
  • Rotate on schedule: rotate even when nothing is “wrong,” so rotation becomes muscle memory.
  • Gate production: require explicit approval for production deploy identities.
  • Block plaintext secrets in builds: CI should fail builds when obvious secret patterns appear.

Each move shrinks blast radius even if a leak still occurs.
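"Rotate on schedule" is easy to automate as a nightly job that flags any token older than the policy allows. The token names, dates, and 90-day window below are illustrative; `created` timestamps would come from your secrets manager's API:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # example rotation policy
now = datetime(2025, 6, 1, tzinfo=timezone.utc)  # pinned for the demo

# Hypothetical token creation times.
tokens = {
    "ci-deploy-key": datetime(2025, 5, 20, tzinfo=timezone.utc),
    "legacy-backup-key": datetime(2023, 1, 10, tzinfo=timezone.utc),
}

def rotation_due(created):
    """Rotate on schedule, even when nothing is 'wrong'."""
    return now - created > MAX_AGE

due = sorted(name for name, created in tokens.items() if rotation_due(created))
print(due)
```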

A simple internal policy that prevents a lot of pain

If you want one policy that forces better behavior, it’s this:

  • No production-capable secrets on developer laptops or in container build contexts.

That rule drives changes like:

  • workload identity for services
  • staging credentials for development
  • and explicit approvals for production deployment steps

It’s annoying at first, but it cuts off the easiest leak paths.

Bottom line

Non-human identities are the backbone of modern automation—and also a major breach driver because they often bypass the protections we’ve built for humans.

If you want a practical threshold: once you can answer “where do our long-lived tokens live, who owns them, and how quickly can we revoke/rotate them,” you’ve moved from wishful thinking to a real defense program.

The practical fix is not one magic scanner. It’s a program: minimize static secrets, shrink privileges, make rotation routine, and monitor machine identities like you monitor user accounts.

