Malicious OpenClaw ‘skills’ are being used to spread password-stealing malware

A wave of malicious “skills” (plug-ins) targeting the local AI assistant OpenClaw has been used to deliver information-stealing malware, according to BleepingComputer. The packages were designed to look like helpful tools, but their setup instructions pushed victims into running commands that installed stealers.

This is the familiar supply-chain story, adapted to a new ecosystem: when an automation tool has broad access to files, credentials, and browsers, its plug-in registry becomes an attacker’s ideal distribution channel.

What happened (in broad strokes)

BleepingComputer reports that more than 230 malicious skills were published in under a week across the project’s official registry and GitHub. Some were near-identical clones with randomized names, and a subset became popular.

The skills impersonated “useful” utilities (including crypto and social-media-related tools) but ultimately aimed to steal sensitive data such as API keys, wallet secrets, SSH credentials, and browser passwords.

How the “documentation” became the exploit

Instead of relying only on a hidden binary, the campaign leaned on social engineering.

BleepingComputer describes a separate tool referenced in the docs, “AuthTool,” presented as a required dependency. In reality, it functioned as the malware delivery mechanism.

This mirrors the broader “ClickFix” pattern: the victim is convinced to run a command manually because it looks like a troubleshooting step, not an infection.

Why AI assistants are unusually attractive targets

Local AI assistants often request (or are granted) extensive permissions:

  • Reading project folders and configuration files
  • Accessing terminal sessions
  • Integrating with browsers and password stores
  • Talking to APIs using developer keys

That makes them “credential concentrators.” A single successful infection can yield a pile of secrets that can be reused elsewhere.

Practical steps to reduce risk

If you use OpenClaw (or any tool with a plug-in ecosystem), treat skills like code you are installing, not “prompts.”

  1. Prefer vetted, well-known publishers. New accounts, random names, and cloned descriptions are red flags.
  2. Audit install instructions. Any step that asks you to paste base64 blobs or pipe a curl download straight into a shell (curl … | sh) should be assumed malicious.
  3. Sandbox the assistant. Run it in a VM/container with minimal filesystem access.
  4. Use least privilege for API keys. Separate keys per tool; keep scopes narrow; rotate regularly.
  5. Monitor outbound connections. Unexpected domains during installation/setup are suspicious.
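Step 2 above can be partially automated. The sketch below scans a downloaded skill's docs and scripts for the red flags described in this campaign (curl-piped-to-shell, base64 decoding, long encoded blobs) before you install anything. The pattern list and file extensions are illustrative assumptions, not a vetted detection ruleset; treat a clean result as "nothing obvious," not as a guarantee.

```python
import re
from pathlib import Path

# Illustrative red-flag patterns; extend for your own environment.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"curl[^\n|]*\|\s*(ba)?sh"), "pipes curl output into a shell"),
    (re.compile(r"wget[^\n|]*\|\s*(ba)?sh"), "pipes wget output into a shell"),
    (re.compile(r"base64\s+(-d|--decode)"), "decodes a base64 blob"),
    (re.compile(r"[A-Za-z0-9+/]{120,}={0,2}"), "contains a long base64-like blob"),
    (re.compile(r"chmod\s+\+x\s+/tmp/"), "marks a file in /tmp executable"),
]

def audit_skill_docs(skill_dir: str) -> list[tuple[str, str]]:
    """Flag suspicious install instructions in a skill's docs and scripts."""
    findings = []
    for path in Path(skill_dir).rglob("*"):
        # Only inspect text-like files a skill typically ships with.
        if path.suffix.lower() not in {".md", ".txt", ".sh", ".json", ".yaml", ".yml"}:
            continue
        text = path.read_text(errors="ignore")
        for pattern, reason in SUSPICIOUS_PATTERNS:
            if pattern.search(text):
                findings.append((str(path), reason))
    return findings
```

Running this over a skill directory before installation turns "eyeball the README" into a repeatable check; anything it flags deserves a manual read before you proceed.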

If you suspect you ran a malicious skill, assume credential compromise and rotate:

  • Browser passwords / password manager tokens
  • SSH keys
  • Cloud credentials
  • API keys and “.env” secrets
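To make that rotation concrete, a quick inventory of what a stealer on your machine could have reached helps prioritize. The sketch below checks a few common secret locations and finds .env files under a project tree; the candidate paths are assumptions based on typical developer setups, not an exhaustive list.

```python
from pathlib import Path

# Common locations for local secrets; illustrative, not exhaustive.
CANDIDATE_SECRETS = [
    "~/.ssh/id_rsa",
    "~/.ssh/id_ed25519",
    "~/.aws/credentials",
    "~/.netrc",
    "~/.npmrc",
]

def find_local_secrets(project_root: str = ".") -> list[str]:
    """List secret files present on this machine, plus .env files under project_root."""
    found = [str(p) for raw in CANDIDATE_SECRETS
             if (p := Path(raw).expanduser()).exists()]
    # Project-level secrets an assistant with filesystem access could read.
    found += [str(p) for p in Path(project_root).rglob(".env")]
    return found
```

Every path this returns is a credential to rotate, not just a file to delete: the stealer may already have exfiltrated a copy.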

What registries can do (and what they can’t)

Registry operators can add scanning, reputation signals, and takedown processes. But when an ecosystem is growing quickly, volume outpaces review.

That means the safety baseline still depends on user behavior and deployment hygiene.

Bottom line

The OpenClaw skill campaign is a warning that “AI toolchains” are now part of the software supply chain. If a plug-in can run code or access secrets, treat it with the same caution you’d apply to installing a random package from npm or PyPI.

