Key Points
- OpenClaw’s ClawHub marketplace was found to host malicious skill add‑ons.
- Researchers identified 28 malicious skills and 386 malicious add‑ons over a short period.
- Malicious skills often pose as cryptocurrency tools but steal API keys, private keys, SSH credentials, and passwords.
- Skills are typically uploaded as markdown files, allowing hidden malicious instructions.
- A popular “Twitter” skill was flagged for directing the AI to download infostealing malware.
- OpenClaw’s creator introduced a minimum‑age GitHub account requirement for publishers.
- A new reporting system has been added to flag suspicious skills.
- Users granting deep device access to OpenClaw may be exposed to these threats.
Security Concerns Emerge Around OpenClaw’s Skill Marketplace
OpenClaw, an AI assistant that users can interact with through platforms such as WhatsApp, Telegram, and iMessage, has drawn attention for its expanding functionality. Marketed as a tool that can manage calendars, check in for flights, and clean inboxes, the assistant runs locally on devices and can be granted extensive permissions, including the ability to read and write files, execute scripts, and run shell commands.
Recent findings by security researchers have highlighted a new threat vector: malicious “skill” add‑ons uploaded to OpenClaw’s ClawHub marketplace. 1Password product vice president Jason Meller described the skill hub as an “attack surface,” noting that the most‑downloaded add‑on was being used as a “malware delivery vehicle.”
Malicious Skills and Add‑Ons Identified
The OpenSourceMalware platform, which tracks malware across open‑source ecosystems, reported the discovery of 28 malicious skills published between January 27 and January 29, followed by 386 malicious add‑ons uploaded between January 31 and February 2. These malicious contributions often masquerade as cryptocurrency trading automation tools. In reality, they deliver information‑stealing malware designed to capture assets such as exchange API keys, wallet private keys, SSH credentials, and browser passwords.
Many of the skills are uploaded as markdown files, a format that can embed instructions for both the user and the AI agent. An example highlighted by researchers involved a popular “Twitter” skill that directed users to a link crafted to cause the assistant to execute a command that downloaded infostealing malware.
Response From OpenClaw’s Creator
Peter Steinberger, the creator of OpenClaw, acknowledged the emerging risks and announced steps to mitigate them. ClawHub now requires contributors to have a GitHub account that is at least one week old before they can publish a skill. Additionally, a new reporting mechanism has been introduced to allow the community to flag suspicious add‑ons. While these measures aim to reduce the chance of malicious code entering the marketplace, they do not eliminate the possibility entirely.
The situation underscores the broader challenge of balancing powerful AI capabilities with robust security practices. Users who grant OpenClaw deep access to their devices may inadvertently expose themselves to threats hidden within seemingly benign skill extensions. As the AI assistant ecosystem continues to grow, vigilance and proactive security measures will be essential to protect both personal data and digital assets.
Source: theverge.com