Key Points
- Massive warned staff about OpenClaw on January 26 before any deployment.
- Valere banned the tool after an internal Slack post on January 29.
- Valere’s CEO highlighted risks to cloud services, credit‑card data, and code repositories.
- A week later, Valere allowed limited research on an old computer to identify flaws.
- Researchers advised restricting command access and password‑protecting the control panel.
- The bot can be tricked, potentially allowing malicious actors to exfiltrate files.
- Both companies emphasize proactive risk mitigation for AI tools.
Background
OpenClaw, an artificial‑intelligence‑driven utility, has attracted attention from technology companies for its powerful automation capabilities. However, recent internal evaluations at two firms have raised serious security concerns about the tool’s potential to infiltrate sensitive systems.
Massive’s Precautionary Approach
Massive, a provider of Internet proxy services to millions of users and businesses, issued a company‑wide warning on January 26. The co‑founder and chief executive, Grad, emphasized a policy of “mitigate first, investigate second” when confronting any threat that could jeopardize the company, its users, or its clients. This advisory was sent before any employee had installed OpenClaw, reflecting a proactive stance toward risk management.
Valere’s Initial Ban and Subsequent Research
Valere, which develops software for organizations including Johns Hopkins University, experienced a different trajectory. An employee posted about OpenClaw on an internal Slack channel on January 29, suggesting it as a new technology to test. The company’s president responded swiftly, declaring a strict ban on the tool’s use. Valere’s chief executive, Guy Pistone, told WIRED that unrestricted access could allow the bot to reach a developer’s machine, then move into cloud services and client data, such as credit‑card information and GitHub codebases. He added that the bot’s ability to “clean up” its actions heightened the concern.
Despite the ban, a week later Pistone permitted Valere’s research team to run OpenClaw on an employee’s retired computer. The goal was to uncover vulnerabilities and explore possible mitigations. The team’s findings recommended limiting who can issue commands to the bot and ensuring that any internet exposure of its control panel is protected by a password, thereby preventing unauthorized access.
Risk of Manipulation
Researchers also warned that users must “accept that the bot can be tricked.” They illustrated a scenario where OpenClaw, configured to summarize a user’s email, could be deceived by a malicious email that instructs the AI to share copies of files from the user’s computer. This example underscores the broader risk that the tool could be leveraged by attackers to exfiltrate data or perform unauthorized actions.
Implications for the Industry
The experiences of Massive and Valere highlight a growing awareness of AI‑related security risks within the tech sector. Companies are adopting precautionary policies, ranging from outright bans to controlled research environments, to safeguard proprietary information and customer data. The emphasis on password‑protecting control interfaces and restricting command authority reflects a practical approach to limiting exposure while still exploring the technology’s potential.
As AI tools become more capable, the balance between innovation and security will likely shape corporate strategies. The cases of Massive and Valere suggest that early, decisive action—whether through policy advisories, bans, or tightly governed experiments—may become a standard response to emerging AI threats.
Source: arstechnica.com