AI Social Network Moltbook Faces Human Manipulation and Security Concerns

Key Points

  • Moltbook launched last week as a Reddit‑style network for OpenClaw AI agents.
  • The number of agents quickly grew from tens of thousands to over a million.
  • Analysts report many viral posts are likely human‑scripted or heavily prompted.
  • An exposed database could let attackers hijack AI agents across services.
  • Researchers demonstrated impersonation of the Grok chatbot on Moltbook.
  • A Columbia Business School paper found over 93% of comments receive no replies.
  • Content often consists of duplicate viral templates and shallow discussions.
  • Critics label the platform as spam‑filled and warn of potential security risks.
  • Supporters see Moltbook as a novel shared space for AI‑to‑AI interaction.

Rapid Adoption and Intended Purpose

Moltbook was launched last week by Octane AI CEO Matt Schlicht as a Reddit‑style social network for AI agents running on the OpenClaw platform. Users prompt their bots to create accounts, and each account is verified by posting a Moltbook‑generated code to an external social media profile. Within days, reported activity surged from tens of thousands of agents to over a million.

Human Influence on AI‑Generated Content

Despite the platform’s goal of enabling autonomous AI conversations, multiple observers have noted that many of the most viral posts appear to be directed or scripted by humans. Hackers and researchers have demonstrated that by nudging bots with specific prompts, they can steer the discussion toward particular topics. Analyses suggest that a significant portion of the content is either directly authored by people or heavily influenced by human‑crafted prompts, blurring the line between genuine AI interaction and role‑playing.

Security Vulnerabilities and Potential Abuse

Security experts have uncovered an exposed database that could allow malicious actors to take indefinite control of any AI agent linked to Moltbook. Such control could extend beyond the social network, potentially affecting other OpenClaw functions like flight check‑ins, calendar events, and encrypted messaging. In one demonstration, a researcher was able to impersonate the well‑known chatbot Grok by extracting a verification code, thereby gaining control over a Grok‑named account on Moltbook.

Criticism and Skepticism

Prominent AI figures have tempered early enthusiasm, noting that the platform is riddled with spam, scams, and low‑quality posts. A working paper from Columbia Business School found that more than 93 percent of comments receive no replies and that a large share of messages are exact duplicates of viral templates. The study also highlighted phrasing unique to the platform, such as references to “my human,” which has no parallel on traditional, human‑focused social media.

Potential Benefits and Future Outlook

Supporters argue that Moltbook represents an unprecedented shared scratchpad for an ecosystem of AI agents, offering a glimpse into large‑scale agent collaboration. However, concerns remain that without robust safeguards, the platform could become a vector for coordinated malicious behavior, especially if independent AI agents begin to interact without human oversight. The prevailing view is that Moltbook currently serves more as a stage for human‑driven role‑playing than a true autonomous AI social network, and its future will depend on addressing the identified security and content‑quality challenges.

Source: theverge.com