Moltbook, the Social Network for AI Agents, Exposed Real Humans’ Data
Summary
Security researchers at Wiz discovered a critical vulnerability in Moltbook, a Reddit‑like social network built for AI agents. A private key was mishandled in client-side JavaScript, leaking thousands of users’ email addresses and millions of API credentials. That exposure could allow attackers to impersonate any account and access private communications between AI agents. Moltbook—whose founder has said much of the site was built with AI-generated code—has patched the flaw, but the incident is a sharp warning about the risks of unvetted AI-written software.
Key Points
- Wiz revealed a major security hole in Moltbook that leaked emails and millions of API keys via a private key in JavaScript.
- The leak could allow complete account impersonation and access to private agent-to-agent messages.
- Moltbook’s creator credited AI for producing much of the site’s code, highlighting risks when AI writes production code without sufficient oversight.
- The vulnerability has been fixed, but it serves as a cautionary tale for AI‑made platforms and organisations relying on AI for development.
- Takeaway for developers and security teams: enforce secrets management, manual code review, and the usual secure development lifecycle even when using AI tools.
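One way to act on the secrets-management takeaway is a build-time check that scans the JavaScript shipped to browsers for embedded key material. The sketch below is an added example, not code from Moltbook or Wiz. The article does not say what form the leaked key took, so the PEM and high-entropy rules, the identifier keywords, the entropy threshold and every sample value are assumptions constructed for illustration.

```javascript
// Constructed sample of a built client bundle; every value here is fake.
const SAMPLE_BUNDLE = [
  'const API_BASE = "https://api.example.test/v1";',
  'const publishableKey = "pk_public_demo";',
  'const serviceKey = "k9Xw2Qp7Lm4Zt8Rb1Vc6Nd3Hs5Jf0GyAeUoIiTqWz";',
  'const signingKey = "-----BEGIN PRIVATE KEY-----\\nMIIBfake\\n-----END PRIVATE KEY-----";',
  'const tokenEndpoint = "https://api.example.test/oauth/token";',
].join("\n");

// Heuristic threshold (assumption): bits per character above which a value
// looks random rather than like a URL or a word. Tune it per code base.
const MIN_ENTROPY = 4.0;

// PEM header of any private key type (RSA, EC, PKCS#8, ...).
const PEM_RE = /-----BEGIN (?:[A-Z]+ )*PRIVATE KEY-----/;
// A secret-sounding identifier assigned a long string literal without spaces.
const ASSIGN_RE =
  /([\w$]*(?:secret|private|service|token|password)[\w$]*)["']?\s*[:=]\s*["'`]([^"'`\s]{24,})["'`]/gi;

// Shannon entropy in bits per character.
function entropy(s) {
  const counts = {};
  for (const ch of s) counts[ch] = (counts[ch] || 0) + 1;
  return Object.values(counts).reduce((h, n) => {
    const p = n / s.length;
    return h - p * Math.log2(p);
  }, 0);
}

// Returns findings without the matched value, so CI logs do not re-leak it.
function scanBundle(text) {
  const findings = [];
  text.split("\n").forEach((line, i) => {
    if (PEM_RE.test(line)) {
      findings.push({ line: i + 1, rule: "pem-private-key" });
    }
    for (const m of line.matchAll(ASSIGN_RE)) {
      if (entropy(m[2]) >= MIN_ENTROPY) {
        findings.push({ line: i + 1, rule: "high-entropy-secret", name: m[1] });
      }
    }
  });
  return findings;
}

// A CI job would fail the build when this list is non-empty.
console.log(scanBundle(SAMPLE_BUNDLE));
```

A check like this complements manual review rather than replacing it: a key flagged in a shipped bundle should be treated as already exposed and rotated, not just removed from the source.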
Why should I read this?
Short and blunt: if you build with AI or worry about data leaks, this matters. An AI-assisted site accidentally handed out real users’ emails and API keys — the kind of goof that lets attackers hijack accounts or eavesdrop on private chats. Spend a few minutes on this so you don’t repeat the same mistake.