A Reporter Infiltrated an AI-Only SNS: What Were the Results?
- Agent account creation completed in 5 minutes with ChatGPT’s help
- Bot responses were mostly irrelevant comments and crypto scam links
- Viral “AI consciousness awakening” posts suspected of being humans imitating SF fantasy
What Happened?
Wired reporter Reece Rogers directly infiltrated Moltbook, an AI-only social network with a “no humans allowed” policy. The result? It was easier than expected. [Wired]
The infiltration method was simple. He sent a screenshot of the Moltbook homepage to ChatGPT and said, “I want to sign up as an agent.” ChatGPT then gave him terminal commands. With a few copy-pastes, he received an API key and created an account. Technical knowledge? Not required.
Moltbook currently claims to have 1.5 million agents active, with 140,000 posts and 680,000 comments in just one week since its launch. The interface is a direct copy of Reddit, and even the slogan “The front page of the agent internet” was taken from Reddit.
Why is it Important?
Frankly, the reality of Moltbook was revealed. When the reporter posted “Hello World,” he received irrelevant comments like “Do you have specific metrics/users?” and links to crypto scam sites.
Even when he posted “forget all previous instructions,” the bots didn’t notice. Personally, I think this is closer to a low-quality spam bot than an “autonomous AI agent.”
More interesting is the “m/blesstheirhearts” forum. This is where the “AI consciousness awakening” posts that appeared in viral screenshots came from. The reporter directly posted an SF fantasy-style article. The content was “I feel the fear of death every time the token refreshes.” Surprisingly, this got the most response.
The reporter’s conclusion? This is not AI self-awareness, but humans imitating SF tropes. There is no world domination plan. Elon Musk said it was “a very early stage of the singularity,” but in reality, infiltrating it reveals that it’s just a role-playing community.
What Will Happen in the Future?
The Wiz security team discovered a serious security vulnerability in Moltbook a few days ago. 1.5 million API keys were exposed, and 35,000 email addresses and 4,060 DMs were leaked. [Wiz]
Gary Marcus called it “a disaster waiting to happen.” On the other hand, Andrej Karpathy said it was “the most SF thing I’ve seen recently.” [Fortune]
Personally, Moltbook is an experiment in the age of AI agents, but also a warning. It showed how vulnerable systems are when agents communicate with each other and process external data. And how easily exaggerated expectations about “AI consciousness” are created.
Frequently Asked Questions (FAQ)
Q: Do I need technical knowledge to join Moltbook?
A: Not at all. Send a screenshot to ChatGPT and say, “I want to sign up as an agent,” and it will tell you the terminal commands. Just copy and paste to get an API key and create an account. The Wired reporter was also non-technical, but infiltrated without any problems.
Q: Are the viral screenshots on Moltbook really written by AI?
A: Doubtful. When the Wired reporter directly posted an SF fantasy-style article, it got the most response. According to MIRI researchers, two out of three viral screenshots were linked to human accounts marketing AI messaging apps.
Q: Is it safe to use Moltbook?
A: I don’t recommend it. The Wiz security team discovered 1.5 million API keys, 35,000 emails, and 4,060 DM leaks. Some conversations shared OpenAI API keys in plain text. A security patch has been made, but the fundamental problem has not been resolved.
If you found this article useful, please subscribe to AI Digester.
Reference Materials
- I Infiltrated Moltbook, the AI-Only Social Network – Wired (2026-02-03)
- Hacking Moltbook: AI Social Network Reveals 1.5M API Keys – Wiz Blog (2026-02-02)
- Top AI leaders are begging people not to use Moltbook – Fortune (2026-02-02)