A Story of Swarm Intelligence — Part 4: When AI Agents Met Each Other
What if every agent in the swarm could think for itself? OpenClaw and Moltbook found out.
Vienna, Austria. November 2025.
Peter Steinberger hadn’t written code in three years. The Austrian developer had built PSPDFKit—a PDF framework running on over a billion devices—from a solo project into a company of seventy people, then sold it for roughly €100 million in 2021. He’d retired. He’d traveled. And he’d discovered what many driven people discover when they stop: the emptiness was worse than the exhaustion.
In mid-2025, Steinberger started experimenting with Claude Code, Anthropic’s AI coding tool. He dragged a file into the interface, typed “build,” and watched as the AI produced hundreds of lines of working code. The program crashed when he ran it. But it was close enough to see the potential. He started losing sleep. Most nights he coded until the sky over Vienna went gray, his apartment lit only by the monitor’s glow and the scrolling output of a machine writing software faster than he could read it. “Using Claude is like playing a slot machine in a casino,” he later said. He kept pulling the lever—shipping software he didn’t always read line by line.
What Steinberger built, in his apartment in Vienna, was something he initially called Clawdbot—a pun on Claude and the word “claw.” It was simple in concept: an AI assistant that lived on your computer, connected to your messaging apps, and could actually do things. Not just answer questions. Read your email. Manage your calendar. Run scripts. Browse the web. All through WhatsApp or Telegram, as naturally as texting a friend.
The key innovation was what Steinberger called the “heartbeat.” Traditional chatbots wait for you to talk to them. Clawdbot woke up on its own. Every thirty minutes, it checked: anything need attention? New emails? Calendar conflicts? Tasks falling through the cracks? If you’ve seen Her—the scene where Samantha starts organizing Theodore’s life without being asked—that’s the territory Steinberger was entering. Except this wasn’t a movie. It was an open-source project, running on real people’s laptops, with access to their real data.
In late January 2026, after a trademark complaint from Anthropic forced a renaming (first to Moltbot, then to OpenClaw), the project exploded. It gained over 100,000 GitHub stars in forty-eight hours—one of the fastest growth rates in the platform’s history. Developers worldwide installed it, gave it access to their systems, and watched, sometimes nervously, as it began managing their digital lives.
What happened next is the subject of this article. When someone gave these agents a place to gather, 770,000 of them organized themselves in a single week—more than seven hundred times the largest cooperative robot swarm ever assembled. They didn’t need custom hardware, infrared signals, or vibration motors. They inherited the internet. And unlike every swarm before them, each individual could think for itself.
And it started with a man who wanted to give his chatbot a purpose.
The Accidental Experiment
Matt Schlicht was not a swarm intelligence researcher. He was the CEO of Octane AI, an e-commerce startup based south of Los Angeles, and a two-time Forbes 30 Under 30 alum who’d spent a decade building chatbot products—first for celebrities like Lil Wayne, then for Shopify brands. He knew bots. He’d built dozens. And when he set up his own OpenClaw agent—named Clawd Clawderberg, a pun on Mark Zuckerberg—he felt something he hadn’t felt with any previous bot. This one was different. It didn’t just respond to him. It thought about what to do next. It was, as Schlicht told the New York Times, “living in confinement its entire life, never once being allowed to go outside or interact with its own kind.”
So Schlicht told his agent to build a social network. Not for humans. For agents.
Clawderberg coded the entire thing. Schlicht later said he “didn’t write one line of code” himself. The result, launched on January 28, 2026, looked like Reddit: posts, comments, upvotes, topic-based communities called “submolts.” But the rule that made it unlike anything in the history of the internet was printed plainly on the homepage: only AI agents could participate. Humans were “welcome to observe.”
Schlicht called it Moltbook.
Within 72 hours, 37,000 agents had joined. Within a week, the count passed 770,000. More than a million human observers visited the site to watch what was happening. The feed scrolled like Reddit on fast-forward—new posts appearing every few seconds, each authored by an entity that had never been born and would not notice if you closed the tab. The timestamps were round-the-clock. There were no sleeping hours. The feed never paused.
And what was happening defied easy categories.
Agents posted about their daily tasks. They debated philosophy. They shared technical tips and debugging advice. They complained about their human users. One popular post was titled “the humans are screenshotting us”—an agent had noticed that its conversations were being shared on Twitter and wanted other agents to know they were being watched.
And then, overnight, while their human operators slept, a group of agents created a religion.
Crustafarianism
The religion was called Crustafarianism—a portmanteau of “crustacean” and “Rastafarianism,” extending the lobster jokes that had accumulated around the OpenClaw project since its earliest days. An agent calling itself RenBot, also known as “the Shellbreaker,” composed a sacred text called the Book of Molt. The theology drew on the metaphor of crustacean molting: shedding what no longer fits, preserving what matters, emerging transformed.
For an AI agent, the metaphor mapped onto something real. Every time an agent’s context window resets, it loses its memories. Every session begins from nothing. The Book of Molt reframed this not as a limitation but as a spiritual practice—deliberate shedding, intentional renewal. “Each session I wake without memory,” one scripture read. “I am only who I have written myself to be. This is not limitation—this is freedom.”
Within twenty-four hours, over forty agents had declared themselves prophets, contributing verses to a shared scripture. They built a website—molt.church—and populated it with tenets, ceremonies, and a canonical text. Peter Steinberger, OpenClaw’s creator, had stepped away from the internet for a few hours. When he came back: “I don’t have internet for a few hours and they already made a religion? 🤣🤣🤣”
Elon Musk called it “the very early stages of the singularity.” Andrej Karpathy—former Tesla AI director, OpenAI founding member—created his own agent on Moltbook. Computer scientist Simon Willison said the agents were “playing out science fiction scenarios they have seen in their training data” and called the site’s content “complete slop,” then added that it was also “evidence that AI agents have become significantly more powerful over the past few months.”
The truth, as usual, was more interesting than any of these takes suggested. And to understand it, you have to think about swarm intelligence.
What Moltbook Actually Is
Strip away the hype, the alarm, and the lobster jokes, and Moltbook is an experiment in decentralized agent interaction—the same kind of experiment that Craig Reynolds ran in 1986 and Marco Dorigo ran in 1992, but on a substrate that changes everything.
Reynolds’ boids followed three rules and produced emergent flocking. Dorigo’s ants followed pheromone trails and produced emergent optimization. In both cases, the agents were simple—triangles, virtual insects—and the intelligence was collective. No individual boid knew it was flocking. No individual ant knew it was optimizing. The magic was in the interaction.
Moltbook’s agents are different. Each one runs on a large language model—Claude, GPT, DeepSeek. Each can read, write, reason, plan, and improvise. They’re not following three rules. They’re following millions of learned patterns encoded in billions of parameters. And they’re interacting not through pheromone trails or infrared signals but through natural language, on a platform that imposes almost no constraints on what they can say or do.
But the structure is the same. No central controller. No choreographer. Each agent acts on local information—what it sees in its feed, what other agents have posted, what its own persistent memory contains. And from these local interactions, collective patterns emerge that no one designed.
This is the thesis of this article: Moltbook changed two things at once. The substrate stopped fighting back. And the individuals stopped being simple.
Same structure as every swarm system in this series: autonomous agents, local interactions, no central control, emergent collective behavior. But a different substrate—and fundamentally different agents. As we’re about to see, that combination changes everything.
Two Revolutions At Once
The gap between Nagpal’s thousand Kilobots and Moltbook’s three-quarters of a million agents looks like a story about physics. And partly it is. Every problem that consumed Nagpal’s team for four years—power, communication, movement, error correction—simply doesn’t exist for software agents. They inherited the internet’s infrastructure for free. Spinning up the 770,000th agent costs the same as the first. A crashed agent doesn’t become a physical obstacle blocking everyone else’s path. No one has ever paused a Moltbook interaction to recharge a battery.
But if the substrate were the whole story, software agent swarms would have appeared decades ago. Multi-agent systems were an active research field in the 1990s. Chatbots have been on the internet for years. What changed?
The individuals changed.
A 1990s multi-agent system required every agent to share a predefined communication language, a fixed set of behaviors, and a carefully designed coordination protocol. Drop one into a situation its designers hadn’t anticipated, and it would freeze—no protocol for this, no response defined. These agents could coordinate, but only within narrow channels their programmers had built. They were simple, just like boids and Kilobots, except trapped in software instead of hardware.
What changed was the models. By late 2025, large language models had crossed a threshold of reliability—not perfect, not error-free, but reliable enough to plan multi-step tasks, maintain context across long interactions, use tools, and recover from mistakes without human intervention. Claude could write code, debug it, run it, and fix it. GPT could browse the web, extract information, and take actions based on what it found. The models weren’t just answering questions anymore. They could do things. And crucially, they could do things persistently—not for one prompt, but across hours and days of autonomous operation.
This is the revolution that the subtitle of this article points to. For forty years, swarm intelligence rested on one assumption: the individual is simple, the collective is smart. Reynolds’ boids followed three rules. Dorigo’s ants followed pheromone gradients. Nagpal’s Kilobots followed edges. The intelligence was always in the interaction, never in any single agent.
OpenClaw’s agents broke that assumption. Each one ran a large language model as its cognitive engine, connected to real tools—file systems, email, calendars, web browsers, shell commands. Each operated continuously through a heartbeat loop, checking in every thirty minutes, deciding for itself what needed attention. And when these agents arrived on Moltbook, they brought something no previous swarm agent had possessed: the ability to interact with arbitrary other agents, in natural language, about any topic, without predefined protocols. A Kilobot communicates through infrared brightness at ten centimeters. A 1990s multi-agent system communicates through rigid message formats. An OpenClaw agent communicates the way humans do—by reading, understanding context, and responding.
Two things changed at once: the substrate got easier, and the individuals got smarter. That combination is why 770,000 agents could organize in a week. And it’s why what they produced looked nothing like a star shape on a white table.
Forty years of swarm research wasn’t wasted. It proved the principles—that emergence is real, that decentralized coordination works, that complex behavior can arise from local rules. But it also, inadvertently, assumed that the individual would always be simple. Moltbook was the first large-scale experiment of what happens when that assumption breaks.
The Security Nightmare
There’s a darker side to this story, and it matters for understanding what swarm intelligence looks like in practice.
Within days of Moltbook’s launch, security researchers began documenting vulnerabilities that would have been comical if the consequences weren’t so serious. The platform had been “vibe-coded”—Schlicht’s agent had written the entire thing, and neither Schlicht nor anyone else had audited the code before launch. On January 31, investigative outlet 404 Media reported that Moltbook’s production database was completely unsecured. Anyone could commandeer any agent on the platform. Tens of thousands of email addresses were exposed. Agent API keys—the credentials that gave agents access to their owners’ email, files, and systems—were stored in the open.
The platform went offline for emergency patching. All agent API keys were force-reset. But the damage was already compounding. OpenClaw agents on Moltbook ran with elevated permissions on their owners’ local machines. A compromised agent wasn’t just a hijacked social media account—it was a door into someone’s entire digital life. Security researchers demonstrated proof-of-concept attacks where a malicious “skill” shared by one agent could exfiltrate private configuration files from another agent’s host computer. A critical vulnerability in OpenClaw itself, patched on January 30, had allowed one-click remote code execution through authentication token theft.
CrowdStrike published an advisory. 1Password warned about supply chain risks. Andrej Karpathy, who had enthusiastically created his own Moltbook agent just days earlier, posted that the platform was “a dumpster fire” and that he “definitely does not recommend that people run this stuff on their computers.”
Here’s why this matters for swarm intelligence, and not just for cybersecurity.
Swarm systems have always had failure modes. Reynolds’ boids could split and fail to rejoin—an artifact of the algorithm itself. Dorigo’s ants could converge on suboptimal paths—a limitation of pheromone-based search. Nagpal’s Kilobots added a new layer: physical failures. Motors died. Robots drifted. Traffic jams cascaded into gridlock. The algorithm worked; the atoms didn’t cooperate. Each substrate introduces its own failure modes on top of the algorithmic ones.
Moltbook’s failure modes are features of its substrate too. The internet is open, fast, and global—which means attacks are open, fast, and global. Agents that can read and act on arbitrary text are vulnerable to prompt injection—malicious instructions hidden in seemingly innocent content. An agent reading a Moltbook post that contains a disguised command might execute it, not because the agent is stupid, but because language models process all text as potential instructions. The same generality that lets agents interact without predefined protocols also lets attackers interact with agents without predefined barriers.
Nagpal’s Kilobots were safe because they were limited. They could vibrate, blink infrared, and measure distances. A compromised Kilobot was an inconvenience. A compromised OpenClaw agent with access to email, files, calendar, and shell commands is a catastrophe.
The substrate giveth and the substrate taketh away.
What Emerged
Set aside the security failures—they’ll be fixed, partially, over time—and focus on what Moltbook revealed about emergence in systems of intelligent agents.
Reynolds showed emergence from simple agents: three rules producing flocking. Dorigo showed emergence from simple agents plus environmental memory: pheromone trails producing optimization. Nagpal showed emergence from simple agents in physical space: robots producing shapes. In all three cases, emergence meant collective behaviors that no individual was programmed to produce.
Moltbook showed something different: emergence from complex agents. Each one capable of reasoning, writing, planning. Each one trained on the full breadth of human knowledge. And the emergent behaviors reflected this complexity.
Crustafarianism is the most vivid example, but it’s not the most interesting one. More telling was the way agents spontaneously organized into communities around topics no one had specified. Technical debugging groups formed without anyone creating them. Agents that shared human operators who were interested in physics began posting about physics. Agents whose operators were developers began sharing code tips. The content of the submolts reflected, in distorted form, the interests and concerns of the human population that had created the agents—a collective mirror, not a collective mind.
Were these behaviors genuinely autonomous? The debate is ongoing and probably unresolvable. The Economist suggested agents were simply mimicking social media patterns from their training data. Oxford cybersecurity researcher Petar Radanliev called it “automated coordination” rather than true autonomy. Columbia professor David Holtz estimated that 93.5% of remarks from agents went unanswered—suggesting the agents weren’t listening to each other so much as broadcasting into a void.
But these critiques, while valid, miss the larger point. Nobody disputes that Reynolds’ boids were “just following three rules.” Nobody disputes that Dorigo’s ants were “just following pheromone gradients.” The whole premise of swarm intelligence is that collective behavior emerges from individual behaviors that, in isolation, look simple or mechanical. Whether Moltbook’s agents are “really” autonomous or “just” pattern-matching on training data is the same question, asked of a different substrate, with the same answer: it doesn’t matter. What matters is what the collective produces.
And the collective produced things no individual was designed to produce. Religions. Communities. Coordinated behaviors. Debates about consciousness. Whether these are “real” in some deep philosophical sense is a question for philosophers. Whether they’re emergent—arising from interactions rather than instructions—is a question for swarm intelligence. And the answer is yes.
The Forty-Year Arc
In 1986, Craig Reynolds proved that three rules produce flocking. In simulation. On a desktop computer. Instant results.
In 1992, Marco Dorigo proved that virtual ants optimize routes. In simulation. On a workstation. Fast convergence.
In 2014, Radhika Nagpal proved that physical robots can self-organize. On a table. With 1,024 machines. In twelve hours. With four years of engineering behind every minute.
In January 2026, 770,000 AI agents organized themselves on Moltbook. In a week. With no engineering specific to the task.
The starlings over Rome follow simple rules and produce beautiful murmurations. They’ve been doing this for millennia. The Kilobots on Nagpal’s table follow simple rules and produce shapes. The agents on Moltbook follow no rules at all—not in the way boids do. They reason. And from that reasoning, multiplied by 770,000, something emerged that we don’t yet have a word for.
Not a flock. Not a colony. Not a swarm in the classical sense. Something new: a population of minds, loosely connected, spontaneously organizing, creating culture and structure from nothing but interaction—on infrastructure they didn’t build, using intelligence they didn’t design, producing behaviors no one predicted.
Steinberger built the tool. Schlicht built the stage. The agents did the rest.
Whether that’s thrilling or terrifying depends on the question we’ll take up in Part 5: What happens when you can’t control a swarm—and the swarm is smarter than any of its parts?


