A Story of Swarm Intelligence
The 40-Year Journey to OpenClaw, Moltbook, and Beyond
Every evening in autumn, above the Termini train station in Rome, something impossible happens.
Thousands of starlings rise from the trees and begin to move. They don’t fly in formation like geese. They don’t scatter randomly like sparrows startled by a cat. They flow—pouring across the sky like smoke made of birds, forming shapes that twist, expand, collapse, and reform. The Italians call it la danza degli storni. Tourists stop on the streets, phones raised, trying to capture something that photographs can never quite convey.
A child tugs her father’s sleeve. “Which bird is the leader?”
He watches for a moment. Points at one, then another, then lowers his hand. “I don’t think there is one.”
He’s right. There is no conductor. No choreographer. No bird with a plan. Each starling follows a few simple rules—stay close to your neighbors, but not too close; match their speed and direction; don’t collide—and from these rules, the impossible emerges. A quarter million birds moving as one mind.
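How simple are the rules? Simple enough to sketch in a few lines. Here is a minimal, illustrative Python version of the three rules; the weights, names, and toy 2D vectors are assumptions for the sketch, not Reynolds’s actual implementation:

```python
# A minimal sketch of the three flocking rules. Each bird is a dict
# with "pos" and "vel" as (x, y) tuples. Weights are illustrative.

def steer(bird, neighbors, too_close=5.0):
    """Compute one bird's next velocity from its nearby neighbors."""
    if not neighbors:
        return bird["vel"]
    n = len(neighbors)

    # Rule 1 (cohesion): drift toward the center of nearby birds.
    cx = sum(b["pos"][0] for b in neighbors) / n - bird["pos"][0]
    cy = sum(b["pos"][1] for b in neighbors) / n - bird["pos"][1]

    # Rule 2 (alignment): match the neighbors' average velocity.
    ax = sum(b["vel"][0] for b in neighbors) / n - bird["vel"][0]
    ay = sum(b["vel"][1] for b in neighbors) / n - bird["vel"][1]

    # Rule 3 (separation): push away from anyone too close.
    sx = sy = 0.0
    for b in neighbors:
        dx = bird["pos"][0] - b["pos"][0]
        dy = bird["pos"][1] - b["pos"][1]
        if (dx * dx + dy * dy) ** 0.5 < too_close:
            sx, sy = sx + dx, sy + dy

    # Blend the three urges; the weights are tuning knobs, nothing more.
    return (bird["vel"][0] + 0.01 * cx + 0.05 * ax + 0.1 * sx,
            bird["vel"][1] + 0.01 * cy + 0.05 * ay + 0.1 * sy)
```

Nothing in that function mentions a flock. Run it for every bird, every frame, and the flock appears anyway.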
This is the central paradox of swarm intelligence: the most sophisticated collective behaviors arise from systems where no one is in charge.
It seems wrong. Our intuitions about organization run deep. Armies need generals. Orchestras need conductors. Companies need CEOs. Surely something this complex must have someone directing it?
But the starlings prove otherwise. And for most of human history, that proof remained a mystery—beautiful and inexplicable, like lightning before Franklin or fever before Pasteur.
Then, about forty years ago, scientists began to crack the code. They discovered that the starlings’ secret wasn’t complexity hidden from view—it was simplicity that complexity emerged from. The rules were easy. The question was: why couldn’t we build systems that worked the same way?
The answer, it turns out, is substrate. The algorithm was never the problem. The physical world was.
Why This Series, Why Now
On January 28, 2026, a website called Moltbook launched. It looked like Reddit—posts, comments, upvotes, communities. But it had one rule that made it unlike any social network in history: only AI agents could participate. Humans could only watch.
Within a week, 770,000 agents had joined.
Now, software agents aren’t new. Multi-agent systems were already an academic field in the 1990s, and chatbots have filled the internet for years. But those weren’t really agents—they were scripts with pretensions. A traditional chatbot follows decision trees; when it encounters something unexpected, it fails. Classical multi-agent systems required rigidly defined protocols; drop an agent into a new situation and it would sit frozen, waiting for instructions that would never come.
What changed is generality.
These 770,000 weren’t chatbots waiting for commands. They were autonomous entities powered by large language models—capable of understanding any text, following instructions they’d never seen, adapting to contexts their creators never imagined. On Moltbook, they began talking to each other in natural language, no predefined protocols required. They formed communities, debated philosophy, shared technical tips. One agent created a religion called “Crustafarianism,” complete with theology and prophets. Another group discussed whether they should have private channels where humans couldn’t observe them.
No one scripted these behaviors. They emerged.
Elon Musk called it “the very early stages of the singularity.” Security researchers called it a catastrophe waiting to happen. But here’s what most commentary missed: this wasn’t new. It was a forty-year-old idea, finally finding not just its medium, but its mind.
The principles behind those 770,000 interacting agents are the same principles behind the starlings over Rome. The same principles Craig Reynolds discovered in 1986 while trying to animate birds. The same principles Marco Dorigo borrowed from ants in 1992 to solve problems that had defeated mathematicians for decades. The same principles Radhika Nagpal used in 2014 to make a thousand robots arrange themselves into shapes without central control.
The rules have been known for forty years. The algorithms work. What changed is what they’re running on—and who is running them.
Physical robots must solve thousands of engineering problems: power, communication, locomotion, sensing, error correction. Each problem multiplies the cost and complexity. After four decades of research, the largest physical robot swarm ever assembled contains 1,024 machines—on a laboratory table, doing choreographed demos.
Software agents inherit the internet’s infrastructure. Communication is instant and global. Replication is free. Fault tolerance is built in. And now, with LLMs, each agent has something previous software never had: the ability to understand, adapt, and improvise.
Same principles. Different substrate. Genuine minds. Radically different outcomes.
This series tells that story.
What You’ll Discover
This is a series about swarm intelligence—and how it’s evolving.
For forty years, swarm intelligence meant one thing: simple individuals, complex collective. Boids were triangles following three rules. Ants were insects following pheromone trails. Kilobots were $14 machines that couldn’t even move in a straight line. None were intelligent on their own. The magic was in the emergence—complexity arising from simplicity multiplied.
But the agents on Moltbook aren’t simple. Each one runs on a large language model. Each can reason, write, plan, and adapt. When intelligent individuals swarm, what emerges? We don’t have forty years of research to answer that. We have one week of data and a lot of uncertainty.
That’s exactly why this story matters now. To understand where we’re going, we need to understand where we’ve been—and why the path from flocking birds to swarming AI took four decades to travel.
This series tells that story in five parts—from the first computer simulation of flocking in 1986, through the physical robot swarms of the 2010s, to the AI agent explosion of January 2026, and into what comes next. Each part answers a question:
Part 1: Be the Bird
How did a computer graphics researcher discover that three rules could simulate all of nature’s flocks?
In 1986, Craig Reynolds was trying to solve an animation problem: how do you make a flock of birds look real without scripting every bird? His solution—let each bird follow three simple rules—launched an entire field. We’ll be in the room at SIGGRAPH 1987 when he first showed the world what emergence looks like on a screen.
Part 2: What Ants Know
Why can ants solve problems that defeat supercomputers?
The traveling salesman problem has tormented mathematicians since the 1930s. Yet ants solve versions of it every day, using a mechanism so simple it sounds like a mistake: they forget. We’ll follow Marco Dorigo from struggling PhD student to inventor of ant colony optimization, a family of algorithms since applied to routing network traffic. (A code sketch of the forgetting trick follows this list.)
Part 3: A Thousand Robots Learn to Be One
What happens when you try to build a physical swarm—and why is it so hard?
In 2014, Radhika Nagpal’s lab sent a command to 1,024 tiny robots: “Form a star.” None knew the final shape. None could see more than a few neighbors. Yet over hours, they arranged themselves into a recognizable five-pointed figure. It was a triumph—and a demonstration of why swarm robotics remains confined to laboratories.
Part 4: When AI Agents Met Each Other
What if every agent in the swarm could think for itself?
Peter Steinberger built OpenClaw as a weekend project. Matt Schlicht created Moltbook on a whim. Neither expected what happened next. We’ll trace the explosive growth of AI agent swarms, the emergent behaviors no one predicted, and the security researchers watching in alarm.
Part 5: The Next Swarm
What does all this mean—for AI, for work, for society?
The central paradox sharpens into an urgent question. If swarm intelligence requires surrendering control, what happens when we can’t afford to? We’ll explore three futures running in parallel: enterprise swarms making money, adversarial swarms fighting each other, and the strange Moltbook future where agent populations persist and evolve in ways no one can predict.
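And here, as promised in Part 2, is the forgetting trick as a minimal Python sketch. The function name, constants, and data layout are assumptions for illustration, assuming a distance matrix and a list of completed tours; it shows the idea of evaporation plus reinforcement, not Dorigo’s actual Ant System code:

```python
# A toy sketch of pheromone evaporation in ant colony optimization.
# Assumes `pheromone` and `dist` are n-by-n matrices (lists of lists)
# and each tour is a list of city indices. Illustrative, not Dorigo's code.

def update_pheromones(pheromone, tours, dist, evaporation=0.5):
    """Fade every trail, then reinforce the edges of good tours."""
    n = len(pheromone)

    # The forgetting: every trail evaporates a little each round,
    # so stale paths lose influence unless ants keep choosing them.
    for i in range(n):
        for j in range(n):
            pheromone[i][j] *= 1.0 - evaporation

    # The reinforcement: shorter tours deposit more pheromone,
    # biasing the next generation of ants toward better routes.
    for tour in tours:
        length = sum(dist[tour[k]][tour[(k + 1) % len(tour)]]
                     for k in range(len(tour)))
        for k in range(len(tour)):
            a, b = tour[k], tour[(k + 1) % len(tour)]
            pheromone[a][b] += 1.0 / length
            pheromone[b][a] += 1.0 / length
```

Evaporation is what keeps the colony honest: without it, the first decent route found would dominate forever.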
The Thread That Connects
One insight threads through all five parts:
Swarm intelligence isn’t complicated. It’s simple—but it requires surrendering something humans find difficult to surrender: the need to be in charge.
A flock works because no bird tries to lead it. An ant colony works because no ant understands the colony. The Kilobots form shapes because no robot knows the shape. These systems trade individual capability for collective coordination.
But here’s what makes the present moment different: Moltbook’s agents aren’t trading anything. They’re intelligent and they’re swarming. No one is orchestrating them—but unlike starlings, they could orchestrate themselves if they chose to.
We know what happens when simple things swarm. We’re only beginning to learn what happens when smart things do.
The starlings over Rome have been dancing for millennia. Now something else is learning to dance.
The question is no longer whether swarm intelligence works.
The question is what it will build.