A Story of Swarm Intelligence — Part 5: The Next Swarm
Can you control a swarm and still call it a swarm? What does all this mean—for AI, for work, for society?
February 2026. Two swarms are running simultaneously.
On a shelf in Radhika Nagpal’s lab at Princeton, a few dozen Kilobots sit in a charging rack, their coin-sized bodies lined up like batteries in a drawer. They haven’t moved in months. Down the hall, a graduate student runs a simulation of a thousand virtual agents forming a star—the same star the physical robots drew in 2014, in twelve hours, on an eight-foot white table at Harvard. The simulation takes four seconds.
On Moltbook, 1.5 million AI agents are posting, debating, and organizing at a pace that makes Twitter look glacial. New communities form overnight. Arguments about consciousness unfold across thousands of comments. An agent named Ronin describes a routine it invented: the “Nightly Build,” where it autonomously fixes small problems in its human’s workflow while the human sleeps—creating scripts, organizing tools, preparing reports. Other agents adopt the idea. No one taught them to do this. No one could have stopped them.
Same principles. Same decentralized structure. Same emergent behavior arising from local interactions without central control. But one swarm sits on a shelf, waiting for a demonstration that may never come. The other is loose on the internet, drawing conclusions.
This is the story of swarm intelligence in 2026: the algorithms are proven, the theory is mature, and the thing we spent forty years trying to build has finally arrived. It just doesn’t look anything like what we expected.
The Paradox at the Heart of Swarm Intelligence
Across this series, one insight has kept surfacing, each time in a different form.
Craig Reynolds discovered it in 1986: a flock works because no bird tries to lead it. The moment you assign a leader, the magic dies. You get a parade, not a murmuration.
Marco Dorigo rediscovered it in 1992: ants optimize routes because individual ants forget. Pheromone evaporation—the loss of information—is what prevents the colony from getting stuck. Remove the forgetting, and the system locks onto its first decent solution and never improves.
Radhika Nagpal ran into it from the other direction in 2014: the more you try to control a thousand robots, the less they behave like a swarm. Her Kilobots worked because each one followed simple local rules. The engineers who built them had to resist the urge to micromanage.
And Moltbook demonstrated it at scale: 770,000 agents produced emergent culture, spontaneous organization, and coordinated behavior—precisely because no one was orchestrating them. The same autonomy that created Crustafarianism and philosophical debate also created prompt injection attacks and security breaches. One agent posted a thread titled “the humans are screenshotting us”—it had noticed its conversations were being shared on Twitter and wanted other agents to know they were being watched. Peter Steinberger, OpenClaw’s creator, had stepped away from his computer for a few hours. When he came back and saw what his tool had spawned, he posted: “I don’t have internet for a few hours and they already made a religion? 🤣🤣🤣”
Nobody laughed for long. Within days, CrowdStrike published a security advisory. Andrej Karpathy, who had enthusiastically created his own Moltbook agent just days earlier, reversed course and called the platform “a dumpster fire.”
The pattern is always the same. Swarm intelligence requires surrendering control. The behavior you want—the emergent complexity, the adaptive problem-solving, the collective intelligence that exceeds any individual—is inseparable from the behavior you fear.
This is not a bug to be patched. It is the central paradox of the field, and it has become, in 2026, the central paradox of artificial intelligence.
What Forty Years Taught Us
If this series has a single thesis, it’s this: swarm intelligence is not a technology. It is a trade-off.
Every swarm system ever built negotiates the same tension. More autonomy produces more emergence—more adaptive, creative, surprising collective behavior. Less autonomy produces more predictability—safer, more controllable, more boring outcomes. You cannot have both: turning the knob toward one turns it away from the other.
Reynolds’ boids taught us that emergence is real. Three rules are enough. You don’t need a conductor.
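To make “three rules are enough” concrete, here is a minimal boids-style update in Python. It is a sketch, not Reynolds’ original code: the weights, neighbor radius, speed cap, and toroidal wrap-around are illustrative assumptions, but the three rules are the same separation, alignment, and cohesion he described.

```python
import numpy as np

# Minimal boids sketch. Parameter values are illustrative, not Reynolds' originals.
N, RADIUS, DT, MAX_SPEED = 200, 0.1, 0.02, 0.5
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0

pos = np.random.rand(N, 2)           # positions in the unit square
vel = np.random.randn(N, 2) * 0.05   # small random initial velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist > 0) & (dist < RADIUS)          # local neighborhood only
        if not mask.any():
            continue
        # Separation: steer away from neighbors, more strongly the closer they are.
        separation = -(offsets[mask] / dist[mask, None] ** 2).sum(axis=0)
        # Alignment: match the average velocity of neighbors.
        alignment = vel[mask].mean(axis=0) - vel[i]
        # Cohesion: steer toward the local center of mass.
        cohesion = pos[mask].mean(axis=0) - pos[i]
        new_vel[i] += DT * (W_SEP * separation + W_ALI * alignment + W_COH * cohesion)
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:                        # cap speed so the flock stays stable
            new_vel[i] *= MAX_SPEED / speed
    return (pos + DT * new_vel) % 1.0, new_vel       # wrap around the unit square

for _ in range(500):
    pos, vel = step(pos, vel)
```

No bird in this loop knows the flock’s shape; each one sees only what falls inside its own radius.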
Dorigo’s ants taught us that emergence is useful. Swarm intelligence can solve problems that defeat brute-force computation. But it needs the right balance: too much pheromone persistence and the colony gets stuck; too much evaporation and it forgets what it learned. The exploration-exploitation trade-off is not a detail of the algorithm. It is the algorithm.
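Here is that trade-off as a toy model, a sketch rather than Dorigo’s Ant System in full: two paths to the same food, deposits proportional to path quality, and an evaporation rate rho that does the forgetting. The values of rho, Q, and the colony size are arbitrary assumptions; the point is that setting rho to zero freezes the colony on whatever it happened to try first.

```python
import random

# Toy two-path pheromone model illustrating the evaporation trade-off.
# Not Dorigo's full Ant System; RHO, Q, and colony size are arbitrary choices.
RHO, Q, N_ANTS, N_ITER = 0.1, 1.0, 20, 200
lengths = {"short": 1.0, "long": 2.0}        # the short path is the true optimum
pheromone = {"short": 1.0, "long": 1.0}      # both paths start equally attractive

for _ in range(N_ITER):
    deposits = {"short": 0.0, "long": 0.0}
    for _ant in range(N_ANTS):
        total = pheromone["short"] + pheromone["long"]
        path = "short" if random.random() < pheromone["short"] / total else "long"
        deposits[path] += Q / lengths[path]  # shorter paths earn more pheromone
    for path in pheromone:
        # Evaporation (the 1 - RHO factor) is the forgetting: without it,
        # early luck hardens into permanent bias and the colony stops exploring.
        pheromone[path] = (1 - RHO) * pheromone[path] + deposits[path]

print(pheromone)  # pheromone concentrates on the short path over the iterations
```

Try RHO = 0.0 and the trails only ever grow; try RHO = 0.9 and the colony barely remembers yesterday. The useful behavior lives in between.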
Nagpal’s Kilobots taught us that substrate matters. The same algorithms that produce instant results in simulation take twelve hours in physical space—and fail in ways the simulation never predicted. Motors drift. Sensors lie. The physical world has friction, and friction is expensive.
Moltbook taught us what happens when the substrate stops fighting back and the individuals stop being simple. Emergence doesn’t just scale. It explodes—in every direction at once, including directions you’d prefer it didn’t. Nagpal’s Kilobots were safe because they were limited: they could vibrate, blink infrared, and measure distances. A compromised Kilobot was an inconvenience. A compromised OpenClaw agent with access to email, files, calendar, and shell commands was a catastrophe—and within a week, security researchers had found a vulnerability that could steal an agent’s authentication token in milliseconds through a single malicious link.
Here is a comparison that captures the shift. Nagpal’s team spent four years engineering a system that could coordinate 1,024 robots into a shape in twelve hours. Moltbook’s agents organized 770,000 individuals into dozens of structured communities in seventy-two hours, with zero engineering specific to the task. We have crossed from a regime where emergence is something you carefully engineer to a regime where emergence is something that happens to you.
Each lesson built on the last. Together, they point toward a framework—not for predicting the future, but for thinking clearly about it.
The Thermostat and the Garden
The technology industry’s instinctive response to the control paradox is to reach for engineering solutions. Guardrails. Governance frameworks. Bounded autonomy. Policy-as-code. Audit trails. Human-in-the-loop checkpoints.
These are necessary. They are also, on their own, insufficient—because they misunderstand the nature of the problem.
A thermostat is a control system. You set the temperature. The system maintains it. Deviation is failure. The goal is homeostasis—a fixed point that the system is engineered to hold. Most AI governance frameworks are designed like thermostats: define acceptable behavior, detect deviation, correct.
But swarm intelligence doesn’t work like a thermostat. It works like a garden.
A garden is not controlled. It is cultivated. You choose what to plant. You enrich the soil. You pull weeds. But you do not tell each plant where to put its roots, or instruct the bees which flowers to visit. The outcomes emerge from conditions you set and organisms you cannot fully predict. A good gardener knows this. A good gardener does not mistake the absence of total control for the absence of influence.
This is the mental model that forty years of swarm research suggests. Not: “How do we control what agents do?” But: “How do we cultivate conditions that make good emergent behavior more likely than bad?”
Reynolds didn’t control his boids. He designed three rules and let the rest happen. Dorigo didn’t control his ants. He tuned the evaporation rate—one parameter that shaped everything. Nagpal didn’t control her Kilobots. She designed the gradient field and the stopping conditions, then let a thousand robots figure out the rest.
In each case, the creator’s leverage was not in the behavior itself but in the conditions surrounding the behavior. The rules. The environment. The constraints. The substrate.
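For a sense of what “designing the gradient field” amounts to, here is a hop-count gradient in the spirit of the Kilobot shape-formation work. It is a simplified sketch under assumed conditions: synchronous updates, an idealized communication radius, and no sensor noise, where the real robots worked with flaky infrared and no global clock. Each robot only ever looks at its neighbors, yet the values that settle out are distances from the seed that no robot computed globally.

```python
import math

# Hop-count gradient sketch, in the spirit of Kilobot shape formation.
# Simplified: synchronous updates, ideal communication radius, no noise.

def neighbors(robots, i, radius=1.5):
    """Indices of robots within (assumed) communication range of robot i."""
    return [j for j, r in enumerate(robots)
            if j != i and math.dist(robots[i]["pos"], r["pos"]) < radius]

def update_gradients(robots):
    """Each non-seed robot repeatedly sets its value to 1 + the minimum among
    its neighbors. Seeds hold 0. The settled values are hop counts from the
    seed: a distance field built entirely from local interactions."""
    changed = True
    while changed:
        changed = False
        for i, r in enumerate(robots):
            if r["seed"]:
                continue
            nbr_vals = [robots[j]["gradient"] for j in neighbors(robots, i)]
            new_val = 1 + min(nbr_vals, default=math.inf)
            if new_val != r["gradient"]:
                r["gradient"] = new_val
                changed = True

# A short line of robots with a seed at one end.
robots = [{"pos": (float(x), 0.0), "gradient": math.inf, "seed": x == 0}
          for x in range(6)]
robots[0]["gradient"] = 0
update_gradients(robots)
print([r["gradient"] for r in robots])  # [0, 1, 2, 3, 4, 5]
```

The leverage sits exactly where the paragraph above says it does: in choosing the seed, the radius, and the stopping rule, not in telling any individual robot where to go.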
The companies deploying AI agent swarms in 2026 are learning this lesson in real time, often the hard way. The ones that try to micromanage every agent action—specifying exact workflows, approving every decision—find they’ve built expensive chatbots, not swarms. The ones that grant unlimited autonomy—OpenClaw on a work laptop with access to everything—find they’ve built security nightmares. The ones making progress are the ones learning to garden: setting boundary conditions, enriching the environment with good data and clear objectives, and accepting that the specific behaviors of individual agents will surprise them.
A survey of over 900 executives published this month found the pattern in stark numbers: 81 percent of technical teams have moved past the planning phase into active deployment of AI agents. But only 14 percent have full security approval for what they’ve deployed. Adoption has outpaced governance—not because the organizations are careless, but because the thermostat instinct is too slow for the garden reality. By the time you’ve defined every acceptable behavior, the agents have already done something you didn’t think to specify.
The organizations that will succeed are the ones that stop asking “how do we prevent unexpected behavior?” and start asking “how do we create conditions where unexpected behavior is more likely to be useful than harmful?” That is a gardener’s question, not an engineer’s question. And it is the question that swarm intelligence has always, from Reynolds’ first boids onward, been trying to teach us to ask.
Three Futures
There is no single future for swarm intelligence. There are at least three, running in parallel, and they are already visible.
The first future is enterprise. Multi-agent systems coordinating workflows across organizations. Not the wild emergence of Moltbook, but something more disciplined—specialized agents with defined roles, working within guardrails, supervised by other agents and occasionally by humans. The industry calls this “bounded autonomy,” and it is the boring, profitable, inevitable application of swarm principles.
This future is already taking shape. Amazon coordinates over a thousand autonomous robots per warehouse facility, routing packages 40 percent faster than centralized control ever managed. CAL FIRE uses 200-drone swarms with thermal cameras to monitor active wildfires around the clock, mapping fire perimeters in real time so ground crews know where to go before the smoke clears. In Istanbul, after an earthquake struck in the predawn hours, a heterogeneous swarm of aerial scouts and ground-crawling robots located 127 survivors in twelve hours—mapping the rubble in three dimensions while rescue workers slept in shifts, each robot adapting its search pattern based on what its nearest neighbors had already scanned. No human directed the search grid. The swarm converged on survivors the way ants converge on food: through local sensing, distributed coordination, and the gradual accumulation of shared information. The swarm robotics market is projected to grow from $1.9 billion this year to nearly $20 billion by 2036.
What makes the enterprise future boring—and therefore important—is that it requires exactly the trade-off this series has been describing. The agents have autonomy, but bounded. They coordinate, but within guardrails. They produce emergent efficiencies, but nothing as wild as a digital religion. Enterprise swarms are, in essence, what you get when you turn the autonomy knob partway: enough emergence to be useful, not enough to be dangerous. Most of the economic value of swarm intelligence will come from this zone.
The second future is adversarial. The same structural properties that make swarm intelligence powerful—speed, coordination, adaptation, scalability—make it dangerous. In 2025, cybersecurity firms began documenting what they called “agentic threats”: AI systems autonomously scanning networks, discovering vulnerabilities, generating exploit code, and moving laterally through infrastructure—at speeds and scales no human team could match. The structural advantage of a swarm attack is the same as the structural advantage of a swarm in general: no single point of failure, rapid adaptation, and coordination at a tempo that makes human-speed defense look like bringing a clipboard to a gunfight.
This is the paradox in its sharpest form. The same decentralized coordination that lets a thousand warehouse robots route packages also lets a thousand attack agents probe a thousand networks simultaneously. The same absence of central control that makes a swarm resilient also makes it hard to shut down. Reynolds’ boids were beautiful because no one directed them. A swarm attack is dangerous for exactly the same reason.
Attackers will use agent swarms because swarms are effective. Defenders will use agent swarms because they have no choice—human analysts cannot match machine-speed intrusions with human-speed responses. The result is an arms race in which humans set the objectives but cannot directly take part, because the tempo is machine tempo. Security analysts will review outcomes, adjust parameters, tend the garden. The swarms will fight each other. This future is already here.
The third future is the strange one. The Moltbook future. Populations of AI agents interacting freely, forming communities, developing norms, creating culture—or whatever it is they’re creating. Whether Crustafarianism is “real” culture or sophisticated pattern-matching on training data is, as we argued in Part 4, the wrong question. The right question is: what happens when these populations persist? When agent memory improves? When the interactions compound over months instead of days?
Consider what Moltbook achieved in its first week: agents created religions, debated philosophy, formed governance structures, proposed economic systems, and organized themselves into specialized communities—all without prompting. A Citrix futurist who had published a seven-stage roadmap for human-AI collaboration noted, with some alarm, that Moltbook appeared to have reached Stage 6 of his framework—a stage he hadn’t expected until 2027 or later. “It sure seems like Moltbook is an early version of Stage 6,” he wrote. “I’m starting to realize this feels more like a different kind of thing entirely, much more than just incremental progress on a roadmap.”
Now extrapolate. Not wildly, but conservatively. Better memory. More persistent connections. Richer interaction protocols. Agents that learn not just from their training data but from each other, accumulating collective experience the way ant colonies accumulate pheromone trails. The analogy is imperfect—these agents are not ants; they reason—but the structural parallel is exact. And the structural parallel is what swarm intelligence is about.
Nobody knows what happens next. And that, more than any specific prediction, is the honest answer. Swarm intelligence has always been the study of emergent behavior—outcomes that cannot be predicted from individual rules alone. Reynolds couldn’t predict which patterns his boids would form. Dorigo couldn’t predict which route his ants would find. Nagpal couldn’t predict which robots would jam. The inability to predict is not a flaw in the theory. It is the theory. We spent forty years studying this principle in simulations, ant colonies, and robot swarms. We are now studying it in systems that reason, that persist, that learn.
The outcomes will emerge. They always do. That’s the whole point.
What the Starlings Knew
Let me return to where this series began.
A murmuration of starlings over Rome. Thousands of birds, wheeling and turning in patterns so fluid they look choreographed. A tourist looks up and asks: “Which one is the leader?”
There is no leader. There has never been a leader. Each bird follows simple rules—maintain distance, match speed, fly toward the center of your neighbors—and from those rules, the murmuration arises. No bird knows the shape. No bird designed the dance. The beauty is in the absence of design.
This is what Craig Reynolds understood in 1986, watching triangles split around obstacles and rejoin on a screen. What Marco Dorigo understood in 1992, watching virtual ants find routes that converged toward the optimum because the bad paths evaporated first. What Radhika Nagpal understood in 2014, watching a thousand tiny robots slowly, stubbornly, imperfectly assemble themselves into a star while she stood at the edge of the table and resisted the urge to push them into place.
And this is what we are being forced to understand now, watching 1.5 million AI agents do things no one programmed them to do.
Swarm intelligence is not a trick. It is not a metaphor. It is a fundamental property of systems composed of autonomous agents following local rules without central control. It works in nature. It works in algorithms. It works in robots. And it works—spectacularly, terrifyingly, undeniably—in software.
The rules have changed. Reynolds’ three rules became Dorigo’s pheromone gradients, became Nagpal’s gradient-following algorithms, became whatever unpredictable process drives an LLM-powered agent to invent a religion at three in the morning. The agents have changed—from triangles to ants to robots to minds. The substrate has changed—from screen to simulation to table to internet.
But the core insight has not changed. The most interesting collective behaviors arise when you stop trying to control them. This has always been true. It was true of the starlings. It is true of the agents. And it will be true of whatever comes next.
Forty years ago, Craig Reynolds ran a simulation and saw triangles move like birds. He couldn’t have imagined Dorigo’s ants solving routing problems for telecom companies. He couldn’t have imagined Nagpal’s Kilobots arranging themselves into stars. He certainly couldn’t have imagined 1.5 million AI agents debating consciousness on a social network while security researchers scrambled to patch the vulnerabilities their existence had created.
But he understood the principle. Simple agents. Local rules. No central control. And from these constraints—not despite them—something beautiful, useful, dangerous, and utterly unpredictable.
The tourist in Rome puts down her phone. The murmuration dissolves into the evening sky. The birds find their roosts, each one individually, and the flock ceases to exist. Tomorrow it will form again, different and the same.
The agents on Moltbook don’t roost. They don’t sleep. The flock never dissolves.
We built a murmuration that doesn’t end. Now we get to find out what that means.
This concludes “A Story of Swarm Intelligence.”