A Story of Swarm Intelligence — Part 1: Be the Bird
How did a computer graphics researcher discover that three rules could reproduce the motion of nature’s flocks?
Anaheim, California. July 1987.
The conference hall at SIGGRAPH was dark except for the glow of projection screens. Craig Reynolds, a thirty-four-year-old researcher from Symbolics Graphics Division, stood at the podium waiting for his video to play. He’d spent years thinking about this problem—years of what he later called “speculations about flocks” while his colleagues politely listened, wondering when he’d move on to something practical.
The video started. Triangles appeared on screen—simple geometric shapes, nothing impressive. Then they began to move.
The audience leaned forward.
The triangles weren’t following a script. They weren’t tracing predetermined paths. They were flocking—wheeling and diving like starlings, splitting around obstacles and reforming on the other side, moving with the unmistakable fluidity of living things. Except these weren’t living things. They were “boids”—bird-oid objects—and each one was following exactly three rules.
That was it. Three rules. No choreographer. No master plan. No bird knew what the flock was doing. Yet the flock moved like a single mind.
Reynolds had just demonstrated something that would take computer scientists and biologists another decade to fully appreciate: complexity doesn’t require complex rules. It can arise from simple rules, applied locally, with no one in charge.
The Problem Reynolds Couldn’t Solve
In the early 1980s, computer animation had a bird problem.
If you wanted to animate a flock—for a movie, a commercial, anything—you had two options. You could hand-animate every bird, plotting each one’s path frame by frame. For a flock of fifty birds over ten seconds of film, that meant scripting 12,000 individual positions. Studios did this. It was brutal, expensive, and the results often looked mechanical—birds following invisible rails through the sky.
The other option was randomness. Scatter some birds, give them random velocities, hope for the best. This produced chaos, not flocking. Real birds don’t move randomly. They move together, in ways that look coordinated but somehow aren’t.
Reynolds had been thinking about this since childhood. Growing up watching flocks, he saw them through what he called “a procedural lens”—wondering not just what they did, but what algorithm they were running. In high school and college in the 1970s, as he learned to program, the question crystallized: How could I write a program to simulate that?
By 1986, Reynolds was working at Symbolics, a company building high-end graphics workstations. He had access to powerful machines and, crucially, time for what the company generously called “advanced graphics research.” He decided to stop trying to control birds and start trying to be one.
What would a bird in a flock actually need to know?
The Insight: Be the Bird
Reynolds’ breakthrough came from a shift in perspective. Traditional animation asked: What should the flock do? Reynolds asked: What should each bird do?
A bird in a flock doesn’t know it’s in a flock. It can’t see the beautiful patterns it’s helping create. It can only see its immediate neighbors—the birds within a few wingspans. Whatever rules govern flocking must work with only local information.
Reynolds identified three:
Separation: Don’t crowd your neighbors. If another bird gets too close, steer away.
Alignment: Match your neighbors’ direction. If the birds around you are heading north, head north too.
Cohesion: Stay with the group. Steer toward the average position of your nearby flockmates.
That’s all. Three rules. A child could understand them. Each rule on its own produces trivial behavior—birds spreading out, birds lining up, birds clumping together. But combined, applied simultaneously to every bird, something unexpected happens.
The flock emerges.
Reynolds built his first working simulation in ten days, just before SIGGRAPH 1986. He called the simulated birds “boids”—partly because they were “bird-oid objects,” partly as a joke about Brooklyn accents. (In New York, “bird” can sound like “boid.”) He set a few dozen loose and watched.
They flocked.
Not because Reynolds had programmed flocking. He hadn’t. He’d programmed birds—simple, local, selfish birds that only cared about their immediate neighbors. The flocking was emergent. It arose from the interactions, not the instructions.
The Moment of Surprise
Here’s what made Reynolds’ discovery profound: he didn’t know what would happen.
When you write a traditional animation script, you control the outcome. The bird goes here, then here, then here. With boids, Reynolds could set the rules and the starting conditions, then let go. The simulation ran itself. And sometimes it surprised him.
He’d watch boids split around an obstacle, each one making its own decision, then seamlessly rejoin on the other side—a maneuver he never programmed. He’d see them form into V-shapes, then break apart, then reform in new configurations. The patterns were endlessly variable, never quite repeating, yet always recognizably flock-like.
“The aggregate motion of the simulated flock,” Reynolds wrote in his 1987 paper, “is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course.”
This was the key insight. The birds choose. Not the animator. Not the algorithm designer. Each bird, responding only to its neighbors, makes its own choice. And from millions of individual choices, collective behavior emerges that no individual intended.
The technical community began using a word for this: emergence. The flock is emergent because it exists at a different level than its parts. You can’t find “the flock” inside any individual boid. You can’t point to the line of code that creates the swirling patterns. The flock is a pattern in relationships—it emerges from interactions, not from instructions.
Reynolds had stumbled onto something deeper than a graphics technique. He’d found a window into how nature builds complexity from simplicity.
The Rules in Detail
Let’s look more closely at what Reynolds actually programmed, because the details reveal something important.
Separation wasn’t just “avoid neighbors.” Each boid calculated the positions of all nearby flockmates, computed a vector pointing away from each one, and combined these into a single “steer away” force. The closer the neighbor, the stronger the repulsion. This created a kind of personal space—boids that got too close would feel mounting pressure to separate.
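In code, that separation force might look something like the minimal Python sketch below. It is an illustration, not Reynolds’ original implementation: it assumes positions are 2D NumPy vectors and that `neighbors` has already been filtered down to nearby flockmates (a neighborhood test is sketched a few paragraphs further on). The inverse-square weighting is just one common way to make closer neighbors push harder.

```python
import numpy as np

def separation(position, neighbors):
    """Steer away from nearby flockmates; closer neighbors repel more strongly."""
    steer = np.zeros(2)
    for other in neighbors:
        offset = position - other          # points away from the neighbor
        dist = np.linalg.norm(offset)
        if dist > 0:
            steer += offset / dist**2      # inverse-square falloff: nearer means stronger
    return steer
```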
Alignment required each boid to track the velocities (speed and direction) of its neighbors, average them, and adjust its own heading to match. This is what makes a flock move together rather than just clump together. Without alignment, you’d get a crowd milling about. With it, you get a river of movement.
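A similar sketch for alignment, under the same assumptions: average the neighbors’ velocities and nudge the boid’s own velocity toward that average. The exact steering term used here (desired velocity minus current velocity) is an illustrative choice, not a claim about Reynolds’ precise formula.

```python
import numpy as np

def alignment(velocity, neighbor_velocities):
    """Steer toward the average heading and speed of nearby flockmates."""
    if len(neighbor_velocities) == 0:
        return np.zeros(2)                 # no neighbors, no alignment force
    average = np.mean(neighbor_velocities, axis=0)
    return average - velocity              # nudge own velocity toward the local average
```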
Cohesion computed the center of mass of nearby boids and steered toward it. This prevented the flock from drifting apart. A boid on the edge of the group would feel a gentle pull toward the center. A boid in the middle would feel pulls from all directions, roughly canceling out.
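And cohesion, sketched the same way: find the neighbors’ center of mass and steer toward it.

```python
import numpy as np

def cohesion(position, neighbor_positions):
    """Steer toward the center of mass of nearby flockmates."""
    if len(neighbor_positions) == 0:
        return np.zeros(2)                 # a lone boid feels no pull
    center = np.mean(neighbor_positions, axis=0)
    return center - position               # points from the boid toward the local center
```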
Each rule produced a steering vector—a direction and magnitude. Reynolds combined these vectors, along with basic obstacle avoidance, into a single final steering command. The boid turned. The frame advanced. The rules ran again.
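Putting it together might look like the following sketch, which reuses the three rule functions above. The weights, speed cap, and one-frame time step are placeholders chosen purely for illustration; as the next paragraphs describe, finding workable values was a large part of the real effort.

```python
import numpy as np

# Placeholder weights and limits, purely for illustration -- not Reynolds' values.
W_SEPARATION, W_ALIGNMENT, W_COHESION = 1.5, 1.0, 1.0
MAX_SPEED = 2.0

def step(position, velocity, neighbor_positions, neighbor_velocities):
    """Combine the three steering vectors and advance this boid by one frame."""
    steer = (W_SEPARATION * separation(position, neighbor_positions)
             + W_ALIGNMENT * alignment(velocity, neighbor_velocities)
             + W_COHESION * cohesion(position, neighbor_positions))
    velocity = velocity + steer            # apply the combined steering as acceleration
    speed = np.linalg.norm(velocity)
    if speed > MAX_SPEED:
        velocity = velocity * (MAX_SPEED / speed)  # cap speed so steering turns the boid rather than launching it
    return position + velocity, velocity   # new position and velocity for the next frame
```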
What made this work was the definition of “nearby.” Each boid had a neighborhood—a cone-shaped region extending forward and to the sides, roughly matching what a real bird could see. Boids outside this neighborhood didn’t exist, as far as the rules were concerned. This meant that even in a flock of thousands, each boid only tracked a handful of neighbors. The computation stayed manageable, and the local nature of the rules was preserved.
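One plausible way to implement that neighborhood test is a distance cutoff plus a field-of-view check against the boid’s heading, as sketched below. The radius and angle are placeholders, and the forward-facing region the article describes is approximated here by the angular cutoff.

```python
import numpy as np

NEIGHBOR_RADIUS = 25.0            # placeholder perception distance
FIELD_OF_VIEW = np.radians(120)   # placeholder: ignore flockmates too far behind

def neighbors_of(position, velocity, flock_positions):
    """Return the positions of flockmates inside this boid's local neighborhood."""
    heading = velocity / (np.linalg.norm(velocity) + 1e-9)   # unit vector in the direction of travel
    nearby = []
    for other in flock_positions:
        offset = other - position
        dist = np.linalg.norm(offset)
        if dist == 0 or dist > NEIGHBOR_RADIUS:
            continue                                          # skip self and anything out of range
        angle = np.arccos(np.clip(np.dot(heading, offset / dist), -1.0, 1.0))
        if angle <= FIELD_OF_VIEW:
            nearby.append(other)                              # in range and roughly in front
    return nearby
```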
Reynolds experimented with the parameters obsessively. How large should the neighborhood be? How should the three rules be weighted against each other? Too much separation and the flock exploded. Too much cohesion and it collapsed into a tight ball. Too much alignment and it became a rigid formation that couldn’t turn. The sweet spot—where flocking emerged—was surprisingly narrow.
Later researchers would spend years mapping these parameter spaces, but Reynolds got it working in ten days. Partly through intuition, partly through trial and error, he found settings that produced behavior so realistic that biologists started paying attention.
Biology Notices
Reynolds had set out to solve an animation problem. He ended up contributing to biology.
For decades, scientists had struggled to explain how flocks coordinate without leaders. Early theories proposed “thought transference”—some kind of telepathic communication between birds. Others suggested that flocks must have a leader bird whose movements everyone else follows, even if we couldn’t identify who. Neither theory held up.
Reynolds’ model offered an alternative: no communication, no leaders, no telepathy. Just three rules operating on local information. And it produced behavior that looked indistinguishable from real flocks.
This didn’t prove that real birds work the same way. Nature might use entirely different mechanisms. But it proved something important: you don’t need leaders or telepathy to explain flocking. Simple local rules are sufficient. Occam’s razor pointed toward Reynolds.
In the decades since, biologists studying real starlings have found evidence that supports the boids model. Starlings appear to track about six or seven neighbors—not a fixed distance, but a fixed number of birds. They align their movements with these neighbors and maintain spacing. The details differ from Reynolds’ parameters, but the principle matches.
Reynolds had reverse-engineered nature. Starting from what he wanted to achieve (realistic animation), he’d discovered something about how nature actually works.
From Conference Room to Cinema
The SIGGRAPH crowd recognized they were seeing something new. But academic appreciation doesn’t pay salaries. Reynolds needed to show that boids could do real work.
The first film application came quickly. Stanley and Stella in: Breaking the Ice, a short animated piece, premiered at SIGGRAPH 1987’s Electronic Theater alongside Reynolds’ technical paper. It featured flocking birds and schooling fish, all generated by the boids algorithm. The demo proved the concept: this wasn’t just a research curiosity. It was a production tool.
Hollywood came calling five years later.
Tim Burton was making Batman Returns, and he had a problem. The script called for swarms of bats descending on Gotham City and armies of penguins marching through the streets. Hand-animating thousands of creatures would be impossibly expensive. Random motion would look ridiculous. But boids...
Andy Kopra, a visual effects artist who had worked with Reynolds at Symbolics, took on the bat swarms at VIFX. A different team at Boss Film used Reynolds’ techniques for the penguins. The result was seamless—thousands of creatures moving with biological believability, each one following its three simple rules.
Batman Returns wasn’t the only film. The wildebeest stampede in The Lion King? Boids. The swarming bugs in Starship Troopers? Boids. Any time you’ve seen a CGI crowd that looks alive rather than robotic, there’s a good chance Reynolds’ 1986 algorithm is running underneath.
In 1998, the Academy of Motion Picture Arts and Sciences awarded Reynolds a Scientific and Technical Oscar “for his pioneering contributions to the development of three-dimensional computer animation for motion picture production.” Twelve years after building the first boid in his spare time, Reynolds had an Oscar.
But the real legacy wasn’t the award. It was the principle.
The Paradox of Control
Reynolds’ boids revealed something counterintuitive about complex systems: the more tightly you control them, the less lifelike complexity they produce.
If you want to choreograph every bird, you can. You’ll get exactly the flock you designed—and it will look fake, because real flocks don’t follow scripts. If you want biological realism, you have to let go. Define the rules, then step back and let the system evolve.
This is uncomfortable for engineers. We like predictability. We like knowing what our systems will do. Reynolds’ work showed that some behaviors—the interesting ones, the lifelike ones—only emerge when you give up that certainty.
Consider what Reynolds didn’t do. He didn’t model bird brains. He didn’t simulate aerodynamics, wing muscles, or visual processing. He didn’t give his boids goals or intentions. He didn’t tell them to form V-shapes or to split around obstacles. He gave them three rules about personal space, and everything else emerged.
The boids program that simulated thousands of birds was smaller than a traditional script that controlled just one. Less code, less control, more realism. The paradox at the heart of swarm intelligence.
What Boids Taught Us
Forty years after Reynolds first set his triangles flying, boids remain the canonical example of emergence—complex global behavior arising from simple local rules. They’re taught in computer science courses, cited in biology papers, referenced in economics and sociology. The original 1987 paper has over 12,000 citations.
But the most important lesson isn’t technical. It’s conceptual.
Reynolds proved that you don’t need a leader to get organized behavior. You don’t need a plan. You don’t need anyone who understands the big picture. You just need individuals following simple rules, and the right kind of interaction between them.
There’s something almost magical about watching it work. Set a hundred boids loose, and within seconds they’re behaving like starlings. Add an obstacle, and they split around it without anyone telling them to. Disturb the flock, and it recovers, reshaping itself into new configurations. The system is robust in ways that designed systems rarely are—because there’s no single point of failure, no central controller that can break.
This robustness comes from the same source as the emergence: local rules, distributed control. Each boid handles its own situation. If one boid glitches, the others keep flying. The flock doesn’t depend on any individual—not even a leader, because there isn’t one.
The Template for Everything After
Reynolds didn’t know it in 1987, but he’d created a template that would echo through the next four decades of collective intelligence research.
Five years after boids, a PhD student in Italy would apply similar thinking to a different problem: routing. If birds can flock without leaders, can ants solve logistics problems without central planning? Marco Dorigo would prove they could—and create an algorithm now used by telecommunications companies worldwide.
Two decades after that, a roboticist at Harvard would ask: can physical robots do what boids do? Radhika Nagpal would build a thousand of them, discovering that the physical world adds constraints Reynolds never had to consider.
And nearly forty years after SIGGRAPH 1987, engineers would unleash millions of AI agents onto the internet, each one far more sophisticated than any boid, and watch as they began forming communities, creating religions, and organizing in ways no one predicted.
The through-line is Reynolds’ insight: emergence requires letting go. You can’t dictate complex collective behavior into existence. You can only create conditions where it might arise—define the rules, set the agents loose, and see what happens.
This is why boids matter for understanding what happened in January 2026, when 770,000 AI agents began interacting on Moltbook. Those agents weren’t following Reynolds’ three rules—they were running large language models, far more sophisticated than anything in 1986. But they were facing the same fundamental question: What happens when many autonomous entities interact with only local information?
Reynolds’ answer: something no one designed.
The boids didn’t know they were a flock. Each one was just trying not to bump into its neighbors, match their speed, and stay close. The flock emerged from those simple imperatives.
When we look at AI agent swarms today—with their emergent religions and spontaneous communities and behaviors no one programmed—we’re watching the same principle at work, on a different substrate, at a different scale. The rules are more complex. The agents are intelligent. But the core insight remains:
The most interesting collective behaviors arise when you stop trying to control them.
Reynolds understood this in 1986. He just didn’t know how far the principle would travel.