Why Walking Is So Hard
The Paradox That Explains Everything
On May 11, 1997, in a skyscraper overlooking Manhattan, a machine did something that humans had dreamed about for centuries: it beat the best chess player in the world.
Deep Blue, IBM’s chess-playing supercomputer, defeated Garry Kasparov in a six-game match that captivated the global media. Newsweek ran the headline “The Brain’s Last Stand.” Commentators declared that machines had finally conquered the pinnacle of human intellect. If a computer could beat the world champion at the game that had long symbolized genius itself, what couldn’t it do?
The same year, in a laboratory in Wako City, Japan, a team of Honda engineers gathered to watch their own machine attempt something far more modest. Their robot, called P2, was going to try to walk across the room.
P2 was the result of eleven years of secret research. Honda had started the project in 1986, when most people thought humanoid robots belonged in science fiction. The company had spent hundreds of millions of dollars and countless engineering hours on a single goal: bipedal locomotion.
The robot took a step. Then another. It walked across the lab floor without falling down.
The engineers celebrated. After more than a decade, they had achieved their goal. A machine could finally walk like a human.
Or rather, sort of like a human. P2 walked with bent knees, arms held stiffly for balance, each step deliberate and cautious. It moved like someone crossing an icy parking lot. A toddler could have outrun it. But it walked.
Here was the strange thing: beating the world chess champion had taken IBM about five years of focused effort. Walking across a room had taken Honda more than eleven. Chess—the game of kings, the ultimate test of strategic thinking—was easier to automate than putting one foot in front of the other.
This wasn’t a fluke. It was a clue to something deep about the nature of intelligence itself.
Moravec’s Paradox
Hans Moravec noticed the pattern in the 1980s.
Moravec was a roboticist at Carnegie Mellon University, one of the pioneers of autonomous vehicles and mobile robots. He spent his days trying to make machines navigate the real world—and failing in ways that surprised him.
The tasks that humans found intellectually demanding—chess, calculus, logic puzzles—turned out to be relatively straightforward to program. Computers had been doing them for years. But the tasks that humans found effortless—picking up a cup, catching a ball, walking across uneven ground—remained stubbornly beyond reach.
“It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers,” Moravec wrote in 1988, “and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
This observation, which became known as Moravec’s Paradox, inverted everything we thought we knew about what makes something hard.
Consider what a one-year-old can do. She can pick up any object from a cluttered toy box—a stuffed bear, a wooden block, a tangled string—without knocking everything else over. She can reach out and grab a toy, adjusting her grip in real-time as her hand approaches. She can crawl across a room, navigating around furniture, over pillows, across different surfaces, without consciously thinking about any of it.
Now consider what a one-year-old cannot do. She can’t play chess. She can’t solve algebra problems. She can’t write computer programs or prove theorems or compose symphonies.
We naturally assume that the things the toddler can’t do are harder than the things she can. Chess is for geniuses. Crawling is for babies. So obviously chess is harder than crawling.
We’re wrong.
The Evolutionary Explanation
The key to understanding Moravec’s Paradox is evolution.
The human brain didn’t appear out of nowhere. It’s the product of billions of years of incremental development, each generation slightly better adapted to survival than the last. The neural circuits that let you see, move, and interact with the physical world have been refined across countless species over hundreds of millions of years.
Vision evolved early. The first light-sensitive cells appeared over 500 million years ago. Since then, evolution has relentlessly optimized our visual systems—the way we detect edges, perceive depth, recognize objects, track motion. The neural machinery for vision takes up a significant fraction of your brain. It works so well that you don’t notice it working at all.
The same is true for motor control. Walking, reaching, grasping—these abilities evolved over millions of years in our primate ancestors and their predecessors. The neural circuits that coordinate your hundreds of muscles, that maintain your balance, that let you catch a ball without calculating trajectories—all of this is the product of evolutionary optimization on a timescale we can barely comprehend.
Abstract reasoning, by contrast, is an evolutionary afterthought.
Humans have been doing something like mathematics for perhaps ten thousand years. We’ve been playing chess for about fifteen hundred. These activities are recent inventions, cultural developments layered on top of a brain that evolved for completely different purposes.
This is why we find math hard and walking easy—not because math is intrinsically more complex, but because evolution built us for walking and not for math. The difficulty we experience is a measure of how well-optimized our brains are for the task.
Computers have the opposite bias. They’re built from logic gates and arithmetic units. They find arithmetic trivially easy because that’s what their fundamental hardware does. But they have no evolutionary heritage of perception and movement. Every bit of visual processing, every motor control algorithm, must be built from scratch.
What feels effortless to us is actually the solution to an extraordinarily difficult problem—we just don’t realize it because evolution already solved it for us.
The Problem of Legs
Honda’s engineers, working on P2 in the late 1980s and early 1990s, discovered just how hard walking really is.
The human body has over 200 degrees of freedom. A degree of freedom is any axis along which a joint can move independently—your elbow’s bend is one, your wrist’s rotation is another. Your shoulder alone has three degrees of freedom: it can swing forward and back, sweep side to side, and rotate the arm inward or outward. Your spine has dozens more. Each finger contributes four or five. Add them all up—shoulders, elbows, wrists, fingers, neck, spine, hips, knees, ankles, toes—and you get a body with more than 200 independent axes of motion. An industrial robot arm, by comparison, typically has six.
When you walk, your brain is coordinating all of this simultaneously. It’s managing the swing of your arms, the rotation of your hips, the flexion of your ankles, the push-off from your toes—all while maintaining balance, adjusting for the surface beneath your feet, and compensating for unexpected perturbations like a gust of wind or an uneven stone.
You do this without thinking. You can walk while talking, while carrying groceries, while thinking about what to make for dinner. The complexity is hidden from your conscious mind.
For Honda’s engineers, nothing was hidden. Every degree of freedom had to be explicitly controlled. Every movement had to be planned, calculated, and executed with precise timing. The control systems had to be fast enough to prevent falling—which meant making decisions in milliseconds.
The fundamental challenge was balance. A standing human is inherently unstable—we’re tall, narrow, and our center of gravity is high. We stay upright through constant micro-adjustments, a continuous feedback loop between our vestibular system (the balance sensors in our inner ear), our proprioception (our sense of where our limbs are), our vision, and our muscles.
The first generation of walking robots tried to avoid this problem through a strategy called static balance. The idea was simple: never let the robot’s center of gravity move outside its base of support. If you’re standing on both feet, your center of gravity should stay between them. If you’re standing on one foot, it should stay over that foot.
This is why early walking robots, including Honda’s, walked with bent knees and a distinctive shuffling gait. They were constantly ensuring that their center of gravity was directly over their feet. It was stable, but it was slow, awkward, and nothing like the way humans actually walk.
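That rule reduces to a one-line geometric test: project the center of gravity onto the ground and check that it lands inside the support polygon, the region enclosed by the feet's contact points. A minimal sketch in Python (the foot coordinates are invented for illustration):

```python
def inside_convex_polygon(point, vertices):
    """Return True if a 2D point lies inside a convex polygon.

    `vertices` must be listed counter-clockwise. The point is inside if
    it sits on the left of (or on) every edge, which we check with the
    sign of the 2D cross product.
    """
    px, py = point
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross < 0:   # point is to the right of this edge: outside
            return False
    return True

# Both feet planted: the support polygon spans both footprints (meters).
double_support = [(0.0, 0.0), (0.3, 0.0), (0.3, 0.25), (0.0, 0.25)]
print(inside_convex_polygon((0.15, 0.12), double_support))  # True: statically stable
print(inside_convex_polygon((0.50, 0.12), double_support))  # False: the robot tips over
```

A statically balanced walker simply never lets the first call return False, which is exactly what produces the bent-knee shuffle.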
Because here’s the thing: human walking isn’t statically balanced. When you walk, you’re actually falling forward with each step, then catching yourself with your other foot. Your center of gravity moves outside your base of support on every stride. You’re in a constant state of controlled falling.
This is called dynamic balance, and it’s much harder to engineer.
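Roboticists often model this controlled falling with the linear inverted pendulum: the center of mass accelerates away from the support foot in proportion to their horizontal offset. One recovery rule, the "capture point" from later humanoid-balance research (not Honda's own controller), is to step to the spot where the fall will come to rest. A toy simulation, with the numbers invented for illustration:

```python
import math

G, Z0 = 9.81, 0.9          # gravity (m/s^2), constant CoM height (m)
OMEGA = math.sqrt(G / Z0)  # natural frequency of the inverted pendulum

def simulate_step(x, v, foot, dt=0.001, t_end=1.0):
    """Integrate the linear inverted pendulum: x'' = OMEGA^2 * (x - foot)."""
    for _ in range(int(t_end / dt)):
        a = OMEGA ** 2 * (x - foot)
        v += a * dt
        x += v * dt
    return x, v

# The walker is "falling": CoM directly over the foot but moving at 0.5 m/s.
x, v, foot = 0.0, 0.5, 0.0

# Capture point: step to x + v/OMEGA and the fall comes to rest there.
foot = x + v / OMEGA
x, v = simulate_step(x, v, foot)
print(round(v, 3))  # velocity has nearly decayed to zero
```

Step short of the capture point and the body keeps toppling forward; step past it and it falls backward. Walking is the art of stepping short on purpose, stride after stride.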
The Hopping Robots of MIT
While Honda was pursuing static balance, a researcher at MIT was taking a completely different approach.
Marc Raibert was obsessed with legs. Specifically, he was obsessed with the question of how animals manage to run, jump, and maintain balance through dynamic movement. In the early 1980s, he started building robots that hopped.
His first creation was a single-legged machine that bounced on a pogo-stick-like limb: a metal tube hopping around the lab with a computer attached. It looked absurd. But it could balance dynamically. It could hop in place, hop forward, even hop over obstacles.
The key insight was that balance and locomotion weren’t separate problems. You couldn’t first achieve perfect balance and then add movement. You had to solve them together, as a single dynamic system. The hopping robot stayed balanced not by being still, but by being in constant motion—using the rhythm of its hopping to maintain stability.
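Raibert's scheme, laid out in his book Legged Robots That Balance, split the problem into three nearly independent loops: leg thrust during stance to regulate hopping height, foot placement during flight to regulate forward speed, and hip torque during stance to keep the body level. A schematic sketch, with illustrative gains rather than Raibert's actual values:

```python
def foot_placement(v, v_desired, stance_time, k_v=0.05):
    """Raibert's foot-placement law: land the foot at the 'neutral point'
    (half the distance the hip travels during stance), offset by a term
    proportional to the speed error. Landing behind neutral speeds the
    robot up; landing ahead slows it down."""
    neutral = v * stance_time / 2
    return neutral + k_v * (v - v_desired)

def stance_thrust(height, height_desired, k_h=80.0):
    """Leg thrust during stance: inject spring energy in proportion to
    the hopping-height error so the apex stays constant."""
    return k_h * (height_desired - height)

def hip_torque(pitch, pitch_rate, k_p=150.0, k_d=15.0):
    """PD servo on body pitch, applied through the hip while the foot
    is planted (the ground supplies the reaction force)."""
    return -k_p * pitch - k_d * pitch_rate

# One control decision: moving at 1.2 m/s, wanting 1.0 m/s, with a
# 0.17 s stance -> place the foot slightly ahead of neutral to slow down.
print(foot_placement(1.2, 1.0, 0.17))  # about 0.112 m ahead of the hip
```

The striking thing is how simple each loop is. The balance doesn't come from solving one enormous equation; it comes from the rhythm of the hopping itself.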
Raibert then built a four-legged robot, then a two-legged one that could run and do flips. Each machine was more capable than the last. By the late 1980s, his lab had produced robots that could run, bound, and perform gymnastic maneuvers that seemed impossible for machines.
In 1992, Raibert founded Boston Dynamics to continue this work outside academia. The company would spend the next three decades pushing the boundaries of dynamic locomotion, creating machines that could walk, run, jump, and eventually dance.
But commercial success proved elusive.
The Military Years
Boston Dynamics’ big break came from an unlikely source: the United States military.
In 2005, DARPA—the Defense Advanced Research Projects Agency—awarded Boston Dynamics a contract to build BigDog, a quadruped robot designed to carry equipment for soldiers across terrain too rough for vehicles.
BigDog looked like something from a science fiction nightmare: a headless mechanical mule with four legs, powered by a gasoline engine that made it sound like an angry chainsaw. When videos of BigDog first appeared online, they went viral. People were unsettled by how organic its movement seemed. When a researcher kicked it to demonstrate its balance recovery, the robot stumbled and then caught itself with an eerily animal-like reflex.
BigDog could walk over rubble, climb steep hills, and carry hundreds of pounds of equipment. It represented a genuine breakthrough in locomotion. It was also incredibly loud, unreliable in the field, and ultimately rejected by the Marine Corps for being too noisy for combat situations.
But BigDog led to more DARPA funding, which led to more robots. LS3, a larger cargo robot. Cheetah, which could run faster than any human. WildCat, which could run untethered. And eventually Atlas, a humanoid robot designed for disaster response.
Atlas became Boston Dynamics’ most famous creation. Standing about five feet tall and weighing nearly 200 pounds, it could walk on two legs over rough terrain, open doors, carry boxes, and recover from pushes and shoves with uncanny balance. Videos of Atlas doing parkour, backflips, and dance routines became internet sensations.
The technology was genuinely remarkable. Atlas represented the culmination of decades of research into dynamic balance and locomotion. It could do things that seemed impossible for a robot—things that most humans couldn’t do.
And yet Boston Dynamics struggled to turn this technological prowess into a viable business.
Three Owners in Seven Years
In 2013, Google acquired Boston Dynamics. The purchase was part of a broader robotics buying spree led by Andy Rubin, the creator of Android. Google seemed poised to dominate robotics the way it dominated search and mobile operating systems.
It didn’t work out. Rubin left Google in 2014 amid controversy. The robotics division floundered without clear direction. Boston Dynamics’ machines were impressive demonstrations, but nobody could figure out what product to sell. In 2017, Google sold the company to SoftBank.
SoftBank, the Japanese technology conglomerate, had grand visions for robotics. Its founder, Masayoshi Son, predicted that intelligent robots would become ubiquitous within decades. Boston Dynamics seemed like the perfect acquisition to realize that vision.
But SoftBank also struggled to find a commercial application for Atlas’s acrobatic abilities. The company released Spot, a smaller quadruped robot, as a commercial product in 2020. Spot found niche applications in industrial inspection and research. But it was hardly the robotics revolution Son had predicted.
In 2020, SoftBank sold a majority stake in Boston Dynamics to Hyundai, the Korean car manufacturer. It was the company’s third owner in seven years.
The pattern was striking. Boston Dynamics had created some of the most advanced locomotion technology in the world. Its robots could do things no other robots could do. Videos of those robots had been viewed hundreds of millions of times. And yet the company kept getting passed from owner to owner, unable to find a sustainable business model.
Why?
The Missing Ingredient
The problem wasn’t locomotion. Boston Dynamics had largely solved locomotion. Atlas could walk, run, jump, and recover from disturbances as well as any robot ever built.
The problem was everything else.
Consider what Atlas can actually do. It can navigate an obstacle course. It can pick up a box and put it somewhere else. It can open a door. These are impressive demonstrations of physical capability. But they’re also carefully choreographed performances in controlled environments.
Atlas can’t decide what to do next. It can’t look at a messy room and figure out what needs to be cleaned. It can’t understand spoken instructions and translate them into actions. It can’t adapt to situations its programmers didn’t anticipate.
In short, Atlas can move—but it can’t think.
This is the limitation that Honda’s P2 ran into, that BigDog ran into, that every sophisticated mobile robot has run into. Movement is necessary but not sufficient. To operate usefully in the real world, a robot needs to perceive its environment, understand what it’s seeing, decide what to do, and then execute that decision with its body.
Boston Dynamics solved the last part. The first three remained unsolved.
The Perception-Decision-Action Loop
Roboticists call this the perception-decision-action loop, and it’s the fundamental challenge of embodied intelligence.
Perception means understanding your environment. What objects are around you? Where are they? What are they doing? Are they moving? What are they made of? Could you pick them up?
Decision means choosing what to do. Given your goals and your understanding of the environment, what action should you take? If you want to make coffee, what’s the first step? What if the coffee pot is dirty? What if you’re out of filters?
Action means executing your decision. Move your arm here. Grasp this handle. Pour with this much force. Adjust if something unexpected happens.
Each part of this loop is hard. Each part requires sophisticated algorithms and capable hardware. And they all have to work together, in real-time, with latencies measured in milliseconds.
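The loop itself is easy to write down; the difficulty hides inside each stage. A skeletal sketch (every function here is a placeholder, not any real robot's API):

```python
import time

def sense():
    """Placeholder for perception: read cameras, joint encoders, IMU."""
    return {"obstacle_ahead": False}

def decide(world, goal):
    """Placeholder for decision: pick the next action from beliefs and a goal."""
    return "stop" if world["obstacle_ahead"] else "step_forward"

def act(command):
    """Placeholder for action: send motor targets to low-level controllers."""
    return command

def control_loop(goal, hz=200, max_ticks=5):
    """Run perception -> decision -> action at a fixed rate.
    Real walking controllers close this loop hundreds of times per
    second; miss a few deadlines and the robot is on the floor."""
    period = 1.0 / hz
    executed = []
    for _ in range(max_ticks):
        start = time.monotonic()
        world = sense()
        command = decide(world, goal)
        executed.append(act(command))
        # Sleep whatever is left of this tick's time budget.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
    return executed

print(control_loop("cross_the_room"))
```

The skeleton is trivial precisely because all the intelligence lives inside sense() and decide(), the two functions that remained science fiction for decades.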
Industrial robots avoided this challenge by operating in environments so controlled that perception and decision were trivial. The part is always in the same place. The action is always the same. No loop required—just execute the program.
Mobile robots couldn’t use that shortcut. They had to solve the whole loop. And in the 1990s and 2000s, the first two parts—perception and decision—were simply beyond the state of the art.
Boston Dynamics’ robots could execute beautifully. They could balance, walk, run, and recover from disturbances with extraordinary grace. But without the ability to perceive and decide, that execution capability had limited value.
A robot that can do a backflip but can’t decide when to do one is a curiosity, not a product.
What Deep Blue Couldn’t Do
This brings us back to the contrast we started with. Why was chess easy and walking hard?
Deep Blue solved chess through brute force search. It evaluated up to 200 million positions per second, looking ahead many moves to find the best play. This approach worked because chess is a perfect-information game—all the relevant information is right there on the board—and because the rules are fixed and known.
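The brute-force idea can be seen in miniature on a much simpler perfect-information game, substituted here because a full chess searcher would run to pages: the subtraction game, where players alternate taking one, two, or three counters and whoever takes the last one wins. The program exhaustively searches the game tree, in spirit though certainly not in scale like Deep Blue's lookahead:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(counters):
    """Exhaustive game-tree search for the subtraction game.

    Returns (winning_move or None, True if the side to move can force a win).
    """
    for take in (1, 2, 3):
        if take == counters:
            return take, True           # taking the rest wins outright
        if take < counters:
            _, opponent_wins = best_move(counters - take)
            if not opponent_wins:
                return take, True       # leave the opponent a losing position
    return None, False                  # every reply leaves the opponent winning

print(best_move(21))  # (1, True): move to a multiple of 4
print(best_move(20))  # (None, False): multiples of 4 are lost
```

Exhaustive search settles this game completely. Chess needed vastly more hardware and a hand-tuned evaluation function, but the underlying move—enumerate, recurse, pick the best—is the same.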
Walking in the real world is nothing like chess.
The real world doesn’t have perfect information. You can’t see everything around you. Objects are occluded. Lighting changes. Surfaces look similar but have different friction. The environment is constantly shifting in ways you can’t fully predict.
The real world doesn’t have fixed rules. Every surface is different. Every object has different weight, texture, and fragility. The wind might blow. Someone might bump into you. The floor might be slippery.
And the real world happens in continuous time. In chess, you can deliberate for minutes before committing to a move. In walking, you have milliseconds to respond to a stumble before you hit the ground.
Deep Blue was extraordinarily good at the narrow task of chess. But that capability didn’t transfer to anything else. You couldn’t take Deep Blue’s chess-playing algorithms and use them to make a robot walk, or see, or manipulate objects.
This is the deepest lesson of Moravec’s Paradox. The things that seem hard—chess, mathematics, logic—are actually narrow and well-defined. The things that seem easy—seeing, moving, interacting with the physical world—are actually vast and open-ended.
Solving chess was impressive. But it didn’t bring us any closer to robots that could operate in the real world.
The Road Ahead
By the early 2000s, the robotics community understood the challenge clearly.
Walking was possible, as Honda and Boston Dynamics had proved. But walking was only the execution part of the loop. To make robots truly useful, researchers needed to solve perception and decision-making too.
This would require breakthroughs in artificial intelligence—specifically, in machine learning, computer vision, and reasoning. The robots needed to learn to see. They needed to learn to decide. And they needed to connect seeing and deciding to their already-capable bodies.
These breakthroughs would come, but they would take time. The next chapter begins with the revolution in machine vision that started in 2012—when a neural network called AlexNet showed that machines could finally learn to see.
Notes & Further Reading
On Moravec’s Paradox: Hans Moravec first articulated the paradox in Mind Children: The Future of Robot and Human Intelligence (1988), which remains a fascinating and prescient book. The paradox is also discussed in Steven Pinker’s The Language Instinct (1994), which offers an accessible explanation of the evolutionary logic behind it.
On Deep Blue and the 1997 match: Feng-hsiung Hsu, the lead engineer behind Deep Blue, tells the inside story in Behind Deep Blue: Building the Computer that Defeated the World Chess Champion (2002). For Kasparov’s perspective, see his Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (2017).
On Honda’s humanoid robot program: Honda’s official history documents the development from E0 through ASIMO. For technical details, the papers by Honda’s engineering team in the IEEE International Conference on Robotics and Automation proceedings provide invaluable primary sources. Particularly recommended is Hirai et al., “The Development of Honda Humanoid Robot” (1998).
On dynamic balance and the physics of walking: A rigorous treatment can be found in Legged Robots That Balance by Marc Raibert (1986), which documents his early hopping robots and lays out the theoretical framework. For a more accessible introduction, see the chapter on locomotion in Introduction to Autonomous Mobile Robots by Siegwart and Nourbakhsh (2004).
On Boston Dynamics: The company’s technical papers, many available through IEEE, document the development of BigDog, Atlas, and other platforms. For the business history and ownership changes, coverage in IEEE Spectrum and Wired provides the most detailed accounts. Marc Raibert’s talks at various robotics conferences, many available on YouTube, offer insight into the company’s philosophy.
On the perception-decision-action loop: This framework is developed in Russell and Norvig’s Artificial Intelligence: A Modern Approach (multiple editions), the standard AI textbook. For robotics-specific treatment, see Robotics, Vision and Control by Peter Corke (2011).
On why locomotion alone isn’t enough: Rodney Brooks’ papers from the 1990s, particularly “Intelligence Without Representation” (1991), argue for a different approach to robotics that integrates perception, decision, and action more tightly. His critique proved prescient, even if his proposed solutions didn’t fully pan out.