Bodies
The Quiet Progress of Hardware
In the summer of 2017, in a rented factory space on the outskirts of Hangzhou, a 29-year-old engineer named Wang Xingxing was about to power on a robot dog for the first time.
The machine looked crude—exposed wiring, mismatched screws, a frame that wobbled slightly even at rest. Wang had built most of it himself over the past year, working with a team of four people and a budget that would barely cover one engineer’s salary at Boston Dynamics. The motors were his own design, manufactured at a local shop that usually made components for drones.
He flipped the switch. The robot shuddered, found its footing, and stood.
Then it walked. Awkwardly, uncertainly, but it walked.
Wang allowed himself a small smile. Boston Dynamics had spent decades and hundreds of millions of dollars developing their robot dogs. He had spent a year and less than a million yuan. His robot wasn’t as capable—not even close. But it worked. And more importantly, it was cheap.
“Their motors cost tens of thousands of dollars each,” Wang later explained to an investor. “Mine cost a few hundred yuan. Same basic physics, just different manufacturing approach.”
Five years later, Unitree’s robot dogs would sell for under $3,000, making them the highest-volume quadruped robots in the world. Researchers who once begged for access to expensive platforms could now buy one with a credit card. Wang had proven something that the robotics establishment had long doubted: capable robots didn’t have to be expensive robots.
The AI revolution made all the headlines. But in factories across China and labs around the world, a quieter revolution was underway. The robot’s body was finally becoming capable enough to matter—ready for whatever intelligence might eventually animate it.
The Muscles
A robot is only as good as its actuators—the motors that convert energy into motion.
This sounds like a technical detail. It’s actually the central constraint that shaped sixty years of robotics. Every movement a robot makes—every reach, every step, every grip—depends on those motors. And for most of history, they weren’t good enough.
Industrial robots solved this through brute force. The arms that weld cars and assemble electronics use powerful motors, heavy gearboxes, and rigid structures. They can move with micrometer precision and repeat the same motion a million times. They can also weigh hundreds of kilograms and cost hundreds of thousands of dollars.
Mobile robots face a different challenge. Every gram matters. Every watt matters. A walking robot has to lift its own weight with every step. A running robot has to accelerate and decelerate its legs hundreds of times per minute. The motors need to be powerful and light and efficient and responsive—and ideally cheap.
For decades, this was an impossible combination.
Boston Dynamics’ solution was custom engineering. Each robot got bespoke actuators designed specifically for its requirements. The hydraulic systems in early BigDog were developed in-house. The electric actuators in later robots were equally specialized. The results were spectacular—robots that could run, jump, and dance. The costs were equally spectacular. A single Atlas robot reportedly cost over a million dollars to build.
This was fine for research. It was useless for products.
The breakthrough came not from robotics, but from an industry that had nothing to do with robots at all.
The Drone Dividend
In 2006, a college student named Frank Wang founded DJI from his dorm room at the Hong Kong University of Science and Technology, soon moving the company to Shenzhen. He wanted to build better flight controllers for model helicopters—a hobby project that grew into an obsession.
Within a decade, DJI would dominate the global drone market, selling millions of units per year. To achieve this scale, the company needed motors that were light, efficient, reliable, and above all cheap. The drone industry invested billions in developing brushless motors that met these requirements. Factories optimized their production lines. Costs plummeted.
A motor that cost $100 in 2010 cost $10 by 2020. A motor controller that once required custom electronics became a commodity chip.
None of this was intended for robots. But the physics of spinning a propeller and the physics of moving a robot leg have more in common than you might think. Both need to convert electricity into precise, powerful, efficient motion. Both benefit from lightweight materials, compact designs, and mass production.
Wang Xingxing, the founder of Unitree, understood this. His motors were essentially drone motors, modified for the different demands of legged locomotion. Where Boston Dynamics engineers designed from first principles, Wang borrowed from a supply chain that was already optimized for millions of units.
The same pattern played out across robotics. The smartphone industry had driven down the cost of inertial measurement units—the sensors that detect acceleration and rotation—from thousands of dollars apiece to a few dollars. The electric vehicle industry had driven down the cost of high-performance batteries and power electronics. The gaming industry had driven down the cost of processors capable of real-time control.
Each industry was optimizing for its own needs. None were thinking about robots. But their investments accumulated like tributaries feeding a river, and by the late 2010s, the river was strong enough to float a new generation of machines.
The Nerves
Vision gets the attention. Touch does the work.
Consider picking up an egg. Your eyes tell you where the egg is. They guide your hand through space. But the moment your fingers make contact, vision becomes almost useless. The final act of grasping—closing your fingers around the shell without cracking it—is guided entirely by touch.
You feel the smooth surface. You sense the resistance as you apply pressure. You detect the subtle shift that warns you the egg is about to slip. Your grip adjusts in real-time, faster than conscious thought, based on information flowing from your fingertips.
Robots have been nearly blind to all of this.
Industrial robots don’t need touch because they operate in controlled environments. The gripper closes with exactly the force specified in the program. If objects break, someone adjusts the program. There’s no adaptation, no sensing, no feedback.
Mobile robots operating in the real world face objects that vary in size, shape, weight, and fragility. A robot that handles eggs the same way it handles tennis balls will leave a trail of destruction. It needs to sense what it’s touching and adjust accordingly.
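The adjust-as-you-feel loop can be sketched in a few lines. This is a toy illustration, not any real robot’s controller: the function name, the force units, and the step sizes are all invented for the example.

```python
# Toy slip-aware grip controller. Real tactile control runs at kilohertz
# rates on calibrated sensors; this only shows the feedback idea:
# tighten when the sensor reports slip, hold steady otherwise.
def adjust_grip(force, slip_detected, force_max=10.0, step=0.5):
    """Return an updated grip force (illustrative newtons)."""
    if slip_detected:
        # Tighten the grip, but cap it so a fragile object isn't crushed.
        force = min(force + step, force_max)
    return force

# Simulated contact episode: the object starts slipping twice.
force = 2.0
for slipping in [False, True, True, False, False]:
    force = adjust_grip(force, slipping)
print(round(force, 2))  # -> 3.0
```

The point of the sketch is the direction of information flow: the decision to squeeze harder comes from the fingertip sensor, not from vision or from a pre-programmed force value.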
Force-torque sensors—devices that measure forces and torques at a robot’s joints—have existed for decades. But they were expensive, delicate, and difficult to integrate. A research-grade sensor might cost several thousand dollars and require careful calibration before every experiment.
The 2010s changed this. MEMS technology—the same fabrication techniques used for smartphone accelerometers—enabled cheap, compact force sensors. Startups developed flexible sensor skins that could wrap around robot fingers, providing distributed touch information across the entire gripping surface. The cost of adequate tactile sensing dropped from thousands of dollars to hundreds.
The shift was dramatic. Researchers who once couldn’t afford a single force sensor now embedded them throughout their robots, and whole categories of manipulation research opened up as a result.
The Skeleton
A mobile robot is, fundamentally, a battery with legs.
The energy density of batteries determines everything else. How long can the robot operate? How far can it travel? How much payload can it carry while still having energy left for useful work? For most of robotics history, the answer was: not long, not far, not much.
Honda’s ASIMO, the most famous humanoid robot of the 2000s, could operate for about 40 minutes on a full charge—and that was walking slowly on flat surfaces. More dynamic activities drained the battery faster. Early Boston Dynamics robots used gasoline engines because batteries simply couldn’t provide enough power for the kind of movement they wanted to achieve.
The transformation came from devices that fit in your pocket.
Lithium-ion batteries were commercialized in the early 1990s for laptops and cell phones. The market for portable electronics created intense pressure to improve energy density—every extra hour of battery life was a competitive advantage. Billions of dollars flowed into research. Incremental improvements accumulated year after year.
Electric vehicles accelerated the trend. Tesla’s success created massive demand for high-performance batteries, which funded research into next-generation chemistry. The gigafactories built to supply EVs achieved economies of scale that benefited everyone. Battery cells that cost $1,000 per kilowatt-hour in 2010 cost $150 by 2020.
By the early 2020s, a humanoid robot could operate for several hours on a battery pack that weighed a reasonable fraction of its body mass. Not all day—batteries remained a major limitation—but long enough for useful work.
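The arithmetic behind “several hours” is simple. The numbers below are illustrative assumptions, not the specifications of any particular robot:

```python
# Back-of-envelope runtime estimate for a battery-powered humanoid.
ENERGY_DENSITY_WH_PER_KG = 250   # roughly modern lithium-ion cells
BATTERY_MASS_KG = 8              # a modest fraction of a ~60 kg robot
AVG_POWER_DRAW_W = 500           # walking, computing, and sensing combined

capacity_wh = ENERGY_DENSITY_WH_PER_KG * BATTERY_MASS_KG  # 2000 Wh on board
runtime_hours = capacity_wh / AVG_POWER_DRAW_W            # 4.0 hours
print(capacity_wh, runtime_hours)  # -> 2000 4.0
```

Every term in that division had improved: denser cells raised the numerator, while better motors and electronics lowered the denominator.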
The structural components of robots improved in parallel. Carbon fiber went from exotic aerospace material to commodity sporting goods component to standard robot construction. 3D printing enabled rapid prototyping of complex parts. Computer-aided design let small teams develop sophisticated mechanisms that once required large engineering departments.
The cumulative effect was dramatic. Building a capable robot body became accessible in a way it hadn’t been before. In 2010, developing a humanoid robot required the resources of a major corporation or decades of patient academic work. By 2020, well-funded startups could produce competitive prototypes in a year or two.
The Humanoid Bet
As hardware costs fell, a question that had simmered for decades came to a boil: should robots look like humans?
The engineering case against humanoid design is strong. Two legs are inherently less stable than four legs or wheels. The human form places the center of mass high above a narrow base—terrible for balance. Five-fingered hands are complex to build and control. From a pure efficiency standpoint, the human body is a strange template for a machine.
The practical case for humanoid design is equally strong: the world is built for humans.
Doors are sized for human bodies. Stairs are scaled to human legs. Tools are shaped for human hands. Workspaces assume human reach. Vehicles have seats designed for human posture. An entire civilization of infrastructure exists, optimized over millennia for one particular body plan.
A robot with wheels needs ramps. A robot with specialized grippers needs adapters. A robot with unusual proportions may not fit through doorways, operate standard equipment, or work alongside humans in spaces designed for human density.
A humanoid robot, if it works at all, works everywhere humans work. And it has the potential to learn from humans directly. The internet is filled with videos of people performing every imaginable task—cooking, cleaning, repairs, assembly. A robot shaped like a human could, in principle, watch these videos and imitate the movements. A robot with wheels and claws cannot. In an age where AI learns from data, the humanoid form may unlock a dataset of billions of examples that already exists.
Marc Raibert, the founder of Boston Dynamics, was initially skeptical. He built quadrupeds, not bipeds, because four legs were more stable and easier to control. His robots impressed the world with their animal-like movement—dogs, cheetahs, creatures that moved with uncanny biological grace.
But when Boston Dynamics finally built Atlas, their humanoid, something shifted. Despite being harder to control, despite being less efficient, Atlas could navigate human spaces in ways the quadrupeds couldn’t. It could climb ladders. It could open doors with human handles. It could use human tools.
The startups that emerged in the 2020s made the humanoid bet explicitly. Figure, 1X, Sanctuary, Agility—each was building humanoid robots, betting that the advantages of human compatibility outweighed the engineering challenges.
Tesla Enters the Game
On August 19, 2021, at Tesla’s AI Day, Elon Musk announced that the company would build a humanoid robot.
The presentation was, by any reasonable standard, absurd. Tesla had no robotics experience. Their demonstration consisted of a human in a robot costume dancing on stage. Musk’s timeline—a prototype within a year—seemed impossible.
The robotics community was skeptical. Building humanoid robots was hard. It had taken Boston Dynamics thirty years to develop their capabilities. Who was Tesla to think they could catch up?
But Tesla had something the robotics companies lacked: manufacturing at scale.
Tesla’s electric vehicles used powerful, efficient motors—exactly the kind of actuators that humanoid robots needed. Tesla’s battery technology was world-class. Tesla’s factories were optimized for building complex electromechanical systems in high volumes. And Tesla had assembled a team experienced in the AI systems required for autonomous operation.
The theory was that much of the technology would transfer. A robot actuator isn’t fundamentally different from an EV motor, scaled down. A robot’s power management isn’t fundamentally different from a car’s battery system. The manufacturing processes that produced millions of vehicles could, with modifications, produce robots.
Fourteen months after the dancing costume, Tesla demonstrated a working prototype. Optimus walked slowly across a stage, waved to the audience, and performed simple manipulation tasks. It was primitive compared to Atlas—no backflips, no parkour. But it existed, and it had been built with speed and efficiency that the traditional robotics industry couldn’t match.
More importantly, Tesla was thinking about cost from the beginning. Musk talked about a target price of $20,000—roughly the cost of a cheap car. Traditional robotics companies had never operated with such constraints. Their robots cost whatever they cost, and the market would have to adapt.
The Manufacturing Thesis
The importance of manufacturing capability is easy to underestimate.
Academic robotics focused on novelty. A robot that could do something no robot had done before was a publishable result, a conference paper, a line on a CV. Whether that robot could be built reliably, at scale, at reasonable cost, was someone else’s problem—or no one’s problem at all.
But for robots to have real-world impact, they need to be manufactured in volumes that matter. A million-dollar robot that works perfectly is less valuable than a ten-thousand-dollar robot that works adequately, because the cheaper robot can actually be deployed. It can generate revenue. It can gather data. It can improve.
This was the lesson of Unitree’s success. Their robots weren’t as capable as Boston Dynamics’. But they were capable enough for many applications, and cheap enough that people actually bought them. Market presence created feedback loops: more users meant more use cases, more revenue for development, more data to improve the product.
In consumer electronics, this dynamic was well understood. The first smartphones weren’t as good as dedicated cameras, GPS devices, or music players. But they were good enough, and they improved rapidly as manufacturing scaled. Within years, the integrated device had replaced the specialized ones.
The manufacturing thesis suggested that the same dynamic might play out in robotics. The companies that could scale production fastest and drive down costs most aggressively might win—even if their initial products were less sophisticated than the research leaders.
This was why Tesla’s entry mattered. This was why Chinese companies like Unitree, Xiaomi, and BYD commanded attention. They had experience building complex products at scale, driving down costs, iterating quickly. They had supply chains, factory capacity, manufacturing expertise.
The question was whether robots would follow the path of smartphones and electric vehicles—where manufacturing capability ultimately mattered more than research leadership.
Hardware Ready
By the early 2020s, the components for capable robots existed at reasonable prices.
Motors were strong enough to power dynamic movement and cheap enough to use in commercial products. Sensors were precise enough to enable dexterous manipulation and small enough to embed throughout a robot’s body. Batteries could sustain hours of operation. Manufacturing techniques could produce complex robots at scale.
None of this meant the problems were solved. Humanoid robots still cost tens of thousands of dollars at minimum. Battery life was still measured in hours, not days. Actuators still wore out under heavy use. The gap between laboratory demonstrations and reliable commercial products remained wide.
But for the first time, hardware was no longer the fundamental barrier.
For sixty years, robotics had been constrained by what was physically possible to build. Motors weren’t good enough. Batteries didn’t last long enough. Sensors weren’t precise enough. Manufacturing couldn’t achieve the necessary tolerances at reasonable cost.
Now, piece by piece, those constraints were lifting. The body was ready.
What was missing was the mind. Not just the perception that deep learning had provided, or the motor skills that reinforcement learning had enabled, but something deeper—the ability to understand what humans actually wanted, to make sense of ambiguous instructions, to exercise the kind of common sense that humans take for granted.
Hardware had caught up. Now software needed to take the next leap.
Notes & Further Reading
On Unitree and Wang Xingxing: Coverage in IEEE Spectrum and Chinese tech media (36Kr, LatePost) documents Unitree’s rise. Wang has given several interviews explaining the company’s manufacturing-focused strategy.
On drone motors and the supply chain: Chris Anderson’s work on the maker movement and drone development provides context. The DJI story is covered in various business journalism, though Frank Wang rarely gives interviews.
On tactile sensing: Dahiya et al.’s “Tactile Sensing—From Humans to Humanoids” (2010) provides comprehensive background. For recent developments, work from the SynTouch and GelSight groups demonstrates the state of the art.
On battery technology: The Argonne National Laboratory maintains excellent resources on battery chemistry and cost trends. BloombergNEF’s annual battery price surveys document the dramatic cost declines.
On the humanoid debate: Rodney Brooks has written skeptically about humanoid designs in numerous essays. The counterargument is implicit in the strategies of Figure, Tesla, and other humanoid-focused companies.
On Tesla’s Optimus: Tesla’s AI Day presentations (2021, 2022, 2023) provide primary sources. Third-party analyses from IEEE Spectrum and robotics researchers offer critical perspectives. Ashlee Vance’s biography of Musk provides context on Tesla’s approach to new markets.
On manufacturing and robotics: The literature on technology diffusion and manufacturing learning curves provides theoretical framework. For historical parallels, the development of the smartphone industry is instructive—see various case studies on Apple, Samsung, and the Chinese supply chain.