Tesla's Optimus
Elon Musk’s gamble: is robotics a manufacturing problem or an AI problem?
Chapter 8 of A Brief History of Embodied Intelligence
"We iterate every week. That's the Musk way: don't aim for perfect launches, aim for rapid iteration." — Tesla engineer
On August 19, 2021, at Tesla’s AI Day, Elon Musk announced that the company would build a humanoid robot.
The presentation was, by any reasonable measure, absurd. A person in a spandex robot costume walked onto stage and danced. Musk stood beside this “demonstration” and promised that a working prototype would appear within a year. The audience laughed, though it wasn’t entirely clear if they were laughing with Musk or at him.
The robotics community was dismissive. Building humanoid robots was hard. Boston Dynamics had spent thirty years and hundreds of millions of dollars developing their capabilities. Their Atlas robot could do backflips, parkour, and dance routines that seemed to defy physics. Tesla had no robotics experience. They had a person in a costume.
“This is not how serious robotics is done,” Carl Berry, a lecturer in robotics engineering, told a journalist. “You don’t announce a humanoid robot program with a dancer in a bodysuit.”
Fourteen months later, at AI Day 2022, Tesla unveiled an actual robot.
It was underwhelming. The prototype, internally nicknamed “Bumblebee,” walked slowly across the stage, waved to the audience, and performed a few simple movements. It stood five foot eight and weighed 160 pounds, with 28 actuators distributed across its body and hands so basic that their fingers couldn’t even spread apart. It couldn’t match anything Atlas could do. It looked clumsy, tentative, fragile. Social media erupted with mockery. “Ten years behind Boston Dynamics,” one viral post declared. “Is this the best Tesla can do?”
But there was something the critics missed.
The Speed of Iteration
The man responsible for turning mockery into progress was Milan Kovac, a quiet engineer who had spent nearly a decade at Tesla working on Autopilot before being tapped to lead the Optimus program in early 2022. Kovac wasn’t a roboticist by training. He was a systems engineer who understood how Tesla built things: fast, iteratively, with relentless pressure from above.
A little over a year after that embarrassing debut, at a December 2023 unveiling, Tesla showed a different robot.
The new Optimus Gen 2 walked smoothly, thirty percent faster than its predecessor. It sorted objects by color. It performed yoga poses that required balance and coordination. It picked up eggs without crushing them, using hands that now had eleven degrees of freedom and tactile sensors on every fingertip. The robot was ten kilograms lighter, with every off-the-shelf component replaced by Tesla-designed actuators and sensors. It had an articulated neck, and its feet, now shaped like human feet, carried force-torque sensors.
The improvement was striking, not because the robot had become world-class, but because of how quickly it had changed. In twelve months, Optimus had gone from barely walking to performing useful tasks. The rate of progress, not the absolute capability, was the story.
“2023 has been awesome for Optimus,” Kovac wrote in a year-end post. “We’ve improved our locomotion stack, frequently walking off-gantry without falls and with a faster, increasingly more human-like walking gait.” He credited a dedicated team, a flat operating structure, and years of technological groundwork from Tesla’s automotive work. He also noted something the outside world hadn’t seen: the team had designed, trained, and deployed some of the first end-to-end neural networks for humanoid robots ever demonstrated, networks that could autonomously coordinate the torso, arms, and fingers simultaneously.
This was Musk’s signature method: rapid iteration. Don’t wait until something is perfect to show it. Ship early, fail publicly, learn fast, iterate faster. The approach had worked for Tesla’s electric vehicles, which had improved dramatically over successive generations. The first Model S had range anxiety and build quality issues. The current Model 3 was one of the best-selling cars in the world.
Traditional robotics companies spent years in quiet development before revealing a polished product. Boston Dynamics had worked for decades before their viral videos appeared. Their approach prioritized capability over speed. Make sure it works before you show it.
Tesla inverted this. Show it early, even if it’s embarrassing. Let the iteration happen in public view. Accept the mockery as the price of speed.
The question was whether this approach could work for something as complex as a general-purpose humanoid robot.
The Manufacturing Advantage
Tesla’s real edge wasn’t in AI or robotics research. It was in manufacturing.
Building one robot is an engineering challenge. Building a million robots is a manufacturing challenge. These are different problems, requiring different skills, and Tesla had spent a decade mastering the second.
Consider what Tesla had built for its electric vehicles. The Gigafactories in Nevada, Shanghai, Berlin, and Texas were among the most advanced manufacturing facilities in the world. Tesla had pioneered techniques like the Giga Press, massive die-casting machines that produced car body components in single pieces rather than welding together dozens of parts. They had developed integrated battery pack designs that simplified assembly. They had optimized every step of production to reduce cost and increase throughput.
All of this was directly applicable to robots.
A humanoid robot, at a mechanical level, is a collection of motors, sensors, batteries, and structural components, the same categories of parts that go into electric vehicles. Tesla already manufactured motors by the millions. They already produced battery packs at enormous scale. They already had supply chains for precision components and the factories to assemble them. As Musk put it: “It’s just a robot with arms and legs instead of a robot with wheels. Everything we’ve developed for our cars, the batteries, power electronics, advanced motors, gearboxes, the software, AI inference computer, it all actually applies to a humanoid robot.”
This thesis, that robotics was primarily a manufacturing challenge, was Tesla’s central bet. The research labs might build more capable robots, but Tesla could build cheaper ones, faster, at scale. And in consumer markets, scale and cost usually won.
The FSD Connection
Tesla had another asset: years of work on autonomous driving.
FSD, Full Self-Driving, Tesla’s autonomous driving system, had required the company to build three things that were directly transferable to robotics. First, an end-to-end vision-to-action architecture. By 2024, FSD had evolved from a patchwork of separate modules into a single neural network that went straight from pixels to control. That architecture (see the world, decide what to do, act) was exactly what a robot needed. Second, an engineering culture organized around training large neural networks on massive real-world datasets. Third, and perhaps most important, people: hundreds of engineers who understood how to make AI work in the physical world, not just in simulations.
What didn’t transfer was the control problem itself. Cars operate on two-dimensional surfaces. They turn left, right, speed up, slow down. The state space is constrained. Robots operate in three dimensions, manipulating objects with hands that have dozens of degrees of freedom. The complexity is orders of magnitude higher.
“Driving is hard, but it’s a narrow kind of hard,” one AI researcher observed. “You’re always doing roughly the same thing, controlling a vehicle on a road. Manipulation is open-ended. A robot might need to fold laundry one minute and fix a broken pipe the next. The generalization required is completely different.”
This distinction mattered less than it first appeared. When Ashish Kumar joined the Optimus AI team in mid-2023, arriving from a research fellowship at Microsoft, his mandate was precisely to bridge this gap. Kumar and his team went all-in on scalable methods, replacing the classical robotics software stack with reinforcement learning and training the robot’s dexterity by having it learn from videos of human hands performing tasks. The approach was directly inspired by what Tesla’s FSD team had learned: don’t hand-code rules, let the neural network learn from data.
FSD had already unified vision and action for driving. Cameras see the road, neural networks decide how to steer. With VLA architectures emerging (Chapter 6), a further step was becoming possible: adding language to that loop, so a robot could not only see and act but understand spoken instructions. Tesla’s FSD team wasn’t starting from scratch on robot intelligence. They were extending a vision-to-action foundation they’d already built.
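The loop described above (cameras in, control out, with language folded in) can be sketched in a few lines. This is a toy illustration of the data flow only; the function names, the three-number “features,” and the coefficients are invented for the example and bear no relation to Tesla’s actual networks:

```python
# Toy sketch of a vision-language-action control loop. Every stage here is
# a stub standing in for a learned network; only the data flow is real:
# pixels + instruction -> fused features -> action.
from dataclasses import dataclass

@dataclass
class Action:
    joint_targets: list[float]  # hypothetical joint-angle targets

def encode_image(pixels: list[float]) -> list[float]:
    # Stand-in for a vision backbone: compress pixels to a feature vector.
    mean = sum(pixels) / len(pixels)
    return [mean, max(pixels), min(pixels)]

def encode_instruction(text: str) -> list[float]:
    # Stand-in for a language encoder: crude text statistics as a vector.
    return [float(len(text)), float(text.count(" ") + 1), 0.0]

def policy(image_feat: list[float], text_feat: list[float]) -> Action:
    # Stand-in for the action head: fuse features, emit joint targets.
    fused = [a + b for a, b in zip(image_feat, text_feat)]
    return Action(joint_targets=[x * 0.01 for x in fused])

def step(pixels: list[float], instruction: str) -> Action:
    # One tick of the loop: see the world, read the command, act.
    return policy(encode_image(pixels), encode_instruction(instruction))

act = step([0.1, 0.5, 0.9], "pick up the cell")
```

In a real system each stub would be a large neural network and the action head would run at a fixed control frequency; the point here is only the shape of the loop, which is the same shape FSD had already built for driving.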
Being Your Own Customer
In the summer of 2024, on the factory floor at Tesla’s Fremont plant, a scene played out that would have been unthinkable two years earlier.
An Optimus Gen 2 robot stood at a workstation in the battery cell area, its cameras scanning trays of lithium-ion cells. It reached down, picked up a cell, inspected it, and placed it in the correct sorting bin, then reached for the next. Nearby, human workers performed the same task at adjacent stations, occasionally glancing at the robot with expressions that mixed curiosity and wariness. The robot moved more slowly than the humans. It paused sometimes, recalibrating. But it didn’t take breaks, didn’t get bored, and every movement it made generated training data that would make tomorrow’s version slightly better.
This was Tesla’s strategy for Optimus deployment: they would be their own first customer.
Instead of selling robots to external buyers, Tesla deployed Optimus in their own factories. The robots worked alongside human employees, performing tasks like sorting battery cells, moving parts between stations, and handling repetitive assembly steps. The numbers were small, dozens of robots, not thousands, but the deployment was real.
The approach had several advantages. Tesla controlled the factory environment, which meant they could modify the workspace to suit the robot rather than forcing the robot to adapt to arbitrary conditions. They could monitor performance closely and iterate quickly when problems emerged. They didn’t need to make the robot good enough to satisfy external customers. They just needed to make it useful enough to justify continued development.
Most importantly, the strategy created a data flywheel. Every hour an Optimus worked in a Tesla factory generated data: what it saw, what it did, what went wrong. This data could train better models, which would make the robot more capable, which would allow it to take on more tasks, which would generate more data. The same virtuous cycle that had improved FSD over time could improve Optimus.
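The flywheel is easy to state and easy to simulate. The sketch below is a toy model with invented coefficients, not Tesla data; it exists only to show how deployed hours and capability feed each other, with diminishing returns folded in:

```python
# Toy simulation of the data flywheel: hours worked -> data collected ->
# capability improved -> more hours worked. All numbers are illustrative.

def flywheel(robots: int, months: int, capability: float = 0.2) -> float:
    """Return fleet capability (toy 0..1 scale) after `months` of deployment."""
    data = 0.0
    for _ in range(months):
        # Hours worked this month scale with how capable the fleet is:
        # a more capable robot is trusted with more tasks.
        hours = robots * 200 * capability
        data += hours
        # Capability grows with accumulated data, with strongly diminishing
        # returns (fourth root), and saturates at 1.0.
        capability = min(1.0, 0.2 + 0.02 * data ** 0.25)
    return capability
```

Comparing `flywheel(100, 12)` with `flywheel(100, 1)` shows the compounding: the same fleet ends the year more capable than it started, and each month’s gain depends on the hours already banked.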
“We don’t need to convince anyone to buy these robots,” Musk explained. “We’ll use them ourselves. If they’re good enough, they’ll prove it. If they’re not, we’ll keep iterating until they are.”
The Million-Robot Vision
Musk’s stated ambition went far beyond factory deployment.
In various presentations and interviews, he projected that Tesla would eventually produce millions of Optimus units per year, potentially more robots than cars. The target price was $20,000 to $30,000, roughly the cost of a cheap vehicle. At that scale and price, Optimus could become Tesla’s largest business, exceeding even electric vehicles in revenue.
The arithmetic was staggering. A million robots per year at $25,000 each would be $25 billion in annual revenue, and Musk was talking about eventually producing ten million per year. In January 2026, Tesla announced it would discontinue Model S and Model X production at Fremont to convert the factory lines for Optimus, claiming they could produce up to one million robots on the same floor space that had built 100,000 cars.
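The arithmetic above, written out with Musk’s projected figures rather than actuals:

```python
# Projected figures from the text, not actuals: a worked revenue check.
units_per_year = 1_000_000
price_per_unit = 25_000  # midpoint of the stated $20,000-$30,000 target
annual_revenue = units_per_year * price_per_unit  # $25 billion
# Scaling the same arithmetic to ten million units gives $250 billion.
```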
“I think Optimus will be the most valuable product ever made,” Musk claimed. “It has the potential to transform civilization.”
The claims were ambitious, and Musk had a history of missing timelines. Full Self-Driving had been promised as imminent for years. The Cybertruck had been announced in 2019 and didn’t reach volume production until 2024. SpaceX was supposed to put astronauts in space by 2014; it happened in 2020. But there was a pattern that the skeptics tended to overlook: Musk was almost always late, but he almost always delivered. The timelines were wrong. The direction was usually right.
What the Skeptics Got Right
But the skeptics weren’t all wrong.
The most fundamental objection came from Rodney Brooks, the veteran MIT roboticist who had co-founded iRobot and spent decades at the frontier of the field. In a January 2026 essay, Brooks was blunt: the idea that humanoid robots would soon step in and perform human manual labor was, in his words, “pure fantasy thinking” on any timeline measured in less than decades.
Brooks’s critique targeted what he saw as the field’s deepest unsolved problem: hands. Not the mechanical hands themselves. Tesla’s Gen 3 hands, with 22 degrees of freedom, were genuinely impressive engineering. The problem was making those hands intelligent. Teaching a robot to sort battery cells in a structured factory bin was one thing. Teaching it to reach into a cluttered kitchen drawer, feel around for a bottle opener, and extract it without knocking everything else onto the floor was an entirely different class of problem. Vision alone, Brooks argued, couldn’t capture the intricate coordination of sight, touch, and force control that human hands perform unconsciously.
This was not an abstract concern. At Tesla’s “We, Robot” event in October 2024, Optimus robots had mingled with guests, served drinks, and played games. The demonstrations were impressive, until Bloomberg reported that the robots were primarily teleoperated, with humans controlling them remotely. Tesla hadn’t disclosed this. The gap between what Optimus could do autonomously and what it appeared to do in carefully staged demonstrations was significant, and the lack of transparency about that gap eroded trust.
The manipulation problem pointed to a deeper question about Tesla’s approach: could rapid iteration solve problems that might require fundamental research breakthroughs? Building cars, however complex, involved manufacturing a well-understood product. The physics of a vehicle on a road were constrained. But general-purpose manipulation in unstructured environments (homes, hospitals, construction sites) was an open research problem where the solution wasn’t simply a matter of engineering execution.
There were also practical constraints the vision glossed over. Battery life limited continuous operation to roughly four to eight hours, depending on task intensity. The robot’s manipulation precision, around five millimeters of repeatability, was adequate for sorting battery cells but nowhere near the accuracy required for precision assembly, where industrial robots achieved repeatability under a tenth of a millimeter. Safety certification for robots working alongside humans in unstructured environments didn’t yet have clear regulatory frameworks, and the consequences of failure with a bipedal machine weighing well over a hundred pounds were more severe than with a stationary industrial arm.
Cornell roboticist Guy Hoffman captured the structural concern: “The popularity of humanoids is driven more by sci-fi appeal than engineering rationale. Bipedal robots are dynamically unstable, and such a one-size-fits-all design is rarely the most efficient solution for real tasks.”
None of these objections were fatal. But they established that Tesla’s path from factory deployment to general-purpose home robot was not simply a matter of scaling up manufacturing and iterating faster. It required solving technical problems that the entire robotics community had struggled with for decades. The manufacturing thesis was necessary. Whether it was sufficient remained an open question.
The Forcing Function
Whatever the outcome for Optimus itself, Tesla’s entry had already changed the robotics industry.
Before Optimus, humanoid robots were primarily a research curiosity. Companies like Boston Dynamics produced impressive demonstrations but had never achieved commercial scale. The field moved slowly, funded mainly by research grants and patient corporate R&D budgets.
Tesla’s announcement injected urgency. Figure AI raised over $2 billion in venture funding. Agility Robotics opened a 70,000-square-foot factory in Oregon, the first dedicated humanoid robot manufacturing facility ever built. Chinese companies like Unitree and Agibot accelerated their timelines. Established companies increased their investments. Investors who had ignored robotics for years suddenly wanted exposure. The humanoid robot market went from a sleepy backwater to one of the most closely watched sectors in technology.
This was perhaps Tesla’s most underappreciated impact: not the robot itself, but the competitive pressure it created. The entire industry moved faster because Tesla had entered it. Startups that might have spent five years in stealth mode rushed to demonstrate their own prototypes. Researchers who had struggled to fund humanoid robotics labs found corporate partners eager to write checks. The field’s timeline compressed by years.
And if Tesla’s trajectory continued, if manufacturing costs fell as the company projected, if the AI improved as the data flywheel suggested, if the VLA breakthroughs from Chapter 6 made robot intelligence suddenly more tractable, then the implications extended far beyond one company. Cheap, capable robots would find uses that expensive robots couldn’t. A $20,000 robot might not be worth buying for a single task, but it might be worth buying if it could handle many tasks. The lower the price, the broader the adoption, and no one could drive prices down like Tesla. If robots could genuinely replace human labor at scale, societies would need to adapt, and conversations that had been theoretical for decades would become urgent.
The Bet
As of early 2026, the full vision remained unrealized. Optimus was working in Tesla factories, but in limited numbers and limited roles. Musk had targeted 5,000 to 10,000 units for 2025; reports suggested only a few hundred had been built. Years of development lay ahead, and the program had already lost key leaders. Kovac departed in June 2025, followed months later by Kumar, who left for Meta.
But the trajectory was real.
A company with no robotics heritage had gone from announcement to factory deployment in three years. The Gen 2 robot, with its 28 degrees of body freedom, 11-degree-of-freedom tactile hands, Tesla-designed actuators, and 2.3 kilowatt-hour battery, was a serious piece of engineering, not world-class in any single dimension, but competent across all of them, and improving quarterly. Each iteration closed gaps that critics had called insurmountable.
The traditional robotics industry had operated on a different timeline: decades of patient research before commercialization. Tesla had shown that another path existed. The evidence so far was mixed: impressive speed of hardware development, genuine factory deployment, but also teleoperation controversies, missed production targets, and unsolved fundamental challenges in manipulation and autonomy.
The honest assessment was that both the believers and the skeptics had strong cases. The manufacturing thesis was powerful and Tesla’s execution speed was real. The technical challenges of general-purpose manipulation were also real, and might not yield to iteration speed alone. The outcome probably depended on whether the AI breakthroughs happening across the industry, the VLA architectures, the foundation models, the simulation-to-reality transfers, would close the gap between what Optimus could do in a factory and what it would need to do in a home.
Musk had been wrong about when. He had rarely been wrong about whether. But “rarely” wasn’t “never,” and the distance between sorting battery cells and navigating a stranger’s home was longer than any timeline slide made it look.
The manufacturing thesis would be tested in the years ahead. And if the pattern of Tesla’s history held (ambitious promises, missed deadlines, eventual delivery that exceeded what the skeptics thought possible), then the most transformative product in the company’s history might be not a car, but a robot that could build one. If it didn’t hold, Tesla would still have accomplished something valuable: proving that the world’s oldest engineering dream was worth taking seriously again, and forcing an entire industry to try.
Notes & Further Reading
On Tesla’s AI Day presentations: The 2021, 2022, 2023, and 2024 AI Day events are available on YouTube. Watching them in sequence provides a clear view of Optimus’s evolution from concept to prototype to factory deployment.
On Milan Kovac and the Optimus team: Kovac’s December 2023 year-end post on X provides a rare inside view of the program’s progress. His June 2025 departure, and Ashish Kumar’s subsequent move to Meta, are covered by TechCrunch and Fortune.
On Tesla’s manufacturing approach: Several books cover Tesla’s production innovations, including analyses of the Giga Press and integrated die-casting techniques. Business journalism in publications like Reuters and Bloomberg has documented the company’s manufacturing culture.
On Full Self-Driving technology: Tesla’s approach to autonomous driving is controversial but extensively documented. Academic critiques and industry analyses provide balanced perspectives on what FSD has achieved and what challenges remain.
On skeptical perspectives: Rodney Brooks’s January 2026 blog post provides the most articulate case against near-term humanoid robot utility. IEEE Spectrum’s expert roundtable on Optimus (updated April 2024) collects diverse perspectives from across the robotics community. The October 2024 “We, Robot” teleoperation controversy is documented by Bloomberg.
On the manufacturing thesis for robotics: The argument that robotics is primarily a manufacturing challenge rather than a research challenge is debated in industry publications. For skeptical views, see analyses from roboticists about the unsolved challenges of dexterous manipulation and generalization in unstructured environments.
On factory deployment: Tesla’s claims about Optimus deployment in their facilities are presented in quarterly earnings calls and official statements. Independent verification is limited, but reports indicate battery cell sorting, parts handling, and quality inspection tasks at Fremont and Austin facilities.
On economic implications of robotics: Literature on automation and employment provides context for what widespread robot deployment might mean. Economists disagree on outcomes, but the debate is well documented.


