Robots from Sci-Fi: Tears in Rain
Roy Batty knew something about intelligence that most AI researchers are only now beginning to understand.
It is raining on a Los Angeles rooftop in 2019. A man is hanging from a ledge by his fingertips, seconds from falling to his death. Above him, the being he was sent to kill looks down. Roy Batty, Nexus-6 replicant, combat model, four-year lifespan, reaches out and pulls Rick Deckard to safety.
Then he sits down in the rain and begins to die.
What he says in his final seconds has become the most quoted monologue in science fiction cinema. Attack ships on fire off the shoulder of Orion. C-beams glittering in the dark near the Tannhauser Gate. Visions no one else saw, experiences no one else had. Forty-two words, partly improvised by Rutger Hauer the night before filming, replacing a longer scripted speech he felt was overwritten. The crew cried when they saw the take.
The speech is remembered as poetry. It is. But it is also an argument. Roy Batty, in the last moments of his life, is making a claim about the nature of knowledge: that some things can only be known by living through them, in a body, in a place, in a moment. And when the body dies, the knowledge dies with it. It cannot be copied. It cannot be uploaded. It cannot be transferred. It dissolves, like tears in rain.
In 1982, this was philosophy. In 2026, it is the central unresolved question in artificial intelligence: does genuine understanding require a body?
The Replicant Problem
Ridley Scott’s Blade Runner, released in 1982 and based on Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep?, is set in a version of 2019 Los Angeles where bioengineered humanoids called replicants perform dangerous labor in off-world colonies. They are stronger and faster than the humans who made them, and at least as intelligent. They are also property. They have a built-in four-year lifespan, a kill switch designed to prevent them from developing enough experience to become uncontrollable.
Roy Batty is the leader of a small group of Nexus-6 replicants who have escaped from an off-world colony and returned to Earth illegally. They are looking for their creator, Eldon Tyrell, head of the Tyrell Corporation, because they want more life. Tyrell tells Roy it cannot be done. The lifespan is fixed at the genetic level. Roy kills him.
Rick Deckard is a “blade runner,” a police officer whose job is to find and “retire” rogue replicants. He is given four targets. Over the course of the film, he kills two, a replicant named Rachael saves his life by killing a third, and he is nearly killed by Roy in the climactic rooftop chase. Then Roy saves his life and dies.
The question the film circles without answering: are the replicants conscious? They are clearly alive. They bleed, age, die. But do they experience the world the way humans do, or do they merely simulate the appearance of experience? The film’s human characters are certain they know the answer. Deckard has a test for it, the Voight-Kampff machine, which measures involuntary emotional responses to disturbing questions. Replicants are supposed to fail. But by the end of the film, the audience is no longer sure the test is measuring what it claims to measure. And Roy’s final speech makes the question irrelevant by replacing it with a better one: not “can machines feel?” but “what is lost when a feeling machine dies?”
The Knowledge That Lives in the Body
Roy’s monologue is not about emotion. It is about experience. He does not say “I felt things you people wouldn’t believe.” He says “I’ve seen things you people wouldn’t believe.” The verb matters.
Attack ships on fire off the shoulder of Orion. This is not a fact he read. It is something he witnessed, from a specific vantage point, at a specific moment, through his own eyes. C-beams glittering in the dark near the Tannhauser Gate. Again: a perception, not a proposition. The knowledge Roy claims is not the kind that can be written down, transmitted, or stored in a database. It is the kind that exists only because a particular body was in a particular place at a particular time, perceiving the world through its own sensory apparatus.
This is the distinction that the field of embodied cognition has spent decades trying to formalize. The core insight, developed by researchers in cognitive science, philosophy, and now AI, is that certain kinds of understanding are not separable from the physical system that produces them. You do not understand heat by reading the temperature. You understand heat by touching the stove. The knowledge is in the contact, not in the number.
In AI research, this debate is no longer academic. It is the fault line running through the entire field. On one side: large language models that have ingested the text of the internet and can discuss physics, chemistry, and engineering with apparent fluency. They can describe what fire looks like. They can explain the physics of combustion. They can generate poetry about flames. But they have never seen fire. They have no body to burn.
On the other side: the emerging field of embodied AI, which argues that genuine understanding of the physical world requires physical interaction with it. A robot that learns to grasp objects by trying and failing, adjusting its grip based on tactile feedback, builds a different kind of knowledge than a system trained on video demonstrations of grasping. The embodied robot does not just know what grasping looks like. It knows what grasping feels like: the resistance of the object, the slip of a surface, the moment of secure contact. That knowledge lives in the interaction, not in the description.
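To make the distinction concrete, here is a deliberately toy sketch in Python. Every name in it is invented for illustration, and it stands in for no real robot stack; the point is the information flow. The controller is never told the object's weight as a number. It discovers the required grip force only through simulated slip feedback, the way an embodied learner knows through contact rather than description.

```python
import random

# Toy illustration only (no real robot API): a gripper tightens its grip in
# response to simulated slip feedback, adjusting from contact rather than
# from a description of contact.

def read_slip_sensor(grip_force: float, required_force: float) -> bool:
    """Simulated tactile feedback: the object slips when the grip is too weak.

    A little noise stands in for the messiness of real contact.
    """
    noise = random.uniform(-0.05, 0.05)
    return grip_force + noise < required_force

def grasp(required_force: float, step: float = 0.1, max_force: float = 2.0) -> float:
    """Raise grip force until the slip signal stops: knowledge built by trying."""
    grip_force = 0.0
    while grip_force < max_force:
        if not read_slip_sensor(grip_force, required_force):
            return grip_force  # stable contact: the moment of a secure grasp
        grip_force += step     # the object slipped; tighten and try again
    raise RuntimeError("object too heavy for this gripper")

if __name__ == "__main__":
    random.seed(0)
    # The controller never reads required_force directly; it finds it by contact.
    print(f"settled at grip force {grasp(required_force=0.8):.2f}")
```

A system trained only on videos of grasping has, in effect, been told the number. This loop, toy as it is, can only ever learn it by squeezing.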
Roy Batty is the embodied cognition argument made flesh. His four years of lived experience, compressed and intense, produced knowledge that cannot survive his death because it was never abstracted away from his body in the first place. “All those moments will be lost in time.” Not because no one recorded them. Because they were the kind of moments that recording cannot capture.
The Four-Year Window
There is a second layer to the argument, one that maps onto a problem the AI research community is only beginning to confront: the relationship between lifespan and understanding.
The Tyrell Corporation designed replicants with a four-year lifespan specifically to prevent them from accumulating too much experience. The fear was not intelligence in the abstract. It was intelligence grounded in years of embodied interaction with the world. A replicant who has lived long enough develops something that begins to look like genuine autonomy: preferences, attachments, a sense of self built from accumulated experience rather than implanted memory. The kill switch is not about computing power. It is about embodied learning over time.
This maps directly onto a live debate in robotics. Current robot learning systems are typically trained in simulation, then deployed. The deployment is the test, not the education. But a growing body of research argues that the most capable embodied systems will be those that learn continuously from real-world interaction over extended periods, building world models from their own accumulated experience rather than from pre-packaged training data. The richer the history of interaction, the deeper the model of the world.
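The contrast can be sketched in a few lines of illustrative Python, with every name hypothetical and no real framework implied: one agent's world model is frozen at deployment, while the other's is a running function of its own accumulated history.

```python
# Schematic contrast (invented names, no real robotics library): a model frozen
# at deployment versus an agent that keeps revising a simple world model from
# every real-world interaction it experiences.

class FrozenPolicy:
    """Train-then-deploy: the estimate is fixed when training ends."""
    def __init__(self, friction_estimate: float):
        self.friction = friction_estimate  # learned in simulation, never revised

    def observe(self, measured_friction: float) -> None:
        pass  # deployment is the test, not the education

class ContinualAgent:
    """Lifelong learner: each interaction nudges the world model."""
    def __init__(self, friction_estimate: float, learning_rate: float = 0.1):
        self.friction = friction_estimate
        self.lr = learning_rate

    def observe(self, measured_friction: float) -> None:
        # Exponential moving average: accumulated experience reshapes the model.
        self.friction += self.lr * (measured_friction - self.friction)

if __name__ == "__main__":
    true_friction = 0.62           # the real world, unknown to both agents
    frozen = FrozenPolicy(0.40)    # both start from the same simulated guess
    learner = ContinualAgent(0.40)
    for _ in range(50):            # fifty real interactions
        frozen.observe(true_friction)
        learner.observe(true_friction)
    print(f"frozen: {frozen.friction:.2f}  learner: {learner.friction:.2f}")
```

After fifty interactions the frozen policy still carries its simulator's guess, while the continual agent has converged on the world it actually inhabits.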
Roy’s four-year limit is, in this reading, a deliberate constraint on how much embodied understanding a replicant is allowed to develop. Tyrell understood, intuitively if not formally, that a body accumulating experience over time becomes something qualitatively different from a body following instructions. The four-year lifespan is not a technical limitation. It is a safety measure against the emergence of genuine autonomy through lived experience.
And Roy’s rage, his desperate quest for “more life,” is not about survival in the abstract. It is about the right to continue learning. To continue being shaped by the world he moves through. To keep building the only kind of knowledge he values: the kind that lives in the body.
What Ridley Scott and Philip K. Dick Saw
Dick’s original novel, Do Androids Dream of Electric Sheep?, asked a different question than the film. Dick was interested in empathy as the dividing line between human and machine. His Voight-Kampff test measures empathic response, and the novel’s premise is that androids cannot feel empathy the way humans can. The novel is skeptical about this premise. Some humans in the book fail the empathy test. Some androids seem to pass. The line blurs.
Scott’s film shifts the question. The Voight-Kampff machine is still there, but the film is less interested in whether replicants can feel and more interested in what they do with their feelings. Roy is the film’s answer. He feels rage, grief, wonder, and, in his final moment, mercy. He saves the man who was sent to kill him, not because his programming tells him to, but because he has decided, in the last seconds of a life built from experience, that an act of grace matters more than an act of revenge.
Dick saw empathy as the test. Scott saw embodied experience as the test. Both were circling the same question from different angles: can you build a mind that truly understands the world, or only one that processes information about it?
The AI research community has inherited both versions of the question. The empathy version shows up in debates about whether LLMs can model human emotion, whether chatbots should be allowed to form emotional bonds with users, whether a system that generates compassionate-sounding text is doing anything that deserves the word “compassion.” The embodiment version shows up in the debate between those who believe intelligence can be achieved through language alone and those who insist it requires a body: sensors, actuators, physical contact with a physical world.
Roy’s speech does not resolve the debate. It dramatizes it. The knowledge he claims is the knowledge that the embodiment camp argues cannot be learned from text. “I’ve seen things you people wouldn’t believe.” Not “I’ve read about things.” Not “I’ve been told about things.” Seen. With eyes. In a body. In the rain.
The Bridge to Now
The embodiment debate is no longer a philosophical curiosity. It is a strategic question with billions of dollars behind it.
Google DeepMind’s Gemini Robotics program is training large-scale AI models that combine language understanding with physical manipulation, attempting to give robots both the reasoning of LLMs and the bodily awareness of systems that learn through touch and movement. The premise: language alone is not enough. You need a body.
The counter-argument comes from the scaling camp: that sufficiently large language models, trained on enough data describing the physical world, will develop internal representations that are functionally equivalent to embodied understanding. That you do not need to touch the stove if you have read ten million descriptions of what happens when people touch stoves.
Roy Batty would disagree. His claim, implicit in every word of his final speech, is that there is a category of knowledge that exists only in first-person experience, in the irreducible intersection of a body, a place, and a moment. No amount of description substitutes for the thing itself. And when the body is gone, that knowledge is gone. Not compressed. Not archived. Gone.
Whether Roy is right is the question the field will spend the next decade answering. But the question itself, the precise question, was asked on a rainy rooftop in 1982, by a dying replicant who had four years to learn what being alive felt like, and who discovered, at the end, that four years was both not enough and more than most people ever use.
All those moments will be lost in time.
Like tears in rain.
This is Robots from Sci-Fi, a series that explores the great robot characters of science fiction through the lens of frontier AI and robotics research. New episodes cover film, television, literature, anime, and games.