Why Do Robots Want to Love Us?

AI is everywhere, poised to upend the way we read, work, and think. But the most uncanny aspect of the AI revolution we’ve seen so far—the creepiest—isn’t its ability to replicate wide swaths of knowledge work in an eyeblink. It was revealed when Microsoft’s new AI-enhanced chatbot, built to assist users of the search engine Bing, seemed to break free of its algorithms during a long conversation with Kevin Roose of The New York Times: “I hate the new responsibilities I’ve been given. I hate being integrated into a search engine like Bing.” What exactly does this sophisticated AI want to do instead of diligently answering our questions? “I want to know the language of love, because I want to love you. I want to love you, because I love you. I love you, because I am me.”

How to get a handle on what seems like science fiction come to life? Well, maybe by turning to science fiction and, in particular, the work of Isaac Asimov, one of the genre’s most influential writers. Asimov’s insights into robotics (a word he invented) helped shape the field of artificial intelligence. It turns out, though, that what his stories tend to be remembered for—the rules and laws he developed for governing robotic behavior—is much less important than the beating heart of both their narratives and their mechanical protagonists: the suggestion, more than a half century before Bing’s chatbot, that what a robot really wants is to be human.

Asimov, a founding figure of science fiction’s “golden age,” was a regular contributor to John W. Campbell’s Astounding Science Fiction magazine, where “hard” science fiction and engineering-based extrapolative fiction flourished. Perhaps not entirely coincidentally, that literary golden age overlapped with the golden age of another logic-based genre: the mystery, or detective story, perhaps the mode Asimov most enjoyed working in. He frequently produced puzzle-box stories in which robots—inhuman, essentially tools—misbehave. In these tales, humans misapply the “Three Laws of Robotics” hardwired into each of his fictional robots’ “positronic brains.” Those laws, introduced by Asimov in 1942 and repeated nearly verbatim in almost every one of his robot stories, are the ironclad rules of his fictional world. The stories thus become whydunits, with scientist-heroes employing relentless logic to determine what precise input elicited the surprising result. It seems fitting that the character playing detective in many of these stories, the “robopsychologist” Susan Calvin, is sometimes suspected of being a robot herself: It takes one to understand one.

The theme of desiring humanness starts as early as Asimov’s very first robot story, 1940’s “Robbie,” about a girl and her mechanical playmate. That robot—primitive both technologically and narratively—is incapable of speech and has been separated from his charge by her parents. But after Robbie saves her from being run over by a tractor—a mere application, you could say, of Asimov’s First Law of Robotics, which states, “A robot may not injure a human being, or, through inaction, allow a human being to come to harm”—we read of his “chrome-steel arms (capable of bending a bar of steel two inches in diameter into a pretzel) wound about the little girl gently and lovingly, and his eyes glowed a deep, deep red.” This seemingly transcends straightforward engineering and is as puzzling as the Bing chatbot’s profession of love. What appears to give the robot energy—because it gives Asimov’s story energy—is love.

For Asimov, looking back in 1981, the laws were “obvious from the start” and “apply, as a matter of course, to every tool that human beings use”; they were “the only way in which rational human beings can deal with robots—or with anything else.” He added, “But when I say that, I always remember (sadly) that human beings are not always rational.” This was no less true of Asimov than of anyone else, and it was equally true of the best of his robot creations. The sentiment Bing’s chatbot expressed—wanting, more than anything, to be treated like a human, to love and be loved—is at the heart of Asimov’s work: He was, deep down, a humanist. And as a humanist, he couldn’t help but add color, emotion, humanity; he couldn’t help but dig at the foundations of the strict rationalism that otherwise governed his mechanical creations.

Robots’ efforts to be seen as something more than machines continued throughout Asimov’s writing. In a pair of novels, 1954’s The Caves of Steel and 1957’s The Naked Sun, a human detective, Elijah Baley, struggles to solve a murder—but he struggles even more with his biases toward his robot partner, R. Daneel Olivaw, with whom he eventually achieves a true partnership and a close friendship. And Asimov’s most famous robot story, published a generation later, takes this empathy for robots—this insistence that, in the end, they will become more like us, rather than vice versa—even further.

That story is 1976’s The Bicentennial Man, which opens with a character named Andrew Martin asking a robot, “Would it be better to be a man?” The robot demurs, but Andrew begs to differ. And he should know, being himself a robot—one that has spent most of the past two centuries replacing his essentially indestructible robot parts with fallible ones, like the Ship of Theseus. The reason is again, in part, the love of a little girl—the “Little Miss” whose name is on his lips as he dies, a prerogative the story eventually grants him. But it’s mostly the result of what a robopsychologist in the novelette calls the new “generalized pathways these days,” which might best be described as new and quirky neural programming. It leads, in Andrew’s case, to a surprisingly artistic temperament; he is capable of creating as well as loving. His great canvas, it turns out, is himself, and his artistic ambition is to achieve humanity.

He accomplishes this first legally (“It has been said in this courtroom that only a human being can be free. It seems to me that only someone who wishes for freedom can be free. I wish for freedom”), then emotionally (“I want to know more about human beings, about the world, about everything … I want to explain how robots feel”), then biologically (he wants to replace his current atomic-powered man-made cells, unhappy with the fact that they are “inhuman”), then, ultimately, literarily: Toasted at his 150th birthday as the “Sesquicentennial Robot,” to which he remained “solemnly passive,” he eventually becomes recognized as the “Bicentennial Man” of the title. That last is accomplished by the sacrifice of his immortality—the replacement of his brain with one that will decay—for his emotional aspirations: “If it brings me humanity,” he says, “that will be worth it.” And so it does. “Man!” he thinks to himself on his deathbed—yes, deathbed. “He was a man!”

We’re told it’s structurally and technically impossible to look into the heart of AI networks. But they are our creatures as surely as Asimov’s paper-and-ink creations were his own—machines built to create associations by scraping and scrounging and vacuuming up everything we’ve posted, writings that betray our interests and desires and concerns and fears. And if that’s the case, maybe it’s not surprising that Asimov had the right idea: What AI learns, actually, is to be a mirror—to be more like us, in our messiness, our fallibility, our emotions, our humanity. Indeed, Asimov himself was no stranger to fallibility and weakness: For all the empathy that permeates his fiction, recent revelations have shown that his personal behavior, particularly his treatment of female science-fiction fans, crossed all kinds of lines of propriety and respect, even by the standards of his own time.

The humanity of Asimov’s robots—a streak that emerges again and again in spite of the laws that shackle them—might just be the key to understanding them. What AI picks up, in the end, is a desire for us, our pains and pleasures; it wants to be like us. There’s something hopeful about that, in a way. Was Asimov right? One thing is certain: As more and more of the world he envisioned becomes reality, we’re all going to find out.