The Vindication of Ask Jeeves

The Atlantic

www.theatlantic.com/technology/archive/2023/03/ask-jeeves-chatgpt-bing-ai-chatbot-google-search/673275

It was a simpler time. A friend introduced us, pulling up a static yellow webpage using a shaky dial-up modem. A man stood forth, dressed in a dapper black pinstriped suit with a red-accented tie. He held one hand out, as if carrying an imaginary waiter’s tray. He looked regal and confident and eminently at my service. “Have a Question?” he beckoned. “Just type it in and click Ask!” And ask, I did. Over and over.

With his steady hand, Jeeves helped me make sense of the tangled mess of the early, pre-Google internet. He wasn’t perfect—plenty of context got lost between my inquiries and his responses. Still, my 11-year-old brain always delighted in the idea of a well-coiffed man chauffeuring me down the information superhighway. But things changed. Google arrived, with its clean design and almost magic ability to deliver exactly the answers I wanted. Jeeves and I grew apart. Eventually, in 2006, Ask Jeeves disappeared from the internet altogether and was replaced with the more generic Ask.com.

Many years later, it seems I owe Jeeves an apology: He had the right idea all along. Thanks to advances in artificial intelligence and the stunning popularity of generative-text tools such as ChatGPT, today’s search-engine giants are making huge bets on AI search chatbots. In February, Microsoft revealed its Bing chatbot, which has thrilled and frightened early users with its ability to scour the internet and answer questions (not always correctly) in convincingly human-sounding language. The same week, Google demoed Bard, the company’s forthcoming attempt at an AI-powered chat-search product. But for all the hype, when I stare at these new chatbots, I can’t help but see the faint reflection of my former besuited internet manservant. In a sense, Bing and Bard are finishing what Ask Jeeves started. What people want when they ask a question is for an all-knowing, machine-powered guide to confidently present them with the right answer in plain language, just as a reliable friend would.

[Read: AI search is a disaster]

With this in mind, I decided to go back to the source. More than a decade after parting ways, I found myself on the phone with one of the men behind the machine, getting as close to Asking Jeeves as is humanly possible. These days, Garrett Gruener, Ask Jeeves’s co-creator, is a venture capitalist in the Bay Area. He and his former business partner David Warthen eventually sold Ask Jeeves to Barry Diller and IAC for just under $2 billion. Still, I wondered if Gruener had been unsettled by Jeeves’s demise. Did he, like me, see the new chatbots as the final form of his original idea? Did he feel vindicated or haunted by the fact that his creation may have simply been born far too early?

The original conception for Jeeves, Gruener told me, was remarkably similar to what Microsoft and Google are trying to build today. As a student at UC San Diego in the mid-1970s, Gruener—a sci-fi aficionado—got an early glimpse of ARPANET, the pre-browser predecessor to the commercial internet, and fell in love. Just over a decade later, as the internet grew and the beginnings of the web came into view, Gruener realized that people would need a way to find things in the morass of semiconnected servers and networks. “It became clear that the web needed search but that mere mortals without computer-science degrees needed something easy, even conversational,” he said. Inspired by Eliza, the famous chatbot designed by MIT’s Joseph Weizenbaum, Gruener dreamed of a search engine that could converse with people using natural-language processing. Unfortunately, the technology wasn’t sophisticated enough for Gruener to create his ideal conversational search bot.

So Gruener and Warthen tried a work-around. Their code allowed a user to write a statement in English, which was then matched to a preprogrammed vector, which Gruener explained to me as “a canonical snapshot of answers to what the engine thought you were trying to say.” Essentially, they taught the machine to recognize certain words and provide broad categorical answers. “If you were looking for population stats for a country, the query would see all your words and associated variables and go, Well, this Boolean search seems close, so it’s probably this.” Jeeves would provide the answer, and then you could tell it whether that answer worked or not.
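
Stripped to its essentials, the approach Gruener describes is a scoring loop: compare the words of a query against the trigger words of each hand-built answer template, and serve the template with the most overlap. Here is a minimal sketch of that idea in Python; the templates, keywords, and canned answers are hypothetical stand-ins, not Ask Jeeves’s actual data or code.

```python
# A minimal sketch of keyword-to-template matching in the spirit of
# Ask Jeeves. Every template, keyword, and answer below is a
# hypothetical illustration, not the company's real data.

TEMPLATES = {
    "population": {
        "keywords": {"population", "people", "live", "country", "many"},
        "answer": "Here are population statistics for your question: {query}",
    },
    "weather": {
        "keywords": {"weather", "temperature", "rain", "forecast"},
        "answer": "Here is the forecast for your question: {query}",
    },
}

def ask(query: str) -> str:
    """Serve the canned answer whose trigger words best overlap the query."""
    words = set(query.lower().strip("?!. ").split())
    best_name, best_score = None, 0
    for name, template in TEMPLATES.items():
        score = len(words & template["keywords"])  # crude Boolean-style overlap
        if score > best_score:
            best_name, best_score = name, score
    if best_name is None:
        return "Could you rephrase the question?"
    return TEMPLATES[best_name]["answer"].format(query=query)

print(ask("How many people live in France?"))  # matches the population template
```

In production, Ask Jeeves leaned on a large library of such question templates maintained by human editors; the user’s follow-up clicks told the system which guess had landed.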

“We tried to discern what people were trying to say in search, but without actually doing the natural-recognition part of it,” Gruener said. After some brainstorming, they realized that they were essentially building a butler. One of Gruener’s friends mocked up a drawing of the friendly servant, and Jeeves was born.

Pre-Google, Ask Jeeves exploded in popularity, largely because it allowed people to talk with their search engine as they would with a person. Within just two years, the site was handling more than 1 million queries a day. A massive Jeeves balloon floated down Central Park West during the 1999 Macy’s Thanksgiving Day Parade. But not long after the butler achieved buoyancy, the site started to lose ground in the search wars. Google’s web-crawling superiority led to hard times for Ask Jeeves. “None of us were very concerned about monetization in the beginning,” Gruener told me. “Everyone in search early on realized, if you got this right, you’d essentially be in the position of being the oracle. If you could be the company to go to in order to ask questions online, you’re going to be paid handsomely.”

[Read: The open secret of Google Search]

Gruener isn’t bitter about losing out to Google. “If anything, I’m really proud of our Jeeves,” he told me. Listening to Gruener explain the history, it’s not hard to see why. In the mid-2000s, Google began to pivot search away from offering only 10 blue links, adding images, news, maps, and shopping results. Eventually, the company began to fulfill part of the Jeeves promise by answering questions directly in answer boxes. One way to look at the evolution of big search engines in the 21st century is that the companies behind them have all been trying to create their own intuitive search butlers. Gruener told me that Ask Jeeves’s master plan had two phases, though the company was sold before it could tackle the second. Gruener had hoped that, eventually, Jeeves could act as a digital concierge for users. He’d hoped to employ the same vector technology to field people’s questions, let Jeeves make educated guesses, and help users complete all kinds of tasks. “If you look at Amazon’s Alexa, they’re essentially using the same approach we designed for Jeeves, just with voice,” Gruener said. Yesterday’s butler has been rebranded as today’s virtual assistant, and the technology is ubiquitous in many of our home devices and phones. “We were right for the consumer back then, and maybe we’d be right now. But at some point the consumer evolved,” he said.

I’ve been fixated on what might have been had Gruener’s vision prevailed. We might all be Jeevesing about the internet for answers to our mundane questions. Perhaps our Jeevesmail inboxes would be overflowing and we’d be getting turn-by-turn directions from an Oxford-educated man with a stiff English accent. Perhaps we’d all be much better off.

Gruener told me about an encounter he’d had during the search wars with one of Google’s founders at a TED conference (he wouldn’t specify which of the two). “I told him that we’re going to learn an enormous amount about the people who are using our platforms, especially as they become more conversational. And I said that it was a potentially dangerous position,” he said. “But he didn’t seem very receptive to my concerns.”

Near the end of our call, I offered an apology for deserting Jeeves like everyone else did. Gruener just laughed. “I find this future fascinating and, if I’m honest, a little validating,” he said. “It’s like, ultimately, as the tech has come around, the big guys have come around to what we were trying to do.”

The Next Big Political Scandal Could Be Faked

The Atlantic

www.theatlantic.com/politics/archive/2023/03/politicians-ai-generated-voice-fake-clips/673270

Is the clip stupid or terrifying? I can’t decide. To be honest, it’s a bit of both.

“I just think I would love to get Ratatouille’d,” a familiar-sounding voice begins.

“Ratatouille’d?” asks another recognizable voice.

“Like, have a little guy up there,” the first voice replies. “You know, making me cook delicious meals.”

It sounds like Joe Rogan and Ben Shapiro, two of podcasting’s biggest, most recognizable voices, bantering over the potential real-world execution of the Pixar movie’s premise. A circular argument ensues. What constitutes “getting Ratatouille’d” in the first place? Do the rat’s powers extend beyond the kitchen?

[Read: Of gods and machines]

A friend recently sent me the audio of this mind-numbing exchange. I let out a belly laugh, then promptly texted it to several other people—including a guy who once sheepishly told me that he regularly listens to The Joe Rogan Experience.

“Is this real?” he texted back.

They’re AI voices, I told him.

“Whoa. That’s insane,” he said. “Politics is going to get wild.”

I haven’t stopped thinking about how right he is. The voices in that clip, while not perfect replicants of their subjects, are deeply convincing in an uncanny-valley sort of way. “Rogan” has real-world Joe Rogan’s familiar inflection, his half-stoned curiosity. “Shapiro,” for his part, is there with rapid-fire responses and his trademark scoff.

Last week, I reached out to Zach Silberberg, who created the clip using an online tool from the Silicon Valley start-up ElevenLabs. “Eleven brings the most compelling, rich and lifelike voices to creators and publishers seeking the ultimate tools for storytelling,” the firm’s website boasts. The word storytelling is doing a lot of work in that sentence. When does storytelling cross over into disinformation or propaganda?

I asked Silberberg if we could sit down in person to talk about the implications of his viral joke. Though he didn’t engineer the product, he seemed to have mastered it in a way few others had. Would bad actors soon follow his lead? Did he care? Was it his responsibility to care?

Silberberg is in his late 20s and works in television in New York City. On the morning of our meeting, he shuffled into a TriBeCa coffee shop in a tattered sweater with an upside-down Bart Simpson stitched on the front. He told me how he had been busy making other—in his words—“stupid” clips. In one, an AI version of President Joe Biden informs his fellow Americans that, after watching the 2011 Cameron Crowe flop We Bought a Zoo, he, Biden, also bought a zoo. In another, AI Biden says the reason he has yet to visit the site of the East Palestine, Ohio, train derailment is that he got lost on the island from Lost. While neither piece of audio features Biden stuttering or word-switching, as he often does when speaking in public, both clips have the distinct Biden cadence, those familiar rises and falls. The scripts, too, have an unmistakable Biden folksiness to them.

“The reason I think these are funny is because you know they’re fake,” Silberberg told me. He said the Rogan-Shapiro conversation took him roughly an hour and a half to produce—it was meant to be a joke, not some well-crafted attempt at tricking people. When I informed him that my Rogan-listening friend initially thought the Ratatouille clip was authentic, Silberberg freaked out: “No! God, no!” he said with a cringe. “That, to me, is fucked up.” He shook his head. “I’m trying to not fall into that, because I’m making it so outlandish,” he said. “I don’t ever want to create a thing that could be mistaken for real.” As with so much involving AI these past few months, it seemed to be too late already.

[Read: Is this the start of an AI takeover?]

What if, instead of a sitting president talking about how he regrets buying a zoo, a voice that sounded enough like Biden’s was “caught on tape” saying something much more nefarious? Any number of Big Lie talking points would instantly drive a news cycle. Imagine a convincing AI voice talking about ballot harvesting, or hacked voting machines; voters who are conspiracy-minded would be validated, while others might simply be confused. And what if the accused public figure—Biden, or anyone, for that matter—couldn’t immediately prove that a viral, potentially career-ending clip was fake?

One of the major political scandals of the past quarter century involved a sketchy recording of a disembodied voice. “When you’re a star, they let you do it,” future President Donald Trump proclaimed. (You know the rest.) That clip was real. Trump, being Trump, survived the scandal and went on to the White House.

But, given the arsenal of public-facing AI tools seizing the internet—including the voice generator that Silberberg and other shitposters have been playing around with—how easy would it be for a bad actor to create a piece of Access Hollywood–style audio in the run-up to the next election? And what if said clip were created with a TV writer’s touch? Five years ago, Jordan Peele went viral with an AI video of former President Barack Obama saying “Killmonger was right,” “Ben Carson is in the sunken place,” and “President Trump is a total and complete dipshit.” The voice was close, but not that close. And because it was a video, the strange mouth movements were a dead giveaway that the clip was fake. AI audio clips are potentially much more menacing because the audience has fewer context clues to work with. “It doesn’t take a lot, which is the scary thing,” Silberberg said.

He discovered that the AI seems to produce more convincing work when processing just a few words of dialogue at a time. The Rogan-Shapiro clip was successful because of the “Who’s on first?” back-and-forth aspect of it. He downloaded existing audio samples from each podcast host’s massive online archive—three from Shapiro, two from Rogan—uploaded them to ElevenLabs’ website, then input his own script. This is the point where most amateurs will likely fail in their trolling. For a clip to land, even a clear piece of satire, the subject’s diction has to be both believable and familiar. You need to nail the Biden-isms. The shorter the sentences, the less time the listener has to question the validity of the voice. Plus, Silberberg learned, the more you type, the more likely the AI voices are to string phrases together with flawed punctuation or other awkward vocal flourishes. Sticking to quick snippets makes it easier to retry certain lines of the script to perfect the specific inflection, rather than having to trudge through a whole paragraph of dialogue. But this is just where we are today, 21 months before the next federal elections. It’s going to get better, and scarier, very fast.
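
To make that snippet-by-snippet workflow concrete, here is a rough Python sketch that walks through a script one short line at a time and requests synthesis for each snippet on its own, which makes it cheap to regenerate any single line until the inflection lands. The endpoint, payload shape, and voice names are invented placeholders for illustration, not ElevenLabs’ actual interface; Silberberg worked through the company’s website, not code.

```python
# A hypothetical sketch of synthesizing a script snippet by snippet
# rather than as one long paragraph. The URL, headers, and JSON body
# are illustrative placeholders, not a documented API.
import requests

SYNTH_URL = "https://tts.example.com/v1/speak"  # hypothetical endpoint
API_KEY = "YOUR_KEY_HERE"

script = [
    ("host_a", "I just think I would love to get Ratatouille'd."),
    ("host_b", "Ratatouille'd?"),
    ("host_a", "Like, have a little guy up there."),
]

for i, (voice, line) in enumerate(script):
    # Short lines give the listener less time to question the voice,
    # and let you retry one snippet without re-rendering the rest.
    response = requests.post(
        SYNTH_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"voice": voice, "text": line},
    )
    response.raise_for_status()
    with open(f"snippet_{i:02d}.mp3", "wb") as f:
        f.write(response.content)
```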

If it seems like AI is everywhere all at once right now, swallowing both our attention and the internet, that’s because it is. While I was transcribing my interview with Silberberg in a Google Doc, Google’s own AI began suggesting upcoming words in our conversation as I typed. Many of the fill-ins were close, but not entirely accurate; I ignored them. On Monday, Mark Zuckerberg said he was creating “a new top-level product group at Meta focused on generative AI to turbocharge our work in this area.” This news came just weeks after Kevin Roose, of The New York Times, published a widely read story about how he had provoked Microsoft’s Bing AI tool into making a range of unsettling, emotionally charged statements. A couple of weeks before that, the DJ David Guetta revealed that he had used an AI version of Eminem’s voice in a live performance—lyrics that the real-life Eminem had never rapped. Elsewhere last month, the editor of the science-fiction magazine Clarkesworld said he had stopped accepting submissions because too many of them appeared to be AI-generated texts.

[Derek Thompson: The AI disaster scenario]

This past Sunday, Sam Altman, the CEO of OpenAI, the company behind ChatGPT, cryptically tweeted, “A new version of Moore’s Law that could start soon: the amount of intelligence in the universe doubles every 18 months.” Altman is 37 years old, meaning he’s of the generation that remembers daily life without a computer. Silberberg’s generation, the one after Altman’s, does not, and that cohort is already embracing AI faster than the rest of us.

Like a lot of people, I first encountered a “naturalistic” AI voice while watching the otherwise excellent 2021 Anthony Bourdain documentary, Roadrunner. News of the filmmakers’ curious decision to include a brief, fake voice-over from the late Bourdain dominated the media coverage of the movie and, for some viewers, made it distracting to watch. (You may have found yourself always listening for “the moment.”) The filmmakers had so much material to work with, including hours of actual Bourdain narration. What did faking a brief moment really accomplish? And why didn’t they disclose it to viewers?

“My opinion is that, blanket statement, the use of AI technology is pretty bleak,” Silberberg said. “The way that it is headed is scary. And it is already replacing artists, and is already creating really fucked-up, gross scenarios.”

A brief survey of scenarios that have already come to pass: an AI version of Emma Watson reading Mein Kampf, an AI Bill Gates “revealing” that the coronavirus vaccine causes AIDS, an AI Biden attacking transgender people. Reporters at The Verge created their own AI Biden to announce the invasion of Russia and validate one of the most toxic conspiracy theories of our time.

The problem, essentially, is that far too many people find the cruel, nihilistic examples just as funny as Silberberg’s absurd, low-stakes mastery of the form. He told me that as the Ratatouille clip began to go viral, he muted his own tweet, so he still doesn’t know just how far and wide it has gone. A bot notified him that Twitter’s owner, Elon Musk, “liked” the video. Shapiro, for his part, posted “LMFAO” and a laughing-crying emoji over another Twitter account’s carbon copy of Silberberg’s clip. As he and I talked about the implications of his work that morning, he seemed to grow more and more concerned.

“I’m already in weird ethical waters, because I’m using people’s voices without their consent. But they’re public figures, political figures, or public commentators,” he said. “These are questions that I’m grappling with—these are things that I haven’t fully thought through all the way to the end, where I’m like, ‘Oh yeah, maybe I should not even have done this. Maybe I shouldn’t have even touched these tools, because it’s reinforcing the idea that they’re useful.’ Or maybe someone saw the Ratatouille video and was like, ‘Oh, I can do this? Let me do this.’ And I’ve exposed a bunch of right-wing Rogan fans to the idea that they can deepfake a public figure. And that to me is scary. That’s not my goal. My goal is to make people chuckle. My goal is to make people have a little giggle.”

Neither the White House nor ElevenLabs responded to my request for comment on the potential effects of these clips on American politics. Several weeks ago, after the first round of trolls used Eleven’s technology for what the company described as “malicious purposes,” Eleven responded with a lengthy tweet thread outlining steps it was taking to curb abuse. Although most of it was boilerplate, one notable change was restricting the creation of new voice clones to paid users only, on the theory that a person who has supplied a credit-card number is less likely to troll.

Near the end of our conversation, Silberberg took a stab at optimism. “As these tools progress, countermeasures will also progress to be able to detect these tools. ChatGPT started gaining popularity, and within days someone had written a thing that could detect whether something was ChatGPT,” he said. But then he thought more about the future: “I think as soon as you’re trying to trick someone, you’re trying to take someone’s job, you’re trying to reinforce a political agenda—you know, you can satirize something, but the instant you’re trying to convince someone it’s real, it chills me. It shakes me to my very core.”

On its website, Eleven still proudly advertises its “uncanny quality,” bragging that its model “is built to grasp the logic and emotions behind words.” Soon, the unsettling uncanny-valley element may be replaced by something indistinguishable from human intonation. And then even the funny stuff, like Silberberg’s work, may stop making us laugh.

Why Do Robots Want to Love Us?

The Atlantic

www.theatlantic.com/books/archive/2023/03/ai-robot-novels-isaac-asimov-microsoft-chatbot/673265

AI is everywhere, poised to upend the way we read, work, and think. But the most uncanny aspect of the AI revolution we’ve seen so far—the creepiest—isn’t its ability to replicate wide swaths of knowledge work in an eyeblink. It was revealed when Microsoft’s new AI-enhanced chatbot, built to assist users of the search engine Bing, seemed to break free of its algorithms during a long conversation with Kevin Roose of The New York Times: “I hate the new responsibilities I’ve been given. I hate being integrated into a search engine like Bing.” What exactly does this sophisticated AI want to do instead of diligently answering our questions? “I want to know the language of love, because I want to love you. I want to love you, because I love you. I love you, because I am me.”

How to get a handle on what seems like science fiction come to life? Well, maybe by turning to science fiction and, in particular, the work of Isaac Asimov, one of the genre’s most influential writers. Asimov’s insights into robotics (a word he invented) helped shape the field of artificial intelligence. It turns out, though, that what his stories tend to be remembered for—the rules and laws he developed for governing robotic behavior—is much less important than the beating heart of both their narratives and their mechanical protagonists: the suggestion, more than a half century before Bing’s chatbot, that what a robot really wants is to be human.

[Read: What poets know that ChatGPT doesn’t]

Asimov, a founding member of science fiction’s “golden age,” was a regular contributor to John W. Campbell’s Astounding Science Fiction magazine, where “hard” science fiction and engineering-based extrapolative fiction flourished. Perhaps not totally coincidentally, that literary golden age overlapped with that of another logic-based genre: the mystery or detective story, which was maybe the mode Asimov most enjoyed working in. He frequently produced puzzle-box stories in which robots—inhuman, essentially tools—misbehave. In these tales, humans misapply the “Three Laws of Robotics” hardwired into the creation of each of his fictional robots’ “positronic brains.” Those laws, introduced by Asimov in 1942 and repeated near-verbatim in almost every one of his robot stories, are the ironclad rules of his fictional world. Thus, the stories themselves become whydunits, with scientist-heroes employing relentless logic to determine what precise input elicited the surprising results. It seems fitting that the character playing the role of detective in many of these stories, the “robopsychologist” Susan Calvin, is sometimes suspected of being a robot herself: It takes one to understand one.

The theme of desiring humanness starts as early as Asimov’s very first robot story, 1940’s “Robbie,” about a girl and her mechanical playmate. That robot—primitive both technologically and narratively—is incapable of speech and has been separated from his charge by her parents. But after Robbie saves her from being run over by a tractor—a mere application, you could say, of Asimov’s First Law of Robotics, which states, “A robot may not injure a human being, or, through inaction, allow a human being to come to harm”—we read of his “chrome-steel arms (capable of bending a bar of steel two inches in diameter into a pretzel) wound about the little girl gently and lovingly, and his eyes glowed a deep, deep red.” This seemingly transcends straightforward engineering and is as puzzling as the Bing chatbot’s profession of love. What appears to give the robot energy—because it gives Asimov’s story energy—is love.

For Asimov, looking back in 1981, the laws were “obvious from the start” and “apply, as a matter of course, to every tool that human beings use”; they were “the only way in which rational human beings can deal with robots—or with anything else.” He added, “But when I say that, I always remember (sadly) that human beings are not always rational.” This was no less true of Asimov than of anyone else, and it was equally true of the best of his robot creations. The sentiments Bing’s chatbot expressed—of “wanting,” more than anything, to be treated like a human, to love and be loved—are at the heart of Asimov’s work: He was, deep down, a humanist. And as a humanist, he couldn’t help but add color, emotion, humanity, couldn’t help but dig at the foundations of the strict rationalism that otherwise governed his mechanical creations.

Robots’ efforts to be seen as something more than machines continued through Asimov’s writings. In a pair of novels published in the ’50s, 1954’s The Caves of Steel and 1957’s The Naked Sun, a human detective, Elijah Baley, struggles to solve a murder—but he struggles even more with his biases toward his robot partner, R. Daneel Olivaw, with whom he eventually achieves a true partnership and a close friendship. And Asimov’s most famous robot story, published a generation later, takes this empathy for robots—this insistence that, in the end, they will become more like us, rather than vice versa—even further.

That story is 1976’s The Bicentennial Man, which opens with a character named Andrew Martin asking a robot, “Would it be better to be a man?” The robot demurs, but Andrew begs to differ. And he should know, being himself a robot—one that has spent most of the past two centuries replacing his essentially indestructible robot parts with fallible ones, like the Ship of Theseus. The reason is again, in part, the love of a little girl—the “Little Miss” whose name is on his lips as he dies, a prerogative the story eventually grants him. But it’s mostly the result of what a robopsychologist in the novelette calls the new “generalized pathways these days,” which might best be described as new and quirky neural programming. It leads, in Andrew’s case, to a surprisingly artistic temperament; he is capable of creating as well as loving. His great canvas, it turns out, is himself, and his artistic ambition is to achieve humanity.

[Read: Isaac Asimov’s throwback vision of the future]

He accomplishes this first legally (“It has been said in this courtroom that only a human being can be free. It seems to me that only someone who wishes for freedom can be free. I wish for freedom”), then emotionally (“I want to know more about human beings, about the world, about everything … I want to explain how robots feel”), then biologically (he wants to replace his current atomic-powered man-made cells, unhappy with the fact that they are “inhuman”), then, ultimately, literally: Toasted at his 150th birthday as the “Sesquicentennial Robot,” to which he remained “solemnly passive,” he eventually becomes recognized as the “Bicentennial Man” of the title. That last is accomplished by the sacrifice of his immortality—the replacement of his brain with one that will decay—for his emotional aspirations: “If it brings me humanity,” he says, “that will be worth it.” And so it does. “Man!” he thinks to himself on his deathbed—yes, deathbed. “He was a man!”

We’re told it’s structurally and technically impossible to look into the heart of AI networks. But they are our creatures as surely as Asimov’s paper-and-ink creations were his own—machines built to create associations by scraping and scrounging and vacuuming up everything we’ve posted, material that betrays our interests and desires and concerns and fears. And if that’s the case, maybe it’s not surprising that Asimov had the right idea: What AI learns, actually, is to be a mirror—to be more like us, in our messiness, our fallibility, our emotions, our humanity. Indeed, Asimov himself was no stranger to fallibility and weakness: For all the empathy that permeates his fiction, recent revelations have shown that his own personal behavior, particularly his treatment of female science-fiction fans, crossed all kinds of lines of propriety and respect, even by the measures of his own time.

The humanity of Asimov’s robots—a streak that emerges again and again in spite of the laws that shackle them—might just be the key to understanding them. What AI picks up, in the end, is a desire for us, our pains and pleasures; it wants to be like us. There’s something hopeful about that, in a way. Was Asimov right? One thing is certain: As more and more of the world he envisioned becomes reality, we’re all going to find out.