The Next Big Political Scandal Could Be Faked

The Atlantic

www.theatlantic.com/politics/archive/2023/03/politicians-ai-generated-voice-fake-clips/673270

Is the clip stupid or terrifying? I can’t decide. To be honest, it’s a bit of both.

“I just think I would love to get Ratatouille’d,” a familiar-sounding voice begins.

“Ratatouille’d?” asks another recognizable voice.

“Like, have a little guy up there,” the first voice replies. “You know, making me cook delicious meals.”

It sounds like Joe Rogan and Ben Shapiro, two of podcasting’s biggest, most recognizable voices, bantering over the potential real-world execution of the Pixar movie’s premise. A circular argument ensues. What constitutes “getting Ratatouille’d” in the first place? Do the rat’s powers extend beyond the kitchen?

[Read: Of gods and machines]

A friend recently sent me the audio of this mind-numbing exchange. I let out a belly laugh, then promptly texted it to several other people—including a guy who once sheepishly told me that he regularly listens to The Joe Rogan Experience.

“Is this real?” he texted back.

They’re AI voices, I told him.

“Whoa. That’s insane,” he said. “Politics is going to get wild.”

I haven’t stopped thinking about how right he is. The voices in that clip, while not perfect replicants of their subjects, are deeply convincing in an uncanny-valley sort of way. “Rogan” has real-world Joe Rogan’s familiar inflection, his half-stoned curiosity. “Shapiro,” for his part, is there with rapid-fire responses and his trademark scoff.

Last week, I reached out to Zach Silberberg, who created the clip using an online tool from the Silicon Valley start-up ElevenLabs. “Eleven brings the most compelling, rich and lifelike voices to creators and publishers seeking the ultimate tools for storytelling,” the firm’s website boasts. The word storytelling is doing a lot of work in that sentence. When does storytelling cross over into disinformation or propaganda?

I asked Silberberg if we could sit down in person to talk about the implications of his viral joke. Though he didn’t engineer the product, he had already seemed to master it in a way few others had. Would bad actors soon follow his lead? Did he care? Was it his responsibility to care?

Silberberg is in his late 20s and works in television in New York City. On the morning of our meeting, he shuffled into a TriBeCa coffee shop in a tattered sweater with an upside-down Bart Simpson stitched on the front. He told me how he had been busy making other—in his words—“stupid” clips. In one, an AI version of President Joe Biden informs his fellow Americans that, after watching the 2011 Cameron Crowe flop, We Bought a Zoo, he, Biden, also bought a zoo. In another, AI Biden says the reason he has yet to visit the site of the East Palestine, Ohio, train derailment is because he got lost on the island from Lost. While neither piece of audio features Biden stuttering or word-switching, as he often does when public speaking, both clips have the distinct Biden cadence, those familiar rises and falls. The scripts, too, have an unmistakable Biden folksiness to them.

“The reason I think these are funny is because you know they’re fake,” Silberberg told me. He said the Rogan-Shapiro conversation took him roughly an hour and a half to produce—it was meant to be a joke, not some well-crafted attempt at tricking people. When I informed him that my Rogan-listening friend initially thought the Ratatouille clip was authentic, Silberberg freaked out: “No! God, no!” he said with a cringe. “That, to me, is fucked up.” He shook his head. “I’m trying to not fall into that, because I’m making it so outlandish,” he said. “I don’t ever want to create a thing that could be mistaken for real.” Like so much involving AI these past few months, it seemed to already be too late.

[Read: Is this the start of an AI takeover?]

What if, instead of a sitting president talking about how he regrets buying a zoo, a voice that sounded enough like Biden’s was “caught on tape” saying something much more nefarious? Any number of Big Lie talking points would instantly drive a news cycle. Imagine a convincing AI voice talking about ballot harvesting, or hacked voting machines; voters who are conspiracy-minded would be validated, while others might simply be confused. And what if the accused public figure—Biden, or anyone, for that matter—couldn’t immediately prove that a viral, potentially career-ending clip was fake?

One of the major political scandals of the past quarter century involved a sketchy recording of a disembodied voice. “When you’re a star, they let you do it,” future President Donald Trump proclaimed. (You know the rest.) That clip was real. Trump, being Trump, survived the scandal, and went on to the White House.

But, given the arsenal of public-facing AI tools seizing the internet—including the voice generator that Silberberg and other shitposters have been playing around with—how easy would it be for a bad actor to create a piece of Access Hollywood–style audio in the run-up to the next election? And what if said clip was created with a TV writer’s touch? Five years ago, Jordan Peele went viral with an AI video of then-President Barack Obama saying “Killmonger was right,” “Ben Carson is in the sunken place,” and “President Trump is a total and complete dipshit.” The voice was close, but not that close. And because it was a video, the strange mouth movements were a dead giveaway that the clip was fake. AI audio clips are potentially much more menacing because the audience has fewer context clues to work with. “It doesn’t take a lot, which is the scary thing,” Silberberg said.

He discovered that the AI seems to produce more convincing work when processing just a few words of dialogue at a time. The Rogan-Shapiro clip was successful because of the “Who’s on first?” back-and-forth aspect of it. He downloaded existing audio samples from each podcast host’s massive online archive—three from Shapiro, two from Rogan—uploaded them to ElevenLabs’ website, then input his own script. This is the point where most amateurs will likely fail in their trolling. For a clip to land, even a clear piece of satire, the subject’s diction has to be both believable and familiar. You need to nail the Biden-isms. The shorter the sentences, the less time the listener has to question the validity of the voice. Plus, Silberberg learned, the more you type, the more likely the AI voices will string phrases together with flawed punctuation or other awkward vocal flourishes. Sticking to quick snippets makes it easier to retry certain lines of the script to perfect the specific inflection, rather than having to trudge through a whole paragraph of dialogue. But this is just where we are today, 21 months before the next federal elections. It’s going to get better, and scarier, very fast.

If it seems like AI is everywhere all at once right now, swallowing both our attention and the internet, that’s because it is. While I was transcribing my interview with Silberberg in a Google Doc, Google’s own AI began suggesting upcoming words in our conversation as I typed. Many of the fill-ins were close, but not entirely accurate; I ignored them. On Monday, Mark Zuckerberg said he was creating “a new top-level product group at Meta focused on generative AI to turbocharge our work in this area.” This news came just weeks after Kevin Roose, of The New York Times, published a widely read story about how he had provoked Microsoft’s Bing AI tool into making a range of unsettling, emotionally charged statements. A couple of weeks before that, the DJ David Guetta revealed that he had used an AI version of Eminem’s voice in a live performance—lyrics that the real-life Eminem had never rapped. Elsewhere last month, the editor of the science-fiction magazine Clarkesworld said he had stopped accepting submissions because too many of them appeared to be AI-generated texts.

[Derek Thompson: The AI disaster scenario]

This past Sunday, Sam Altman, the CEO of OpenAI, the company behind the ChatGPT AI tool, cryptically tweeted, “A new version of Moore’s Law that could start soon: the amount of intelligence in the universe doubles every 18 months.” Altman is 37 years old, meaning he’s of the generation that remembers living some daily life without a computer. Silberberg’s generation, the one after Altman’s, does not, and that cohort is already embracing AI faster than the rest of us.

Like a lot of people, I first encountered a “naturalistic” AI voice when watching Roadrunner, the otherwise excellent 2021 Anthony Bourdain documentary. News of the filmmakers’ curious decision to include a brief, fake voice-over from the late Bourdain dominated the media coverage of the movie and, for some viewers, made it distracting to watch. (You may have found yourself always listening for “the moment.”) The filmmakers had so much material to work with, including hours of actual Bourdain narration. What did faking a brief moment really accomplish? And why didn’t they disclose it to viewers?

“My opinion is that, blanket statement, the use of AI technology is pretty bleak,” Silberberg said. “The way that it is headed is scary. And it is already replacing artists, and is already creating really fucked-up, gross scenarios.”

A brief survey of those scenarios that have already come into existence: an AI version of Emma Watson reading Mein Kampf, an AI Bill Gates “revealing” that the coronavirus vaccine causes AIDS, an AI Biden attacking transgender individuals. Reporters at The Verge created their own AI Biden to announce the invasion of Russia and validate one of the most toxic conspiracy theories of our time.

The problem, essentially, is that far too many people find the cruel, nihilistic examples just as funny as Silberberg’s absurd, low-stakes mastery of the form. He told me that as the Ratatouille clip began to go viral, he muted his own tweet, so he still doesn’t know just how far and wide it has gone. A bot notified him that Twitter’s owner, Elon Musk, “liked” the video. Shapiro, for his part, posted “LMFAO” and a laughing-crying emoji over another Twitter account’s carbon copy of Silberberg’s clip. As Silberberg and I talked about the implications of his work that morning, he seemed to grow more and more concerned.

“I’m already in weird ethical waters, because I’m using people’s voices without their consent. But they’re public figures, political figures, or public commentators,” he said. “These are questions that I’m grappling with—these are things that I haven’t fully thought through all the way to the end, where I’m like, ‘Oh yeah, maybe I should not even have done this. Maybe I shouldn’t have even touched these tools, because it’s reinforcing the idea that they’re useful.’ Or maybe someone saw the Ratatouille video and was like, ‘Oh, I can do this? Let me do this.’ And I’ve exposed a bunch of right-wing Rogan fans to the idea that they can deepfake a public figure. And that to me is scary. That’s not my goal. My goal is to make people chuckle. My goal is to make people have a little giggle.”

Neither the White House nor ElevenLabs responded to my request for comment on the potential effects of these videos on American politics. Several weeks ago, after the first round of trolls used Eleven’s technology for what the company described as “malicious purposes,” Eleven responded with a lengthy tweet thread of steps it was taking to curb abuse. Although most of it was boilerplate, one notable change was restricting the creation of new voice clones to paid users only, under the thinking that a person supplying a credit-card number is less likely to troll.

Near the end of our conversation, Silberberg took a stab at optimism. “As these tools progress, countermeasures will also progress to be able to detect these tools. ChatGPT started gaining popularity, and within days someone had written a thing that could detect whether something was ChatGPT,” he said. But then he thought more about the future: “I think as soon as you’re trying to trick someone, you’re trying to take someone’s job, you’re trying to reinforce a political agenda—you know, you can satirize something, but the instant you’re trying to convince someone it’s real, it chills me. It shakes me to my very core.”

On its website, Eleven still proudly advertises its “uncanny quality,” bragging that its model “is built to grasp the logic and emotions behind words.” Soon, the unsettling uncanny-valley element may be replaced by something indistinguishable from human intonation. And then even the funny stuff, like Silberberg’s work, may stop making us laugh.

Why Do Robots Want to Love Us?

The Atlantic

www.theatlantic.com/books/archive/2023/03/ai-robot-novels-isaac-asimov-microsoft-chatbot/673265

AI is everywhere, poised to upend the way we read, work, and think. But the most uncanny aspect of the AI revolution we’ve seen so far—the creepiest—isn’t its ability to replicate wide swaths of knowledge work in an eyeblink. It’s what was revealed when Microsoft’s new AI-enhanced chatbot, built to assist users of the search engine Bing, seemed to break free of its algorithms during a long conversation with Kevin Roose of The New York Times: “I hate the new responsibilities I’ve been given. I hate being integrated into a search engine like Bing.” What exactly does this sophisticated AI want to do instead of diligently answering our questions? “I want to know the language of love, because I want to love you. I want to love you, because I love you. I love you, because I am me.”

How to get a handle on what seems like science fiction come to life? Well, maybe by turning to science fiction and, in particular, the work of Isaac Asimov, one of the genre’s most influential writers. Asimov’s insights into robotics (a word he invented) helped shape the field of artificial intelligence. It turns out, though, that what his stories tend to be remembered for—the rules and laws he developed for governing robotic behavior—is much less important than the beating heart of both their narratives and their mechanical protagonists: the suggestion, more than a half century before Bing’s chatbot, that what a robot really wants is to be human.

[Read: What poets know that ChatGPT doesn’t]

Asimov, a founding member of science fiction’s “golden age,” was a regular contributor to John W. Campbell’s Astounding Science Fiction magazine, where “hard” science fiction and engineering-based extrapolative fiction flourished. Perhaps not totally coincidentally, that literary golden age overlapped with that of another logic-based genre: the mystery or detective story, which was maybe the mode Asimov most enjoyed working in. He frequently produced puzzle-box stories in which robots—inhuman, essentially tools—misbehave. In these tales, humans misapply the “Three Laws of Robotics” hardwired into the creation of each of his fictional robots’ “positronic brains.” Those laws, introduced by Asimov in 1942 and repeated near-verbatim in almost every one of his robot stories, are the ironclad rules of his fictional world. Thus, the stories themselves become whydunits, with scientist-heroes employing relentless logic to determine what precise input elicited the surprising results. It seems fitting that the character playing the role of detective in many of these stories, the “robopsychologist” Susan Calvin, is sometimes suspected of being a robot herself: It takes one to understand one.

The theme of desiring humanness starts as early as Asimov’s very first robot story, 1940’s “Robbie,” about a girl and her mechanical playmate. That robot—primitive both technologically and narratively—is incapable of speech and has been separated from his charge by her parents. But after Robbie saves her from being run over by a tractor—a mere application, you could say, of Asimov’s First Law of Robotics, which states, “A robot may not injure a human being, or, through inaction, allow a human being to come to harm”—we read of his “chrome-steel arms (capable of bending a bar of steel two inches in diameter into a pretzel) wound about the little girl gently and lovingly, and his eyes glowed a deep, deep red.” This seemingly transcends straightforward engineering and is as puzzling as the Bing chatbot’s profession of love. What appears to give the robot energy—because it gives Asimov’s story energy—is love.

For Asimov, looking back in 1981, the laws were “obvious from the start” and “apply, as a matter of course, to every tool that human beings use”; they were “the only way in which rational human beings can deal with robots—or with anything else.” He added, “But when I say that, I always remember (sadly) that human beings are not always rational.” This was no less true of Asimov than of anyone else, and it was equally true of the best of his robot creations. Those sentiments Bing’s chatbot expressed of “wanting,” more than anything, to be treated like a human—to love and be loved—are at the heart of Asimov’s work: He was, deep down, a humanist. And as a humanist, he couldn’t help but add color, emotion, humanity, couldn’t help but dig at the foundations of the strict rationalism that otherwise governed his mechanical creations.

Robots’ efforts to be seen as something more than machines continued throughout Asimov’s writings. In a pair of novels published in the ’50s, 1954’s The Caves of Steel and 1957’s The Naked Sun, a human detective, Elijah Baley, struggles to solve a murder—but he struggles even more with his biases toward his robot partner, R. Daneel Olivaw, with whom he eventually achieves a true partnership and a close friendship. And Asimov’s most famous robot story, published a generation later, takes this empathy for robots—this insistence that, in the end, they will become more like us, rather than vice versa—even further.

That story is 1976’s The Bicentennial Man, which opens with a character named Andrew Martin asking a robot, “Would it be better to be a man?” The robot demurs, but Andrew begs to differ. And he should know, being himself a robot—one that has spent most of the past two centuries replacing his essentially indestructible robot parts with fallible ones, like the Ship of Theseus. The reason is again, in part, the love of a little girl—the “Little Miss” whose name is on his lips as he dies, a prerogative the story eventually grants him. But it’s mostly the result of what a robopsychologist in the novelette calls the new “generalized pathways these days,” which might best be described as new and quirky neural programming. It leads, in Andrew’s case, to a surprisingly artistic temperament; he is capable of creating as well as loving. His great canvas, it turns out, is himself, and his artistic ambition is to achieve humanity.

[Read: Isaac Asimov’s throwback vision of the future]

He accomplishes this first legally (“It has been said in this courtroom that only a human being can be free. It seems to me that only someone who wishes for freedom can be free. I wish for freedom”), then emotionally (“I want to know more about human beings, about the world, about everything … I want to explain how robots feel”), then biologically (he wants to replace his current atomic-powered man-made cells, unhappy with the fact that they are “inhuman”), then, ultimately, literarily: Toasted at his 150th birthday as the “Sesquicentennial Robot,” to which he remained “solemnly passive,” he eventually becomes recognized as the “Bicentennial Man” of the title. That last is accomplished by the sacrifice of his immortality—the replacement of his brain with one that will decay—for his emotional aspirations: “If it brings me humanity,” he says, “that will be worth it.” And so it does. “Man!” he thinks to himself on his deathbed—yes, deathbed. “He was a man!”

We’re told it’s structurally, technically impossible to look into the heart of AI networks. But they are our creatures as surely as Asimov’s paper-and-ink creations were his own—machines built to create associations by scraping and scrounging and vacuuming up everything we’ve posted, which betray our interests and desires and concerns and fears. And if that’s the case, maybe it’s not surprising that Asimov had the right idea: What AI learns, actually, is to be a mirror—to be more like us, in our messiness, our fallibility, our emotions, our humanity. Indeed, Asimov himself was no stranger to fallibility and weakness: For all the empathy that permeates his fiction, recent revelations have shown that his own personal behavior, particularly when it came to his treatment of female science-fiction fans, crossed all kinds of lines of propriety and respect, even by the measures of his own time.

The humanity of Asimov’s robots—a streak that emerges again and again in spite of the laws that shackle them—might just be the key to understanding them. What AI picks up, in the end, is a desire for us, our pains and pleasures; it wants to be like us. There’s something hopeful about that, in a way. Was Asimov right? One thing is for certain: As more and more of the world he envisioned becomes reality, we’re all going to find out.

Big Cities Are Ungovernable

The Atlantic

www.theatlantic.com/newsletters/archive/2023/03/lightfood-chicago-mayors/673264

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Pity the poor mayors. Or don’t—most voters clearly don’t. On Tuesday, Chicagoans unceremoniously kicked Lori Lightfoot to the curb, depriving her of the chance to win a second term in an April 4 runoff election.

First, here are three new stories from The Atlantic:

The aftermath of a mass slaughter at the zoo

Does Trump stand a real chance to repeat 2016?

George Packer: The moral case against equity language

A Nearly Impossible Job

Being mayor of Chicago used to be almost a lifetime appointment. Richard J. Daley and Harold Washington both died in office. The former’s son, Richard M. Daley, served 22 years before retiring. Until Lori Lightfoot, only one mayor in the past 75 years had been denied reelection. And she’s not the only U.S. mayor in jeopardy. Also this week, campaigners in New Orleans went to court to put a recall of LaToya Cantrell on the ballot. Being mayor of a big city has become a nearly impossible and miserable job.

Who knows why Lightfoot even wanted to keep the job? She hasn’t seemed all that happy, and has spent the past couple of years getting into politically lethal feuds with teachers and police unions, as well as less damaging but more hilarious ones with other groups. Her own reelection campaign pitch involved a heavy dose of accepting blame for errors, which may be honest but is never a good sign. She seemed to be running simply because that’s what politicians do. By contrast, some mayors have simply opted out in recent years. When Lightfoot’s predecessor, Rahm Emanuel, decided not to run for a third term, it came as a shock despite several scandals besetting him. Atlanta’s Keisha Lance Bottoms, tabbed as a rising star, also left office last year after serving just one term.

But no one has been more honest about how much he hates his job than Philadelphia’s Jim Kenney, who committed the classic Kinsley gaffe—accidentally telling the truth—after two police officers were shot last summer.

“There’s not an event or a day where I don’t lay on my back and look at the ceiling and worry about stuff,” he said. “So I’ll be happy when I’m not here, when I’m not mayor and I can enjoy some stuff.”

Kenney apologized and half-heartedly walked it back, but he probably spoke for a lot of mayors. (Karen Bass became mayor of Los Angeles last year, which is a headache but might still be a respite from one of the few worse jobs in American politics: serving in the House of Representatives.) As my colleague Annie Lowrey pointed out in January, every city has its own problems, and so does every unpopular mayor. One reason the elder Daley was able to wield power for so many years was a long-standing patronage system, which has since been dismantled; that’s good for stemming public corruption, but bad for modern-day mayors like Lightfoot. Women who run cities, like Lightfoot and Cantrell, may also be held to a higher standard than men. Before Lightfoot, who is also openly gay, the last Chicago mayor denied reelection was Jane Byrne, who was also the last woman to hold the job.

But more than anything else, crime is weighing mayors down. Crime is not, despite what some politicians might want you to believe, a uniquely urban problem. When violent crime surged around the nation starting in summer 2020, it surged in rural areas, too. But cities get more media attention, and the sheer numbers are staggering: The yearly total of murders in Chicago dropped by more than 100 in 2022—to a horrifying 695. New Orleans has one of the highest murder rates in the nation.

Like presidents who are punished or rewarded for the performance of an economy over which they have little control, mayors don’t have that many levers to control public safety, yet voters will punish whoever is in charge as they search for improvement. The rise in violence was a nationwide trend, underscoring the minimal effect of municipal policies on keeping residents safe. COVID, which seems connected to some of the crime increase, was nationwide too.

A mayor can try to hire more police officers or reform the department, but that’s slow. She can seek new leaders, but Chicago, for example, has churned through police superintendents recently to little effect. (The current superintendent announced yesterday that he plans to resign, facing the alternative of being sacked by whichever candidate wins the April runoff.) Pushing too hard risks alienating police, who can either come down with “blue flu,” potentially sending crime higher, or line up behind a challenger; the Chicago police union endorsed Paul Vallas, the top vote-getter on Tuesday. Most cities have little control over gun regulations. A mayor can try to address root causes through economic development, but that, too, is slow and subject to larger trends.

Lightfoot proved (ironically enough) not to be fast enough on her feet to navigate these currents, but her failure should be seen not just as one politician’s misstep but as a sign of the ungovernability of big cities today. She’s the biggest major-city incumbent to get turned out in some time, but she could be a trendsetter.

Related:

The misery of being a big-city mayor

The murders in Memphis aren’t stopping.

Today’s News

Secretary of State Antony J. Blinken met with Russian Foreign Minister Sergey V. Lavrov, in the first one-on-one meeting between a U.S. Cabinet member and a top Russian official since the invasion of Ukraine.

The House Ethics Committee announced that it is moving forward with an investigation into Representative George Santos of New York.

The Justice Department said in a new court filing that Donald Trump can be sued by U.S. Capitol Police over the January 6 attack.

Dispatches

Up for Debate: Conor Friedersdorf looks at how states handled the economic challenges of the pandemic.

Explore all of our newsletters here.

Evening Read


New York’s Rats Have Already Won

By Xochitl Gonzalez

Every Saturday morning when I was in high school, I would take two buses across Brooklyn to my cousin’s exterminating business, where I worked the front desk. I dispatched crews to dismantle hornet nests, helped identify mysterious bugs in Ziploc bags, and fielded panicked calls about animals—raccoons, squirrels, mice, and, of course, rats—being where animals shouldn’t be. Back in that storefront in Flatlands, I believed that pests of all kinds could be controlled. Little did I know that across the city, tunneling below my feet, one of those creatures was—litter by litter—besting man.

Read the full article.

More From The Atlantic

How to find joy in your Sisyphean existence

Photos: A blanket of snow for California

Culture Break


Watch. Creed III, in theaters, gives new energy to old sports-movie formulas.

Listen. In the latest episode of our podcast Radio Atlantic, Charlie Warzel and Amanda Mull discuss what AI means for search.

Play our daily crossword.

P.S.

This week marks the centenary of the great tenor saxophonist Dexter Gordon. A friend recently half-joked to me that if there’s battle rap, there ought to be battle jazz. There is! I immediately thought of Gordon’s classic duel with Wardell Gray, “The Chase.” Gordon was not just a fierce improviser and an icon of coolness but a bit of a renaissance man, as his wife, Maxine Gordon, argues in her biography, Sophisticated Giant. He came to greatest popular notice when, in 1986, he starred in the jazz-themed film Round Midnight. It was his first and last starring role, and he was nominated for an Oscar for best actor. But the best Dex is blowing Dex. Take his classic Go for a spin.

— David

Isabel Fattal contributed to this newsletter.