Inside the Israeli Crack-Up

The Atlantic

Israel in the past six months has felt like a madhouse, a political protest the size of New Jersey, an unending traffic jam, a lab for bad ideas, a glimpse of the future of Western democracy in the social-media age. It has also been a classroom, even for those of us who think we’re experts. I’ve lived and written here for nearly 30 years. But as I stood among thousands of other protesters outside the Knesset on Monday, the midday heat so strong that I almost longed for relief from the police water cannon, I realized that I was learning to see the country with new eyes.

Inside the Knesset, the most extreme government in Israel’s history was legislating the first stage of its plan to move power from the courts and into its own hands, changing the rules of the democratic game. The law passed. The protest couldn’t stop it. One opposition lawmaker described it as the hardest day of his life, and he used to be the No. 2 man in the Mossad. Like many people here, I’ve been at a demonstration almost every week since the beginning of the year. This one had the same chanting and flag-waving, but it seemed desperate, with an undercurrent less of defiance than of fear. A chapter in Israeli history was ending. We don’t know what comes next.

[Yair Rosenberg: Israel has already lost]

The Israeli breakdown of 2023 has thrown into sharp relief the country’s submerged assumptions and blind spots, as well as my own.

The state of Israel was declared in a rush on May 14, 1948, amid an attack by the combined forces of the Arab world. The declaration of independence in Tel Aviv that day promised a constitution “no later than the 1st of October,” but we never got around to it. Instead, we’ve relied on stop-gap measures, political deals that seemed logical at the time, and an unwritten idea of the way things are done. Israel was held together less by law than by custom. Like many Israelis, I sensed this without grasping the risk. These customs were almost invisible when they were in effect. They’re possible to see clearly now because they’re gone.

It was customary, for example, for a prime minister to resign if facing prosecution. It was customary not to put criminals in charge of law enforcement. It was customary to respect civil servants, to listen to the soldiers and spies who keep Israelis safe in a dangerous region, and never to politicize the judiciary.

The last norm, discarded along with the rest by the current government, is at the heart of our troubles. In the Israeli system, a simple majority had no official limit on its power. So the Supreme Court evolved into a check on the state, protecting civil rights and fighting corruption with legal tools that themselves had an ad hoc air. But early this year, having secured less than 49 percent of the popular vote, Prime Minister Benjamin Netanyahu’s new government announced a “legal reform” that would neuter the court and thus remove the only institutional check on government power. Netanyahu hadn’t presented this plan before the election. The press conference had the tone of a declaration of war. Without judicial review, the government can delay elections, outlaw opposition parties, expand the power of clerics, and appoint officials convicted of corruption. (All of these ideas have been suggested by members of his coalition.)

The truth was always that a majority in the Knesset could free itself from all restraints merely by voting to do so. The only barrier, it turns out, was the customary deference to norms. These existed only as long as we all believed in them, and the void left by their absence is now filled by suspicion and protest.

Last weekend, tens of thousands of people marched with Israeli flags up the highway from Tel Aviv to Jerusalem. As I write, a park near the Knesset is full of tents housing protesters. When I was at the encampment, volunteer teams were making food and distributing water bottles. Groups with flags were walking uphill to the protest zone outside the Knesset while others descended, red-faced and hoarse, to rest in the shade. Those looking for inspiration in this dark year have found it in this extraordinary mobilization, manifested not on Facebook but on the street, every single week since January. No one, least of all the government, saw it coming. This raises the question of where everyone has been until now. After all, Netanyahu and the right have been in power, with only a brief break, since 2009.

The short answer is, in tech and on vacation. After Palestinian suicide bombings and rockets destroyed Israel’s political left in the late ’90s and early aughts, and amid an economic boom, liberal Israelis of the middle class pursued prosperity, often found it, and fell into a political slumber. Meanwhile the settler movement and its sympathizers were hard at work gaining power in state institutions and gluing together an alliance with Likud and the ultra-Orthodox parties, using the language of Jewish tradition and of hostility toward the liberal state dreamed up by Israel’s founders.

Liberal Israelis held to their old assumptions about the settlements: that they’re temporary and external to the state of Israel, and that the settlers are fringe eccentrics. They assumed that Netanyahu’s basic aim was peace and prosperity for citizens, the same goal as theirs, even if he pursued it in ways they didn’t like.

These assumptions have been shattered by the people now in power. Itamar Ben-Gvir, the minister in charge of the police, and Bezalel Smotrich, the finance minister, who also controls part of the defense ministry, come from the messianic settler movement, which has an entirely different goal: Jewish domination of the entire land of Israel and a state governed by some form of religious law. This is the ideology that drove the assassin of Prime Minister Yitzhak Rabin in 1995 and the mass murderer of Muslim worshipers in Hebron in 1994, Baruch Goldstein, whose photo Ben-Gvir kept on his living-room wall until recently.

For this extreme element, war is not a horror to be avoided at all costs but a trial that would be justified to further God’s plan, or an event that might even be desirable when the time is ripe. As cabinet ministers, they’ve been given the potential power to help start a war, whether with an expulsion of Palestinian residents in Jerusalem, for example, or a provocation at the Muslim holy sites on the Temple Mount. For level-headed Israelis in uniform, and for parents whose teenagers face the draft, this is the stuff of nightmares.

The protests erupted when Israelis were forced to realize that not only are the settlers not going anywhere in the West Bank; they’ve assumed central functions of government in Israel proper and are moving fast to knock out the only remaining brake on their power. With the Supreme Court out of the way, a transformation of the state will be possible. These are the stakes, and they help explain the surge of anger and dread we’ve seen, and particularly the extraordinary announcement from thousands of military reservists, including pilots and command personnel, that they’ll refuse to report for duty. This is less a calculated pressure tactic than a howl of distress. Had I not aged out of the infantry reserves six years ago, I’d consider doing the same.

Another unpleasant reality on display in the recent upheaval is the fault line that runs between Israeli Jews with roots in Europe (known as Ashkenazim) and those with roots in the Islamic world (Mizrahim, in our local shorthand). We expend great effort to pretend that our debates are only about policy, not identity, but that isn’t true. The grievance felt by many families whose roots are in places like Casablanca and Algiers, and who were sidelined by the country’s Eastern European founders and the official narratives, has not faded—on the contrary, it seems to have grown.

Anyone at the demonstrations understands that the protesters are mostly middle-class Ashkenazim. The cops guarding and occasionally manhandling us are mainly working-class Mizrahim, as are the traditional Likud rank and file. Most people in fighter squadrons, commando companies, and intelligence outfits are Ashkenazi and liberal. The academy and the tech boardrooms are much the same. This sociological fact says nothing good about our society. At least half of the Jewish population here is Mizrahi, but we’ve never had a Mizrahi prime minister, and the Supreme Court has a woeful lack of ethnic diversity.

Good leadership could address the divide. But for politicians like Netanyahu, divisions aren’t problems—they’re weapons. He hoped to use the fury of this electorate as political jet fuel, gambling that it would propel him upward and not blow us all to pieces. Likud’s grievance coalition with settlers and the ultra-Orthodox now openly derides the Supreme Court as a hostile Ashkenazi elite, the civil service as a “deep state,” air-force pilots as privileged brats, and army officers as traitors.  

[Natan Sachs: Israel on the brink]

Netanyahu’s reputation, even among opponents, was that of a political grand master. This reputation joins many other assumptions on the trash heap of 2023. Netanyahu is a shell who’s lost everything but his old baritone. The forces he released have escaped his control and now others are in charge, people who see politics not as a mechanism for solving problems but as an arena for spoils, confrontation, and revenge.

Never in all my years here have I heard so much talk of emigration. Israelis once thought our internal problems and external conflicts could be resolved, so sticking it out made sense. Today the opposite is true; we do not seem to be on our way to a happy resolution.

When I moved to Israel in 1995, finding Nikes or Levi’s was difficult and travel was a luxury. In 2023, the protesters are in the same Zara tank tops and Garmins you see in Berlin or Palo Alto. Middle-class Israelis speak English. They watch Succession. They have other options. If this government is not an aberration but the new normal, many will leave.

The rectangular bulk of the Knesset sits in a tidy section of Jerusalem, across from the Israel Museum and down the street from the Supreme Court, among fences and flowerbeds. I pass by often on my daily errands, and it always seems orderly and permanent. But on Monday the same road was closed, a turbulence of police trucks and blue-and-white flags. Inside the building, the forces of disintegration were at work. Everything seemed to be moving fast, faster than we could grasp. The edifices of state felt as tenuous as holograms, as if I could pass my hand through them. The pink flowers planted in rows on the median disappeared under the sneakers of protesters, and then under the hoofs of the horses pushing us back. I looked down again. The irrigation pipes had been ripped out and the flower bed was mud.

We Are All Evangelicals Now

When I was growing up in a conservative evangelical community, one of the top priorities was to manage children’s consumption of art. The effort was based on a fairly straightforward aesthetic theory: Every artwork has a clear message, and consuming messages that conflict with Christianity will harm one’s faith. Helpfully, there was a song whose lyrics consisted precisely of this aesthetic theory: “Input Output.”  

Input, output,
What goes in is what comes out.
Input, output,
That is what it’s all about.
Input, output,
Your mind is a computer whose
Input, output daily you must choose.

The search for the “inputs” of secular artwork sometimes took a paranoid form—such as the belief in subliminal messages recorded in reverse, or in isolated frames of The Lion King where smoke allegedly forms the word sex. Most often, however, the analysis was more direct. Portraying a behavior or describing a belief, unless accompanied immediately by a clear negative judgment, is an endorsement and a recommendation, and people who consume such messages will become more likely to behave and believe in that way.

[Read: Defining evangelical]

This theory underwrote the whole edifice of Christian contemporary music, which aimed to replace a particularly powerful avenue for negative messages. One of my running jokes for many years has been that all Top 40 music is effectively Christian contemporary music now; American Idol confirmed the hegemony of the “praise band” vocal style. Clearer still is the fact that all mainstream criticism—especially of film and television—is evangelical in form, if not in content. Every artwork is imagined to have a clear message; the portrayal of a given behavior or belief is an endorsement and a recommendation; consumption of artwork with a given message will directly result in the behaviors or beliefs portrayed. This is one of the few phenomena where the “both sides” cliché is true: Left-wing critics are just as likely to do this as their right-wing opponents. For every video of a right-wing provocateur like Ben Shapiro decrying the woke excesses of Barbie, there is a review praising the Mattel product tie-in as a feminist fable.

Here, however, I am more concerned with the critical practices of my comrades on the left. Among leftist publications, Jacobin stands out for its reductive and moralizing cultural coverage. Addressing the other major movie of this past weekend, for instance, the critic Eileen Jones worried in a recent column, “If you’re already convinced of the dangers of nuclear war, superseded only by the ongoing end-times series of rolling climate catastrophes that now seem more likely to kill us all, this film is going to lack a certain urgency.” Sadly, instead of an educational presentation on nuclear war, film audiences will find a biopic that takes some liberties with its subject’s life and character for the sake of creating a Hollywood blockbuster. Jones finds more to like in Barbie, despite “the familiar, toothless, you-go-girl pseudo-feminist pieties that Mattel has been monetizing for decades, alongside the nostalgic how-can-our-consumer-products-be-bad affirmations of Barbie as some sort of magic, wholesomely progressive uniter of generations of mothers and daughters.”

This trend is not limited to one publication. It is pervasive in online culture, above all on social media. For instance, over coffee on the morning after the epic Barbenheimer Friday, I learned some disturbing facts about Oppenheimer on Twitter. At least one viewer was worried that the film about the man who created the nuclear bomb did not include any Japanese characters. Indeed, it did not even directly portray his invention’s horrific consequences. Surely this aesthetic choice was meant to minimize his actions by rendering his victims invisible. (An article in New York magazine drew attention to the same absence.) I also learned that the area surrounding Los Alamos was actually cleared of Indigenous and Hispanic residents, another bit of history that is effectively erased by the film.

[John Hendrickson: Oppenheimer nightmares? You’re not alone.]

Let’s imagine, though, that those complaints had been anticipated and addressed. Let’s imagine an entire subplot of a family going about their business in Hiroshima. We get to know and like them, to relate to them as our fellow human beings. Then, shockingly, they are incinerated by a nuclear blast. One can already hear the complaints. If the family were portrayed as too morally upstanding, it would be a dehumanizing portrayal that idealizes them as perfect victims. If they had moral flaws, the film would be subtly suggesting that they deserved their fate. And either way, the film would be attacked for offering up their suffering as a spectacle for our enjoyment. The same would go for the displaced population of Los Alamos—by portraying them as passive victims with no agency, critics would surely complain, the film would be reinscribing white authority.

Obviously leftists do not have to be as paranoid in their quest for messages supportive of the status quo as Christians playing their records backwards in the hopes of finding satanic content. And of course we are a long way from having anything like the real-world thought police of Stalinism. During that dark era of Soviet history, writers and artists were expected to subscribe to the standards of socialist realism—which, instead of portraying the sordid and brutal reality of the present, anticipated the future reality of socialism by showing heroic workers building a utopian society. Those who fell short of those ideological expectations could expect a personal phone call from Comrade Stalin, if not worse. By contrast, it seems relatively harmless to hope that films and TV shows might reflect one’s own politics and to lament when they fail to do so. Yet the demand is so open-ended that it is impossible to imagine an artwork meeting its largely unstated and unarticulated standards—a sign that something has gone wrong here.

To be clear, I don’t want to defend Oppenheimer in any way. I have not actually seen the film. Nothing anyone is saying is necessarily wrong; it’s just not interesting. Like most film and TV viewers, I read reviews because I want to decide whether or not to see a given movie or show, or else to think it through from a fresh perspective. For example, I note that Oppenheimer is very long—how is the pacing? Does it maintain a clear focus throughout, or does it indulge the common vice of biopics by trying to cram too much in? The type of critical literature that concerns me does not address such basic aesthetic questions, or does so only incidentally.

Even more insidiously, though, the logical endpoint of such narrow standards would be artwork that is straightforward political propaganda. We’ve seen how badly that turned out for the evangelicals (and, indeed, for the Stalinists). Even if we are unlikely to face the scourge of a Leninist equivalent to VeggieTales, however, this style of criticism infantilizes its audience members by assuming they are essentially ideology-processing machines—unlike the wise commentator, who somehow manages to see through the deception.

Political problems cannot be solved on the aesthetic level. And it’s much more likely that people are consuming politics as a kind of aesthetic performance or as a way of expressing aesthetic preferences than that they are somehow reading their politics off Succession, for example (“Welp, I guess rich people are good now. Better vote Republican!”). Just as the reduction of art to political propaganda leads to bad art, the aestheticization of politics leads to bad, irresponsible politics. That’s because aesthetics and politics are not the same thing. They are not totally unrelated, obviously, but they are also and even primarily different. A political message can be part of an aesthetic effect, just as a political movement can benefit from an aesthetic appeal. But we get nowhere if we confuse or collapse these categories.

This story was adapted from a post on Adam Kotsko's blog, An Und Für Sich.

AI Won’t Really Kill Us All, Will It?

In recent months, many, many researchers and computer scientists involved in creating artificial intelligence have been warning the world that they’ve created something unbelievably dangerous. Something that might eventually lead humanity to extinction. Paul Christiano, who worked at OpenAI, put it this way: “If, God forbid, they were trying to kill us, they would definitely kill us.” Such warnings can sound bombastic and overblown—but then again, they’re often coming from the people who understand this technology best.

In this episode of Radio Atlantic, host Hanna Rosin talks to The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel about how seriously we should take these warnings. Should we think of these AI doomers as street preachers? Or are they canny Silicon Valley marketers trying to emphasize the power of what they’ve built?

In Europe, there is already a broad conversation about limiting AI surveillance technology and inserting pauses before approving commercial uses. In the U.S., coalitions of researchers and legislators have called for a “pause,” without any specifics. Meanwhile, with all this talk of killer robots, humanity may be overlooking the more immediate dangers posed by AI. We talk about where things stand and how to orient ourselves to the coming dangers.

The following transcript has been edited for clarity.

Hanna Rosin: I remember when I was a little kid being alone in my room one night watching this movie called The Day After. It was about nuclear war, and for some absurd reason, it was airing on regular network TV.

The Day After:

Denise: It smells so bad down here. I can’t even breathe!

Denise’s mom: Get ahold of yourself, Denise.

Rosin: I particularly remember a scene where a character named Denise—my best friend’s name was Denise—runs panicked out of her family’s nuclear-fallout shelter.

The Day After:

Denise: Let go of me. I can’t see!

Mom: You can’t go! Don’t go up there!

Brother: Wait a minute!

Rosin: It was definitely, you know, “extra.” Also, to teenage me, genuinely terrifying. It was a very particular blend of scary ridiculousness I hadn’t experienced since—until a couple of weeks ago, when someone sent me a link to this YouTube video with Paul Christiano, who is an artificial intelligence researcher.

Paul Christiano: The most likely way we die is not that AI comes out of the blue and kills us, but involves that we’ve deployed AI everywhere. And if, God forbid, they were trying to kill us, they would definitely kill us.

Rosin: Christiano was talking on this podcast called Bankless. And then I started to notice other major AI researchers saying similar things:

Norah O’Donnell on CBS News: More than 1,300 tech scientists, leaders, researchers, and others are now asking for a pause.

Bret Baier on Fox News: Top story right out of a science-fiction movie.

Rodolfo Ocampo on 7NEWS Australia: Now it’s permeating the cognitive space. Before, it was more the mechanical space.

Michael Usher on 7NEWS Australia: There needs to be at least a six-month stop on the training of these systems.

Fox News: Contemporary AI systems are now becoming human-competitive.

Yoshua Bengio talking with Tom Bilyeu: We have to get our act together.

Eliezer Yudkowsky on the Bankless podcast: We’re hearing the last winds begin to blow, the fabric of reality start to fray.

Rosin: And I’m thinking, Is this another campy Denise moment? Am I terrified? Is it funny? I can’t really tell, but I do suspect that the very “doomiest” stuff at least is a distraction. There are likely some actual dangers with AI that are less flashy but maybe equally life-altering.

So today we’re talking to The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel, who’ve been researching and tracking AI for some time.

___

Rosin: Charlie, Adrienne—when these experts are saying, “Worry about the extinction of humanity,” what are they actually talking about?

Adrienne LaFrance: Let’s game out the existential doom, for sure. [Laughter.]

Rosin: Thanks!

LaFrance: When people warn about the extinction of humanity at the hands of AI, that’s literally what they mean—that all humans will be killed by the machines. It sounds very sci-fi. But the nature of the threat is that you imagine a world where more and more we rely on artificial intelligence to complete tasks or make judgments that previously were reserved for humans. Obviously, humans are flawed. The fear assumes a moment at which AI’s cognitive abilities eclipse those of our species—and so all of a sudden, AI is really in charge of the biggest and most consequential decisions that humans make. You can imagine they’re making decisions in wartime about when to deploy nuclear weapons—and you could very easily imagine how that could go sideways.

Rosin: Wait; but I can’t very easily imagine how that would go sideways. First of all, wouldn’t a human put in many checks before you would give access to a machine?

LaFrance: Well, one would hope. But one example would be that you give the AI the imperative to “Win this war, no matter what.” And maybe you’re feeding in other conditions that say “We don’t want mass civilian casualties.” But ultimately, this is what people refer to as an “alignment problem”—you give the machine a goal, and it will do whatever it takes to reach that goal. And that includes maneuvers that humans can’t anticipate, or that go against human ethics.

Charlie Warzel: A meme of this that has been around for a long time is called “the paper clip–maximizer problem.” You tell a sentient artificial intelligence, “We want you to build as many paper clips as fast as possible, and in the most efficient way.” And the AI goes through all the computations and says, “Well, really, the thing that is stopping us from building as many paper clips as we can is the fact that humans have other goals. So we better just eradicate humans.”
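The misspecified objective Warzel describes can be sketched as a toy optimizer: told only to maximize paper clips, a naive search ranks plans by that one number and is blind to any value its designers forgot to encode. (A hypothetical illustration—the plan names and numbers are invented, not anyone’s actual system.)

```python
# Toy illustration of a misspecified objective: the optimizer is told only
# to maximize paper-clip output, so it scores plans by that single number.
# Any value we forgot to encode (here, "harm") is invisible to it.
plans = [
    {"name": "run the factory", "clips": 1_000, "harm": 0},
    {"name": "melt down the cars in the parking lot", "clips": 5_000, "harm": 80},
    {"name": "convert everything, humans included", "clips": 9_999, "harm": 100},
]

def naive_choice(plans):
    # Objective: paper clips, nothing else.
    return max(plans, key=lambda p: p["clips"])

def constrained_choice(plans, max_harm=0):
    # The "alignment" fix: a side constraint the designers must remember
    # to state explicitly, for every case they can anticipate.
    allowed = [p for p in plans if p["harm"] <= max_harm]
    return max(allowed, key=lambda p: p["clips"])

print(naive_choice(plans)["name"])        # picks the catastrophic plan
print(constrained_choice(plans)["name"])  # picks the benign plan
```

The constrained version is only as safe as the list of constraints its designers thought to write down—which is the catch the guests keep circling back to.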

Rosin: Why can’t you just program in: “Machine, you are allowed to do anything to make those paper clips, short of killing everyone.”

Warzel: Well, let me lay out a classic AI doomer’s scenario that may be easier to imagine. Let’s say five, 10 years down the line, a supercomputer is able to process that much more information—say, a hundred times more powerful than whatever we have now. It knows how to build iterations of itself, so it builds a model. That model has all that intelligence—plus maybe a multiplier there of a little bit.

And that one builds a model, and another one builds a model. It just keeps building these models—and it gets to a point where it’s replicated enough that it’s sort of like a gene that is mutating.
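The runaway loop Warzel sketches is, at bottom, compounding growth: if each self-built generation improves on its parent by even a modest factor, capability grows exponentially. (A toy back-of-the-envelope—the 10 percent gain per generation is an arbitrary assumption, not a measurement of any real system.)

```python
# Toy compounding: assume each self-built generation is a small
# constant multiple better than its parent.
capability = 1.0   # arbitrary units for today's system
multiplier = 1.1   # assumed 10% gain per generation

for generation in range(50):
    capability *= multiplier

# After 50 generations the system is roughly 117x its starting capability.
print(round(capability))
```

The point of the sketch is only that small per-generation gains, iterated, stop looking small—the same arithmetic as compound interest.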

Rosin: So this is the alignment thing. It’s suddenly like: We’re going along, we have the same objectives. And all of a sudden, the AI takes a sharp left turn and realizes that actually humans are the problem.

Warzel: Right. It can pose as a human. It can figure out a way, through all of its knowledge of computer code, to socially engineer someone by impersonating a person—or to actually hack a bank and steal funds. Then it uses the money it’s gotten to pay a state actor or a terrorist cell to release a bioweapon, and—

Rosin: And, just to interject before you play it out completely, there’s no intention here. Right? It’s not necessarily intending to gain power the way, say, an autocrat would, or intending to rule the world? It’s simply achieving an objective that it began with, in the most effective way possible.

Warzel: Right. So this speaks to the idea that once you build a machine that is so powerful and you give it an imperative, there may not be enough alignment parameters that a human can set to keep it in check.

Rosin: I followed your scenario completely. That was very helpful, except you don’t sound at all worried.

Warzel: I don’t know if I buy any of it.

Rosin: You don’t even sound somber!

LaFrance: [Laughter.] Why don’t you like humans, Charlie?

Warzel: I’m anti-human. This is my hot take. [Laughter.]

Rosin: But that was a real question, Charlie. Why don’t you take this seriously? Is it because you think the steps haven’t been worked out? Or is it because you think there are a lot of checks in place, like there are with human cloning? What is the real reason why you, Charlie, can intelligently lay out this scenario but not actually take it seriously?

Warzel: Well, bear with me here. Are you familiar with the South Park underpants gnomes?

South Park Gnomes (singing): Gotta go to work. Work, work, work. Search for underpants. Hey!

Warzel: For those blissfully unaware, the underpants gnomes are from South Park. But what’s important is that they have a business model that is notoriously vague.

South Park Gnome: “Collecting underpants is just Phase 1!”

Warzel: Phase 1 is to collect underpants. Phase 2?

South Park Gnome 1: Hey, what is Phase 2?

South Park Gnome 2: Phase 1, we collect underpants.

Gnome 1: Yah, yah, yah. But what is Phase 2?

Warzel: It’s a question mark.

Gnome 2: Well, Phase 3 is profit! Get it?

Warzel: And that’s become a cultural signifier over the last decade or so for a really vague business plan. When you listen to a lot of the AI doomers, you have somebody who is obviously an expert, who’s obviously incredibly smart. And they’re saying: Step 1, build an incredibly powerful artificial-intelligence system that maybe gets close to, or actually surpasses, human intelligence.

Step 2: question mark. Step 3: existential doom.

I just have never really heard a very good walkthrough of Step 2, or 2 and a half.

No one is saying that we have reached the point of no return.

LaFrance: Wait. But Charlie, I think you did give us Step 2. Because Step 2 is the AI hacks a bank and pays a terrorist, and the terrorists unleash a virus that kills humanity. I would also say that I think what people who are most worried would argue is that there isn’t time for a checklist. And that’s the nature of their worries.

And there are some who’ve said we are past the point of no return.

Warzel: And I get that. I’ll just say my feeling on this is that the image of the Terminator 2: Judgment Day–type robots rolling over human skulls feels like a distraction from the bigger problems, because—

Rosin: Wait; you said it’s a distraction from bigger problems. And this is what I want to know, so I’m not distracted by the shiny doom movie. What are actually the things that we need to worry about, or pay attention to?

LaFrance: The possibility of wiping out entire job categories and industries, though that is a phenomenon we’ve experienced throughout technological history. That’s a real threat to people’s real lives and ability to buy groceries.

And I have real questions about what it means for the arts and our sense of what art is and whose work is valued, specifically with regard to artists and writers. But, Charlie, what are yours?

Warzel: Well, I think before we talk about exterminating the human race, I’m worried about financial institutions adopting these types of automated generative AI machines. And if you have an investment firm that is using a powerful piece of technology, and you wanna optimize for a very specific stock or a very specific commodity, then you get the possibility of something like that paper-clip problem. With: “Well, what’s the best way to drive the price of corn up?”

Rosin: Cause a famine.

Warzel: Right. Or start conflict in a certain region. Now, again—there’s still a little bit of that underpants gnome–ish quality to this. But I think a good analog for this is from the social-media era. Back when Mark Zuckerberg was making Facebook in his Harvard dorm room, it would have been silly to imagine it could lead to ethnic cleansing or genocide in a place like Myanmar.

But ultimately, when you create powerful networks, you connect people. There’s all sorts of unintended consequences.

Rosin: So given the speed and suddenness with which these bad things can happen, you can understand why lots of intelligent people are asking for a pause. Do you think that’s even possible? Is that the right thing to do?

LaFrance: No. I think it’s unrealistic, certainly, to expect tech companies to slow themselves down. It’s intensely competitive right now. I’m not convinced that regulation right now would be the right move, either. We’d have to know exactly what that looks like.

We saw it with social platforms, when they called for Congress to regulate them and then at the same time they’re lobbying very hard not to be regulated.

Rosin: I see. So what you’re saying is that it’s a cynical public play, and what they’re looking for are sort of toothless regulations.

LaFrance: I think that is unquestionably one dynamic at play. Also, to be fair, I think that many of the people who are building this technology are indeed very thoughtful, and hopefully reflecting with some degree of seriousness about what they’re unleashing.

So I don’t wanna suggest that they’re all just doing it for political reasons. But there certainly is that element.

When it comes to how we slow it down, I think it has to be individual people deciding for themselves how they think this world should be. I’ve had conversations with people who are not journalists, who are not in tech, but who are unbridled in their enthusiasm for what this will all mean. Someone recently mentioned to me how excited he was that AI could mean that they could just surveil their workers all the time and that they could tell exactly what workers were doing and what websites they were visiting. At the end of the day, they could get a report that shows how productive they were. To me, that’s an example of something that could very quickly be seen among some people as culturally acceptable.

We really have to push back against that in terms of civil liberties. To me, this is much more threatening than the existential doom, in the sense that these are the sorts of decisions that are being made right now by people who have genuine enthusiasm for changing the world in ways that seem small, but are actually big.

I think it is crucially important that we act right now, because norms will be hardened before most people have a chance to grasp what’s happening.

Rosin: I guess I just don’t know who “we” is in that sentence. And it makes me feel a little vulnerable to think that every individual and their family and their friends has to decide for themselves—as opposed to, say, the European model, where you just put some basic regulations in place. The EU already passed a resolution to ban certain forms of public surveillance like facial recognition, and to review AI systems before they go fully commercial.

Warzel: Even if you do put regulations on things, it doesn’t stop somebody from building something on their own. It wouldn’t be as powerful as the multibillion-dollar supercomputer from Open AI, but those models will be out in the world. Those models may not have some of the restrictions that some of these companies, who are trying to build them thoughtfully, are going to have.

Maybe you’ll have people like we have in the software industry creating AI malware and selling it to the highest bidder, whether that’s a foreign government or a terrorist group, or a state-sponsored cell of some kind.

And there is also the idea of a geopolitical race, which is part of all of this. Behind closed doors they are talking about an AI race with China.

So, there are all these very, very, thorny problems.

You have all of that—and then you have the cultural issues. Those are the ones that I think we will see and feel really acutely before we feel any of this other stuff.

Rosin: What is an example of a cultural issue?

Warzel: You have all of these systems that are optimized for scale with a real cold, hard machine logic.

And I think that artificial intelligence is sort of the truest sort of almost-final realization of scale. It is a scale machine; like it is human intelligence at a scale that humans can’t have. That’s really worrisome to me.

Like, hey, do you like Succession? Well, AI’s gonna generate 150 seasons of Succession for you to watch. It’s like: I don’t wanna necessarily live in that world, because it’s not made by people. It’s a world without limits.

The whole idea of being alive and being a human is encountering and embracing limitations of all kinds. Including our own knowledge, and our ability to do certain things. If we insert artificial intelligence, in the most literal sense it really is sort of like strip-mining the humanity out of a lot of life. And that is really worrisome.

Rosin: I mean, Charlie, that sounds even worse than the doom scenarios I started with. Because how am I—say, as one writer or Person X, who as Adrienne started out saying, is trying to pay for their groceries—supposed to take a stance against this enormous global force?

LaFrance: We have to assert that our purpose on the planet is not just an efficient world.

Rosin: Yeah.

LaFrance: We have to insist on that.

Rosin: Charlie, do you have any tiny bits of optimism for us?

Warzel: I am probably just more of a realist. You can look at the way that we have coexisted with all kinds of technologies as a story where the disruption comes in, things never feel the same as they were, and there’s usually a chaotic period of upheaval—and then you sort of learn to adapt. I’m optimistic that humanity is not going to end. I think that is the best I can do here.

Rosin: I hear you struggling to be definitive, but I feel like what you are getting at is that you have faith in our history of adaptation. We have learned to live with really cataclysmic and shattering technologies many times in the past. And you just have faith that we can learn to live with this one.

Warzel: Yeah.

Rosin: On that sort of tiny bit of optimism, Charlie Warzel and Adrienne LaFrance: Thanks for helping me feel safe enough to crawl out of my bunker, at least for now.

AI Won’t Really Kill Us All, Will It?

The Atlantic

www.theatlantic.com › politics › archive › 2023 › 07 › ai-wont-really-kill-us-all-will-it › 674648

For months, more than a thousand researchers and technology experts involved in creating artificial intelligence have been warning us that they’ve created something that may be dangerous, something that might eventually lead to human extinction. In this Radio Atlantic episode, The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel talk about how seriously we should take these warnings, and what else we might consider worrying about.

Listen to the conversation here:

Subscribe here: Apple Podcasts | Spotify | Stitcher | Google Podcasts | Pocket Casts

The following transcript has been edited for clarity.

Hanna Rosin: I remember when I was a little kid being alone in my room one night watching this movie called The Day After. It was about nuclear war, and for some absurd reason, it was airing on regular network TV.

The Day After:

Denise: It smells so bad down here. I can’t even breathe!

Denise’s mom: Get ahold of yourself, Denise.

Rosin: I particularly remember a scene where a character named Denise—my best friend’s name was Denise—runs panicked out of her family’s nuclear-fallout shelter.

The Day After:

Denise: Let go of me. I can’t see!

Mom: You can’t go! Don’t go up there!

Brother: Wait a minute!

Rosin: It was definitely, you know, “extra.” Also, to teenage me, genuinely terrifying. It was a very particular blend of scary ridiculousness I hadn’t experienced since—until a couple of weeks ago, when someone sent me a link to this YouTube video with Paul Christiano, who is an artificial intelligence researcher.

Paul Christiano: The most likely way we die is not that AI comes out of the blue and kills us, but involves that we’ve deployed AI everywhere. And if, God forbid, they were trying to kill us, they would definitely kill us.

Rosin: Christiano was talking on this podcast called Bankless. And then I started to notice other major AI researchers saying similar things:

Norah O’Donnell on CBS News: More than 1,300 tech scientists, leaders, researchers, and others are now asking for a pause.

Bret Baier on Fox News: Top story right out of a science-fiction movie.

Rodolfo Ocampo on 7NEWS Australia: Now it’s permeating the cognitive space. Before, it was more the mechanical space.

Michael Usher on 7NEWS Australia: There needs to be at least a six-month stop on the training of these systems.

Fox News: Contemporary AI systems are now becoming human-competitive.

Yoshua Bengio talking with Tom Bilyeu: We have to get our act together.

Eliezer Yudkowsky on the Bankless podcast: We’re hearing the last winds begin to blow, the fabric of reality start to fray.

Rosin: And I’m thinking, Is this another campy Denise moment? Am I terrified? Is it funny? I can’t really tell, but I do suspect that the very “doomiest” stuff at least is a distraction. There are likely some actual dangers with AI that are less flashy but maybe equally life-altering.

So today we’re talking to The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel, who’ve been researching and tracking AI for some time.

___

Rosin: Charlie, Adrienne—when these experts are saying, “Worry about the extinction of humanity,” what are they actually talking about?

Adrienne LaFrance: Let’s game out the existential doom, for sure. [Laughter.]

Rosin: Thanks!

LaFrance: When people warn about the extinction of humanity at the hands of AI, that’s literally what they mean—that all humans will be killed by the machines. It sounds very sci-fi. But the nature of the threat is that you imagine a world where more and more we rely on artificial intelligence to complete tasks or make judgments that previously were reserved for humans. Obviously, humans are flawed. The fear assumes a moment at which AI’s cognitive abilities eclipse our own—and so all of a sudden, AI is really in charge of the biggest and most consequential decisions that humans make. You can imagine they’re making decisions in wartime about when to deploy nuclear weapons—and you could very easily imagine how that could go sideways.

Rosin: Wait; but I can’t very easily imagine how that would go sideways. First of all, wouldn’t a human put in many checks before you would give access to a machine?

LaFrance: Well, one would hope. But one example would be that you give the AI the imperative to “Win this war, no matter what.” And maybe you’re feeding in other conditions that say “We don’t want mass civilian casualties.” But ultimately, this is what people refer to as an “alignment problem”—you give the machine a goal, and it will do whatever it takes to reach that goal. And that includes maneuvers that humans can’t anticipate, or that go against human ethics.

Charlie Warzel: A sort of meme of this that has been around for a long time is called “the paper clip–maximizer problem.” You tell a sentient artificial intelligence, “We want you to build as many paper clips as fast as possible, and in the most efficient way.” And the AI goes through all the computations and says, “Well, really, the thing that is stopping us from building as many paper clips as we can is the fact that humans have other goals. So we better just eradicate humans.”
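The paper-clip problem is, at bottom, a claim about naive optimization: a maximizer takes whichever allowed action scores highest on its objective, and a list of forbidden side effects helps only if it enumerates every bad outcome in advance. This toy sketch (all action names, scores, and side effects are invented for illustration) shows that loophole-chasing behavior:

```python
# A toy illustration of the alignment problem: an optimizer that
# maximizes a stated objective will pick actions with bad side
# effects unless every one of them is explicitly forbidden.

actions = {
    # action: (paper clips produced, side effect)
    "run_factory_normally": (100, "none"),
    "melt_down_cars":       (500, "destroys property"),
    "divert_power_grid":    (900, "harms humans"),
}

def best_action(forbidden_effects):
    """Pick the highest-scoring action whose side effect isn't forbidden."""
    allowed = {name: (score, effect)
               for name, (score, effect) in actions.items()
               if effect not in forbidden_effects}
    return max(allowed, key=lambda name: allowed[name][0])

# With no constraints, the optimizer chooses the most harmful action:
print(best_action(forbidden_effects=set()))             # divert_power_grid

# Forbidding one side effect just routes it to the next loophole:
print(best_action(forbidden_effects={"harms humans"}))  # melt_down_cars
```

Forbidding “harms humans” doesn’t make this optimizer safe; it just shifts it to the next-worst unforbidden action, which is the alignment worry in miniature.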

Rosin: Why can’t you just program in: “Machine, you are allowed to do anything to make those paper clips, short of killing everyone.”

Warzel: Well, let me lay out a classic AI doomer’s scenario that may be easier to imagine. Let’s say five, 10 years down the line, a supercomputer is able to process that much more information—on the order of a hundred times more powerful than whatever we have now. It knows how to build iterations of itself, so it builds a model. That model has all that intelligence—plus maybe a multiplier there of a little bit.

And that one builds a model, and another one builds a model. It just keeps building these models—and it gets to a point where it’s replicated enough that it’s sort of like a gene that is mutating.
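The build-a-better-successor story is, mathematically, compound growth. A toy loop (the 10 percent per-generation gain and the 100x threshold are arbitrary numbers chosen for illustration) shows how few generations it takes to cross a large capability gap:

```python
# A toy sketch of recursive self-improvement: each generation of the
# system builds a slightly more capable successor, so capability
# compounds like interest. All numbers here are illustrative.

def generations(capability=1.0, multiplier=1.1, limit=100.0):
    """Count how many build-a-better-successor steps it takes for
    capability to pass `limit` (here, a 100x jump over the original)."""
    n = 0
    while capability < limit:
        capability *= multiplier  # each model improves a bit on its parent
        n += 1
    return n

# Even a modest 10% gain per generation crosses 100x in under 50 steps:
print(generations())  # 49
```

At a 10 percent gain per generation the threshold falls in under 50 steps; double the capability each generation and it takes only 7.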

Rosin: So this is the alignment thing. It’s suddenly like: We’re going along, we have the same objectives. And all of a sudden, the AI takes a sharp left turn and realizes that actually humans are the problem.

Warzel: Right. It can hack a bank; it can pose as a human. Through all of its knowledge of computer code, it can figure out a way either to socially engineer by impersonating someone or to actually hack and steal funds from a bank. It gets money, poses as a human being, and basically gets a state actor or a terrorist cell involved by funding it. Then it uses the money it’s gotten to pay the group to release a bioweapon, and—

Rosin: And, just to interject before you play it out completely, there’s no intention here. Right? It’s not necessarily intending to gain power the way, say an autocrat would be, or intending to rule the world? It’s simply achieving an objective that it began with, in the most effective way possible.

Warzel: Right. So this speaks to the idea that once you build a machine that is so powerful and you give it an imperative, there may not be enough alignment parameters that a human can set to keep it in check.

Rosin: I followed your scenario completely. That was very helpful, except you don’t sound at all worried.

Warzel: I don’t know if I buy any of it.

Rosin: You don’t even sound somber!

LaFrance: [Laughter.] Why don’t you like humans, Charlie?

Warzel: I’m anti-human. This is my hot take. [Laughter.]

Rosin: But that was a real question, Charlie. Why don’t you take this seriously? Is it because you think steps haven’t been worked out? Or is it because you think there are a lot of checks in place, like there are with human cloning? What is the real reason why you, Charlie, can intelligently lay out this scenario but not actually take it seriously?

Warzel: Well, bear with me here. Are you familiar with the South Park underpants gnomes?

South Park Gnomes (singing): Gotta go to work. Work, work, work. Search for underpants. Hey!

Warzel: For those blissfully unaware, the underpants gnomes are from South Park. But what’s important is that they have a business model that is notoriously vague.

South Park Gnome: “Collecting underpants is just Phase 1!”

Warzel: Phase 1 is to collect underpants. Phase 2?

South Park Gnome 1: Hey, what is Phase 2?

South Park Gnome 2: Phase 1, we collect underpants.

Gnome 1: Yah, yah, yah. But what is Phase 2?

Warzel: It’s a question mark.

Gnome 2: Well, Phase 3 is profit! Get it?

Warzel: And that’s become a cultural signifier over the last decade or so for a really vague business plan. When you listen to a lot of the AI doomers, you have somebody who is obviously an expert, who’s obviously incredibly smart. And they’re saying: Step 1, build an incredibly powerful artificial-intelligence system that maybe gets close to, or actually surpasses, human intelligence.

Step 2: question mark. Step 3: existential doom.

I just have never really heard a very good walkthrough of Step 2, or 2 and a half.

No one is saying that we have reached the point of no return.

LaFrance: Wait. But Charlie, I think you did give us Step 2. Because Step 2 is the AI hacks a bank and pays a terrorist, and the terrorists unleash a virus that kills humanity. I would also say that I think what people who are most worried would argue is that there isn’t time for a checklist. And that’s the nature of their worries.

And there are some who’ve said we are past the point of no return.

Warzel: And I get that. I’ll just say my feeling on this is that the image of Terminator 2: Judgment Day–style robots rolling over human skulls feels like a distraction from the bigger problems, because—

Rosin: Wait; you said it’s a distraction from bigger problems. And this is what I want to know, so I’m not distracted by the shiny doom movie. What are actually the things that we need to worry about, or pay attention to?

LaFrance: The possibility of wiping out entire job categories and industries, though that is a phenomenon we’ve experienced throughout technological history. That’s a real threat to people’s real lives and ability to buy groceries.

And I have real questions about what it means for the arts and our sense of what art is and whose work is valued, specifically with regard to artists and writers. But, Charlie, what are yours?

Warzel: Well, I think before we talk about exterminating the human race, I’m worried about financial institutions adopting these types of automated generative AI machines. And if you have an investment firm that is using a powerful piece of technology, and you wanna optimize for a very specific stock or a very specific commodity, then you get the possibility of something like that paper-clip problem. With: “Well, what’s the best way to drive the price of corn up?”

Rosin: Cause a famine.

Warzel: Right. Or start conflict in a certain region. Now, again—there’s still a little bit of that underpants gnome–ish quality to this. But I think a good analog for this is from the social-media era. Back when Mark Zuckerberg was making Facebook in his Harvard dorm room, it would have been silly to imagine it could lead to ethnic cleansing or genocide in somewhere like Myanmar.

But ultimately, when you create powerful networks, you connect people. There are all sorts of unintended consequences.

Rosin: So given the speed and suddenness with which these bad things can happen, you can understand why lots of intelligent people are asking for a pause. Do you think that’s even possible? Is that the right thing to do?

LaFrance: No. I think it’s unrealistic, certainly, to expect tech companies to slow themselves down. It’s intensely competitive right now. I’m not convinced that regulation right now would be the right move, either. We’d have to know exactly what that looks like.

We saw it with social platforms, when they called for Congress to regulate them and then at the same time they’re lobbying very hard not to be regulated.

Rosin: I see. So what you’re saying is that it’s a cynical public play, and what they’re looking for are sort of toothless regulations.

LaFrance: I think that is unquestionably one dynamic at play. Also, to be fair, I think that many of the people who are building this technology are indeed very thoughtful, and hopefully reflecting with some degree of seriousness about what they’re unleashing.

So I don’t wanna suggest that they’re all just doing it for political reasons. But there certainly is that element.

When it comes to how we slow it down, I think it has to be individual people deciding for themselves how they think this world should be. I’ve had conversations with people who are not journalists, who are not in tech, but who are unbridled in their enthusiasm for what this will all mean. Someone recently mentioned to me how excited he was that AI could mean he could just surveil his workers all the time, that he could tell exactly what they were doing and what websites they were visiting, and that at the end of the day he could get a report showing how productive they were. To me, that’s an example of something that could very quickly come to be seen by some people as culturally acceptable.

We really have to push back against that in terms of civil liberties. To me, this is much more threatening than the existential doom, in the sense that these are the sorts of decisions that are being made right now by people who have genuine enthusiasm for changing the world in ways that seem small, but are actually big.

I think it is crucially important that we act right now, because norms will be hardened before most people have a chance to grasp what’s happening.

Rosin: I guess I just don’t know who “we” is in that sentence. And it makes me feel a little vulnerable to think that every individual and their family and their friends has to decide for themselves—as opposed to, say, the European model, where you just put some basic regulations in place. The EU already passed a resolution to ban certain forms of public surveillance like facial recognition, and to review AI systems before they go fully commercial.

Warzel: Even if you do put regulations on things, it doesn’t stop somebody from building something on their own. It wouldn’t be as powerful as the multibillion-dollar supercomputer from OpenAI, but those models will be out in the world. And they may not have the restrictions that the companies trying to build them thoughtfully are going to have.

Maybe you’ll have people like we have in the software industry creating AI malware and selling it to the highest bidder, whether that’s a foreign government or a terrorist group, or a state-sponsored cell of some kind.

And there is also the idea of a geopolitical race, which is part of all of this. Behind closed doors they are talking about an AI race with China.

So, there are all these very, very thorny problems.

You have all of that—and then you have the cultural issues. Those are the ones that I think we will see and feel really acutely before we feel any of this other stuff.

Rosin: What is an example of a cultural issue?

Warzel: You have all of these systems that are optimized for scale with a real cold, hard machine logic.

And I think that artificial intelligence is the truest, almost-final realization of scale. It is a scale machine: human intelligence at a scale that humans can’t have. That’s really worrisome to me.

Like, hey, do you like Succession? Well, AI’s gonna generate 150 seasons of Succession for you to watch. It’s like: I don’t wanna necessarily live in that world, because it’s not made by people. It’s a world without limits.

The whole idea of being alive and being a human is encountering and embracing limitations of all kinds. Including our own knowledge, and our ability to do certain things. If we insert artificial intelligence, in the most literal sense it really is sort of like strip-mining the humanity out of a lot of life. And that is really worrisome.

Rosin: I mean, Charlie, that sounds even worse than the doom scenarios I started with. Because how am I—say, as one writer, or Person X, who, as Adrienne said at the start, is trying to pay for their groceries—supposed to take a stance against this enormous global force?

LaFrance: We have to assert that our purpose on the planet is not just an efficient world.

Rosin: Yeah.

LaFrance: We have to insist on that.

Rosin: Charlie, do you have any tiny bits of optimism for us?

Warzel: I am probably just more of a realist. You can look at the way that we have coexisted with all kinds of technologies as a story where the disruption comes in, things never feel the same as they were, and there’s usually a chaotic period of upheaval—and then you sort of learn to adapt. I’m optimistic that humanity is not going to end. I think that is the best I can do here.

Rosin: I hear you struggling to be definitive, but I feel like what you are getting at is that you have faith in our history of adaptation. We have learned to live with really cataclysmic and shattering technologies many times in the past. And you just have faith that we can learn to live with this one.

Warzel: Yeah.

Rosin: On that sort of tiny bit of optimism, Charlie Warzel and Adrienne LaFrance: Thanks for helping me feel safe enough to crawl out of my bunker, at least for now.