AI Is Like … Nuclear Weapons?

The Atlantic

The concern, as Edward Teller saw it, was quite literally the end of the world. He had run the calculations, and there was a real possibility, he told his Manhattan Project colleagues in 1942, that when they detonated the world’s first nuclear bomb, the blast would set off a chain reaction. The atmosphere would ignite. All life on Earth would be incinerated. Some of Teller’s colleagues dismissed the idea, but others didn’t. If there were even a slight possibility of atmospheric ignition, said Arthur Compton, the director of a Manhattan Project lab in Chicago, all work on the bomb should halt. “Better to accept the slavery of the Nazi,” he later wrote, “than to run a chance of drawing the final curtain on mankind.”

I offer this story as an analogy for—or perhaps a contrast to—our present AI moment. In just a few months, the novelty of ChatGPT has given way to utter mania. Suddenly, AI is everywhere. Is this the beginning of a new misinformation crisis? A new intellectual-property crisis? The end of the college essay? Of white-collar work? Some worry, as Compton did 80 years ago, for the very future of humanity, and have advocated pausing or slowing down AI development; others say it’s already too late.

In the face of such excitement and uncertainty and fear, the best one can do is try to find a good analogy—some way to make this unfamiliar new technology a little more familiar. AI is fire. AI is steroids. AI is an alien toddler. (When I asked for an analogy of its own, GPT-4 suggested Pandora’s box—not terribly reassuring.) Some of these analogies are, to put it mildly, better than others. A few of them are even useful.

Given the past three years, it’s no wonder that pandemic-related analogies abound. AI development has been compared to gain-of-function research, for example. Proponents of the latter work, in which potentially deadly viruses are enhanced in a controlled laboratory setting, say it’s essential to stopping the next pandemic. Opponents say it’s less likely to prevent a catastrophe than to cause one—whether via an accidental leak or an act of bioterrorism.

At a literal level, this analogy works pretty well. AI development really is a kind of gain-of-function research—except algorithms, not viruses, are the things gaining the functions. Also, both hold out the promise of near-term benefits: This experiment could help to prevent the next pandemic; this AI could help to cure your cancer. And both come with potential, world-upending risks: This experiment could help to cause a pandemic many times deadlier than the one we just endured; this AI could wipe out humanity entirely. Putting a number to the probabilities for any of these outcomes, whether good or bad, is no simple thing. Serious people disagree vehemently about their likelihood.

[Read: Bird flu leaves the world with an existential choice]

What the gain-of-function analogy fails to capture are the motivations and incentives driving AI development. Experimental virology is an academic undertaking, mostly carried out at university laboratories by university professors, with the goal, at least nominally, of protecting people. It is not a lucrative enterprise. Neither the scientists nor the institutions they represent are in it to get rich. The same cannot be said when it comes to AI. Two private companies with multibillion-dollar profits, Microsoft (partnered with OpenAI) and Google (partnered with Anthropic), are locked in a battle for AI supremacy. Even the smaller players in the industry are flooded with cash. Earlier this year, four top AI researchers at Google quit to start their own company, though they weren’t exactly sure what it would do; about a week later, it had a $100 million valuation. In this respect, the better analogy is …

Social media. Two decades ago, there was fresh money—lots of it—to be made in tech, and the way to make it was not by slowing down or waiting around or dithering about such trifles as the fate of democracy. Private companies moved fast at the risk of breaking human civilization, and to hell with the haters. Regulations did not keep pace. All of the same could be said about today’s AI.

[Read: Money will kill ChatGPT’s magic]

The trouble with the social-media comparison is that it undersells the sheer destructive potential of AI. As damaging as social media has been, it does not present an existential threat. Nor does it appear to have conferred on any country a meaningful strategic advantage over foreign adversaries, worries about TikTok notwithstanding. The same cannot be said of AI. In that respect, the better analogy is …

Nuclear weapons. This comparison captures both the gravity of the threat and where that threat is likely to originate. Few individuals could muster the colossal resources and technical expertise needed to construct and deploy a nuclear bomb. Thankfully, nukes are the domain of nation-states. AI research has similarly high barriers to entry and similar global geopolitical dynamics. The AI arms race between the U.S. and China is under way, and tech executives are already invoking it as a justification for moving as quickly as possible. As was the case for nuclear-weapons research, citing international competition has been a way of dismissing pleas to pump the brakes.

But nuclear-weapons technology is much narrower in scope than AI. The utility of nukes is purely military, and governments, not companies or individuals, build and wield them. That makes their dangers less diffuse than those that come from AI research. In that respect, the better analogy is …

Electricity. A saw is for cutting, a pen for writing, a hammer for pounding nails. These things are tools; each has a specific function. Electricity does not. It’s less a tool than a force, more a coefficient than a constant, pervading virtually all aspects of life. AI is like this too—or it could be.

[Read: What have humans just unleashed?]

Except that electricity never (really) threatened to kill us all. AI may be diffuse, but it’s also menacing. Not even the nuclear analogy quite captures the nature of the threat. Forget the Cold War–era fears of American and Soviet leaders with their fingers hovering above little red buttons. The biggest threat of superintelligent AI is not that our adversaries will use it against us. It’s the superintelligent AI itself. In that respect, the better analogy is …

Teller’s fear of atmospheric ignition. Once you detonate the bomb—once you build the superintelligent AI—there is no going back. Either the atmosphere ignites or it doesn’t. No do-overs. In the end, Teller’s worry turned out to be unfounded. Further calculations demonstrated that the atmosphere would not ignite—though two Japanese cities eventually did—and the Manhattan Project moved forward.

No further calculations will rule out the possibility of AI apocalypse. The Teller analogy, like all the others, only goes so far. To some extent, this is just the nature of analogies: They are illuminating but incomplete. But it also speaks to the sweeping nature of AI. It encompasses elements of gain-of-function research, social media, and nuclear weapons. It is like all of them—and, in that way, like none of them.