Hiding Behind the AI Apocalypse

The Atlantic

https://www.theatlantic.com/newsletters/archive/2023/05/altman-hearing-ai-existential-risk/674096/

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Yesterday, the OpenAI CEO Sam Altman testified before a Senate judiciary subcommittee about the “significant harm” that ChatGPT and similar generative-AI tools could pose to the world. When I asked Damon Beres, The Atlantic’s technology editor, for his read on the hearing, he noted that Altman’s emphasis on the broader existential risks of AI might conveniently elide some of the more quotidian problems of this new technology. I called Damon today to talk about that, and to see what else has been on his mind as he follows this story.

First, here are four new stories from The Atlantic:

What makes the Durham report a sinister flop

Has North Carolina found an abortion compromise?

TV isn’t about to get worse. It already is.

The billionaires who are threatening democracy

A Missed Opportunity

Isabel Fattal: Can you talk a bit more about Altman’s emphasis on the existential risks of AI, and what that focus might leave out?

Damon Beres: Talking about artificial intelligence in terms of vague existential risks actually allows Altman, and others weighing in on the technology’s future, to dodge some of the everyday impacts that we’re already seeing from it. For those developing these tools, it’s a clever way of putting the ball in the court of lawmakers and essentially saying, This stuff is so big and abstract, and I’m fully on board with the idea that it should be regulated, and I want to be your partner in all this, but this is something that you have to wrestle with.

Isabel: What are some examples of these everyday impacts that get lost?

Damon: There was not really any talk at the hearing about the impacts of AI on labor. There were broad allusions to the idea of job loss. But there are so many specific ways that jobs are already threatened by automation today. Amazon is pushing for greater automation on its warehouse floors. The Writers Guild of America strike has brought the issue of AI-generated writing in entertainment to the forefront, but the strike didn’t come up in specific terms.

Additionally, we’ve seen AI deployed in a broad range of settings that deeply affect how people live their lives every day. Four years ago, there was a study on the algorithms that determined whether patients at Brigham and Women’s Hospital in Boston should receive extra proactive medical care. And the way this artificial-intelligence system was set up ended up privileging relatively healthy white patients over sicker Black patients. That’s an example of artificial intelligence being deployed in a setting that is not necessarily getting meaningful governmental oversight but is fundamentally having a significant impact on human lives.

Of course, Sam Altman and OpenAI have their own corner of the world that they operate in. ChatGPT isn’t the same thing as a hospital program. But given the opportunity for lawmakers to think seriously about the impacts of artificial intelligence and what regulation could look like, it seems a little bit like a missed opportunity—we’ve known about these problems for a long time.

Isabel: Where do you think lawmakers should begin the conversation about AI regulation?

Damon: The EU is working on an AI Act that would essentially regulate the development and deployment of new AI systems. And China has drafted policies that would enforce a certain set of rules over generative-AI products similar to ChatGPT, and also limit the kind of content these AI tools can create. So there are already a couple of precedents out there. There have also been a number of interesting proposals put forth here in the U.S. by AI experts who’ve been paying attention to this for quite a long time.

It’s encouraging that we’re having these conversations, but on the other hand, the horse has left the barn in a very real way. ChatGPT is already out there. We’re already facing the potential of job disruption. We’re already facing the potential for the internet to be flooded by spammy content and disinformation to a greater extent than maybe anyone would have thought possible even a couple of years ago.

And some of these large language models are already out of the hands of the technology companies themselves, let alone the government. For example, in March, an AI language model created by Meta, Facebook’s parent company, leaked. This was supposed to be a tool that would be available to AI researchers. It ended up pirated, essentially, and released on 4chan. Anyone who knows where to look can access and download this technology. It’s not ready-made like ChatGPT, but it can be developed and repurposed in that way. And once that’s out on the internet, there’s no putting the genie back in the bottle.

There’s also still a need for oversight of the existing AI applications used in health care, law enforcement, surveillance, real estate—those sorts of things.

Isabel: With those existing applications of AI that have been around for years, it seems like the horse is really far from the barn at this point.

Damon: I think that’s right. We are interacting with what would be defined as artificial intelligence countless times throughout the day. You might wake up and talk to your Alexa device. You might see algorithmically sorted content when you look at your phone and read Facebook or even Apple News over breakfast. There are instances where you might be in the hospital and, unbeknownst to you, the type of care that you’re getting could be influenced by how your data are processed by an algorithm. AI is a gigantic category of technology that has been in development for decades upon decades at this point. Some of the most consequential impacts are those outside of tools like ChatGPT.

Related:

Before AI takes over, make plans to give everyone money.

A chatbot is secretly doing my job.

Today’s News

President Joe Biden and Speaker Kevin McCarthy stated their intention to reach a deal on the federal government’s debt ceiling, which could occur as early as Sunday.

The Supreme Court rejected a request to block state and local bans on assault-style weapons in Illinois.

A UN agency says that the world will likely experience record temperatures in the next five years, and that it is poised to breach the crucial threshold of a 1.5-degree-Celsius temperature increase above preindustrial levels by 2027.

Dispatches

The Weekly Planet: No part of the U.S. should expect a cool summer, Matteo Wong writes—but even a less punishing season than recent summers would be hotter than historical norms.

Explore all of our newsletters here.

Evening Read

Latinos Can Be White Supremacists

By Adam Serwer

A gunman turned a Dallas mall into an abattoir earlier this month, and parts of the American right reacted in disbelief. Not at the sixth mass shooting in a public place this year—by now these events have become numbingly routine—but that the suspect identified might have been motivated by white-supremacist ideology.

Why? Because the suspect was identified as one Mauricio Garcia.

Read the full article.

More From The Atlantic

The problem with counterfeit people

Premature calls for Ukraine-Russia talks are dangerous.

Photos: sepak takraw, a sport of airborne athleticism

Culture Break

Read. “A Week Later,” a poem by Sharon Olds in which she bids farewell to her husband of 32 years.

“And it came to me, / for moments at a time, moment after moment, / to be glad for him that he is with the one / he feels was meant for him.”

Watch. Fast X (in theaters this week), to understand why staff writer David Sims will only watch Fast XI, or whatever numeral it gets assigned, out of “grim professional obligation.”

Play our daily crossword.

Katherine Hu contributed to this newsletter.