
AI Companies Are Trying to Have It Both Ways

The Atlantic

www.theatlantic.com/newsletters/archive/2023/07/ai-companies-openai-voluntary-safeguards/674812

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Last week, seven technology companies appeared at the White House and agreed to voluntary guardrails around the use of AI. In promising to take these steps, the companies are nodding to the potential risks of their creations without pausing their aggressive competition.

First, here are four new stories from The Atlantic:

Why you have to care about these 12 colleges
The perfect service to make everyone at the airport hate you
Liberal suburbs have their own border wall.
The ticks are winning.

A Convenient Gesture

I was sitting in a dorm lobby slash seminar room the first time I heard someone compare Silicon Valley in the 2010s to Florence during the Renaissance. I was a college student in the Bay Area at the time, in 2013, and professors and peers were often talking about how we were in a unique period of flourishing that would reshape humanity. It proved true in some ways—that era of tech, when companies such as Twitter and Facebook were freshly public and start-ups abounded, did change things (though the time’s strain of techno-optimism somewhat curdled in the years that followed).

I thought about that sentiment again this morning while reading Ross Andersen’s new article for the September issue of The Atlantic, which profiles OpenAI and its CEO, Sam Altman. “You are about to enter the greatest golden age,” Ross heard Altman tell a group of students. At another point, Altman says that the AI revolution will be “different from previous technological changes,” and that it will be “like a new kind of society.” That Altman believes AI will reshape the world is clear. How exactly that transformation will play out is less clear. In recent months, as AI tools have achieved widespread usage and interest, OpenAI and its competitors have been doing an interesting dance: They are boosting their technology while also warning, many times in apocalyptic terms, of its potential harms.

On Friday, leaders from seven major AI companies—OpenAI, Amazon, Anthropic, Google, Inflection, Meta, and Microsoft—met with Joe Biden and agreed to a set of voluntary safeguards. The companies pledged, sometimes in vague terms, to take actions such as releasing information about security testing, sharing research with academics and governments, reporting “vulnerabilities” in their systems, and working on mechanisms that tell people when content is AI generated. Many of these are steps that the companies were already taking. And because the commitments made at the White House are voluntary, they aren’t enforceable regulations. Still, they allow the companies, and Biden, to signal to the public that they are working on AI safety. In agreeing to these voluntary precautions, these companies are nodding to the possible risks of their creations while also sacrificing little in their aggressive competition.

“For AI firms, this is a dream scenario, where they can ease regulatory pressure by pretending this fixes the problem, while ultimately continuing business as usual,” Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, told me in an email. He added that other companies whose products pose safety risks, such as car manufacturers and nuclear-power plants, don’t get to self-regulate.

Altman has emerged as a main character of the AI industry, staking his claim as both a champion of the technology and a reasonable adult in the room. As Ross reports, the OpenAI CEO went on an international listening tour this spring, meeting with heads of state and lawmakers. In May, he appeared before Congress saying that he wanted AI to be regulated—which can be viewed both as a civically responsible move and as a way to shift some responsibility onto Congress, which is likely to act slowly. So far, no comprehensive, binding regulations have emerged from these conversations and congressional hearings. And the companies keep growing.

Leaders in the AI industry are forthcoming about the risks of their tools. A couple of months ago, AI luminaries, including Altman and Bill Gates, signed a one-sentence statement reading: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” (Altman and other AI builders have invited comparisons to Robert Oppenheimer.) But the doomsday warnings also have the effect of making the technology sound pretty groundbreaking. Last month, my colleague Matteo Wong wrote about how this message is not just alarming but also self-serving: “The CEOs, like demigods, are wielding a technology as transformative as fire, electricity, nuclear fission, or a pandemic-inducing virus. You’d be a fool not to invest.”

Another upside: As my colleague Damon Beres said in an edition of this newsletter in May, discussing these technologies in vague, existential terms “actually allows Altman, and others discussing the future of artificial intelligence, to dodge some of the everyday impacts that we’re already seeing from the technology.” AI is indeed having very real effects now: Chat tools are eroding jobs and reshaping college classrooms.

By asking for regulations, Damon added, the heads of these companies can cleverly put the ball in the lawmakers’ court. (If Congress takes forever to pass laws, well, at least the industry tried!) Critics have pointed out that one of Altman’s regulation ideas—a new agency that would oversee the AI industry—may take decades to build. In those decades, AI could become ubiquitous. Others have noted that, in suggesting that Congress pass a law requiring AI firms to have licenses to operate above a certain capacity, big companies like OpenAI can entrench themselves while potentially making it harder for smaller players to compete.

The tech industry may have learned a lesson from its PR disasters in the late 2010s. Instead of testifying after a fiasco happens, as Mark Zuckerberg did following the Cambridge Analytica debacle, leaders have lately been approaching Washington and requesting regulations instead. Sam Bankman-Fried, for example, managed to shore up his image by charming Washington and appearing dedicated to serious regulations—that is, before FTX collapsed. And after years of lobbying against regulations, Facebook has in recent years begun requesting them.

It’s easy to be cynical about self-imposed guardrails and to see them as toothless. But Friday’s pledge acknowledged that there is work to be done, and the fact that bitter industry rivals aligned on that point shows that, at the very least, it’s no longer good PR to skirt government guardrails completely. The old way of doing things is no longer so palatable. For now, though, companies may keep trying to have it both ways. As one expert told Matteo, “You have to wonder: If you think this is so dangerous, why are you still building it?”

Related:

Does Sam Altman know what he’s creating?
AI doomerism is a decoy.

Today’s News

Israeli lawmakers ratified the first piece of a legislative package designed to weaken the country’s Supreme Court, following months of protests and repeated warnings from the Biden administration.
Elon Musk rebranded Twitter as “X,” replacing the former blue-bird logo.
Russian drones destroyed grain infrastructure in an attack on Ukrainian ports along the Danube, a key export route.

Evening Read

Ben Kothe / The Atlantic

America’s Corporate Tragedy

By Caitlin Flanagan

I was a child soldier in the California grape strikes, my labors conducted outside the Shattuck Avenue co-op in Berkeley. There I was, maybe 7 or 8 years old, shaking a Folgers coffee can full of coins at the United Farm Workers’ table where my mother was garrisoned two to three afternoons a week. I did most of my work alongside her, but several times an hour I would do what child soldiers have always done: serve in a capacity that only a very small person could. I’d go out in the parking lot and slip between cars to make sure no one was getting away without donating some coins or signing a petition. I’d pop up next to a driver’s-side window and give the can an aggressive rattle. I wasn’t Jimmy Hoffa, but I wasn’t playing any games either.

My parents were old-school leftists, born in the 1920s and children during the Great Depression. They would never, ever cross a picket line, fail to participate in a boycott, lose sight of strikers’ need for money when they weren’t getting paychecks. My parents would never suggest that poverty was caused by lack of intelligence or effort. We were not a religious family (to say the least), but I had a catechism: One worker is powerless; many workers can bring a company to its knees.

Read the full article.

More From The Atlantic

The wrong-apartment problem
How older people get scammed

Culture Break

Harold M. Lambert / Getty

Read. “Claude Glass as Night Song,” a new poem by Janelle Tan.

“i wanted your chest beating / in my chest, / so i couldn’t look at you.”

Watch. Oppenheimer (in theaters now) is everywhere—including in people’s nightmares.

Play our daily crossword.

P.S.

Speaking of new-technology panic, my colleague Jacob Stern has a fun and fascinating article up about the initial reactions to … PowerPoint? Apparently, in 2003, some found the slideshow technology sinister. Jacob describes “a techno-scare of the highest order that has now been almost entirely forgotten: the belief that PowerPoint—that most enervating member of the Office software suite, that universal metonym for soporific meetings—might be evil.” I haven’t made a PowerPoint in years (a quick tour through my files suggests that my last attempt at a slideshow was ahead of my sister’s graduation, in 2020—I found one file with a single slide reading “Good job, Annie” in Arial font, and another featuring a photo of her and the family dog). I almost never think about PowerPoint, so it was interesting to read about a time when people did so with alarm. How times change!

— Lora

Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.

Russia’s aggression against Ukraine leaves no room for negotiations

Euronews

www.euronews.com/2023/07/24/russias-aggression-against-ukraine-leaves-no-room-for-negotiations

Instead of attempts to bargain with Putin, it should now be obvious that the only way to secure a lasting peace is via Ukrainian victory and the decisive defeat of Russian imperialism, Peter Dickinson writes.

The latest Russian strike on Ukraine's Odessa leaves 1 dead, many hurt and a cathedral badly damaged

Euronews

www.euronews.com/2023/07/23/the-latest-russian-strike-on-ukraines-odessa-leaves-1-dead-many-hurt-and-a-cathedral-badly

Russia struck the Ukrainian Black Sea city of Odesa again on Sunday, local officials said, keeping up a barrage of attacks that have damaged critical port infrastructure in southern Ukraine in the past week.

Vadym Prystaiko sacked as Ukrainian ambassador to the UK by Volodymyr Zelenskyy

Euronews

www.euronews.com/2023/07/21/vadym-prystaiko-sacked-as-ukrainian-ambassador-to-the-uk-by-volodymyr-zelenskyy

The Ukrainian President's decision follows Prystaiko's criticism of Zelenskyy's response to a row over British military aid.