A Chaotic Week at OpenAI

The Atlantic

www.theatlantic.com/newsletters/archive/2023/11/the-man-who-tried-to-overthrow-sam-altman/676101

This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

It’s been an unbelievable few days for OpenAI, the influential company behind products such as ChatGPT, the image-generating DALL-E, and GPT-4. On Friday, its CEO, Sam Altman, was suddenly fired by the company’s board. Chaos immediately followed: A majority of the company’s workers revolted, negotiations were held, and now a new agreement has been reached to return Altman to his throne.

It’s a tale of corporate mutiny fit for streaming, and we’ve been following it closely at The Atlantic. The turmoil at OpenAI is juicy, yes, but it is not just gossip: Whatever happens here will be of major consequence to the future of AI development. This is a company that has been at odds with itself over the possibility that an all-powerful “artificial general intelligence” might emerge from its research, potentially dooming humanity if it’s not carefully aligned with society’s best interests. Even though Altman has returned, the OpenAI shake-up will likely change how the technology is developed from here, with significant outcomes for you, me, and everyone else.

Yesterday, our staff writer Ross Andersen reflected on time spent with Ilya Sutskever, OpenAI’s chief scientist and the man who struck out against Altman last week. The relationship—and the rift—between these two men encapsulates the complex dynamic within OpenAI overall. Whatever agreement has been reached on paper to return Altman to his post, the fundamental tension between AI’s promise and peril will persist. In many ways, the story is just beginning.

Damon Beres, senior editor

Jim Wilson / The New York Times / Redux

OpenAI’s Chief Scientist Made a Tragic Miscalculation

By Ross Andersen

Ilya Sutskever, bless his heart. Until recently, to the extent that Sutskever was known at all, it was as a brilliant artificial-intelligence researcher. He was the star student who helped Geoffrey Hinton, one of the “godfathers of AI,” kick off the so-called deep-learning revolution. In 2015, after a short stint at Google, Sutskever co-founded OpenAI and eventually became its chief scientist; so important was he to the company’s success that Elon Musk has taken credit for recruiting him. (Sam Altman once showed me emails between himself and Sutskever suggesting otherwise.) Still, apart from niche podcast appearances and the obligatory hour-plus back-and-forth with Lex Fridman, Sutskever didn’t have much of a public profile before this past weekend. Not like Altman, who has, over the past year, become the global face of AI.

On Thursday night, Sutskever set an extraordinary sequence of events into motion. According to a post on X (formerly Twitter) by Greg Brockman, the former president of OpenAI and the former chair of its board, Sutskever texted Altman that night and asked if the two could talk the following day. Altman logged on to a Google Meet at the appointed time on Friday and quickly learned that he’d been ambushed. Sutskever took on the role of Brutus, informing Altman that he was being fired. Half an hour later, Altman’s ouster was announced in terms so vague that for a few hours, anything from a sex scandal to a massive embezzlement scheme seemed possible.

Read the full article.

What to Read Next

The events of the past few days are just one piece of the OpenAI saga. Over the past year, the company has struggled to balance an imperative from Altman to swiftly move products into the public’s hands with a concern that the technology was not being appropriately subjected to safety assessments. The Atlantic told that story on Sunday, incorporating interviews with 10 current and former OpenAI employees.

Inside the chaos at OpenAI: This tumultuous weekend showed just how few people have a say in the progression of what might be the most consequential technology of our age, Charlie Warzel and Karen Hao write.

The money always wins: As is always true in Silicon Valley, a great idea can get you only so far, Charlie writes.

Does Sam Altman know what he’s creating?: Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk, Ross Andersen writes in his profile of the CEO from our September issue.

P.S.

Looking for a book to read over the long weekend? Try Your Face Belongs to Us, by Kashmir Hill, about the secretive facial-recognition start-up dismantling the concept of privacy. Jesse Barron has a review in The Atlantic here.

— Damon

The OpenAI Mess Is About One Big Thing

The Atlantic

www.theatlantic.com/ideas/archive/2023/11/openai-sam-altman-corporate-governance/676080

This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems. Sign up here to get it every week.

OpenAI fired its chief executive, Sam Altman, on Friday, accusing him of “not being consistently candid” with its board of directors. This kicked off several days of utter nonsense that astonished the tech world—and probably delighted a bunch of business-school types who now have a great example of why the incredibly boring-sounding term corporate governance is actually extremely important.

Friday: The hour before everything went sideways, OpenAI’s board of directors consisted of just six people, including Altman. Its president, Greg Brockman, apparently took Altman’s side, while the other four members—the chief scientist Ilya Sutskever and three nonemployee members—voted to ether their CEO. Soon after, Brockman quit the company.

Sunday: OpenAI invited Altman back to the office to discuss the prospect of rehiring him as CEO. Despite pressure from Microsoft, however, the board members declined to rehire Altman. Instead, they announced that the next chief executive of the company would be an outsider: Emmett Shear, the former CEO of Twitch, a live-video streaming service.

Monday: The Microsoft chief executive Satya Nadella announced that he would hire Altman, along with other OpenAI workers, to start a new AI-research division within Microsoft. Then roughly 700 of the nearly 800 employees at OpenAI signed a letter demanding the return of Altman as CEO and the resignations of all the board members who stood against him.

If this seems dizzying, the next bit might require Dramamine. Sutskever played the key role in firing Altman over Google Meet on Friday, then declined to rehire him on Sunday, and then signed the letter on Monday demanding the return of Altman and the firing of his own board-member co-conspirators. On X (formerly Twitter), Sutskever posted an apology to the entire company, writing, “I deeply regret my participation in the board’s actions.” Altman replied with three red hearts. One imagines Brutus, halfway through the stabbing of Caesar, pulling out the knife, offering Caesar some gauze, lightly stabbing him again, and then finally breaking down in apologetic tears and demanding that imperial doctors suture the stomach wound. (Soon after, in post-op, Caesar dispatches a courier to send Brutus a brief message inked on papyrus: “<3.”)

We still don’t know much about the OpenAI fracas. We don’t know a lot about Altman’s relationship (or lack thereof) with the board that fired him. We don’t know what Altman did in the days before his firing that made this drastic step seem unavoidable to the board. In fact, the board members who axed Altman have so far refused to elaborate on the precise cause of the firing. But here is what we know for sure: Altman’s ouster stemmed from the bizarre way that OpenAI is organized.

In the first sentence of this article, I told you that “on Friday, OpenAI fired its chief executive, Sam Altman.” Perhaps the most technically accurate way to put that would have been: “On Friday, the board of directors of the nonprofit entity of OpenAI, Inc., fired Sam Altman, who is most famous as the lead driver of its for-profit subsidiary, OpenAI Global LLC.” Confusing, right?

In 2015, Sam Altman, Elon Musk, and several other AI luminaries founded OpenAI as a nonprofit institution to build powerful artificial intelligence. The idea was that the most important technology in the history of humankind (as some claim) ought to “benefit humanity as a whole” rather than narrowly redound to the shareholders of a single firm. As Ross Andersen explained in an Atlantic feature this summer, they structured OpenAI as a nonprofit to be “unconstrained by a need to generate financial return.”

After several frustrating years, OpenAI realized that it needed money—a lot of money. The cost of computational power and engineering talent to build a digital superintelligence turned out to be astronomically high. Plus, Musk, who had been partly financing the organization’s operations, suddenly left the board in 2018 after a failed attempt to take over the firm. This left OpenAI with a gaping financial hole.

OpenAI therefore opened a for-profit subsidiary that would be nested under the OpenAI nonprofit. The entrepreneur and writer Sam Lessin called this structure a corporate “turducken,” referring to the dubious Thanksgiving entrée in which a chicken is stuffed inside a duck, which is in turn stuffed inside a turkey. In this turducken-esque arrangement, the original board would continue to “govern and oversee” all for-profit activities.

When OpenAI, the nonprofit, created OpenAI, the for-profit, nobody imagined what would come next: the ChatGPT boom. Internally, employees predicted that the rollout of the AI chatbot would be a minor event; the company referred to it as a “low-key research preview” that wasn’t likely to attract more than 100,000 users. Externally, the world went mad for ChatGPT. It became, by some measures, the fastest-growing consumer product in history, attracting more than 100 million users within two months of its launch.

Slowly, slowly, and then very quickly, OpenAI, the for-profit, became the star of the show. Altman pushed fast commercialization, and he needed even more money to make that possible. In the past few years, Microsoft has committed more than $10 billion to OpenAI in direct cash and in credits to use its data and cloud services. But unlike a typical corporate arrangement, in which being a major investor might guarantee a seat or two on the board of directors, Microsoft’s investments got it nothing. OpenAI’s operating agreement states without ambiguity, “Microsoft has no board seat and no control.” Today, OpenAI’s corporate structure—according to OpenAI itself—looks like this.

[Chart: OpenAI’s corporate structure, as published by OpenAI]

In theory, this arrangement was supposed to guarantee morality plus money. The morality flowed from the nonprofit board of directors. The money flowed from Microsoft, the second-biggest company in the world, which has lots of cash and resources to help OpenAI achieve its mission of building a general superintelligence.

But rather than align OpenAI’s commercial and ethical missions, this organizational structure created a house divided against itself. On Friday, this conflict played out in vivid color. Altman, the techno-optimist bent on commercialization, lost out to Sutskever, the Brutus-cum-mad-scientist fearful that super-smart AI poses an existential risk to humanity. This was shocking. But from an organizational standpoint, it wasn’t surprising. A for-profit start-up rapidly developing technology hand in glove with Microsoft was nested under an all-powerful nonprofit board that believed it was duty-bound to resist rapid development of AI and Big Tech alliances. That does not make any sense.

Everything is obvious in retrospect, especially failure, and I don’t want to pretend that I saw any of this coming. I don’t think anybody saw this coming. Microsoft’s investments accrued over many years. ChatGPT grew over many months. That all of this would blow up without any warning was inconceivable.

But that’s the thing about technology. Despite manifestos that claim that the annunciation of tech is a matter of cosmic inevitability, technology is, for now, made by people—flawed people, who may be brilliant and sometimes clueless, who change their mind and then change their mind again. Before we build an artificial general intelligence to create progress without people, we need dependable ways to organize people to work together to build complex things within complex systems. The term for that idea is corporate structure.

OpenAI is on the precipice of self-destruction because, in its attempt to build an ethically pristine institution to protect a possible superintelligence, it built just another instrument of minority control, in which a small number of nonemployees had the power to fire its chief executive in an instant.

In AI research, there is something called the “alignment problem.” When we engineer an artificial intelligence, we ought to make sure that the machine has the intentions and values of its architects. Oddly, the architects of OpenAI created an institution that is catastrophically unaligned, in which the board of directors and the chief executive are essentially running two incompatible companies within the same firm. Last week, the biggest question in technology was whether we might live long enough to see humans invent aligned superintelligence. Today, the more appropriate question is: Will we live long enough to see AI’s architects invent a technology for aligning themselves?

When Hollywood Put World War III on Television

The Atlantic

www.theatlantic.com/newsletters/archive/2023/11/the-day-after-hollywood-world-war-iii/676084

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

The ABC made-for-television movie The Day After premiered on November 20, 1983. It changed the way many Americans thought about nuclear war—but the fear now seems forgotten.

First, here are three new stories from The Atlantic:

There is no good way to travel anywhere in America.
OpenAI’s chief scientist made a tragic miscalculation.
What did hip-hop do to women’s minds?

A Preview of Hell

We live in an anxious time. Some days, it can feel like the wheels are coming off and the planet is careening out of control. But at least it’s not 1983, the year the Cold War seemed to be on a final trajectory toward disaster.

Forty years ago today, it was the morning after The Day After, the ABC TV movie about a nuclear exchange between the United States and the Soviet Union. Roughly 100 million people tuned in on Sunday night, November 20, 1983, and The Day After holds the record as the most-watched made-for-television movie in history.

I remember the movie, and the year, vividly. I was 22 and in graduate school at Columbia University, studying the Soviet Union. It’s hard to explain to people who worry about, say, climate change—a perfectly legitimate concern—what it was like to live with the fear not that many people could die over the course of 20 or 50 or 100 years but that the decision to end life on most of the planet in flames and agony could happen in less time than it would take you to finish reading this article.

I will not recount the movie for you; there isn’t much of a plot beyond the stories of people who survive the fictional destruction of Kansas City. There is no detailed scenario, no explanation of what started the war. (This was by design; the filmmakers wanted to avoid making any political points.) But in scenes as graphic as U.S. television would allow, Americans finally got a look at what the last moments of peace, and the first moments of hell, might look like.

Understanding the impact of The Day After is difficult without a sense of the tense Cold War situation during the previous few years. There was an unease (or “a growing feeling of hysteria,” as Sting would sing a few years later in “Russians”) in both East and West that the gears of war were turning and locking, a doomsday ratchet tightening click by click.

The Soviet-American détente of the 1970s was brief and ended quickly. By 1980, President Jimmy Carter was facing severe criticism about national defense even within his own party. He responded by approving a number of new nuclear programs, and unveiling a new and highly aggressive nuclear strategy. The Soviets thought Carter had lost his mind, and they were actually more hopeful about working with the Republican nominee, Ronald Reagan. Soviet fears intensified when Reagan, once in office, took Carter’s decisions and put them on steroids, and in May 1981 the KGB went on alert looking for signs of impending nuclear attack from the United States. In November 1982, Soviet leader Leonid Brezhnev died and was replaced by the KGB boss, Yuri Andropov. The chill in relations between Washington and Moscow became a hard frost.

And then came 1983.

In early March, Reagan gave his famous speech in which he called the Soviet Union an “evil empire” and accused it of being “the focus of evil in the modern world.” Only a few weeks after that, he gave a major televised address to the nation in which he announced plans for space-based missile defenses, soon mocked as “Star Wars.” Two months later, I graduated from college and headed over to the Soviet Union to study Russian for the summer. Everywhere I went, the question was the same: “Why does your president want a nuclear war?” Soviet citizens, bombarded by propaganda, were certain the end was near. So was I, but I blamed their leaders, not mine.

When I returned, I packed my car in Massachusetts and began a road trip to begin graduate school in New York City on September 1, 1983. As I drove, news reports on the radio kept alluding to a missing Korean airliner.

The jet was Korean Air Lines Flight 007. It was downed by Soviet fighter jets for trespassing in Soviet airspace, killing all 269 souls aboard. The shoot-down produced an immense outpouring of rage at the Soviet Union that shocked Kremlin leaders. Soviet sources later claimed that this was the moment when Andropov gave up—forever—on any hope of better relations with the West, and as the fall weather of 1983 got colder, the Cold War got hotter.

We didn’t know it at the time, but in late September, Soviet air defenses falsely reported a U.S. nuclear attack against the Soviet Union: We’re all still alive thanks to a Soviet officer on duty that day who refused to believe the erroneous alert. On October 10, Reagan watched The Day After in a private screening and noted in his diary that it “greatly depressed” him.

On October 23, a truck bomber killed 241 U.S. military personnel in the Marine barracks in Beirut.

Two days after that, the United States invaded Grenada and deposed its Marxist-Leninist regime, an act the Soviets thought could be the prelude to overthrowing other pro-Soviet regimes—even in Europe. On November 7, the U.S. and NATO began a military communications exercise code-named Able Archer, exactly the sort of traffic and activity the Soviets were looking for. Moscow definitely noticed, but fortunately, the exercise wound down in time to prevent any further confusion.

This was the global situation when, on November 20, The Day After aired.

Three days later, on November 23, Soviet negotiators walked out of nuclear-arms talks in Geneva. War began to feel—at least to me—inevitable.

In today’s Bulwark newsletter, the writer A. B. Stoddard remembers how her father, ABC’s motion-picture president Brandon Stoddard, came up with the idea for The Day After. “He wanted Americans, not politicians, to grapple with what nuclear war would mean, and he felt ‘fear had really paralyzed people.’ So the movie was meant to force the issue.”

And so it did, perhaps not always productively. Some of the immediate commentary bordered on panic. (In New York, I recall listening to the antinuclear activist Helen Caldicott on talk radio after the broadcast, and she said nuclear war was a mathematical certainty if Reagan was reelected.) Henry Kissinger, for his part, asked if we should make policy by “scaring ourselves to death.”

Reagan, according to the scholar Beth Fischer, was in “shock and disbelief” that the Soviets really thought he was headed for war, and in late 1983 “took the reins” and began to redirect policy. He found no takers in the Kremlin for his new line until the arrival of Mikhail Gorbachev in 1985, and both men soon affirmed that a nuclear war cannot be won and must never be fought—a principle that in theory still guides U.S. and Russian policy.

In the end, we got through 1983 mostly by dumb luck. If you’d asked me back then as a young student whether I’d be around to talk about any of this 40 years later, I would have called the chances a coin toss.

But although we might feel safer, I wonder if Americans really understand that thousands of those weapons remain on station in the United States, Russia, and other nations, ready to launch in a matter of minutes. The Day After wasn’t the scariest nuclear-war film—that honor goes to the BBC’s Threads—but perhaps more Americans should take the time to watch it. It’s not exactly a holiday movie, but it’s a good reminder at Thanksgiving that we are fortunate for the changes over the past 40 years that allow us to give thanks in our homes instead of in shelters made from the remnants of our cities and towns—and to recommit to making sure that future generations don’t have to live with that same fear.

Related:

We have no nuclear strategy.
I want my mutually assured destruction.

Today’s News

The Wisconsin Supreme Court heard oral arguments in a legal challenge to one of the most severely gerrymandered legislative district maps in the country.
A gunman opened fire in an Ohio Walmart last night, injuring four people before killing himself.
Various storms are expected to cause Thanksgiving travel delays across the United States this week.

Evening Read


Illustration by Ricardo Rey

Does Sam Altman Know What He’s Creating?

By Ross Andersen

(From July)

On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers. With his heel perched on the edge of his swivel chair, he looked relaxed. The powerful AI that his company had released in November had captured the world’s imagination like nothing in tech’s recent history. There was grousing in some quarters about the things ChatGPT could not yet do well, and in others about the future it may portend, but Altman wasn’t sweating it; this was, for him, a moment of triumph.

In small doses, Altman’s large blue eyes emit a beam of earnest intellectual attention, and he seems to understand that, in large doses, their intensity might unsettle. In this case, he was willing to chance it: He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.

Read the full article.

More From The Atlantic

It’s too easy to buy stuff you don’t want.
Harvard has a brand problem. Here’s how to fix it.
How Mike Birbiglia got sneaky-famous

Culture Break

Illustration by Jared Bartman / The Atlantic. Sources: Heritage Images / Getty; Nikola Vukojevic / Getty; Philippe PACHE / Getty; Dan Cristian Pădureț / Unsplash; dpwk / Openverse; Annie Spratt / Unsplash.

Read. These six books might change how you think about mental illness.

Watch. Interstellar (streaming on Paramount+) is one of the many films in which Christopher Nolan tackles the promise and peril of technology.

Play our daily crossword.

P.S.

If you want to engage in nostalgia for a better time when serious people could discuss serious issues, I encourage you to watch not only The Day After but the roundtable held on ABC right after the broadcast. Following a short interview with then–Secretary of State George Shultz, Ted Koppel moderated a discussion among Kissinger, former Secretary of Defense Robert McNamara, former National Security Adviser Brent Scowcroft, the professor Elie Wiesel, the scientist Carl Sagan, and the conservative writer William F. Buckley. The discussion ranged across questions of politics, nuclear strategy, ethics, and science. It was pointed, complex, passionate, and respectful—and it went on for an hour and a half, including audience questions.

Try to imagine something similar today, with any network, cable or broadcast, blocking out 90 precious minutes for prominent and informed people to discuss disturbing matters of life and death. No chyrons, no smirky hosts, no music, no high-tech sets. Just six experienced and intelligent people in an unadorned studio talking to one another like adults. (One optimistic note: Both McNamara and Kissinger that night thought it was almost unimaginable that the superpowers could cut their nuclear arsenals in half in 10 or even 15 years. And yet, by 1998, the U.S. arsenal had been reduced by more than half, and Kissinger in 2007 joined Shultz and others to argue for going to zero.)

I do not miss the Cold War, but I miss that kind of seriousness.

Tom

Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.