The Money Always Wins

The Atlantic

www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-microsoft-investment-profit/676077

It’s been four full days since Sam Altman’s shocking dismissal from OpenAI, and we still have no idea where he’s going to land. There are suggestions that Altman, one of the most powerful figures in AI, could return to the company if the board changes significantly—talks are reportedly under way. But there is also an offer on the table from Microsoft to start a new AI research group there, a cruelly ironic outcome for OpenAI, which was founded as a nonprofit with the goal of drawing talent away from Silicon Valley’s biggest companies and developing AI safely.

How Altman got to this moment is telling. In the days after his firing, he managed to prove that he is far more than a figurehead, winning over a majority of OpenAI employees (including Ilya Sutskever, the company’s chief scientist and the reported architect of his dismissal—it’s, uh, complicated) and some of the tech industry’s biggest luminaries. A number of OpenAI’s most powerful investors rallied around him. Altman may no longer run his own company, but, for now, he is emboldened. On Twitter this weekend, legions of OpenAI employees signaled their loyalty to him “I am Spartacus!”–style; Altman responded with a flurry of heart emojis. Getting unexpectedly fired in front of a global audience is assuredly stressful, but one gets the sense that it also amounted to a huge ego flex for the 38-year-old tech executive. You can see it in the weekend’s most indelible image: a selfie tweeted by Altman on Sunday as he visited OpenAI’s San Francisco offices to continue negotiations, lips pursed in mock disgust, a visitor’s lanyard clutched in his hand. “First and last time i ever wear one of these,” he wrote. Altman was having fun. He was winning.

This is the triumph of a Bay Area operator and dealmaker over OpenAI’s charter, which purports to place the betterment of humanity above profit and personality. It’s a similar story for Microsoft and its CEO, Satya Nadella, who have invested billions in OpenAI and were reportedly blindsided by Altman’s firing. Quickly, the company used its investment in OpenAI, much of which is reportedly in the form of computing power instead of cash, as leverage to reopen negotiations. Those talks may fizzle, and Nadella may indeed bring Altman and former OpenAI President Greg Brockman over to Microsoft; if other OpenAI staffers flood in, as has been speculated, it would be akin to Microsoft acquiring Silicon Valley’s most sought-after company for little more than the price of its employees’ salaries. It’s a win-win situation for the tech giant: Regardless of what happens to OpenAI, the company will keep the access it currently has to OpenAI’s data and intellectual property, or it could subsume the company altogether. The immediate endgame seems similarly comfortable for Altman. He returns to his company with more power than ever before, or he continues his work with Microsoft’s full backing. Either way, he won’t be wearing the guest pass again.

[Read: OpenAI’s chief scientist made a tragic miscalculation]

So although there is still much we don’t know about this saga and how it might end, one thing feels abundantly clear: The money always wins.

As my colleague Karen Hao and I reported over the weekend, the central tension coursing through OpenAI in the past year was whether the company should commercialize, raise money, and grow to further its ambitions of building an artificial general intelligence—a technology so powerful that it could outperform humans in most tasks—or whether it ought to focus its efforts on the safety of its potentially dangerous innovations. Altman represented the former faction, and his aggressive business decisions appear to have been a key factor in his dismissal.

After the shock of Altman’s firing subsided, I noticed a sense of admiration from some industry observers toward OpenAI’s board. Yes, the decision to sack the CEO was brazen and badly messaged, and the implications for the company and its investments may have been poorly thought out. But it was principled, an indication that OpenAI’s nonprofit corporate structure was working exactly as intended to protect the fate of the company’s technology from the whims of one leader. “Somebody finally held the tech bros accountable!” a tech executive texted me on Saturday morning. A former social-media executive proposed a tantalizing counterfactual to me: What if Facebook had been able to fire CEO Mark Zuckerberg before the turmoil of the 2016 election? What would the world look like now?

Altman may have been a true believer in OpenAI’s charter. But he’s also a true believer in scale and profit. His tenure as CEO was partly an argument that, in order to change the world with your technology, you need the money to build it and the ability to get others to invest in it. If Sutskever was the visionary of OpenAI, Altman was seemingly the person who could sell it to people. And it is Altman who reportedly leveraged his business relationships to put immense pressure on OpenAI’s board. He didn’t call OpenAI’s bluff over the weekend: Instead, he demonstrated what the company might look like without its multibillion-dollar corporate investments and without its money man. According to Bloomberg, that future included some investors potentially writing down the value of their OpenAI holdings to nothing.

[Read: Inside the chaos at OpenAI]

Now Altman and his team could be going to Microsoft to develop new artificial-intelligence tools, unimpeded by a charter. A cynical person might argue that, there, he would no longer need to maintain the pretense of answering first to humanity—as an employee of one of the world’s biggest technology companies, his primary obligation would be fiduciary. He would answer to Nadella and to shareholders. But no matter how noble Altman's intentions are, any moral leanings he might have ultimately mean very little to the money, which, regardless of where he lands, will continue to flow toward Microsoft and toward whatever products Altman and his team build. As of this afternoon, Microsoft was worth $1 trillion more than Google.


Silicon Valley is peerless when it comes to mythologizing its ideas men (and yes, they tend to be men). In the industry’s telling, technologies and their founders succeed in a meritocratic fashion, based on the genius of the idea and the skill of its execution. OpenAI’s self-mythologizing went a step further, positioning the company almost in opposition to its own industry—an outfit so committed to an ideology and a purity of product that it would self-immolate to protect itself and others. Over the weekend, this ideology crashed against the rocks of a capitalist reality. As is always true in Silicon Valley, a great idea can get you only so far. It’s the money that gets you over the finish line.

OpenAI’s Chief Scientist Made a Tragic Miscalculation

The Atlantic

www.theatlantic.com/technology/archive/2023/11/openai-ilya-sutskever-sam-altman-fired/676072

Ilya Sutskever, bless his heart. Until recently, to the extent that Sutskever was known at all, it was as a brilliant artificial-intelligence researcher. He was the star student who helped Geoffrey Hinton, one of the “godfathers of AI,” kick off the so-called deep-learning revolution. In 2015, after a short stint at Google, Sutskever co-founded OpenAI and eventually became its chief scientist; so important was he to the company’s success that Elon Musk has taken credit for recruiting him. (Sam Altman once showed me emails between himself and Sutskever suggesting otherwise.) Still, apart from niche podcast appearances and the obligatory hour-plus back-and-forth with Lex Fridman, Sutskever didn’t have much of a public profile before this past weekend. Not like Altman, who has, over the past year, become the global face of AI.

On Thursday night, Sutskever set an extraordinary sequence of events into motion. According to a post on X (formerly Twitter) by Greg Brockman, the former president of OpenAI and the former chair of its board, Sutskever texted Altman that night and asked if the two could talk the following day. Altman logged on to a Google Meet at the appointed time on Friday and quickly learned that he’d been ambushed. Sutskever took on the role of Brutus, informing Altman that he was being fired. Half an hour later, Altman’s ouster was announced in terms so vague that for a few hours, anything from a sex scandal to a massive embezzlement scheme seemed possible.

I was surprised by these initial reports. While reporting a feature for The Atlantic last spring, I got to know Sutskever a bit, and he did not strike me as a man especially suited to coups. Altman, in contrast, was built for a knife fight in the technocapitalist mud. By Saturday afternoon, he had the backing of OpenAI’s major investors, including Microsoft, whose CEO, Satya Nadella, was reportedly furious that he’d received almost no notice of Altman’s firing. Altman also secured the support of the troops: More than 700 of OpenAI’s 770 employees have now signed a letter threatening to resign if he is not restored as chief executive. On top of these sources of leverage, Altman has an open offer from Nadella to start a new AI-research division at Microsoft. If OpenAI’s board proves obstinate, he can set up shop there and hire nearly every one of his former colleagues.

[From the September 2023 issue: Does Sam Altman know what he’s creating?]

As late as Sunday night, Sutskever was at OpenAI’s offices working on behalf of the board. But yesterday morning, the prospect of OpenAI’s imminent disintegration and, reportedly, an emotional plea from Anna Brockman—Sutskever officiated the Brockmans’ wedding—gave him second thoughts. “I deeply regret my participation in the board’s actions,” he wrote in a post on X. “I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.” Later that day, in a bid to wish away the entire previous week, he joined his colleagues in signing the letter demanding Altman’s return.

Sutskever did not respond to a request for comment, and we don’t yet have a full account of what motivated him to take such dramatic action in the first place. Neither he nor his fellow board members have released a clear statement explaining themselves, and their vague communications have stressed that there was no single precipitating incident. Even so, some of the story is starting to fill out. Among many other colorful details, my colleagues Karen Hao and Charlie Warzel reported that the board was irked by Altman’s desire to quickly ship new products and models rather than slowing things down to emphasize safety. Others have said that the board’s hand was forced, at least in part, by Altman’s extracurricular fundraising efforts, which are said to have included talks with parties as diverse as Jony Ive, aspiring Nvidia competitors, and investors from surveillance-happy autocratic regimes in the Middle East.

[Read: Inside the chaos at OpenAI]

This past April, during happier times for Sutskever, I met him at OpenAI’s headquarters in San Francisco’s Mission District. I liked him straightaway. He is a deep thinker, and although he sometimes strains for mystical profundity, he’s also quite funny. We met during a season of transition for him. He told me that he would soon be leading OpenAI’s alignment research—an effort focused on training AIs to behave nicely, before their analytical abilities transcend ours. It was important to get alignment right, he said, because superhuman AIs would be, in his charming phrase, the “final boss of humanity.”

Sutskever and I made a plan to talk a few months later. He’d already spent a great deal of time thinking about alignment, but he wanted to formulate a strategy. We spoke again in June, just weeks before OpenAI announced that his alignment work would be served by a large chunk of the company’s computing resources, some of which would be devoted to spinning up a new AI to help with the problem. During that second conversation, Sutskever told me more about what he thought a hostile AI might look like in the future, and as the events of recent days have transpired, I have found myself thinking often of his description.

“The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,” Sutskever said. Although large language models, such as those that power ChatGPT, have come to define most people’s understanding of OpenAI, they were not initially the company’s focus. In 2016, the company’s founders were dazzled by AlphaGo, the AI that beat grandmasters at Go. They thought that game-playing AIs were the future. Even today, Sutskever remains haunted by the agentlike behavior of the AIs the company built to play Dota 2, a multiplayer game of fantasy warfare. “They were localized to the video-game world” of fields, forts, and forests, he told me, but they played as a team and seemed to communicate by “telepathy,” skills that could potentially generalize to the real world. Watching them made him wonder what might be possible if many greater-than-human intelligences worked together.

In recent weeks, he may have seen what felt to him like disturbing glimpses of that future. According to reports, he was concerned that the custom GPTs that Altman announced on November 6 were a dangerous first step toward agentlike AIs. Back in June, Sutskever warned me that research into agents could eventually lead to the development of “an autonomous corporation” composed of hundreds, if not thousands, of AIs. Working together, they could be as powerful as 50 Apples or Googles, he said, adding that this would be “tremendous, unbelievably disruptive power.”

It makes a certain Freudian sense that the villain of Sutskever’s ultimate alignment horror story was a supersize Apple or Google. OpenAI’s founders have long been spooked by the tech giants. They started the company because they believed that advanced AI would be here sometime soon, and that because it would pose risks to humanity, it shouldn’t be developed inside a large, profit-motivated company. That ship may have sailed when OpenAI’s leadership, led by Altman, created a for-profit arm and eventually accepted more than $10 billion from Microsoft. But at least under that arrangement, the founders would still have some control. If they developed an AI that they felt was too dangerous to hand over, they could always destroy it before showing it to anyone.

Sutskever may have just vaporized that thin reed of protection. If Altman, Brockman, and the majority of OpenAI’s employees decamp to Microsoft, they may not enjoy any buffer of independence. If, on the other hand, Altman returns to OpenAI, and the company is more or less reconstituted, he and Microsoft will likely insist on a new governance structure or at least a new slate of board members. This time around, Microsoft will want to ensure that there are no further Friday-night surprises. In a terrible irony, Sutskever’s aborted coup may have made it more likely that a large, profit-driven conglomerate develops the first super-dangerous AI. At this point, the best he can hope for is that his story serves as an object lesson, a reminder that no corporate structure, no matter how well intended, can be trusted to ensure the safe development of AI.

The Schism That Toppled Sam Altman

The Atlantic

www.theatlantic.com/newsletters/archive/2023/11/sam-altman-open-ai-what-happened/676062

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

I spoke with my colleagues Karen Hao and Charlie Warzel this afternoon about the tensions at the heart of the AI community, and how Sam Altman’s firing may ironically entrench the power of a tech giant.

First, here are three new stories from The Atlantic:

Has anyone noticed that Trump is really old?

The other Ozempic revolution

No, you shouldn’t “date ’em ’til you hate ’em.”

An Enabling Mantra

For a while earlier this year, Sam Altman was everywhere. As the head of OpenAI, the company that launched ChatGPT, he quickly became an emissary of the future of the technology. He appeared before Congress and foreign heads of state to discuss how AI would reshape society. As recently as last week, he was hyping up the future of his company. Then, suddenly, Altman was fired. Below is a brief timeline of the drama that unfolded:

Friday afternoon: In a blog post, the company said that Altman “was not consistently candid in his communications with the board.” Greg Brockman—the president of OpenAI who, along with Altman, had encouraged the rapid commercialization of the company’s technology—quit in solidarity. Mira Murati, formerly the chief technology officer of the company, was named interim CEO.

Over the weekend: By Sunday night, OpenAI had rejected Altman’s bid to return to his job, and Microsoft (a major investor in OpenAI) had hired him to lead an AI-research lab. Emmett Shear, the former CEO of Twitch, stepped into the top role at OpenAI on an interim basis, replacing Murati.

Today: Some 700 of OpenAI’s 770 employees signed a letter saying that they may leave the company and join Altman at Microsoft if he and Brockman are not reinstated at OpenAI.

What happens next may be hugely consequential for the future of AI—particularly for the question of whether profits or existential fears will drive its path forward. My colleagues Karen Hao and Charlie Warzel spoke with 10 current and former OpenAI employees, and in an article published last night, they explained how a simmering years-long tension at the company led to Altman’s ouster.

Lora Kelley: I was shocked to see the news on Friday that Sam Altman had been fired. Was this news just as stunning to those who closely watch OpenAI and the industry?

Karen Hao: It was a huge shock to me. OpenAI was at the height of its power. Altman was still doing so many meetings all around the world and hyping up the company.

Charlie Warzel: Sam Altman was essentially the avatar of the generative-AI revolution. You would think he would have a lot of leverage in discussions. If he had simply left to start his own thing, it would have made some sense to me. It would have still been dramatic, but the fact that it was announced in this cryptic blog post accusing him of not being candid was wild. It’s one of the most shocking tech stories of the past couple of years.

Lora: You wrote in your article about the different factions within OpenAI: Some employees and leaders thought launching products and putting AI into the hands of everyday users was the right path forward, while others were more cautious and thought that stronger safety measures needed to be taken. How did that dynamic emerge over the past few years?

Karen: Sam Altman sent out an email back in 2019 acknowledging that there were different “tribes” at OpenAI. Because of the way that OpenAI was founded—the original story was that Elon Musk and Sam Altman came together and specifically founded OpenAI kind of as an entity to counteract Big Tech—it was always in the crosshairs of a lot of different ideas about AI: What is the purpose of the technology? How should we build it? How should an entity be structured? As the technology got more powerful—specifically with the catalyst of ChatGPT—so did the Game of Thrones mentality of who got to control it. That came to a head with this news this weekend.

Charlie: There is not only a power struggle but also this quasi-religious belief in what is being built or what could potentially be built. You can’t discount the fact that there are these true believers who are both energized by the idea of an all-powerful AI and horrified by it. That adds an unstable dynamic to the conversation.

Lora: You wrote in your article that this whole situation illustrates the fact that a very small group of people is shaping the future of AI. Given that OpenAI is so closely tied to the future of the technology, I’m curious: To what extent do you think of OpenAI as a traditional tech company? Did this weekend change how you see it?

Karen: The board has, so far, held firm in keeping Altman out, but the question is whether there will still be a company left when everything falls into place. If all 700-plus employees who signed the letter follow through and leave to join Altman and Brockman at Microsoft, then did firing Altman really make any difference? The whole company would be disintegrated, and OpenAI employees are ultimately going to continue commercializing, just as a branch of Microsoft.

But if, for some reason, a significant number of employees stay at OpenAI, and the company continues to move forward, then that would suggest a different model emerging. The board would have successfully acted on its nonprofit-driven mission and very dramatically turned the company in a different direction, not on the basis of shareholders or profit optimization.

It’s too early to tell, and it really is up to the employees themselves.

Charlie: I can’t stop thinking that, if OpenAI was founded in opposition to the way that traditional tech companies were trying to develop and commercialize AI, and it was a sanctuary for those who wanted to build this technology safely, then the principled decision by the board to fire Altman, and the chain of events it has set in motion, may drive a bunch of their talent—certainly their CEO and president—into the arms of one of the largest tech companies in the world.

Karen: Ultimately, both the techno-optimists and the other faction have the same endgame: They’re both trying to control the technology. One is using morality as a cover for that, and the other one is using capitalism as its banner. But both are saying This is for the good of humanity, and they’re using that as their enabling mantra for a seizure of power and control.

Charlie: This is a very small group of people with a lot of power. This is fundamentally a power struggle.

Related:

Inside the chaos at OpenAI

The sudden fall of Sam Altman

Today’s News

The Supreme Court rejected an appeal from the former police officer Derek Chauvin for his conviction in the murder of George Floyd.

Javier Milei, a hard-right libertarian who has drawn comparisons to Donald Trump, will be Argentina’s next president.

President Joe Biden stated that he believes a deal to release some of the hostages Hamas is holding in Gaza is close at hand.

Evening Read

Alex Webb / Magnum

How the Hillbillies Remade America

By Max Fraser

On April 29, 1954, a cross section of Cincinnati’s municipal bureaucracy—joined by dozens of representatives drawn from local employers, private charities, the religious community, and other corners of the city establishment—gathered at the behest of the mayor’s office to discuss a new problem confronting the city. Or, rather, about 50,000 new problems, give or take. That was roughly the number of Cincinnati residents who had recently migrated to the city from the poorest parts of southern Appalachia. The teachers, police officials, social workers, hiring-department personnel, and others who gathered that day in April had simply run out of ideas about what to do about them.

“Education does not have importance to these people as it does to us,” observed one schoolteacher. “They work for a day or two, and then you see them no more,” grumbled an employer.

Read the full article.

More From The Atlantic

Christopher Nolan on the promise and peril of technology

The Confederate general whom all the other Confederates hated

Culture Break

Illustration by The Atlantic. Source: Jupiterimages / Getty.

Read. Justin Torres’s Blackouts, this year’s winner of the National Book Award for Fiction, is a complex story about recovering the history of erased and ignored gay lives.

Watch. The Ballad of Songbirds and Snakes (in theaters now) reveals how The Hunger Games always understood the power of entertainment.

Play our daily crossword.

Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.