Itemoids

Ross Andersen

A Chaotic Week at OpenAI

The Atlantic

www.theatlantic.com/newsletters/archive/2023/11/the-man-who-tried-to-overthrow-sam-altman/676101/

This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

It’s been an unbelievable few days for OpenAI, the influential company behind products such as ChatGPT, the image-generating DALL-E, and GPT-4. On Friday, its CEO, Sam Altman, was suddenly fired by the company’s board. Chaos immediately followed: A majority of the company’s workers revolted, negotiations were held, and now a new agreement has been reached to return Altman to his throne.

It’s a tale of corporate mutiny fit for streaming, and we’ve been following it closely at The Atlantic. The turmoil at OpenAI is juicy, yes, but it is not just gossip: Whatever happens here will be of major consequence to the future of AI development. This is a company that has been at odds with itself over the possibility that an all-powerful “artificial general intelligence” might emerge from its research, potentially dooming humanity if it’s not carefully aligned with society’s best interests. Even though Altman has returned, the OpenAI shake-up will likely change how the technology is developed from here, with significant outcomes for you, me, and everyone else.

Yesterday, our staff writer Ross Andersen reflected on time spent with Ilya Sutskever, OpenAI’s chief scientist and the man who struck out against Altman last week. The relationship—and the rift—between these two men encapsulates the complex dynamic within OpenAI overall. Whatever agreement has been reached on paper to return Altman to his post, the fundamental tension between AI’s promise and peril will persist. In many ways, the story is just beginning.

Damon Beres, senior editor


OpenAI’s Chief Scientist Made a Tragic Miscalculation

By Ross Andersen

Ilya Sutskever, bless his heart. Until recently, to the extent that Sutskever was known at all, it was as a brilliant artificial-intelligence researcher. He was the star student who helped Geoffrey Hinton, one of the “godfathers of AI,” kick off the so-called deep-learning revolution. In 2015, after a short stint at Google, Sutskever co-founded OpenAI and eventually became its chief scientist; so important was he to the company’s success that Elon Musk has taken credit for recruiting him. (Sam Altman once showed me emails between himself and Sutskever suggesting otherwise.) Still, apart from niche podcast appearances and the obligatory hour-plus back-and-forth with Lex Fridman, Sutskever didn’t have much of a public profile before this past weekend. Not like Altman, who has, over the past year, become the global face of AI.

On Thursday night, Sutskever set an extraordinary sequence of events into motion. According to a post on X (formerly Twitter) by Greg Brockman, the former president of OpenAI and the former chair of its board, Sutskever texted Altman that night and asked if the two could talk the following day. Altman logged on to a Google Meet at the appointed time on Friday and quickly learned that he’d been ambushed. Sutskever took on the role of Brutus, informing Altman that he was being fired. Half an hour later, Altman’s ouster was announced in terms so vague that for a few hours, anything from a sex scandal to a massive embezzlement scheme seemed possible.

Read the full article.

What to Read Next

The events of the past few days are just one piece of the OpenAI saga. Over the past year, the company has struggled to balance an imperative from Altman to swiftly move products into the public’s hands with a concern that the technology was not being appropriately subjected to safety assessments. The Atlantic told that story on Sunday, incorporating interviews with 10 current and former OpenAI employees.

Inside the chaos at OpenAI: This tumultuous weekend showed just how few people have a say in the progression of what might be the most consequential technology of our age, Charlie Warzel and Karen Hao write.

The money always wins: As is always true in Silicon Valley, a great idea can get you only so far, Charlie writes.

Does Sam Altman know what he’s creating?: Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk, Ross Andersen writes in his profile of the CEO from our September issue.

P.S.

Looking for a book to read over the long weekend? Try Your Face Belongs to Us, by Kashmir Hill, about the secretive facial-recognition start-up dismantling the concept of privacy. Jesse Barron has a review in The Atlantic here.

— Damon

The OpenAI Mess Is About One Big Thing

The Atlantic

www.theatlantic.com/ideas/archive/2023/11/openai-sam-altman-corporate-governance/676080/

This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems. Sign up here to get it every week.

OpenAI fired its chief executive, Sam Altman, on Friday, accusing him of “not being consistently candid” with its board of directors. This kicked off several days of utter nonsense that astonished the tech world—and probably delighted a bunch of business-school types who now have a great example of why the incredibly boring-sounding term corporate governance is actually extremely important.

Friday: The hour before everything went sideways, OpenAI’s board of directors consisted of just six people, including Altman. Its president, Greg Brockman, apparently took Altman’s side, while the other four members—the chief scientist Ilya Sutskever and three nonemployee members—voted to ether their CEO. Soon after, Brockman quit the company.

Sunday: OpenAI invited Altman back to the office to discuss the prospect of rehiring him as CEO. Despite pressure from Microsoft, however, the board members declined to rehire Altman. Instead, they announced that the next chief executive of the company would be an outsider: Emmett Shear, the former CEO of Twitch, a live-video streaming service.

Monday: The Microsoft chief executive Satya Nadella announced that he would hire Altman, along with other OpenAI workers, to start a new AI-research division within Microsoft. Then roughly 700 of the nearly 800 employees at OpenAI signed a letter demanding the return of Altman as CEO and the resignations of all the board members who stood against him.

If this seems dizzying, the next bit might require Dramamine. Sutskever played the key role in firing Altman over Google Meet on Friday, then declined to rehire him on Sunday, and then signed the letter on Monday demanding the return of Altman and the firing of his own board-member co-conspirators. On X (formerly Twitter), Sutskever posted an apology to the entire company, writing, “I deeply regret my participation in the board’s actions.” Altman replied with three red hearts. One imagines Brutus, halfway through the stabbing of Caesar, pulling out the knife, offering Caesar some gauze, lightly stabbing him again, and then finally breaking down in apologetic tears and demanding that imperial doctors suture the stomach wound. (Soon after, in post-op, Caesar dispatches a courier to send Brutus a brief message inked on papyrus: “<3.”)

We still don’t know much about the OpenAI fracas. We don’t know a lot about Altman’s relationship (or lack thereof) with the board that fired him. We don’t know what Altman did in the days before his firing that made this drastic step seem unavoidable to the board. In fact, the board members who axed Altman have so far refused to elaborate on the precise cause of the firing. But here is what we know for sure: Altman’s ouster stemmed from the bizarre way that OpenAI is organized.

In the first sentence of this article, I told you that “on Friday, OpenAI fired its chief executive, Sam Altman.” Perhaps the most technically accurate way to put that would have been: “On Friday, the board of directors of the nonprofit entity of OpenAI, Inc., fired Sam Altman, who is most famous as the lead driver of its for-profit subsidiary, OpenAI Global LLC.” Confusing, right?

In 2015, Sam Altman, Elon Musk, and several other AI luminaries founded OpenAI as a nonprofit institution to build powerful artificial intelligence. The idea was that the most important technology in the history of humankind (as some claim) ought to “benefit humanity as a whole” rather than narrowly redound to the shareholders of a single firm. As Ross Andersen explained in an Atlantic feature this summer, they structured OpenAI as a nonprofit to be “unconstrained by a need to generate financial return.”

After several frustrating years, OpenAI realized that it needed money—a lot of money. The cost of computational power and engineering talent to build a digital superintelligence turned out to be astronomically high. Plus, Musk, who had been partly financing the organization’s operations, suddenly left the board in 2018 after a failed attempt to take over the firm. This left OpenAI with a gaping financial hole.

OpenAI therefore opened a for-profit subsidiary that would be nested under the OpenAI nonprofit. The entrepreneur and writer Sam Lessin called this structure a corporate “turducken,” referring to the dubious Thanksgiving entrée in which a chicken is stuffed inside a duck, which is in turn stuffed inside a turkey. In this turducken-esque arrangement, the original board would continue to “govern and oversee” all for-profit activities.

When OpenAI, the nonprofit, created OpenAI, the for-profit, nobody imagined what would come next: the ChatGPT boom. Internally, employees predicted that the rollout of the AI chatbot would be a minor event; the company referred to it as a “low-key research preview” that wasn’t likely to attract more than 100,000 users. Externally, the world went mad for ChatGPT. It became, by some measures, the fastest-growing consumer product in history, garnering more than 100 million users within two months of its launch.

Slowly, slowly, and then very quickly, OpenAI, the for-profit, became the star of the show. Altman pushed fast commercialization, and he needed even more money to make that possible. In the past few years, Microsoft has committed more than $10 billion to OpenAI in direct cash and in credits to use its data and cloud services. But unlike in a typical corporate arrangement, in which a major investor might be guaranteed a seat or two on the board of directors, Microsoft’s investments got it nothing. OpenAI’s operating agreement states without ambiguity, “Microsoft has no board seat and no control.” Today, OpenAI’s corporate structure—according to OpenAI itself—looks like this.

[Chart: OpenAI’s corporate structure. Source: OpenAI]

In theory, this arrangement was supposed to guarantee morality plus money. The morality flowed from the nonprofit board of directors. The money flowed from Microsoft, the second biggest company in the world, which has lots of cash and resources to help OpenAI achieve its mission of building a general superintelligence.

But rather than align OpenAI’s commercial and ethical missions, this organizational structure created a house divided against itself. On Friday, this conflict played out in vivid color. Altman, the techno-optimist bent on commercialization, lost out to Sutskever, the Brutus cum mad scientist fearful that super-smart AI poses an existential risk to humanity. This was shocking. But from an organizational standpoint, it wasn’t surprising. A for-profit start-up rapidly developing technology hand in glove with Microsoft was nested under an all-powerful nonprofit board that believed it was duty-bound to resist rapid development of AI and Big Tech alliances. That does not make any sense.

Everything is obvious in retrospect, especially failure, and I don’t want to pretend that I saw any of this coming. I don’t think anybody saw this coming. Microsoft’s investments accrued over many years. ChatGPT grew over many months. That all of this would blow up without any warning was inconceivable.

But that’s the thing about technology. Despite manifestos that claim that the ascendance of technology is a matter of cosmic inevitability, technology is, for now, made by people—flawed people, who may be brilliant and sometimes clueless, who change their minds and then change their minds again. Before we build an artificial general intelligence to create progress without people, we need dependable ways to organize people to work together to build complex things within complex systems. The term for that idea is corporate structure.

OpenAI is on the precipice of self-destruction because, in its attempt to build an ethically pristine institution to protect a possible superintelligence, it built just another instrument of minority control, in which a small number of nonemployees had the power to fire its chief executive in an instant.

In AI research, there is something called the “alignment problem.” When we engineer an artificial intelligence, we ought to make sure that the machine has the intentions and values of its architects. Oddly, the architects of OpenAI created an institution that is catastrophically unaligned, in which the board of directors and the chief executive are essentially running two incompatibly different companies within the same firm. Last week, the biggest question in technology was whether we might live long enough to see humans invent aligned superintelligence. Today, the more appropriate question is: Will we live long enough to see AI’s architects invent a technology for aligning themselves?

When Hollywood Put World War III on Television

The Atlantic

www.theatlantic.com/newsletters/archive/2023/11/the-day-after-hollywood-world-war-iii/676084/


This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

The ABC made-for-television movie The Day After premiered on November 20, 1983. It changed the way many Americans thought about nuclear war—but the fear now seems forgotten.

First, here are three new stories from The Atlantic:

There is no good way to travel anywhere in America.
OpenAI’s chief scientist made a tragic miscalculation.
What did hip-hop do to women’s minds?

A Preview of Hell

We live in an anxious time. Some days, it can feel like the wheels are coming off and the planet is careening out of control. But at least it’s not 1983, the year that the Cold War seemed to be in its final trajectory toward disaster.

Forty years ago today, it was the morning after The Day After, the ABC TV movie about a nuclear exchange between the United States and the Soviet Union. Roughly 100 million people tuned in on Sunday night, November 20, 1983, and The Day After holds the record as the most-watched made-for-television movie in history.

I remember the movie, and the year, vividly. I was 22 and in graduate school at Columbia University, studying the Soviet Union. It’s hard to explain to people who worry about, say, climate change—a perfectly legitimate concern—what it was like to live with the fear not that many people could die over the course of 20 or 50 or 100 years but that the decision to end life on most of the planet in flames and agony could happen in less time than it would take you to finish reading this article.

I will not recount the movie for you; there isn’t much of a plot beyond the stories of people who survive the fictional destruction of Kansas City. There is no detailed scenario, no explanation of what started the war. (This was by design; the filmmakers wanted to avoid making any political points.) But in scenes as graphic as U.S. television would allow, Americans finally got a look at what the last moments of peace, and the first moments of hell, might look like.

Understanding the impact of The Day After is difficult without a sense of the tense Cold War situation during the previous few years. There was an unease (or “a growing feeling of hysteria,” as Sting would sing a few years later in “Russians”) in both East and West that the gears of war were turning and locking, a doomsday ratchet tightening click by click.

The Soviet-American détente of the 1970s was brief and ended quickly. By 1980, President Jimmy Carter was facing severe criticism about national defense even within his own party. He responded by approving a number of new nuclear programs, and unveiling a new and highly aggressive nuclear strategy. The Soviets thought Carter had lost his mind, and they were actually more hopeful about working with the Republican nominee, Ronald Reagan. Soviet fears intensified when Reagan, once in office, took Carter’s decisions and put them on steroids, and in May 1981 the KGB went on alert looking for signs of impending nuclear attack from the United States. In November 1982, Soviet leader Leonid Brezhnev died and was replaced by the KGB boss, Yuri Andropov. The chill in relations between Washington and Moscow became a hard frost.

And then came 1983.

In early March, Reagan gave his famous speech in which he called the Soviet Union an “evil empire” and accused it of being “the focus of evil in the modern world.” Only a few weeks after that, he gave a major televised address to the nation in which he announced plans for space-based missile defenses, soon mocked as “Star Wars.” Two months later, I graduated from college and headed over to the Soviet Union to study Russian for the summer. Everywhere I went, the question was the same: “Why does your president want a nuclear war?” Soviet citizens, bombarded by propaganda, were certain the end was near. So was I, but I blamed their leaders, not mine.

When I returned, I packed my car in Massachusetts and began a road trip to begin graduate school in New York City on September 1, 1983. As I drove, news reports on the radio kept alluding to a missing Korean airliner.

The jet was Korean Air Lines Flight 007. It was downed by Soviet fighter jets for trespassing in Soviet airspace, killing all 269 souls aboard. The shoot-down produced an immense outpouring of rage at the Soviet Union that shocked Kremlin leaders. Soviet sources later claimed that this was the moment when Andropov gave up—forever—on any hope of better relations with the West, and as the fall weather of 1983 got colder, the Cold War got hotter.

We didn’t know it at the time, but in late September, Soviet air defenses falsely reported a U.S. nuclear attack against the Soviet Union: We’re all still alive thanks to a Soviet officer on duty that day who refused to believe the erroneous alert. On October 10, Reagan watched The Day After in a private screening and noted in his diary that it “greatly depressed” him.

On October 23, a truck bomber killed 241 U.S. military personnel in the Marine barracks in Beirut.

Two days after that, the United States invaded Grenada and deposed its Marxist-Leninist regime, an act the Soviets thought could be the prelude to overthrowing other pro-Soviet regimes—even in Europe. On November 7, the U.S. and NATO began a military communications exercise code-named Able Archer, exactly the sort of traffic and activity the Soviets were looking for. Moscow definitely noticed, but fortunately, the exercise wound down in time to prevent any further confusion.

This was the global situation when, on November 20, The Day After aired.

Three days later, on November 23, Soviet negotiators walked out of nuclear-arms talks in Geneva. War began to feel—at least to me—inevitable.

In today’s Bulwark newsletter, the writer A. B. Stoddard remembers how her father, ABC’s motion-picture president Brandon Stoddard, came up with the idea for The Day After. “He wanted Americans, not politicians, to grapple with what nuclear war would mean, and he felt ‘fear had really paralyzed people.’ So the movie was meant to force the issue.”

And so it did, perhaps not always productively. Some of the immediate commentary bordered on panic. (In New York, I recall listening to the antinuclear activist Helen Caldicott on talk radio after the broadcast, and she said nuclear war was a mathematical certainty if Reagan was reelected.) Henry Kissinger, for his part, asked if we should make policy by “scaring ourselves to death.”

Reagan, according to the scholar Beth Fischer, was in “shock and disbelief” that the Soviets really thought he was headed for war, and in late 1983 “took the reins” and began to redirect policy. He found no takers in the Kremlin for his new line until the arrival of Mikhail Gorbachev in 1985, and both men soon affirmed that a nuclear war cannot be won and must never be fought—a principle that in theory still guides U.S. and Russian policy.

In the end, we got through 1983 mostly by dumb luck. If you’d asked me back then as a young student whether I’d be around to talk about any of this 40 years later, I would have called the chances a coin toss.

But although we might feel safer, I wonder if Americans really understand that thousands of those weapons remain on station in the United States, Russia, and other nations, ready to launch in a matter of minutes. The Day After wasn’t the scariest nuclear-war film—that honor goes to the BBC’s Threads—but perhaps more Americans should take the time to watch it. It’s not exactly a holiday movie, but it’s a good reminder at Thanksgiving that we are fortunate for the changes over the past 40 years that allow us to give thanks in our homes instead of in shelters made from the remnants of our cities and towns—and to recommit to making sure that future generations don’t have to live with that same fear.

Related:

We have no nuclear strategy.
I want my mutually assured destruction.

Today’s News

The Wisconsin Supreme Court heard oral arguments in a legal challenge to one of the most severely gerrymandered legislative district maps in the country.
A gunman opened fire in an Ohio Walmart last night, injuring four people before killing himself.
Various storms are expected to cause Thanksgiving travel delays across the United States this week.

Evening Read



Does Sam Altman Know What He’s Creating?

By Ross Andersen

(From July)

On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers. With his heel perched on the edge of his swivel chair, he looked relaxed. The powerful AI that his company had released in November had captured the world’s imagination like nothing in tech’s recent history. There was grousing in some quarters about the things ChatGPT could not yet do well, and in others about the future it may portend, but Altman wasn’t sweating it; this was, for him, a moment of triumph.

In small doses, Altman’s large blue eyes emit a beam of earnest intellectual attention, and he seems to understand that, in large doses, their intensity might unsettle. In this case, he was willing to chance it: He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.

Read the full article.

More From The Atlantic

It’s too easy to buy stuff you don’t want.
Harvard has a brand problem. Here’s how to fix it.
How Mike Birbiglia got sneaky-famous

Culture Break


Read. These six books might change how you think about mental illness.

Watch. Interstellar (streaming on Paramount+) is one of the many films in which Christopher Nolan tackles the promise and peril of technology.

Play our daily crossword.

P.S.

If you want to engage in nostalgia for a better time when serious people could discuss serious issues, I encourage you to watch not only The Day After but the roundtable held on ABC right after the broadcast. Following a short interview with then–Secretary of State George Shultz, Ted Koppel moderated a discussion among Kissinger, former Secretary of Defense Robert McNamara, former National Security Adviser Brent Scowcroft, the professor Elie Wiesel, the scientist Carl Sagan, and the conservative writer William F. Buckley. The discussion ranged across questions of politics, nuclear strategy, ethics, and science. It was pointed, complex, passionate, and respectful—and it went on for an hour and a half, including audience questions.

Try to imagine something similar today, with any network, cable or broadcast, blocking out 90 precious minutes for prominent and informed people to discuss disturbing matters of life and death. No chyrons, no smirky hosts, no music, no high-tech sets. Just six experienced and intelligent people in an unadorned studio talking to one another like adults. (One optimistic note: Both McNamara and Kissinger that night thought it was almost unimaginable that the superpowers could cut their nuclear arsenals in half in 10 or even 15 years. And yet, by 1998, the U.S. arsenal had been reduced by more than half, and Kissinger in 2007 joined Shultz and others to argue for going to zero.)

I do not miss the Cold War, but I miss that kind of seriousness.

Tom

Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.

Christopher Nolan on the Promise and Peril of Technology

The Atlantic

www.theatlantic.com/technology/archive/2023/11/christopher-nolan-interview-technology-oppenheimer-interstellar/676044/

By the time I sat down with Christopher Nolan in his posh hotel suite not far from the White House, I guessed that he was tired of Washington, D.C. The day before, he’d toured the Oval Office and had lunch on Capitol Hill. Later that night, I’d watched him receive an award from the Federation for American Scientists, an organization that counts Robert Oppenheimer, the subject of Nolan’s most recent film, among its founders. Onstage, he’d briefly jousted with Republican Senator Todd Young on the subject of AI regulation. He’d endured a joke, repeated too many times by Senate Majority Leader Chuck Schumer, about the subject of his next film—“It’s another biopic: Schumer.”

The award was sitting on an end table next to Nolan, who was dressed in brown slacks, a gray vest, and a navy suit jacket—his Anglo-formality undimmed by decades spent living in Los Angeles. “It’s heavy, and glass, and good for self-defense,” he said of the award, while filling his teacup. I suggested that it may not be the last trophy he receives this winter. Despite an R-rating and a three-hour runtime, Oppenheimer made nearly $1 billion at the box office, and it’s now the odds-on favorite to win Nolan his first Best Picture and Best Director statuettes at the Oscars.

“Don’t jinx me,” he said.

I had come to ask Nolan about technology—both its promise and its perils—as a theme across his filmography. What follows is a condensed and edited transcript of our conversation, in which we discuss the similarities between Nikola Tesla and Robert Oppenheimer, the techno-optimism of Interstellar, how Inception anticipated the social-media age, and why he hasn’t yet made a film about artificial intelligence.

Ross Andersen: It’s a low science to infer someone’s worldview from their art. But we now have 12 feature films from you, and thinking about them as a whole, it seems to me that one of the reasons you might have been drawn to Robert Oppenheimer’s story is that, like him, you feel quite conflicted about technology.

Christopher Nolan: I think it’s more that the conflict that a lot of us feel about technology is inherently dramatic. I’ve always been a fan of science fiction, which I think is often better referred to as speculative fiction, where you’re looking at particular trends—technological, but also sociological, economic—and where they might go, and exaggerating the present-day moment. There’s a lot of drama to be derived from that, and I’ve certainly enjoyed playing in that field.

I don’t think of The Dark Knight trilogy, for example, as science fiction per se. But it is speculative fiction. The whole thing with Gotham City was to exaggerate a contemporary American city in all sorts of ways that would bring out some of the more dramatic elements. What my brother’s screenplay for that film brought out very strongly was the idea that surveillance could be pursued through cellphones, and that was way ahead of its time. At the time, the idea that you could image an entire city through cellphones was very improbable and exotic. I remember saying to him, “Are people really going to believe that?” Now I think people sort of view that as our reality.

Andersen: I recently watched The Prestige, and it seemed to me that Nikola Tesla, as you portray him in that film, is a kind of a proto-Oppenheimer.

Nolan: Oh yeah, very much so. I don’t know if you know this, but Tesla was, somewhat controversially, credited with coming up with the concept of mutually assured destruction. When he died—by then having succumbed to a form of madness—government officials descended on the hotel room where he was staying and went through his papers. Please fact-check all of this, by the way. It’s been a long time since I looked at the material. As a filmmaker, you sort of glibly give all of these facts, because in Hollywood, it’s all a sales pitch. [Editor’s note: This article has been fact-checked.] It was rumored that he had scribbled down a design for a sort of death ray, and while I don’t think there was any hard science behind it, the concept was that this weapon would be so powerful that if both sides had it, it would end war.

That’s very similar to the conclusions that Oppenheimer came to. When people are that smart, they can find a way to make anything make sense. It seemed to me that he had a notion that until the bomb is used, people won’t really understand it. That’s a pretty extreme rationalization, and Oppenheimer’s story is full of those mental gymnastics. He was a very ethical person, but he also had a brilliantly abstracted philosophical way of looking at everything he was involved with, and that can lead you to pretty strange places.

[Read: Oppenheimer’s cry of despair in The Atlantic]

Andersen: Inception is also about a risky technology that emerges from military research. But instead of a bomb, it’s a dream-sharing technology that compels the main characters to turn inward into mazes of their own creation, so much so that even though they have small children, they have trouble pulling themselves out of those worlds. As our digital worlds evolve and become more transfixing over time, have you seen some resonances with that material?

Nolan: When the film came out, in 2010, the smartphone was exploding in popularity, and some of its inward-looking structure was actually based on the branching mechanisms of the iPod. I’d been using iPods to listen to music, and on the menu screens, you have these branching networks that allow you to go deeper into different catalogs. This was a time when people were first looking at the potential of carrying a whole world in your pocket, the kind of stuff that William Gibson had written about years earlier as pure science fiction. Those sorts of things were starting to become part of people’s everyday lives, and so people started to look at reality differently. They started to think about realities within realities. This was all unwitting, by the way: There’s a tendency to speak about your past work as though everything was planned and intentional. You try to analyze in hindsight what was going on in your head, and what synchronized with the world. But at the time, and as I continue to work, I try to be instinctive and unselfconscious, and open to the things that move me in the world.

Christopher Nolan and Cillian Murphy on the set of Oppenheimer (Melinda Sue Gordon / Universal Pictures)

Andersen: In The Prestige and Inception, the consequences of misusing technology are largely confined to the personal sphere. But in your Batman films, and more recently in Tenet and certainly Oppenheimer, the consequences of technological misuse extend to millions of people, if not all of humanity. What drew you toward these larger stories of planetary or even cosmic scale, as your career has progressed?

Nolan: I’m not sure it’s so much of a progression. Each story has its own reasons for a technology to be contained in a particular scale. Inception is about recursion, so the scale is internal. It’s infinities within infinities. I think Oppenheimer is an interesting case, because what I’ve done there is to take for granted the large scale, the global implications. This is someone whose activities and actions changed the world forever, with the highest stakes possible, and because we all go into the film knowing that, I felt that I could look at the story entirely from his point of view, to try and make it as personal as possible. I was hoping that the effect at the end—when the global implications seep in and you start to see gaps and cracks in his thinking, and his sense of guilt and stress—would be more powerful for not having been discussed or presented earlier in the film. So I think Oppenheimer is a combination of the two things: It’s very personal, but the real-world stakes of the story are sort of undeniable.

Andersen: Interstellar seems like an outlier in your work, with respect to technology. The film’s hero, Cooper (played by Matthew McConaughey), is an engineer who can’t stop reminding us that he’s an engineer. He aches with nostalgia for the Apollo missions. He thinks that humans have turned away from the stars—and the film seems to agree with him. In the end, it’s really science and technology and the exploratory spirit (along with love) that deliver humanity from extinction. Is it right to think of Interstellar as a defense or even celebration of technological ambition, and if so, how does that sit alongside something like Oppenheimer?

Nolan: It very much is that. I don’t want to speak for my brother, who worked on the script for years, but I know that one of the things that fed into it was this experience we had while scouting locations for The Dark Knight in Hong Kong. We both went to see a documentary about the Apollo missions voiced by Tom Hanks. There’s a part about the ridiculous idea that the moon landings were faked, and I think we were both—and Jonah in particular—very struck by how sad it was that the filmmakers felt the need to address such an absurd conspiracy theory, and how that diminished the achievements of everyone involved. This fed very directly into the character of Cooper and his idea that society had started to devalue the spirit of exploration. Now, is that consistent with the other ways in which our work—and my work—has addressed technology? Not necessarily, but at the same time, these films are not didactic. They aren’t intended to convey specific messages about society. They’re just trying to tell great stories.

[Read: I want to watch Tenet again. Unfortunately.]

Andersen: Interstellar also gives us one of Hollywood’s most sublime scientific spectacles with the black hole, Gargantua. In Oppenheimer, we get another one, but now, instead of a morally neutral object, it’s the Trinity atomic-bomb test. How did that difference play into the creative choices you made while shooting?

Nolan: When I was writing the script for Oppenheimer, my initial creative impulse was that the Trinity test needed to be portrayed with as much realism as possible, to put you into the heads of the scientists who were engaged in creating and testing it. If you look at the end of The Dark Knight Rises, there is a very beautifully rendered nuclear explosion that’s done with computer graphics. Paul Franklin and his team did an excellent job, and an enormous amount of research and detail went into it. But the technology of computer graphics is inherently a bit distancing and safe, which worked for that film because Batman has saved the day and the explosion is no longer threatening people. I knew this would need to be different, and I knew that the imagery would have to be beautiful and terrifying at the same time, and I felt very strongly that only real things that are photographed could achieve that. As a filmmaker, you choose the methodology that’s going to give you the appropriate resonance, and the resonance we needed for Trinity was massive threat and hypnotic beauty at the same time.

Andersen: Given your obvious interests in technology and personal identity and the nature of consciousness, it’s curious to me that we don’t yet have a film from you that takes AI as its central subject.

Nolan: Well, my brother has done four seasons of Westworld and five seasons of Person of Interest, which are amazing, prescient explorations of artificial intelligence and the security state and data security. That, and look, I’m a huge fan of 2001: A Space Odyssey, which in its elemental, Kubrickian simplicity kind of says everything there is to say about artificial intelligence.

Andersen: There’s another scene in Interstellar that is one of the most emotionally gutting sequences in any of your films. As a consequence of gravity’s distortions of time, Cooper has missed decades of his kids’ lives, and he watches all of these video messages that they sent during that period, in sequence, while just shaking and sobbing. It’s a really visceral experience, especially for parents. How did you conceive of the idea for that scene?

Nolan: The wonderful truth is that it was in my brother’s script, and one of the things that made me want to do the film. As a parent, it seemed like such a powerful story moment. It was always the north star of the film, this beautiful sequence—and some of the actual words in the script, the specifics of what was said in the messages, never changed. We filmed McConaughey’s reaction first, in close-up. You never do that in a scene. You start with a wide shot and then warm up. But he hadn’t seen the video messages—we’d filmed them all in advance, so that everything would be there in the moment—and he wanted to give us his first reaction. We shot it twice close-up, and I think I used the second one, because the first one was too raw. Then we shot the monitors, and the wider shots, and put it together.

The last piece of the puzzle was a beautiful piece of music by Hans Zimmer that hadn’t really found a place in the film. I think he literally referred to it as “organ doodle.” My editor, Lee Smith, and I tried playing it just while we were in the room playing a cut, and we both felt that it was devastating. The other thing we did, which I don’t think I’ve done in any of my other films, is to treat the music as a diegetic sound: When the messages stop, the music stops. It almost breaks the fourth wall, and it’s not the sort of thing that I like to do, but it felt perfect and apt for that moment.

Andersen: I’ve heard you express in interviews about Oppenheimer, and in the script of the film itself, this idea that the Manhattan Project was the most important thing that ever happened—and I think I hear a bit of a corrective in that claim. Do you think that, generally speaking, in our popular historical consciousness, science and technology get short shrift?

Nolan: I haven’t really thought about it in those terms. To be completely blunt, I was trying to express why I wanted to make the film and why I think the film is dramatic. But I think the argument that Oppenheimer is the most important man who ever lived because he changed the world forever is pretty hard to refute. The only real argument against it is the “key man of history” argument, which is to say, if not Oppenheimer, it would have been Teller who brought the Manhattan Project to its fruition, but that’s parallel-universe stuff. In our universe, it was Oppenheimer who brought the project to its fruition. He changed the world, and it can never be changed back.

Andersen: I’ve followed your career long enough to know that you keep your projects under wraps until you’re good and ready.

Nolan: Then you’re wasting your last question.

Andersen: Well, it’s a meta-question about where you might go from here. You’ve just done this epic film. It’s three hours long. It contemplates the fate of humanity, and the possibility that we might extinguish ourselves. It seems to me that you can only go smaller from here—although I’m happy to be corrected—and I wonder if that will be a challenge for you?

Nolan: You want every new project to be a challenge, and I think there’s a lot of misunderstanding about what really gives scale to a film. You can look at it in terms of budget. You can look at it in terms of shooting locations. You can look at it in terms of story. I don’t tend to think in those terms. I don’t think about, “Oh, I’ve done a big one; now I’ll do a small one.” In my kind of work, Oppenheimer was pretty lean; in terms of budget, it was a lot smaller than some of my other films. I try not to be reactive in my choices. To me, it’s really about finding the story that I want to be engaged with in the years it takes to make a film.

Andersen: Has one gripped you?

Nolan: I’m not going to answer that.

Peter Thiel Is Taking a Break From Democracy

The Atlantic

www.theatlantic.com/politics/archive/2023/11/peter-thiel-2024-election-politics-investing-life-views/675946/


It wasn’t clear at first why Peter Thiel agreed to talk to me.

He is, famously, no friend of the media. But Thiel—co-founder of PayPal and Palantir, avatar of techno-libertarianism, bogeyman of the left—consented to a series of long interviews at his home and office in Los Angeles. He was more open than I expected him to be, and he had a lot to say.

But the impetus for these conversations? He wanted me to publish a promise he was going to make, so that he would not be tempted to go back on his word. And what was that thing he needed to say, loudly? That he wouldn’t be giving money to any politician, including Donald Trump, in the next presidential campaign.

Already, he has endured the wrath of Trump. Thiel tried to duck Trump’s calls for a while, but in late April the former president managed to get him on the phone. Trump reminded Thiel that he had backed two of Thiel’s protégés, Blake Masters and J. D. Vance, in their Senate races last year. Thiel had given each of them more than $10 million; now Trump wanted Thiel to give the same to him.

When Thiel declined, Trump “told me that he was very sad, very sad to hear that,” Thiel recounted. “He had expected way more of me. And that’s how the call ended.”

Months later, word got back to Thiel that Trump had called Masters to discourage him from running for Senate again, and had called Thiel a “fucking scumbag.”

Thiel’s hope was that this article would “lock me into not giving any money to Republican politicians in 2024,” he said. “There’s always a chance I might change my mind. But by talking to you, it makes it hard for me to change my mind. My husband doesn’t want me to give them any more money, and he’s right. I know they’re going to be pestering me like crazy. And by talking to you, it’s going to lock me out of the cycle for 2024.”

This matters because of Thiel’s unique role in the American political ecosystem. He is the techiest of tech evangelists, the purest distillation of Silicon Valley’s reigning ethos. As such, he has become the embodiment of a strain of thinking that is pronounced—and growing—among tech founders.

And why does he want to cut off politicians? It’s not that they are mediocre as individuals, and therefore incapable of bringing about the kinds of civilization-defining changes a man like him would expect to see. His disappointment runs deeper than that. Their failure to make the world conform to his vision has soured him on the entire enterprise—to the point where he no longer thinks it matters very much who wins the next election.

Not for the first time, Peter Thiel has lost interest in democracy.

Thiel’s decision to endorse Trump at the Republican National Convention in 2016 surprised some of his closest friends. Thiel has cultivated an image as a man of ideas, an intellectual who studied philosophy with René Girard and owns first editions of Leo Strauss in English and German. Trump quite obviously did not share these interests, or Thiel’s libertarian principles.

But four months earlier, Thiel had seen an omen. On March 18, 2016, a jury delivered an extraordinary $115 million verdict to Hulk Hogan in his invasion-of-privacy lawsuit against Gawker Media, whose website had published portions of a sex tape featuring Hogan. Thiel had secretly funded the litigation against Gawker, which had mocked him for years and outed him as gay. The verdict drove the company out of business.

For Thiel, the outcome was more than vindication. It was a sign. When the jury came back, “my instant reaction at that point was ‘Wow, maybe Trump wins the election,’” he told me. In his mind, Gawker was a stand-in for the media writ large, hostile to the presumptive Republican nominee; Hogan was a Trumplike figure; and the jury—the voters—had taken his side.

Thiel himself had not yet publicly embraced Trump. In the Republican primary, he had backed Carly Fiorina, the former Hewlett-Packard CEO and a fellow Stanford alum, with a $2 million contribution. Though his candidate had lost, he planned to attend the RNC as a delegate.

Then came a call from Donald Trump Jr. Thiel had never met father or son, and had yet to give money to Trump’s campaign, but the younger Trump had noticed his name on the delegate list. The convention was 10 days away, and Trump was short on high-profile endorsements. “Do you want to speak?” Don Jr. asked. Thiel thought it might be fun.

He sounded out his old friend Reid Hoffman, the co-founder of LinkedIn, who has since become his political nemesis. “We were talking, and he said, ‘I think I’m going to—I’m considering going and giving a speech at the Republican National Convention,’” Hoffman recalled. “And I laughed, thinking he was joking. Right? And it was like, ‘No, no, no, I’m not joking.’”

For years, Thiel had been saying that he generally favored the more pessimistic candidate in any presidential race because “if you’re too optimistic, it just shows you’re out of touch.” He scorned the rote optimism of politicians who, echoing Ronald Reagan, portrayed America as a shining city on a hill. Trump’s America, by contrast, was a broken landscape, under siege.

Thiel is not against government in principle, his friend Auren Hoffman (who is no relation to Reid) says. “The ’30s, ’40s, and ’50s—which had massive, crazy amounts of power—he admires because it was effective. We built the Hoover Dam. We did the Manhattan Project,” Hoffman told me. “We started the space program.”

But the days when great men could achieve great things in government are gone, Thiel believes. He disdains what the federal apparatus has become: rule-bound, stifling of innovation, a “senile, central-left regime.” His libertarian critique of American government has curdled into an almost nihilistic impulse to demolish it.

“‘Make America great again’ was the most pessimistic slogan of any candidate in 100 years, because you were saying that we are no longer a great country,” Thiel told me. “And that was a shocking slogan for a major presidential candidate.”

He thought people needed to hear it. Thiel gave $1.25 million to the Trump campaign, and had an office in Trump Tower during the transition, where he suggested candidates for jobs in the incoming administration. (His protégé Michael Kratsios was named chief technology officer, but few of Thiel’s other candidates got jobs.)

“Voting for Trump was like a not very articulate scream for help,” Thiel told me. He fantasized that Trump’s election would somehow force a national reckoning. He believed somebody needed to tear things down—slash regulations, crush the administrative state—before the country could rebuild.

He admits now that it was a bad bet.

“There are a lot of things I got wrong,” he said. “It was crazier than I thought. It was more dangerous than I thought. They couldn’t get the most basic pieces of the government to work. So that was—I think that part was maybe worse than even my low expectations.”

But if supporting Trump was a gamble, Thiel told me, it’s not one he regrets.

Reid Hoffman, who has known Thiel since college, long ago noticed a pattern in his old friend’s way of thinking. Time after time, Thiel would espouse grandiose, utopian hopes that failed to materialize, leaving him “kind of furious or angry” about the world’s unwillingness to bend to whatever vision was possessing him at the moment. “Peter tends to be not ‘glass is half empty’ but ‘glass is fully empty,’” Hoffman told me.

Disillusionment was a recurring theme in my conversations with Thiel. He is worth between $4 billion and $9 billion. He lives with his husband and two children in a glass palace in Bel Air that has nine bedrooms and a 90-foot infinity pool. He is a titan of Silicon Valley and a conservative kingmaker. Yet he tells the story of his life as a series of disheartening setbacks.

Born in Germany, the son of a mining engineer, Thiel lived briefly in South West Africa (modern-day Namibia) as a child but grew up primarily in Ohio and California. After graduating from Stanford and then Stanford Law, he worked briefly on the East Coast before heading back to Silicon Valley.

In 1998, Thiel teamed up with Max Levchin, a brilliant computer scientist, and together they founded the company that became PayPal, with the declared purpose of creating a libertarian alternative to government currency. That grand ambition went unfulfilled, but PayPal turned out to be a terrific way to pay for online purchases, which were growing exponentially. In 2002, eBay bought the company for $1.5 billion.

In 2004, Thiel co-founded Palantir Technologies, a private intelligence firm that does data mining for government and private clients at home and abroad. The CIA’s venture-capital arm, called In-Q-Tel, was its first outside investor.

This was also the year he placed the most celebrated wager in the history of venture capital. He met Mark Zuckerberg, liked what he heard, and became Facebook’s first outside investor. Half a million dollars bought him 10 percent of the company, most of which he cashed out for about $1 billion in 2012. He came to regret the sale, however; at Facebook’s market peak, in 2021, his stake would have been worth many times more.

Thiel made some poor investments, losing enormous sums by going long on the stock market in 2008, when it nose-dived, and then shorting the market in 2009, when it rallied. But on the whole, he has done exceptionally well. Alex Karp, his Palantir co-founder, who agrees with Thiel on very little other than business, calls him “the world’s best venture investor.”

Thiel told me this is indeed his ambition, and he hinted that he may have achieved it. But his dreams have always been much, much bigger than that.

He longs for a world in which great men are free to work their will on society, unconstrained by government or regulation or “redistributionist economics” that would impinge on their wealth and power—or any obligation, really, to the rest of humanity. He longs for radical new technologies and scientific advances on a scale most of us can hardly imagine. He takes for granted that this kind of progress will redound to the benefit of society at large.

More than anything, he longs to live forever.

Thiel does not believe death is inevitable. Calling death a law of nature is, in his view, just an excuse for giving up. “It’s something we are told that demotivates us from trying harder,” he said. He has spent enormous sums trying to evade his own end but feels that, if anything, he should devote even more time and money to solving the problem of human mortality.

[From the January/February 2023 issue: Adam Kirsch on the people cheering for humanity’s end]

Thiel grew up reading a great deal of science fiction and fantasy—Heinlein, Asimov, Clarke. But especially Tolkien; he has said that he read the Lord of the Rings trilogy at least 10 times. Tolkien’s influence on his worldview is obvious: Middle-earth is an arena of struggle for ultimate power, largely without government, where extraordinary individuals rise to fulfill their destinies. Also, there are immortal elves who live apart from men in a magical sheltered valley.

Did his dream of eternal life trace to The Lord of the Rings? I wondered.

Yes, Thiel said, perking up. “There are all these ways where trying to live unnaturally long goes haywire” in Tolkien’s works. But you also have the elves. “And then there are sort of all these questions, you know: How are the elves different from the humans in Tolkien? And they’re basically—I think the main difference is just, they’re humans that don’t die.”

“So why can’t we be elves?” I asked.

Thiel nodded reverently, his expression a blend of hope and chagrin.

“Why can’t we be elves?” he said.

Thiel’s abandonment of Trump is not the first time he has decided to step away from politics.

During college, he co-founded The Stanford Review, gleefully throwing bombs at identity politics and the university’s diversity-minded reform of the curriculum. He co-wrote The Diversity Myth in 1995, a treatise against what he recently called the “craziness and silliness and stupidity and wickedness” of the left.

As he built his companies and grew rich, he began pouring money into political causes and candidates—libertarian groups such as the Endorse Liberty super PAC, in addition to a wide range of conservative Republicans, including Senators Orrin Hatch and Ted Cruz and the anti-tax Club for Growth’s super PAC.

But something changed for Thiel in 2009, the first of several swings of his political pendulum. That year he wrote a manifesto titled “The Education of a Libertarian,” in which he disavowed electoral politics as a vehicle for reshaping society. The people, he concluded, could not be trusted with important decisions. “I no longer believe that freedom and democracy are compatible,” he wrote.

It was a striking declaration. An even more notable one followed: “Since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.” (He elaborated, after some backlash, that he did not literally oppose women’s suffrage, but neither did he affirm his support for it.)

Thiel laid out a plan, for himself and others, “to find an escape from politics in all its forms.” He wanted to create new spaces for personal freedom that governments could not reach—spheres where the choices of one great man could still be paramount. “The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom,” he wrote. His manifesto has since become legendary in Silicon Valley, where his worldview is shared by other powerful men (and men hoping to be Peter Thiel).

Thiel’s investment in cryptocurrencies, like his founding vision at PayPal, aimed to foster a new kind of money “free from all government control and dilution.” His decision to rescue Elon Musk’s struggling SpaceX in 2008—with a $20 million infusion that kept the company alive after three botched rocket launches—came with aspirations to promote space as an open frontier with “limitless possibility for escape from world politics.” (I tried to reach Musk at X, requesting an interview, but got a poop emoji in response.)

It was seasteading that became Thiel’s great philanthropic cause in the late aughts and early 2010s. The idea was to create autonomous microstates on platforms in international waters. This, Thiel believed, was a more realistic path toward functioning libertarian societies in the short term than colonizing space. He gave substantial sums to Patri Friedman, the grandson of the economist Milton Friedman, to establish the nonprofit Seasteading Institute.

Thiel told a room full of believers at an institute conference in 2009 that most people don’t think seasteading is possible and will therefore not interfere until it’s too late. “The question of whether seasteading is desirable or possible in my mind is not even relevant,” he said. “It is absolutely necessary.”

Engineering challenges aside, Max Levchin, his friend and PayPal co-founder, dismissed the idea that Thiel would ever actually move to one of these specks in the sea. “There’s zero chance Peter Thiel would live on Sealand,” he said, noting that Thiel likes his comforts too much. (Thiel has mansions around the world and a private jet. Seal performed at his 2017 wedding, at the Belvedere Museum in Vienna.)

By 2015, six years after declaring his intent to change the world from the private sector, Thiel began having second thoughts. He cut off funding for the Seasteading Institute, after years of talk had yielded no practical progress, and turned to other forms of escape. He already had German and American citizenship, but he invested millions of dollars in New Zealand and obtained citizenship there in 2011. He bought a former sheep station on 477 acres in the lightly populated South Island that had the makings of an End Times retreat in the country where the Lord of the Rings films were shot. Sam Altman, the former venture capitalist and now CEO of OpenAI, revealed in 2016 that in the event of global catastrophe, he and Thiel planned to wait it out in Thiel’s New Zealand hideaway.

When I asked Thiel about that scenario, he seemed embarrassed and deflected the question. He did not remember the arrangement as Altman did, he said. “Even framing it that way, though, makes it sound so ridiculous,” he told me. “If there is a real end of the world, there is no place to go.”

[From the September 2023 issue: Ross Andersen on Sam Altman’s ambitious, ingenious, terrifying quest to create a new form of intelligence]

Over and over, Thiel has voiced his discontent with what’s become of the grand dreams of science fiction in the mid-20th century. “We’d have colonies on the moon, you’d have robots, you’d have flying cars, you’d have cities in the ocean, under the ocean,” he said in his Seasteading Institute keynote. “You’d have eco farming. You’d turn the deserts into arable land. There were sort of all these incredible things that people thought would happen in the ’50s and ’60s and they would sort of transform the world.”

None of that came to pass. Even science fiction turned hopeless—nowadays, you get nothing but dystopias. The tech boom brought us the iPhone and Uber and social media, none of them a fundamental improvement to the human condition. He hungered for advances in the world of atoms, not the world of bits.

For a time, Thiel thought he knew how to set things right. Founders Fund, the venture-capital firm he established in 2005 with Luke Nosek and Ken Howery, published a manifesto that complained, “We wanted flying cars, instead we got 140 characters.” The fund, therefore, would invest in smart people solving hard problems “that really have the potential to change the world.”

I joined Thiel one recent Tuesday afternoon for a videoconference to review a pair of start-ups in his portfolio. In his little box on the Zoom screen, he looked bored.

Daniel Yu, connecting from Zanzibar, made a short, lucid presentation. His company, Wasoko, was an e-commerce platform for mom-and-pop stores in Africa, supplying shopkeepers with rice, soap, toilet paper, and other basics. Africa is the fastest-urbanizing region in the world, and Wasoko’s gross margin had doubled since last year.

Thiel was looking down at his briefing papers. He read something about Wasoko becoming “the Alibaba of Africa”—a pet peeve. “Anything that’s the something of somewhere is the nothing of nowhere,” he said, a little sourly.

Next up was a company called Laika Mascotas, in Bogotá. Someone on the call described it as the Chewy of Latin America. Thiel frowned. The company delivered pet supplies directly to the homes of consumers. It had quadrupled its revenues every year for three years. The CEO, Camilo Sánchez Villamarin, walked through the numbers. Thiel thanked him and signed off.

This was not what Thiel wanted to be doing with his time. Bodegas and dog food were making him money, apparently, but he had set out to invest in transformational technology that would advance the state of human civilization.

The trouble is not exactly that Thiel’s portfolio is pedestrian or uninspired. Founders Fund has holdings in artificial intelligence, biotech, space exploration, and other cutting-edge fields. What bothers Thiel is that his companies are not taking enough big swings at big problems, or that they are striking out.

“It was harder than it looked,” Thiel said. “I’m not actually involved in enough companies that are growing a lot, that are taking our civilization to the next level.”

“Because you couldn’t find those companies?” I asked.

“I couldn’t find them,” he said. “I couldn’t get enough of them to work.”

In 2018, a Russian named Daniil Bisslinger handed Thiel his business card. The card described him as a foreign-service officer. Thiel understood otherwise. He believed that Bisslinger was an intelligence officer with the FSB, the successor to the Soviet KGB. (A U.S. intelligence official later told me Thiel was right. The Russian embassy in Berlin, where Bisslinger has been based, did not respond to questions about him.)

Thiel received an invitation that day, and then again in January 2022, to meet with Russian President Vladimir Putin. No agenda was specified. Thiel had been fascinated by Putin’s czarlike presence in a room in Davos years before, all “champagne and caviar, and you had sort of this gaggle of, I don’t know, Mafia-like-looking oligarchs standing around him,” he recalled, but he did not make the trip.

Instead, he reported the contact to the FBI, for which Thiel had become a confidential human source code-named “Philosopher.” Thiel’s role as an FBI informant, first reported by Insider, dated back to May 2021. Charles Johnson, a tech investor, right-wing attention troll, and longtime associate of Thiel’s, told me he himself had become an FBI informant some time ago. Johnson introduced Thiel to FBI Special Agent Johnathan Buma.

A source with close knowledge of the relationship said Buma told Thiel that he did not want to know about Thiel’s contacts with U.S. elected officials or political figures, which were beyond the FBI’s investigative interests. Buma saw his interactions with Thiel, this source said, as strictly “a counterintelligence, anti-influence operation” directed at foreign governments.

Thiel responded to my questions about his FBI relationship with a terse “no comment.” A close associate, speaking with Thiel’s permission, said “it would be strange if Peter had never met with people from the deep state,” including “three-letter agencies, especially given the fact that he founded Palantir 20 years ago.”

Johnson told me he knows he has a reputation as a right-wing agitator, but said that he had fostered that image in order to gather information for the FBI and other government agencies. (He said he is now a supporter of President Joe Biden.) “I recognize that I’m an imperfect messenger,” he said. He told me a great many things about Thiel and others that I could not verify, but knowledgeable sources confirmed his role in recruiting Thiel for Buma. He and Thiel have since fallen out. “We are taking a permanent break from one another,” Thiel texted Johnson about a year ago. “Starting now.”

In at least 20 hours of logged face-to-face meetings with Buma, Thiel reported on what he believed to be a Chinese effort to take over a large venture-capital firm, discussed Russian involvement in Silicon Valley, and suggested that Jeffrey Epstein—a man he had met several times—was an Israeli intelligence operative. (Thiel told me he thinks Epstein “was probably entangled with Israeli military intelligence” but was more involved with “the U.S. deep state.”)

Buma, according to a source who has seen his reports, once asked Thiel why some of the extremely rich seemed so open to contacts with foreign governments. “And he said that they’re bored,” this source said. “‘They’re bored.’ And I actually believe it. I think it’s that simple. I think they’re just bored billionaires.”

In Thiel’s Los Angeles office, he has a sculpture that resembles a three-dimensional game board. Ascent: Above the Nation State Board Game Display Prototype is the New Zealand artist Simon Denny’s attempt to map Thiel’s ideological universe. The board features a landscape in the aesthetic of Dungeons & Dragons, thick with monsters and knights and castles. The monsters include an ogre labeled “Monetary Policy.” Near the center is a hero figure, recognizable as Thiel. He tilts against a lion and a dragon, holding a shield and longbow. The lion is labeled “Fair Elections.” The dragon is labeled “Democracy.” The Thiel figure is trying to kill them.

Thiel saw the sculpture at a gallery in Auckland in December 2017. He loved the piece, perceiving it, he told me, as “sympathetic to roughly my side” of the political spectrum. (In fact, the artist intended it as a critique.) At the same show, he bought a portrait of his friend Curtis Yarvin, an explicitly antidemocratic writer who calls for a strong-armed leader to govern the United States as a monarch. Thiel gave the painting to Yarvin as a gift.

When I asked Thiel to explain his views on democracy, he dodged the question. “I always wonder whether people like you … use the word democracy when you like the results people have and use the word populism when you don’t like the results,” he told me. “If I’m characterized as more pro-populist than the elitist Atlantic is, then, in that sense, I’m more pro-democratic.”

This felt like a debater’s riposte, not to be taken seriously. He had given a more honest answer before that: He told me that he no longer dwells on democracy’s flaws, because he believes we Americans don’t have one. “We are not a democracy; we’re a republic,” he said. “We’re not even a republic; we’re a constitutional republic.”

He said he has no wish to change the American form of government, and then amended himself: “Or, you know, I don’t think it’s realistic for it to be radically changed.” Which is not at all the same thing.

When I asked what he thinks of Yarvin’s autocratic agenda, Thiel offered objections that sounded not so much principled as practical.

“I don’t think it’s going to work. I think it will look like Xi in China or Putin in Russia,” Thiel said, meaning a malign dictatorship. “It ultimately I don’t think will even be accelerationist on the science and technology side, to say nothing of what it will do for individual rights, civil liberties, things of that sort.”

Still, Thiel considers Yarvin an “interesting and powerful” historian. “One of the big things that he always talks about is the New Deal and FDR in the 1930s and 1940s,” Thiel said. “And the heterodox take is that it was sort of a light form of fascism in the United States.”

Franklin D. Roosevelt, in this reading of history, used a domineering view of executive authority, a compliant Congress, and an intimidated Supreme Court to force what Thiel called “very, very drastic change in the nature of our society.” Yarvin, Thiel said, argues that “you should embrace this sort of light form of fascism, and we should have a president who’s like FDR again.”

It would be hard to find an academic historian to endorse the view that fascism, light or otherwise, accounted for Roosevelt’s presidential power. But I was interested in something else: Did Thiel agree with Yarvin’s vision of fascism as a desirable governing model? Again, he dodged the question.

“That’s not a realistic political program,” he said, refusing to be drawn any further.

Looking back on Trump’s years in office, Thiel walked a careful line. He was disenchanted with the former president, who did not turn out to be the revolutionary Thiel had hoped he might be. A number of things were said and done that Thiel did not approve of. Mistakes were made. But Thiel was not going to refashion himself a Never Trumper in retrospect.

The first time Thiel and I spoke, I asked about the nature of his disappointment. Later, he referred back to that question in a way that suggested he felt constrained. “I have to somehow give the exact right answer, where it’s like, ‘Yeah, I’m somewhat disenchanted,’” he told me. “But throwing him totally under the bus? That’s like, you know—I’ll get yelled at by Mr. Trump. And if I don’t throw him under the bus, that’s—but—somehow, I have to get the tone exactly right.”

Discouraged by Trump’s performance, Thiel had quietly stepped aside in the 2020 election. He wrote no check to the second Trump campaign, and said little or nothing about it in public. He had not made any grand resolution to stay out. He just wasn’t moved to get in.

Thiel knew, because he had read some of my previous work, that I think Trump’s gravest offense against the republic was his attempt to overthrow the election. I asked how he thought about it.

[From the January/February 2022 issue: Barton Gellman on Donald Trump’s next coup]

“Look, I don’t think the election was stolen,” he said. But then he tried to turn the discussion to past elections that might have been wrongly decided. Bush-Gore in 2000, for instance: Thiel thought Gore was probably the rightful victor. Before that, he’d gotten started on a riff about Kennedy-Nixon.

He came back to Trump’s attempt to prevent the transfer of power. “I’ll agree with you that it was not helpful,” he said.

Trump’s lies about the election were, however, a big issue in last year’s midterms. Thiel was a major donor to J. D. Vance, who won his Senate race in Ohio, and Blake Masters, who lost in Arizona. Both ran as election deniers, as did many of the other House and Senate candidates Thiel funded that year. Thiel expressed no anxieties about their commitment to election denial.

But now, heading into 2024, he was getting out of politics again. Beyond his disappointment with Trump, there is another piece of the story, which Thiel reluctantly agreed to discuss. In July, Puck reported that Democratic operatives had been digging for dirt on Thiel since before the 2022 midterm elections, conducting opposition research into his personal life with the express purpose of driving him out of politics. (The reported leaders of the oppo campaign did not respond to my questions.) Among other things, the operatives are said to have interviewed a young model named Jeff Thomas, who told them he was having an affair with Thiel, and encouraged Thomas to talk to Ryan Grim, a reporter for The Intercept. Grim did not publish a story during election season, as the opposition researchers hoped he would, but he wrote about Thiel’s affair in March, after Thomas died by suicide.

Thiel declined to comment on Thomas’s death, citing the family’s request for privacy. He deplored the dirt-digging operation, telling me in an email that “the nihilism afflicting American politics is even deeper than I knew.”

He also seemed bewildered by the passions he arouses on the left. “I don’t think they should hate me this much,” he said.

On the last Thursday in April, Thiel stood in a ballroom at the Metropolitan Club, one of New York’s finest Gilded Age buildings. Decorative marble fireplaces accented the intricate panel work in burgundy and gold, all beneath Renaissance-style ceiling murals. Thiel had come to receive an award from The New Criterion, a conservative magazine of literature and politics, and to bask in the attention of nearly 300 fans.

These were Thiel’s people, and he spoke at the closed-press event with a lot less nuance than he had in our interviews. His after-dinner remarks were full of easy applause lines and in-jokes mocking the left. Universities had become intellectual wastelands, obsessed with a meaningless quest for diversity, he told the crowd. The humanities writ large are “transparently ridiculous,” said the onetime philosophy major, and “there’s no real science going on” in the sciences, which have devolved into “the enforcement of very curious dogmas.”

Thiel reprised his longtime critique of “the diversity myth.” He made a plausible point about the ideological monoculture of the DEI industry: “You don’t have real diversity,” he said, with “people who look different but talk and think alike.” Then he made a crack that seemed more revealing.

“Diversity—it’s not enough to just hire the extras from the space-cantina scene in Star Wars,” he said, prompting laughter.

Nor did Thiel say what genuine diversity would mean. The quest for it, he said, is “very evil and it’s very silly.” Evil, he explained, because “the silliness is distracting us from very important things,” such as the threat to U.S. interests posed by the Chinese Communist Party.

His closing, which used the same logic, earned a standing ovation.

“Whenever someone says ‘DEI,’” he exhorted the crowd, “just think ‘CCP.’”

Somebody asked, in the Q&A portion of the evening, whether Thiel thought the woke left was deliberately advancing Chinese Communist interests. Thiel answered with an unprompted jab at a fellow billionaire.

“It’s always the difference between an agent and asset,” he said. “And an agent is someone who is working for the enemy in full mens rea. An asset is a useful idiot. So even if you ask the question ‘Is Bill Gates China’s top agent, or top asset, in the U.S.?’”—here the crowd started roaring—“does it really make a difference?”

Thiel sometimes uses Gates as a foil in his public remarks, so I asked him what he thought of the Giving Pledge, the campaign Gates conceived in 2010—with his then-wife, Melinda French Gates, and Warren Buffett—to persuade billionaires to give away more than half their wealth to charitable causes. (Disclosure: One of my sons works for the Bill & Melinda Gates Foundation.) About 10 years ago, Thiel told me, a fellow venture capitalist called to broach the question. Vinod Khosla, a co-founder of Sun Microsystems, had made the Giving Pledge a couple of years before. Would Thiel be willing to talk with Gates about doing the same?

“I don’t want to waste Bill Gates’s time,” Thiel replied.

Thiel feels that giving his billions away would be too much like admitting he had done something wrong to acquire them. The prevailing view in Europe, he said, and more and more in the United States, “is that philanthropy is something an evil person does.” It raises a question, he said: “What are you atoning for?”

He also lacked sympathy for the impulse to spread resources from the privileged to those in need. When I mentioned the terrible poverty and inequality around the world, he said, “I think there are enough people working on that.”

And besides, a different cause moves him far more.

One night in 1999, or possibly 2000, Thiel went to a party in Palo Alto with Max Levchin, where they heard a pitch for an organization called the Alcor Life Extension Foundation.

Alcor was trying to pioneer a practical method of biostasis, a way to freeze the freshly dead in hope of revivification one day. Don’t picture the reanimation of an old, enfeebled corpse, enthusiasts at the party told Levchin. “The idea, of course, is that long before we know how to revive dead people, we would learn how to repair your cellular membranes and make you young and virile and beautiful and muscular, and then we’ll revive you,” Levchin recalled.

Levchin found the whole thing morbid and couldn’t wait to get out of there. But Thiel signed up as an Alcor client.

Should Thiel happen to die one day, best efforts notwithstanding, his arrangements with Alcor provide that a cryonics team will be standing by. The moment he is declared legally dead, medical technicians will connect him to a machine that will restore respiration and blood flow to his corpse. This step is temporary, meant to protect his brain and slow “the dying process.”

“The patient,” as Alcor calls its dead client, “is then cooled in an ice water bath, and their blood is replaced with an organ preservation solution.” Next, ideally within the hour, Thiel’s remains will be whisked to an operating room in Scottsdale, Arizona. A medical team will perfuse cryoprotectants through his blood vessels in an attempt to reduce the tissue damage wrought by extreme cold. Then his body will be cooled to –196 degrees Celsius, the temperature of liquid nitrogen. After slipping into a double-walled, vacuum-insulated metal coffin, alongside (so far) 222 other corpsicles, “the patient is now protected from deterioration for theoretically thousands of years,” Alcor literature explains.

All that will be left for Thiel to do, entombed in this vault, is await the emergence of some future society that has the wherewithal and inclination to revive him. And then make his way in a world in which his skills and education and fabulous wealth may be worth nothing at all.

Thiel knows that cryonics “is still not working that well.” When flesh freezes, he said, neurons and cellular structures get damaged. But he figures cryonics is “better than the alternative”—meaning the regular kind of death that nobody comes back from.

Of course, if he had the choice, Thiel would prefer not to die in the first place. In the 2000s, he became enamored with the work of Aubrey de Grey, a biomedical gerontologist from England who predicted that science would soon enable someone to live for a thousand years. By the end of that span, future scientists would have devised a way to extend life still further, and so on to immortality.

A charismatic figure with a prodigious beard and a doctorate from Cambridge, de Grey resembled an Orthodox priest in mufti. He preached to Thiel for hours at a time about the science of regeneration. De Grey called his research program SENS, short for “strategies for engineered negligible senescence.”

Thiel gave several million dollars to de Grey’s Methuselah Foundation and the SENS Research Foundation, helping fund a lucrative prize for any scientist who could stretch the life span of mice to unnatural lengths. Four such prizes were awarded, but no human applications have yet emerged.

I wondered how much Thiel had thought through the implications for society of extreme longevity. The population would grow exponentially. Resources would not. Where would everyone live? What would they do for work? What would they eat and drink? Or—let’s face it—would a thousand-year life span be limited to men and women of extreme wealth?

“Well, I maybe self-serve,” he said, perhaps understating the point, “but I worry more about stagnation than about inequality.”

Thiel is not alone among his Silicon Valley peers in his obsession with immortality. Oracle’s Larry Ellison has described mortality as “incomprehensible.” Google’s Sergey Brin aspires to “cure death.” Dmitry Itskov, a leading tech entrepreneur in Russia, has said he hopes to live to 10,000.

If anything, Thiel thinks about death more than they do—and kicks himself for not thinking about it enough. “I should be investing way more money into this stuff,” he told me. “I should be spending way more time on this.”

And then he made an uncomfortable admission about that frozen death vault in Scottsdale, dipping his head and giving a half-smile of embarrassment. “I don’t know if that would actually happen,” he said. “I don’t even know where the contracts are, where all the records are, and so—and then of course you’d have to have the people around you know where to do it, and they’d have to be informed. And I haven’t broadcast it.”

You haven’t told your husband? Wouldn’t you want him to sign up alongside you?

“I mean, I will think about that,” he said, sounding rattled. “I will think—I have not thought about that.”

He picked up his hand and gestured. Stop. Enough about his family.

Thiel already does a lot of things to try to extend his life span: He’s on a Paleo diet; he works out with a trainer. He suspects that nicotine is a “really good nootropic drug that raises your IQ 10 points,” and is thinking about adding a nicotine patch to his regimen. He has spoken of using human-growth-hormone pills to promote muscle mass. Until recently he was taking semaglutide, the drug in Ozempic; lately he has switched to a weekly injection of Mounjaro, an antidiabetic drug commonly used for weight loss. He doses himself with another antidiabetic, metformin, because he thinks it has a “significant effect in suppressing the cancer risk.”

In the HBO series Silicon Valley, one of the characters (though not the one widely thought to be modeled on Thiel) had a “blood boy” who gave him regular transfusions of youthful serum. I thought Thiel would laugh at that reference, but he didn’t.

“I’ve looked into all these different, I don’t know, somewhat heterodox things,” he said, noting that parabiosis, as the procedure is called, seems to slow aging in mice. He wishes the science were more advanced. No matter how fervent his desire, Thiel’s extraordinary resources still can’t buy him the kind of “super-duper medical treatments” that would let him slip the grasp of death. It is, perhaps, his ultimate disappointment.

“There are all these things I can’t do with my money,” Thiel said.