
Don’t Be Misled by GPT-4’s Gift of Gab

The Atlantic

https://www.theatlantic.com/newsletters/archive/2023/03/dont-be-misled-by-gpt-4s-gift-of-gab/673411/

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Yesterday, less than four months after unveiling the text-generating AI ChatGPT, OpenAI launched its latest marvel of machine learning: GPT-4. The new large language model (LLM) aces select standardized tests, works across languages, and can even describe the contents of images. But is GPT-4 smart?

First, here are three new stories from The Atlantic:

Welcome to the big blur.
Ted Lasso is no longer trying to feel good.
How please stopped being polite.

A Chatty Child

Before I get into OpenAI’s new robot wonder, a quick personal story.

As a high-school student studying for my college-entrance exams roughly two decades ago, I absorbed a bit of trivia from my test-prep CD-ROM: Standardized tests such as the SAT and ACT don’t measure how smart you are, or even what you know. Instead, they are designed to gauge your performance on a specific set of tasks—that is, on the exams themselves. In other words, as I gleaned from the nice people at Kaplan, they are tests to test how you test.

I share this anecdote not only because, as has been widely reported, GPT-4 scored better than 90 percent of test takers on a simulated bar exam and got a 710 out of 800 on the reading and writing section of the SAT, but also because its performance offers an example of how mastery of certain categories of tasks can easily be mistaken for broader skill or competence. This misconception worked out well for teenage me, a mediocre student who nonetheless conned her way into a respectable university on the merits of a few crams.

But just as tests are unreliable indicators of scholastic aptitude, GPT-4’s facility with words and syntax doesn’t necessarily amount to intelligence, that is, to a capacity for reasoning and analytic thought. What it does reveal is how difficult it can be for humans to tell the difference.

“Even as LLMs are great at producing boilerplate copy, many critics say they fundamentally don’t and perhaps cannot understand the world,” my colleague Matteo Wong wrote yesterday. “They are something like autocomplete on PCP, a drug that gives users a false sense of invincibility and heightened capacities for delusion.”

How false is that sense of invincibility, you might ask? Quite, as even OpenAI will admit.

“Great care should be taken when using language model outputs, particularly in high-stakes contexts,” OpenAI representatives cautioned yesterday in a blog post announcing GPT-4’s arrival.

Although the new model has such facility with language that, as the writer Stephen Marche noted yesterday in The Atlantic, it can generate text that’s virtually indistinguishable from that of a human professional, its user-prompted bloviations aren’t necessarily deep—let alone true. Like other large language models before it, GPT-4 “‘hallucinates’ facts and makes reasoning errors,” according to OpenAI’s blog post. Predictive text generators come up with things to say based on the likelihood that a given combination of word patterns would come together in relation to a user’s prompt, not as the result of a process of thought.
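To see that distinction in miniature, consider a toy sketch in Python (my own illustration, not anything OpenAI has published): a tiny “predictive text generator” that writes by always choosing whichever word most often followed the previous one in its training text. GPT-4 works at an incomparably larger scale, with far subtler statistics, but the basic move, continuation by likelihood rather than by thought, is the same.

```python
# A toy predictive text generator (an illustration, not OpenAI's method).
# It learns which word tends to follow which, then "writes" by always
# choosing the statistically likeliest continuation. No thinking involved.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, length=5):
    """Extend a prompt word by repeatedly picking the likeliest next word."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # -> "the cat sat on the cat"
```

The toy produces fluent-sounding nonsense, “the cat sat on the cat”: confident continuation, no comprehension.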

My partner recently came up with a canny euphemism for what this means in practice: AI has learned the gift of gab. And it is very difficult not to be seduced by such seemingly extemporaneous bursts of articulate, syntactically sound conversation, regardless of their source (to say nothing of their factual accuracy). We’ve all been dazzled at some point or another by a precocious and chatty toddler, or momentarily swayed by the bloated assertiveness of business-dude-speak.

There is a degree to which most, if not all, of us instinctively conflate rhetorical confidence—a way with words—with comprehensive smarts. As Matteo writes, “That belief underpinned Alan Turing’s famous imitation game, now known as the Turing Test, which judged computer intelligence by how ‘human’ its textual output read.”

But, as anyone who’s ever bullshitted a college essay or listened to a random sampling of TED Talks can surely attest, speaking is not the same as thinking. The ability to distinguish between the two is important, especially as the LLM revolution gathers speed.

It’s also worth remembering that the internet is a strange and often sinister place, and its darkest crevices contain some of the raw material that’s training GPT-4 and similar AI tools. As Matteo detailed yesterday:

Microsoft’s original chatbot, named Tay and released in 2016, became misogynistic and racist, and was quickly discontinued. Last year, Meta’s BlenderBot AI rehashed anti-Semitic conspiracies, and soon after that, the company’s Galactica—a model intended to assist in writing scientific papers—was found to be prejudiced and prone to inventing information (Meta took it down within three days). GPT-2 displayed bias against women, queer people, and other demographic groups; GPT-3 said racist and sexist things; and ChatGPT was accused of making similarly toxic comments. OpenAI tried and failed to fix the problem each time. New Bing, which runs a version of GPT-4, has written its own share of disturbing and offensive text—teaching children ethnic slurs, promoting Nazi slogans, inventing scientific theories.

The latest in LLM tech is certainly clever, if debatably smart. What’s becoming clear is that those of us who opt to use these programs will need to be both.

Related:

ChatGPT changed everything. Now its follow-up is here.
The difference between speaking and thinking

Today’s News

A federal judge in Texas heard a case that challenges the U.S. government’s approval of one of the drugs used for medication abortions.
Credit Suisse’s stock price fell to a record low, prompting the Swiss National Bank to pledge financial support if necessary.
General Mark Milley, the chair of the Joint Chiefs of Staff, said that the crash of a U.S. drone over the Black Sea resulted from a recent increase in “aggressive actions” by Russia.

Dispatches

The Weekly Planet: The Alaska oil project will be obsolete before it’s finished, Emma Marris writes.
Up for Debate: Conor Friedersdorf argues that Stanford Law’s DEI dean handled a recent campus conflict incorrectly.

Explore all of our newsletters here.

Evening Read

Arsh Raziuddin / The Atlantic

Nora Ephron’s Revenge

By Sophie Gilbert

In the 40 years since Heartburn was published, there have been two distinct ways to read it. Nora Ephron’s 1983 novel is narrated by a food writer, Rachel Samstat, who discovers that her esteemed journalist husband is having an affair with Thelma Rice, “a fairly tall person with a neck as long as an arm and a nose as long as a thumb and you should see her legs, never mind her feet, which are sort of splayed.” Taken at face value, the book is a triumphant satire—of love; of Washington, D.C.; of therapy; of pompous columnists; of the kind of men who consider themselves exemplary partners but who leave their wives, seven months pregnant and with a toddler in tow, to navigate an airport while they idly buy magazines. (Putting aside infidelity for a moment, that was the part where I personally believed that Rachel’s marriage was past saving.)

Unfortunately, the people being satirized had some objections, which leads us to the second way to read Heartburn: as historical fact distorted through a vengeful lens, all the more salient for its smudges. Ephron, like Rachel, had indeed been married to a high-profile Washington journalist, the Watergate reporter Carl Bernstein. Bernstein, like Rachel’s husband—whom Ephron named Mark Feldman in what many guessed was an allusion to the real identity of Deep Throat—had indeed had an affair with a tall person (and a future Labour peer), Margaret Jay. Ephron, like Rachel, was heavily pregnant when she discovered the affair. And yet, in writing about what had happened to her, Ephron was cast as the villain by a media ecosystem outraged that someone dared to spill the secrets of its own, even as it dug up everyone else’s.

Read the full article.

More From The Atlantic

“Financial regulation has a really deep problem”
The strange intimacy of New York City

Culture Break

Colin Hutton / Apple TV+

Read. Bootstrapped, by Alissa Quart, challenges our nation’s obsession with self-reliance.

Watch. The first episode of Ted Lasso’s third season, on Apple TV+.

Play our daily crossword.

P.S.

“Everyone pretends. And everything is more than we can ever see of it.” Thus concludes the Atlantic contributor Ian Bogost’s 2012 meditation on the enduring legacy of the late British computer scientist Alan Turing. Ian’s story on Turing’s indomitable footprint is well worth revisiting this week.

— Kelli

Isabel Fattal contributed to this newsletter.

Welcome to the Big Blur

The Atlantic

https://www.theatlantic.com/technology/archive/2023/03/gpt4-arrival-human-artificial-intelligence-blur/673399/

The question will be simple but perpetual: Person or machine? Every encounter with language, other than in the flesh, will now bring with it that small, consuming test. For some—teachers, professors, journalists—the question of humanity will be urgent and essential. Who made these words? For what purpose? For those who operate in the large bureaucratic apparatus of boilerplate—copywriters, lawyers, advertisers, political strategists—the question will be irrelevant except as a matter of efficiency. How will they use new artificial-intelligence technology to accelerate the production of language that was already mostly automatic? For everyone, the question will now hover, quotidian and cosmic, over words wherever you find them: Who’s there?   

At its core, technology is a dream of expansion—a dream of reaching beyond the limits of the here and now, and of transcending the constraints of the physical environment: frontiers crossed, worlds conquered, networks spread. But the post-Turing-test world is not a leap into the great external unknown. It’s a sinking down into a great interior unknown. The sensation is not enlightenment, sudden clarification, but rather eeriness, a shiver on the skin. And as AI systems become more integrated into our lives, they will alter the foundations of society. They will change the way we work, the way we communicate, and the way we relate to one another. They will challenge our assumptions about what it means to be human, and will force us to confront difficult questions about the nature of consciousness, the limits of knowledge, and the role of technology in our lives.

The above was written half by myself and half by ChatGPT. Perhaps you could figure out which half is which if you parsed it closely or if you used an AI text detector. But how sure are you? Do you have the time or energy to figure it out? And in the end, how clear can you, or anyone else, be? We are entering a big blur, and its challenges are practical as much as philosophical.

Today, we witnessed the unveiling of GPT-4, the latest large language model from OpenAI. The new version is multimodal: You can input images or text, and it generates text in response. (Put in a picture of what’s on your kitchen counter, for example, and ask what you should cook for dinner.) But the primary advance is in highly sophisticated linguistic tasks. “The distinction between GPT-3.5 and GPT-4 can be subtle,” OpenAI acknowledged with the release of the product. “The difference comes out when the complexity of the task reaches a sufficient threshold.” The new version is particularly good at exams: It tested in the 90th percentile on the Uniform Bar Exam, and the 88th on the LSAT, although it still flunked AP English. The difference between GPT-4 and its predecessors is that it’s better, more human-seeming, at more things. The blur is getting blurrier.

Natural-language processing has lurched into the public consciousness in staggered steps. We met it through DALL-E 2, Stable Diffusion, then ChatGPT. Stories about AI typically portray one of two themes: fear or greed. Each new arrival has been filtered through a series of hopes and anxieties—entirely appropriate to recently evolved hominids confronted with some new phenomenon on the savanna. Will this kill me? Can I eat it? With the arrival of text-to-image generation, the cry soon went up that these new technologies would exploit and replace the handiwork of human artists. But creative people are still the ones commanding the programs. There is now a new kind of artist: the prompt engineer. When the San Francisco Ballet released an AI-generated ad campaign, it also employed nearly 30 designers and other creatives.

[Read: We’re witnessing the birth of a new artistic medium]

The conventional fear—It’s coming for our jobs!—underrated the consequences of artificial intelligence in a very real sense, as if these developments were akin to the arrival of the mechanical awl, as if the stakes were a handful of creative-class jobs. No, the arrival of GPT-4 and the language programs preceding it forces us to confront much bigger questions: What is the value of originality? How does language construct meaning? And even, what is the nature of a person?

Sam Altman, the CEO of OpenAI, presaged the release of GPT-4 with a remark that reveals just how far removed the technologists are from any serious discussion of consciousness. In a tweet, he predicted that soon “the amount of intelligence in the universe [would double] every 18 months,” as if intelligence were something you mine like cobalt. It seems necessary to repeat what is obvious from any single use of a large language model: The dream of an artificial consciousness is a nonstarter. No linguistic machine is any closer to artificial consciousness than a car is. The advancement of generative artificial intelligence is not an advancement toward artificial personhood for a simple, absolute reason: There is no falsifiable thesis of consciousness. You cannot find a researcher who can define, in a testable way, what consciousness is. Also, the limitations of the tech itself preclude the longed-for arrival of a manufactured soul. Natural-language processing is a statistical pattern-matching operation, a series of instructions, incapable of intention. It can only ever be the expressed intention of a person.

If an artificial person arrives, it will be not because engineers have liberated algorithms from being instructions, but because they have figured out that human beings are nothing more than a series of instructions. An artificial consciousness would be a demonstration that free will is illusory. In the meantime, the soul remains, like a medieval lump in the throat. Natural-language processing provides, like all the other technologies, the humbling at the end of empowerment, the condition of lonely apes with fancy tools.

That our antique fantasies and anxieties are useless wouldn’t matter so much if they weren’t so obscuring. OpenAI, the organization behind GPT-4, ChatGPT, and DALL-E 2, is concerned with the creation of an artificial general intelligence, or a machine that is smarter than a human. But to situate AGI in terms of people is not interesting. Instead, think of it as a problem-solving machine capable of flexibly moving between contexts.

A local example: A friend of mine has a son in French immersion. (I’m in Canada.) His son hates reading the school’s French children’s books. So my friend went to ChatGPT and had it write a French children’s book about his son’s favorite superhero, specifying the grade level and length. (OpenAI explicitly claims that one of the uses of GPT-4 will be sophisticated tutoring technologies. Khan Academy is one of its new partners.) ChatGPT followed the instructions. In algorithmic culture, if you want a book, you just ask a machine to make you one. The first blur is the line between the human and the mechanical in language. But from that blur will spread others, in this case the blur between creator and consumer. I literally cannot conceive of the consequences of this transition. What is a book if a reader automatically generates one at will?
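For the curious, here is a rough sketch of what such a request looks like when made through OpenAI’s Python library (the pre-1.0 interface, circa early 2023). The superhero, length, and grade level below are invented stand-ins; my friend used the ChatGPT website rather than the API.

```python
# A sketch of requesting a custom children's book from the chat API
# (openai Python package, pre-1.0 interface). The prompt details are
# invented stand-ins, not the actual request described above.
import openai

openai.api_key = "sk-..."  # your API key here

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write a children's book in French, about 500 words, "
                   "at a second-grade reading level, starring Spider-Man.",
    }],
)

print(response.choices[0].message.content)
```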

There isn’t language to describe the mechanization of language. The word “intelligence” in “artificial intelligence” has been terribly misleading, and yet what other word would suit the case? ChatGPT is intelligent in the sense that it can create coherence. But by any other definition of intelligence, it isn’t. When Google announced its 540-billion-parameter language model, PaLM, last year, the company said, in some promotional materials, that PaLM is capable of “understanding.” Yes, PaLM can understand what you mean if you tell it to write a romantic poem or to translate a passage into Bengali. But as even some Google executives acknowledge, it doesn’t “understand” romantic poetry or Bengali as anything more than a series of patterns. It does not “understand” the way I understand romantic poetry or Bengali. It has “understanding” but not understanding.

The word “understanding” itself is now a blur.

Natural-language processing doesn’t analyze the meaning in words. It analyzes patterns in text-based tokens by way of a deep-learning technology called a transformer (the T in GPT). So a program like ChatGPT doesn’t process the first sentence of this paragraph in terms of subjects, verbs, and objects. It cycles through the connections between the hundreds of billions of words in its data set, which might one day comprise something like the entire internet. The essential blur is in the structure of the transformer: Its meaning comes through unfathomable processing.
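Here is a rough sketch of what “text-based tokens” means in practice, using a made-up twelve-entry vocabulary (real systems learn subword vocabularies with tens of thousands of entries; this is an illustration, not GPT’s actual tokenizer):

```python
# Tokenization, toy version: text becomes integer IDs for text fragments.
# The vocabulary here is invented for illustration; GPT's real vocabulary
# is learned from data and far larger.
toy_vocab = {"Natural": 0, "-": 1, "language": 2, " processing": 3,
             " doesn": 4, "'t": 5, " analyze": 6, " the": 7,
             " meaning": 8, " in": 9, " words": 10, ".": 11}

def tokenize(text, vocab):
    """Greedily match the longest known fragment at each position."""
    ids = []
    while text:
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece):
                ids.append(vocab[piece])
                text = text[len(piece):]
                break
        else:
            raise ValueError(f"no token for: {text!r}")
    return ids

sentence = "Natural-language processing doesn't analyze the meaning in words."
print(tokenize(sentence, toy_vocab))
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

Those twelve integers, not subjects and verbs, are all the model ever manipulates.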

The underlying structure of the tech, more even than its effects, will shape the future. In algorithmic culture, history itself will become a lump of supercomputer fodder from which meaning is extracted. To the transformer, all previous art, all previous language, exists as intellectual pulp. There is no difference between Yeats’s Byzantium and your most recent email. Natural-language processing is an unfathomable disintegration followed by an unfathomable reintegration. All human expression is like an enormous junkyard in fog, where a mechanical claw strips everything down to the smallest bolts and reconfigures them in any approximation you can name.

[Read: GPT-4 is here. Just how powerful is it?]

A disintegrated history means a disintegrated future. History as a lump of tokens cannot be reconfigured by a sudden gust of revelation into fresh insight or a new vision. All you will be able to do is make more past. All you will be able to write is more tokens. In algorithmic culture, the archives will be the source of power. They will also be prisons. Use ChatGPT for a bit and you’ll see the deal it invisibly offers: The machine allows you to write whatever you like, instantly, freely, with no effort, just so long as it’s like everything that has come before. GPT-4 is stronger than its predecessors, but it doesn’t change the fundamental arrangement.

The old fantasies about the future were strikingly poor. Space travel turned out to be a minor subset of the travel industry for the ultrarich. The metaverse is boring; not even its designers want to hang out there. Instead of the imagined utopias or dystopias rendered out of fear and greed that have consumed the imaginations of the recent past, technology is leading to a big blur. Instead of radical clarity, a deep and abiding confusion.  

Confusion is natural. In one passage from The Gutenberg Galaxy, Marshall McLuhan described other periods of confusion at moments of technological change to language:

An age in rapid transition is one which exists on the frontier between two cultures and between conflicting technologies. Every moment of its consciousness is an act of translation of each of these cultures into the other. Today we live on the frontier between five centuries of mechanism and the new electronics, between the homogeneous and the simultaneous. It is painful but fruitful. The sixteenth century Renaissance was an age on the frontier between two thousand years of alphabetic and manuscript culture, on the one hand, and the new mechanism of repeatability and quantification, on the other.   

McLuhan’s concept of the interface, published in 1962, is much more useful than disruption as a way of understanding the birth of natural-language processing. For McLuhan, the Renaissance was not a moment in time, or a period, or a revolution in thinking. Rather, it was an exchange between different epochs. And that exchange was subtle and profound. For example, the regulation of print—the precision and replicability that distinguished typeset texts from scribal manuscripts—provided an aesthetic framework for the approach to knowledge that gave rise to the scientific method. Some of the subtle and profound consequences of the translation between technologies took centuries to reveal themselves. McLuhan points out that the idea of a personal voice in a continuous narrative—what we have come to think of as the defining feature of printed texts—did not arrive until long after the printing press.

Even in these early days, when the sheer power of these new linguistic tools still mesmerizes, the necessary counter-gesture is already surfacing. Artificial intelligence creates an object that is a subject, voices that aren’t voices, faces that aren’t faces. Algorithmic culture lives in between, in a world where the human is the flickering continuation of past patterns coughed up and then spat out ephemerally.  

But the human isn’t going anywhere. Recently I attended a bar mitzvah. It’s a brilliant ceremony. You don’t just read from the Torah. You give a speech. To be an adult, in society, is to have something to say, a perspective that the community can take seriously. Why should you write your paper yourself? Because you’re a person. A person wants to be heard.

Every culture works by reaction and counterreaction. For several hundred years, the education system has focused on teaching children to write like machines, to learn codes of grammar and syntax, to make the correct gestures in the correct places, to remember the systems and to apply them. Now there’s ChatGPT for that. The children who will triumph will be the ones who can write not like machines, but like human beings. That’s an enormously more difficult skill to impart or master than sentence structure. The writing that matters will stride straight down the center of the road to say, Here I am. I am here now. It’s me.

Mike Pence Is Warning Us About Trump

The Atlantic

https://www.theatlantic.com/newsletters/archive/2023/03/mike-pence-trump-january-6/673402/

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

A former vice president of the United States identified a sitting president as a mortal danger. In another time, it would have been the Story of the Century. Instead, it was the Kerfuffle of the Week, and it is already dissolving away in the news cycle.

But first, here are three new stories from The Atlantic.

Is Ron DeSantis flaming out already?
NFL owners are making an example of Lamar Jackson.
ChatGPT changed everything. Now its follow-up is here.

Broken Sycophants

Mike Pence stunned Washington at this weekend’s annual Gridiron Club dinner and gained the attention of the media and the ire of the White House by making an offensive joke about the Cabinet member Pete Buttigieg.

At the same event, by the way, Pence affirmed that on January 6, 2021, Donald Trump—at the time, the president of the United States—endangered his life along with the lives of his family, the members and staff of Congress, and numerous law-enforcement officers. Trump did this by inciting a mob to attack the Capitol, stop our constitutional process by force, and allow him to remain in office.

“Donald Trump was wrong,” Pence said at the white-tie event, which was attended by journalists, politicians, and other D.C. insiders. “I had no right to overturn the election, and his reckless words endangered my family and everyone at the Capitol that day, and I know that history will hold Donald Trump accountable.” He continued:

What happened that day was a disgrace. And it mocks decency to portray it any other way. For as long as I live, I will never, ever diminish the injuries sustained, the lives lost, or the heroism of law enforcement on that tragic day.

Yet here we are, three days later, talking about inappropriate jokes. This is the story now? That Pence tried out a dumb gag line aimed at Buttigieg? Make no mistake, the joke was stupid and disrespectful, but perhaps we might zero in on the more important point: Pence told us something horrifying this weekend about the condition of our democracy. The national underreaction to his comments, however, is a warning that we have all become too complacent about the danger my former party now represents.

Let us stipulate here that Pence is shamefully late to this criticism and has no obvious intention of going further. He had his one moment of courage, and there will be no others. My friend Neal Katyal, the former acting solicitor general, was present at the dinner, and he rightly lambasted Pence for posturing while refusing to answer a subpoena about what happened on January 6. “There are great actors at the gridiron,” he tweeted after the dinner. “But no one, and I mean no one, could pretend to be [Mike Pence] with a backbone.”

Nevertheless, we should not lose focus. I am still almost vertiginous at hearing a former constitutional officer of the United States government say what Pence said out loud. After all the violence, all the court cases, all the horrific videos (the stuff that will never air on Tucker Carlson’s show), and all the needless deaths, I am almost relieved that I’m still capable of being shocked. I was a boy during Watergate—I delivered the local newspaper that announced President Richard Nixon’s resignation, in 1974—but that long-ago scandal now seems like a polite comedy of errors next to the conspiracy fueled by Trump’s monstrous narcissism.

Even before Pence’s Gridiron-dinner speech, I had a conversation last week with Tom Joscelyn, one of the principal authors of the House’s January 6 committee report. Joscelyn is worried, as am I, that Americans don’t really yet grasp the degree to which the Republicans have been taken over by their most extreme wing. “The American right is overrun with grievance politics now,” he told me. “And they’ve married that approach to an authoritarian movement and cult of personality” around Trump.

Joscelyn is not a man who rattles easily: He was Rudy Giuliani’s senior counterterrorism adviser back in 2007, when “America’s mayor” was gearing up to run for president. He thinks Giuliani’s sad decline, in which he has become a kind of political Dorian Gray right before our eyes, is emblematic of the Republican collapse and surrender to Trump. He argues, and I agree, that Trump’s opponents, especially those running against him in the GOP, are not taking this threat as seriously as they should. Trump “puts the auto in autocrat,” Joscelyn said, because Trump subordinates everything to his personal needs, including his party. (I would argue that this is why Trump, despite his fascist rhetoric and Mussolini-like strutting, is incapable of the consistency and discipline required to build a truly fascist movement, but that’s an argument for another day.)

Today, as Joscelyn notes, the GOP has ceased to function as a normal political party. There is no consistent ideology or set of policies, no internal mechanisms to check the power of the Trump cult. Even the people who want to dislodge Trump as the leader of the party and the 2024 nominee dare not take him on in a direct confrontation. Trump’s critics are often accused of having “Trump Derangement Syndrome,” an irrational hatred of Trump that forces disagreement with him on everything, but Joscelyn rightly points out that Trump’s Republican enablers are the ones who have had to betray all of their deepest beliefs merely to avoid being cast out. Trump, he says, “broke his sycophants, not his critics.”

Which brings us back to Pence. It might not sound like much for Pence to admit what millions of people already know, but within the Republican Party, this is about as close as you can get to open heresy; Pence’s team deliberated over making even this small move against Trump. Yet Pence’s comments have been shrugged off by both the press and the public.

To put into perspective how numb we’ve become, let’s do a thought experiment. Imagine, for example, if Hubert Humphrey, after the riots that broke out in 1968 at the Democratic National Convention, had said later, “Lyndon Johnson encouraged those anti-war protesters and put me and hundreds of other people in danger. History will hold President Johnson accountable.” Those two sentences would have shaken the foundations of American democracy and changed history.

But not today. Instead, we’ve already moved on to whether Pence should apologize for a clumsy and offensive joke. (He should.) This, however, is the danger of complacency. What would have been a gigantic, even existential political crisis in a more virtuous and civic-minded nation is now one of many stories about Donald Trump that rush past our eyes and ears.

Voters are tired, and the national media are committed to treating the GOP as a mainstream party. Trump and his coterie are counting on this exhaustion to return to national power, but so are people such as Florida Governor Ron DeSantis, who is using Trump’s themes of bigotry, grievance, and cultural panic to harness that same authoritarian energy for his own purposes. Republican leaders have no intention of speaking truth—or decency—to their base, and until someone in the party of Lincoln is able to muster even the tiniest fraction of Lincoln’s courage, we will indulge our complacency about the Republicans at our peril.

Related:

Anne Applebaum: History will judge the complicit. (From 2020)
The January 6 whitewash will backfire.

Today’s News

A Russian military jet hit the propeller of an American drone, causing the drone to go down over the Black Sea, according to U.S. officials. Russia has denied contact with the drone.
Meta, Facebook’s parent company, plans to lay off another 10,000 workers—its second round of job cuts in recent months.
Ohio is suing Norfolk Southern after one of its trains, carrying hazardous chemicals, was derailed in the state last month.

Dispatches

Work in Progress: The end of Silicon Valley Bank is also the end of a Silicon Valley myth, Derek Thompson writes.

Explore all of our newsletters here.

Evening Read

CBS Photo Archive / Getty

How Not to Cover a Bank Run

By Brian Stelter

On September 17, 2008, the Financial Times reporter John Authers decided to run to the bank. In his Citi account was a recently deposited check from the sale of his London apartment. If the big banks melted down, which felt like a distinct possibility among his Wall Street sources, he would lose most of his money, because the federal deposit-insurance limit at the time was $100,000. He wanted to transfer half the balance to the Chase branch next door, just in case.

When Authers arrived at Citi, he found “a long queue, all well-dressed Wall Streeters,” all clearly spooked by the crisis, all waiting to move money around. Chase was packed with bankers too. Authers had walked into a big story—but he didn’t share it with readers for 10 years. The column he eventually published, titled “In a Crisis, Sometimes You Don’t Tell the Whole Story,” was, he wrote this week, “the most negatively received column I’ve ever written.”

Read the full article.

More From The Atlantic

China plays peacemaker.
The failed promise of having it all
Photos: Winners of the 2023 Sony World Photography Awards Open Competition

Culture Break

Arsh Raziuddin

Read. Our editors suggest 10 poetry collections to read again and again.

Listen. Start Holy Week, a new narrative podcast by Vann R. Newkirk II about the revolutionary week that followed Martin Luther King Jr.’s assassination.

Play our daily crossword.

P.S.

Now that The Last of Us, HBO’s series based on the game of the same name, has aired its finale, I’ll write about the show later in the week. I hope The Last of Us, which has been remarkable in every aspect, illustrates how, for many years, computer games have had plots more intricate and more involving than much of the stuff Hollywood has been cranking out now for decades. (I say this fully aware of the creativity of this year’s Best Picture, Everything Everywhere All at Once. But I will remind you that it is also the 30th anniversary of The Beverly Hillbillies, a terrible movie full of great actors that I think was an early sign of American cultural exhaustion.)

I have particularly high hopes—that I fear will be dashed—for Amazon Prime’s upcoming Fallout series. Unlike The Last of Us, the Fallout games, set long after a global nuclear war, leaven the despair and violence of postapocalyptic survival with outrageous humor. If you’ve been watching Hello Tomorrow!, the Apple TV+ series that features the always excellent Billy Crudup selling lunar condos in a reimagined 1950s full of robots and floating cars—and yes, we are living in a golden age of television—you have a taste of what the world of Fallout looks like. I can only hope that Amazon’s series about life after the Bomb doesn’t turn out to be a bomb itself.

— Tom

Isabel Fattal contributed to this newsletter.