

Is Biden Relying on the Wrong Slogan?

The Atlantic

www.theatlantic.com/newsletters/archive/2023/10/bidenomics-2024-campaign-term/675533

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

In embracing the term Bidenomics, Joe Biden is clapping back at his critics. But he’s also attaching his legacy to a notoriously unwieldy part of American life.

First, here are four new stories from The Atlantic:

Kevin McCarthy’s brief speakership meets its end.
The smartest man who ever lived
Telling the truth about Taiwan
The greatest invention in the history of humanity

The Baggage of Bidenomics

The president is selling coffee mugs emblazoned with an image of himself with lasers shooting out of his eyes. The mugs are part of his campaign’s attempt to alchemize, with some success, the right-wing slogan “Let’s Go Brandon” (long story, but it’s MAGA-world slang for “Fuck Joe Biden”) into the “Dark Brandon” meme. In reclaiming the insult, Biden’s camp is turning a slogan used by Trump supporters into a self-aware catchphrase.

His team has attempted to pull a similar maneuver by taking ownership of Bidenomics, a term initially used as an insult. The exact origins of Bidenomics are a bit murky, but the portmanteau seems to have emerged in several newspaper columns critiquing Biden’s economic policy last year. Biden’s team ran with it, pushing the term on social media and in public statements. Reclaiming Bidenomics is a bit trickier than “Dark Brandon”: The term is less snappy, and it’s not being used in response to a clear foil.

The economy is notoriously hard to control, so an individual’s attempt to associate themselves with it can be risky. Lori Cox Han, a political-science professor at Chapman University, in California, who has written about Bidenomics, explained to me that presidents tend to be blamed when things are going badly with the economy, and try to get credit when all is well. But right now, the economy is a mixed bag. “I’m not sure a lot of people are feeling as enthusiastic about the economy as the Biden team wants them to be,” she said—and she’s not confident that a clever slogan will change people’s minds. Politico reported that several top Biden allies have privately raised concerns about the phrase to the White House.

One particular challenge of Bidenomics, Allen Adamson, a branding expert and a co-founder of the marketing firm Metaforce, told me, is that the term’s meaning is not inherently clear. Beyond simply linking Biden with the economy, the slogan doesn’t say much about the president’s policies, or about how Americans should make sense of the complexities of the economy right now. It doesn’t help matters for Biden that many Americans retain a negative view about broader economic trends: Although inflation has been cooling and unemployment is less than 4 percent, a recent NBC poll of 1,000 American voters found that fewer than 30 percent were very or somewhat satisfied with the economy.

Biden’s team has been defining Bidenomics in part by saying what it’s not: An adviser of his called it the opposite of Reaganomics, a policy that emphasized tax cuts, and a recent Instagram post from the president’s official account placed Bidenomics and “MAGAnomics” side by side in a split screen, comparing Donald Trump’s economic agenda unfavorably with Biden’s. Biden officials also reportedly said last month that highlighting the contrast between Biden’s economic plans and Trump’s policies will be crucial to Biden’s campaign.

Biden is not the first politician to try to use a jab for his own advantage. One-off political insults, especially against female politicians, have proved canny pegs for branding and fundraising efforts in the past. They often pack a double punch: They draw negative attention to the (usually male) rival who delivered the insult while highlighting whatever traits the insulted politician or their allies seek to foreground. Elizabeth Warren started selling “Nevertheless, She Persisted” merchandise after Mitch McConnell made the remark while rebuking her on the Senate floor in 2017. The same year, Hillary Clinton promoted “Nasty Woman” shirts, referencing Donald Trump’s name-calling during a presidential debate. And earlier this year, Nikki Haley sold beer koozies reading “Past My Prime?” after then–CNN host Don Lemon suggested that she was; she has said that the koozies raised $25,000.

With Bidenomics, the maneuver is not so clean. The term, emerging as it did in critical coverage, comes with baggage. Jacob Neiheisel, a political-science professor at SUNY Buffalo, told me that rebranding known terms tends to work best when leaders take something that’s already popular and inject it with new energy. When Franklin D. Roosevelt was president, for example, he took the term liberal and got the public to associate it with him and his New Deal projects.

Once public perception of a concept has solidified, it can be very hard to change. Coming up with a new slogan is much simpler than trying to remake a known one, Adamson told me. But taking on the challenge of redefining a jab can be “a bravado move,” he said. It sends the message that Biden’s team will not tolerate name-calling. Still, he said, Biden’s choice to tie his own reputation directly to a thriving economy is “phenomenally high-risk.”

If Biden is lucky, economic indicators (and attitudes) will improve, and he can claim some shine. But branding isn’t everything. And despite the administration’s best efforts, a lot of people may not have even heard of Bidenomics, whether in its newspaper-column or political-slogan era. Even Biden himself, asked by a group of reporters about Bidenomics this summer, joked, “I don’t know what the hell that is.”

Related:

The bad-vibes economy
“My hometown is getting a $100 billion dose of Bidenomics.”

Today’s News

The House voted to oust Kevin McCarthy as speaker.
Hunter Biden pleaded not guilty to gun charges in federal court.
The United Nations Security Council will send armed forces to Haiti to combat violence from gangs.

Dispatches

Work in Progress: The backlash to college has gone too far, David Deming writes. Getting a four-year degree is still a good investment.

Explore all of our newsletters here.

Evening Read


Is Google Making Us Stupid?

(From 2008)

“Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial “brain.” “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”

I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing … Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

I think I know what’s going on.

Read the full article.

More From The Atlantic

The care and feeding of Supreme Court justices
A gory amalgam of truth and spectacle
Our Special Forces’ capacity to evolve despite failure

Culture Break


Read. The cartoonist Daniel Clowes has written some of the great haters, slackers, and screwups in modern comics. His new graphic novel, Monica, imagines if one of them grew up.

Watch. With Saw X, the long-running franchise is back—and finally putting its most defining antagonist in the spotlight.

Play our daily crossword.

P.S.

Today I went to the courthouse for the trial of the FTX founder Sam Bankman-Fried. I arrived at federal court around 4 a.m., where jury selection began for his six-week trial on seven charges related to fraud and conspiracy (he has pleaded not guilty to all charges brought against him so far). Bankman-Fried appeared in a courtroom flanked by lawyers (and sporting notably shorn hair). I’ll be returning to the courthouse to follow this story in the coming days.

— Lora

Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.

The Hard Lesson of Mogadishu

The Atlantic

www.theatlantic.com/ideas/archive/2023/10/battle-of-mogadishu-black-hawk-down-incident-us-special-operations-forces-evolution/675529

Thirty years ago today, the U.S. military was involved in a brief but brutal battle in Somalia. In a series of firefights over two bloody days, 18 members of America’s most elite Special Forces and hundreds of Somali militiamen were killed. This was the Battle of Mogadishu, which the journalist Mark Bowden (now an Atlantic contributing writer) famously reported for The Philadelphia Inquirer and later adapted as the book and the film Black Hawk Down.

Although the American units involved fought courageously, and inflicted heavy losses on their adversaries, the Battle of Mogadishu exposed significant weaknesses in U.S. Special Operations Forces’ capability. The televised images of dead Americans being dragged down dusty streets were scarring not only for the Clinton administration, and the American public viewing them on the evening news, but also for the units themselves.

As painful as defeats are, lost battles can end up being the greatest teachers for military organizations. The battle marked an important waypoint in the evolution of our Special Operations Forces, and to this day carries important lessons for them.

In the battle’s aftermath, for example, the Army’s primary special-missions unit—which, like many such units, grants a lot of authority to its noncommissioned officers—concluded that, on balance, it did not have as strong an officer corps as it needed. (Its ground-force commander during the battle did distinguish himself, however, and would later be America’s last NATO commander in Afghanistan.)

The 75th Ranger Regiment, the unit in which I would later serve, was a relative newcomer to such assignments and was largely unfamiliar with urban warfare. So the training I received looked very different—incorporating lessons learned in Somalia—from what my predecessors a decade prior would have had.

In my service with the Rangers, I got to know several of the men who’d fought in the Battle of Mogadishu. Some went on to fight in Iraq or Afghanistan; I did tours in both countries alongside some of them. I’ve been texting with a few of them lately, letting them know that I will be thinking of them today.

Organizations learn in different ways, but large organizations—especially large corporations and military groups—are usually the most resistant to learning. Even in the face of impending doom, such major entities generally find it easiest to keep doing what feels familiar. One of the things that has marked the evolution of U.S. Special Operations Forces, though, is a remarkable willingness to learn and adapt. They need that same willingness today.

Although rangers predate the nation’s founding (such raiding forces fought in the French and Indian War), the United States was a relatively late adopter of elite special-operations forces in the postwar period. This is in contrast with several U.S. allies, such as France, Germany, the U.K., and Israel, all of which developed elite national counterterrorism forces in response to armed extremist movements in the 1960s and ’70s.

Although Navy SEALs, Army Green Berets, and Ranger companies all fought in Vietnam, they did so largely under the command of conventional military forces. The task force that fought in Somalia was a relatively new phenomenon: a “national mission force” with members from each of the military’s four services that served as a strategic asset operating outside the regional combatant commands, such as Central Command, or Centcom, established by 1986’s Goldwater-Nichols Act.

That force was itself the result of an earlier fiasco: the failed effort to rescue 52 embassy staff held hostage in Tehran following the Iranian Revolution in 1979. Eight Americans died in Iran, partly because the various Special Operations units involved had not really worked with one another before, and because the U.S. Army had no special-operations aviation unit to speak of—which proved a particular vulnerability in that operation.

The Army responded to the Iranian reverse by forming the 160th Special Operations Aviation Regiment, the famous “Night Stalkers” who flew in Somalia. In addition, the elite Special Operations units in each service began training together on a regular basis. The Ranger Regiment, which historically specialized in seizing airfields and conducting raids deep in enemy territory, began its gradual transformation into the kinetic force it is today.

As they had after Iran, these units learned and evolved after Somalia. This task force became the most lethal man-hunting special-operations outfit the world has ever known. Operations such as the capture of Saddam Hussein, the elimination of Osama bin Laden, and the killing of the ISIS leader Abu Bakr al-Baghdadi are all testament to that.

The War on Terror that began after 9/11 is over, but our Special Operations Forces must continue to develop. Last year, the civilian and military leadership of the U.S. Special Operations Forces published a new strategy. It says all the right things, shifting the focus away from fighting non-state actors and toward deterring competitor states such as China and Russia. But the national-security leaders with whom I speak convey concern that these forces are too preoccupied with finding and killing terrorists.

That remains an important mission, but one not as strategically significant as in years past. For example, some of those senior figures have also made clear to me their impatience with the conventional forces that have attempted to take on complicated psychological operations. They point to some high-profile missteps in this arena, notably the use of fake accounts on social-media platforms, and express annoyance that the forces best equipped for such work—our Special Operations Forces—have not yet fully committed to the job.

The Battle of Mogadishu was a political and military disaster that forced our Special Operations Forces to recruit, train, and organize themselves differently. Out of respect for the sacrifices made 30 years ago, we should not wait for another lost battle to evolve anew.

The Care and Feeding of Supreme Court Justices

The Atlantic

www.theatlantic.com/ideas/archive/2023/10/clarence-thomas-supreme-court-conservative/675497

In addition to going on expensive vacations with wealthy right-wing donors who have interests before the Supreme Court, Justice Clarence Thomas has, ProPublica reported last week, secretly participated in fundraising efforts for organizations bankrolled by the Koch network, the right-wing advocacy organization founded by the billionaire brothers Charles and the late David Koch. Thomas was “brought in to speak,” staffers told ProPublica, “in the hopes that such access would encourage donors to continue giving.”

Although the failure to disclose the trip to Palm Springs, California, on the Kochs’ dime might have violated federal law, it’s hardly the only example of Thomas hiding financial relationships with wealthy conservatives. Harlan Crow, the right-wing billionaire who frequently brings Thomas on luxury vacations—although by no means the only right-wing billionaire who has done so—also owns the land Thomas’s mother currently lives on, and has paid private-school tuition for the grandnephew Thomas raised. Thomas is not the sole right-wing justice benefiting from his cozy relationships with affluent ideologues; Justice Samuel Alito has also enjoyed the generosity of such men. Thomas is also implacably opposed to financial-disclosure laws that illuminate connections between the wealthy and the powerful, and the rest of the Court’s conservatives are inching closer to his view.

If you want to understand the brazen indifference to ethics standards exhibited here, it helps to go back to Robert Bork.

Bork, the father of the legal doctrine of originalism, was supposed to be a Supreme Court justice. Originalism promises to interpret the Constitution as it was understood at the time its provisions were adopted, but in practice it is most often a semi-spiritual, therapeutic approach in which conservatives look back at the Founders and see themselves, affirming as their original intent whatever the popular opinions on the contemporary right happen to be. Originalists mock “living constitutionalism,” or the idea that the Constitution should be interpreted in light of modern circumstances, but their own constitutionalism is simply undead.  

[Adam Serwer: What was Clarence Thomas thinking?]

President Ronald Reagan nominated Bork for the high court in 1987, but instead of becoming a justice, he became a martyr. Bork’s nomination was defeated because of his opposition to laws that bar discrimination on the basis of race and sex and his opposition to legal abortion, and because he was a willing participant in Richard Nixon’s corrupt schemes to shield his own criminal acts.

The Bork nomination is an early example of something we’ve seen often in the Trump years—an underlying agreement about the basic facts that is obscured by heated disagreement over whether those facts are good. No one disputes that Bork described the Civil Rights Act’s non-discrimination requirement as based on a “principle of unsurpassed ugliness”; there is only disagreement over whether the federal government can outlaw Jim Crow businesses.

Conservatives frequently invoke Bork’s name as a representation of Democratic ruthlessness and partisanship, but the most vicious critiques of Bork were accurate, if uncharitable, and the qualities that liberals found objectionable were precisely those that endeared him to the conservative legal movement. Indeed, with six right-wing ideologues on the Court, Republicans are now demanding that the justices impose on the country the very unpopular version of America that Bork wanted to live in.

The Bork nomination went down. It was not the first, but the 11th, Supreme Court nomination to fail, and unlike Barack Obama’s 2016 nominee, Merrick Garland, Bork actually got a hearing and a vote. The important event, however, is what came next: The nomination of Anthony Kennedy, who had once compared the Roe v. Wade decision finding a constitutional right to an abortion to the Dred Scott decision, which upheld slavery. Kennedy was confirmed almost unanimously and took the seat that was supposed to go to Bork.

By 1991, as the legal reporter Jeffrey Toobin wrote in his book The Nine, when Thomas was confirmed to replace retiring Justice Thurgood Marshall, eight of the nine justices had been appointed by Republicans and the lone Democratic appointee was Byron White, who himself opposed abortion. With those numbers, the Roe precedent was supposed to be living on borrowed time. Indeed, the Supreme Court was about to take up another abortion case, Planned Parenthood v. Casey—one in which the future justice and then–federal judge Samuel Alito had argued that Roe should be overturned—that would provide a perfect opportunity to destroy Roe.

Instead, it would take another 30 years to overturn, because three of the recent Republican appointees—Kennedy, Sandra Day O’Connor, and David Souter—joined Harry Blackmun and John Paul Stevens in preserving the right to an abortion. This is partly why the conservative myth that Bork was mistreated endures—had Bork been on the bench instead of Kennedy, the right would have won this particular fight decades ago, and many others besides.

The conservative legal movement needed judicial nominees to be more partisan, more ideological, and more tightly controlled. That is the context in which the regular stories of the conservative justices’ closeness to wealthy right-wing donors and partisan organizations should be understood. You could call this the conservative legal movement’s Good Behavior Project: One aspect is making sure that nominees are sufficiently ideological not to diverge from the party line, or to do so rarely. The Federalist Society’s role in nominating judges resolves this pipeline problem. Another aspect is ensuring that they do not grow more ideologically idiosyncratic with age, something that can happen to appointees from either party.

Social ties between justices and partisan actors are not novel, of course. During the era of Franklin D. Roosevelt—who made eight appointments over his four terms—many of the justices were very close socially to partisan actors with whom they shared an ideologically liberal outlook.

So it shouldn’t surprise us that the justices are political actors, or that their rulings often find pretexts to favor their personal beliefs. The revelation that they profit off their jobs and hobnob with the wealthy is both shocking and banal. The asymmetry is that conservatives built an effective infrastructure for reinforcing and rewarding that sort of partisan loyalty, approaching the courts (as FDR did) as a question of political power and getting the right people in the right places at the right time, while liberals continue to subscribe to babble about the majesty and impartiality of the law. It is one thing to engage in such rhetoric for political purposes, as the conservative legal movement does, while actually building political power; it is quite another to act as though the law and Constitution are genuinely self-enforcing while doing little to enforce them.

[From the September 2019 issue: Deconstructing Clarence Thomas]

The justices should be held accountable for breaking the rules or the law when they do so, and for the many ways they are making American life more dangerous, less democratic, and less free, while hiding their ideological crusade behind a facade of neutrality. But you cannot fault the conservative legal movement for doing everything it can to build the world it wants to live in. You can fault its opposition for not doing the same. The recent reporting on the justices’ tight social and financial relationships with right-wing billionaires is valuable—and threatening to their agenda—because it exposes the justices’ deceptions and self-deceptions about how the system really works.

By financing the justices’ lavish lifestyles and forging close social ties with them, donors with interests before the Court can apply pressure that ensures the justices avoid making decisions that could alienate them from the luxury and companionship to which they have become accustomed, without ever making specific demands. This assures Good Behavior.

I am not saying that the justices reach opinions they believe are wrong—but that in most cases, they would not even allow themselves to consider the alternative. An act as direct as a bribe risks the possibility of the target growing a conscience, because there is no way to rationalize the act. Not wanting to be ostracized from one’s social circle, one’s friends and political allies—that is the kind of thing that keeps justices from even considering changing their minds. The motivation feels internal rather than external, and therefore does not feel like corruption in the way that accepting a wad of cash would.

As the justices themselves have ruled—unanimously, I might add—the absence of explicit this-for-that exchanges of money for “official acts” means that such leverage does not count as bribery. This is one of the ironies of the modern era: There was certainly more individual corruption in the past, more suitcases of cash changing hands, more personal profiteering. There is more institutional corruption now—explicit ideological rejection of duty toward segments of the public that are not part of one’s faction. A democratic society can survive, even thrive, with the former. The latter is potentially terminal.

The Smartest Man Who Ever Lived

The Atlantic

www.theatlantic.com/magazine/archive/2023/11/maniac-book-benjamin-labatut-john-von-neumann/675443


If the most dangerous invention to emerge from World War II was the atomic bomb, the computer now seems to be running a close second, thanks to recent developments in artificial intelligence. Neither the bomb nor the computer can be credited to, or blamed on, any single scientist. But if you trace the stories of these two inventions back far enough, they turn out to intersect in the figure of John von Neumann, the Hungarian-born polymath sometimes described as the smartest man who ever lived. Though he is less famous today than some of his contemporaries—Albert Einstein, J. Robert Oppenheimer, Richard Feynman—many of them regarded him as the most impressive of all. Hans Bethe, who won the Nobel Prize in Physics in 1967, remarked: “I have sometimes wondered whether a brain like von Neumann’s does not indicate a species superior to that of man.”

Born in Budapest in 1903, von Neumann came to the U.S. in 1930, and in 1933 he joined the Institute for Advanced Study, in Princeton, New Jersey. Like many émigré physicists, he consulted on the Manhattan Project, helping develop the implosion method used to detonate the first atomic bombs. Just weeks before Hiroshima, he also published a paper laying out a model for a programmable digital computer. When Los Alamos National Laboratory got its first computer, in 1952, it was built on the design principles known as “von Neumann architecture.” The machine was jokingly christened MANIAC, and the full name followed, devised to fit the acronym: Mathematical Analyzer, Numerical Integrator, and Computer.

And that’s not all. Von Neumann also established the mathematical framework for quantum mechanics, described the mechanism of genetic self-replication before the discovery of DNA, and founded the field of game theory, which became central to both economics and Cold War geostrategy. By the time he died of cancer, in 1957, possibly due to radiation exposure at Los Alamos, he was one of the American government’s most valued advisers on nuclear weapons and strategy. His hospital bed at Walter Reed Army Medical Center was guarded by a security detail, to make sure he didn’t reveal any secrets in his delirium.

In his new novel, The MANIAC, the Chilean writer Benjamín Labatut suggests that the name of the computer von Neumann helped invent fits the physicist himself all too well. If our world often seems mad—if we are unable to distinguish the real from the virtual, avid for technological power we can’t use wisely, always coming up with new ways to destroy ourselves—then perhaps the great minds that invented our world could not have been entirely sane. But did the man who helped create nuclear weapons and artificial intelligence know that he was putting the human future in jeopardy? Or was the thrill of scientific discovery so intense that he didn’t care?

The MANIAC sets out to penetrate this mystery with imaginary testimonies by real people—siblings and teachers, colleagues and lovers—who knew von Neumann at different stages of his life. Labatut mingles biographical facts with fictional episodes and details to take us through each stage, from the child prodigy in Budapest to the dying man in Washington, D.C., raging as his mind erodes. Along the way, the scientific and mathematical background of von Neumann’s work is sketched in for a lay audience.

From the very beginning, Labatut makes it clear that von Neumann is no ordinary human being. His mother jots down notes on his development, as in a baby book: “Did not cry after doctor’s slap / Unnerving / Looked more like middle-aged man not newborn.” His math professor tells the class about an “exceedingly difficult” theorem that no one has been able to prove, only to see the boy raise his hand, go to the chalkboard, and write down a complete proof: “Years, all my years of work, passed by in a second … After that, I was afraid of von Neumann.”

Even as the novel trains its focus on von Neumann, however, its structure keeps him at a distance; he is not a person we come to know so much as a problem we need to solve. The problem, all of the narrators agree, is that his genius was exhilarating and frightening in equal measure. “What he could do. It was so rare and beautiful that to watch him was to weep,” his math tutor says. “Yes, I saw that, but I also saw something else. A sinister, machinelike intelligence that lacked the restraints that bind the rest of us.”

[From the June 2018 issue: Henry A. Kissinger on how the Enlightenment ends]

Labatut is intent on casting von Neumann as a Faustian figure, a man who transgressed the limits of knowledge to become something more and less than human. This idea may be Labatut’s greatest departure from biographical fact. In reality, the “maniac” seems to have impressed people with his cheerfulness and zest for life. In Ananyo Bhattacharya’s 2022 biography, The Man From the Future, von Neumann is described by his friend and fellow physicist Eugene Wigner as “a cheerful man, an optimist who loved money and believed firmly in human progress.” By contrast, the Wigner who narrates several sections of The MANIAC speaks of von Neumann as a “Luciferian” figure who “ranged beyond what was reasonable, until he finally lost himself.”

Labatut’s dark vision of modern science, and the way he skillfully distorts von Neumann’s biography to communicate that darkness, will be familiar to readers of When We Cease to Understand the World, his first work to be translated into English, in 2020. Blending biographical facts with outrageous fables, that novel offered miniature portraits of five 20th-century geniuses, including Fritz Haber, a chemist who invented both new fertilizers and chemical weapons, and Werner Heisenberg, the pioneer of quantum mechanics. The narrative technique owes a good deal to W. G. Sebald, who loved to ruminate on strange and troubling episodes from history, blurring the boundary between fact and fiction.

Labatut, however, is far freer in his distortions, which become more flamboyant and surreal with each section of the book. He depicts some of the most important figures in 20th-century science as haunted men, driven to madness by their pursuit of total knowledge. By the time we read that the French physicist Louis de Broglie, traumatized by the suicide of his best friend, commissioned an insane artist to create a replica of Notre-Dame Cathedral made of human feces, we are clearly in the realm of fable.

Yet the truly shocking thing is how many of the horrors described in When We Cease to Understand the World are entirely factual. The first gas attack in history, during the Battle of Ypres in 1915, actually did make “hundreds of men [fall] to the ground convulsing, choking on their own phlegm, yellow mucus bubbling in their mouths, their skin turning blue from lack of oxygen.” And Haber’s wife, Clara, really did shoot herself in the heart, bleeding to death in the arms of her young son, possibly out of guilt over her husband’s role in creating gas warfare. When Labatut tells the story of 20th-century science as a dark parable, he is extrapolating from history but not entirely falsifying it.

The MANIAC opens with a short, third-person narrative that has no explicit connection with the life of John von Neumann, but would have fit perfectly in the earlier book. It is the true story of Paul Ehrenfest, an Austrian physicist who was a friend of Einstein’s, and whose life ended in an act of horror: In 1933, he killed his 15-year-old son, Wassik, who lived in an institution for children with Down syndrome, and then himself. Though Ehrenfest lived in the Netherlands, Labatut suggests that he may have been motivated by fear of the Nazis, who had come to power in Germany earlier that year and passed a new law mandating the forced sterilization of people with disabilities. In Labatut’s telling, Ehrenfest’s act was a premonition not just of Nazi crimes, but of the terrifying development of modern science. He could think of no better way to keep his son “safe from the strange new rationality that was beginning to take shape all around them, a profoundly inhuman form of intelligence that was completely indifferent to mankind’s deepest needs.” For Ehrenfest, the most disturbing thing about this monstrous spirit is that it springs from within science itself, “hovering over his colleagues’ heads at meetings and conferences, peering over their shoulders … a truly malignant influence, both logic-driven and utterly irrational, and though still fledgling and dormant it was undeniably gathering strength, wanting desperately to break into the world.”

Ehrenfest’s response is an act of madness, but Labatut suggests that von Neumann’s failure to be disturbed by the rise of the “inhuman” betrays an even deeper madness. Like the sorcerer’s apprentice, von Neumann helped the malignant spirit of modern science “break into the world” without thinking about the price the world would pay. “The problem with those games, the many terrible games that spring forth from humanity’s unbridled imagination,” his wife, Klara, muses, “is that when they are played in the real world … we come face-to-face with dangers that we may not have the knowledge or the wisdom to overcome.”

The MANIAC drives this point home in a variety of ways, starting with an early-childhood memory shared by von Neumann’s brother Nicholas. One night, their banker father brought home a Jacquard loom, which could be “programmed” to weave different patterns using sets of punch cards—a kind of primitive ancestor of the computer. The young János—his original Hungarian name, later Americanized to John—grows obsessed with the device, refusing to eat or sleep while he tinkers with it, trying to learn how it works. Soon the boy panics, fearing that he won’t be able to put the loom back together and it will be taken away: “He said that he simply could not part with the machine.” The details of János’s experience are imaginary, but the episode allows Labatut to offer a tidy preview of von Neumann’s fatal flaw, as well as a little lesson in computer history.

[From the May 1964 issue: The computers of tomorrow]

This is a much tamer kind of fictionalizing than in When We Cease to Understand the World, and in general The MANIAC feels like a more accessible and conventional treatment of its predecessor’s basic idea—the moral corruption at the core of modern science. This is partly because Labatut has set himself a more difficult narrative challenge by focusing on a single life at greater length. He has to convey biographical details about von Neumann to readers who have never heard of him, introduce complex concepts from a range of scientific fields, and simultaneously weave all this information into a moody allegory about knowledge and transgression.

This means the literary spell is often broken by sentences that sound like they could have come from a textbook (“In 1901, Bertrand Russell, one of Europe’s foremost logicians, discovered a fatal paradox in set theory”), and others that could be intoned in a movie preview (“He was the smartest human being of the 20th century … His name was Neumann János Lajos. A.k.a. Johnny von Neumann”). The fact that The MANIAC is Labatut’s first book written in English, rather than Spanish, may also play a role in this tonal unevenness.

The MANIAC describes von Neumann’s work on the atomic bomb, but it strongly suggests that his most troublingly inhuman achievement was laying the groundwork for artificial intelligence. Late in the novel, we learn about von Neumann’s work on cellular automata, which combined two of his major interests: computing and game theory. In his book Theory of Self-Reproducing Automata, he imagined a grid of cells in which each cell changed its state—say, from “on” to “off,” or from one color to another—according to inputs received from its neighbors. Essentially, this was a way of modeling how systems could evolve from simplicity to complexity based on what we now call an algorithm, the iterative application of a set of rules. The concept has been highly influential in the study of both biological life and artificial intelligence.
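
To make the idea concrete, here is a minimal sketch (in Python, my choice) of a one-dimensional cellular automaton: each cell flips between “on” and “off” according to the states of its neighbors, with one rule applied over and over. This is an illustration of the general concept only; it uses the standard elementary-automaton scheme with an arbitrary rule number, not von Neumann’s own 29-state self-reproducing construction.

# A toy cellular automaton: cells update in lockstep from their neighbors,
# by iterative application of a single rule. Rule number and grid size are
# arbitrary illustrative choices.

def step(cells, rule=110):
    """Apply one synchronous update of an elementary cellular automaton."""
    n = len(cells)
    new_cells = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        new_cells.append((rule >> neighborhood) & 1)        # look up the rule bit
    return new_cells

# Start from a single "on" cell and watch complexity emerge from a simple rule.
cells = [0] * 31
cells[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)

Run from a single live cell, even this tiny rule table produces an intricate, unpredictable-looking pattern, which is the simplicity-to-complexity point the paragraph above describes.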

[From the September 2023 issue: Does Sam Altman know what he’s creating?]

In addition to explaining the basics of cellular automata, Labatut turns the idea into a symbol of von Neumann’s failure to respect the difference between the gamelike abstractions of mathematics and the messy seriousness of human life. So it is poetic justice when Klara, infuriated by her husband’s “pigheadedness,” takes a printout of his work—“gorgeous filigrees of dots and lines that intermingled, fused, and then tore apart like the teeth of a broken zipper”—and sets it on fire in a trash can. It is another episode invented to point a moral: When science is inhumane, humanity has the right to take its revenge.

Yet in the long term, Labatut suggests, it may be humanity that has to submit. After bringing von Neumann’s story to a close, The MANIAC pivots to a lengthy postlude about Go, the ancient Chinese board game in which players take turns placing black and white stones on a board, capturing an opponent’s territory by surrounding it. In 2016, Lee Se-dol of South Korea, one of the world’s top-ranked Go players, was challenged to a match against AlphaGo, an AI developed by Google’s DeepMind. Garry Kasparov had lost a chess match to IBM’s Deep Blue 20 years earlier, but Go players were confident that their game was so much more complex that no machine could master it. Like so many skeptics before and since, they were proved wrong; AlphaGo won the match, taking four games to Lee’s one.

After telling von Neumann’s life story in about 200 pages, The MANIAC devotes its last 80 pages to this match. The effect is anticlimactic, but clearly Labatut sees the episode as the culmination of the book’s tragic arc. Ehrenfest dreaded the emergence of an inhuman intelligence, von Neumann made that emergence possible, and now Lee sees it taking place in front of him.

“When future historians look back at our time and try to pin down the first glimmer of a true artificial intelligence,” Labatut writes, “they may well find it in a single move during the second game between Lee Sedol and AlphaGo.” That move was so radically unexpected that it seemed to throw thousands of years of Go tradition out the window; no human watching the game could understand the justification for it, yet it led to the computer’s victory. By the end of the fifth game, Lee no longer hoped to win, only to postpone defeat. Labatut imagines one Go official saying, “There’s no point in playing out the endgame if you know you’re going to lose, right?” Today, when AI is on the cusp of making everyone from coders to truck drivers obsolete, that question feels more uncomfortably relevant than ever.

The MANIAC doesn’t quite say that this is all John von Neumann’s fault, and of course it isn’t. The really frightening thing is that even such a great mind can do relatively little to hasten or slow the progress of science. If von Neumann had never lived, someone else would likely have made his discoveries at about the same time, the way Gottfried Leibniz and Isaac Newton both invented calculus and Charles Darwin and Alfred Russel Wallace both came up with the theory of evolution. “It is not the particularly perverse destructiveness of one specific invention that creates danger,” an observer in the novel says of von Neumann. “The danger is intrinsic. For progress there is no cure.”

This article appears in the November 2023 print edition with the headline “The Smartest Man Who Ever Lived.”

The College Backlash Is Going Too Far

The Atlantic

www.theatlantic.com/ideas/archive/2023/10/college-degree-economic-mobility-average-lifetime-income/675525

Americans are losing their faith in higher education. In a recent Wall Street Journal poll, more than half of respondents said that a bachelor’s degree isn’t worth the cost. Young people were the most skeptical. As a recent New York Times Magazine cover story put it, “For most people, the new economics of higher ed make going to college a risky bet.” The article drew heavily on research from the Federal Reserve Bank of St. Louis, which found that rising student-loan burdens have lowered the value proposition of a four-year degree.

American higher education certainly has its problems. But the bad vibes around college threaten to obscure an important economic reality: Most young people are still far better off with a four-year college degree than without one.

Historically, analysis of higher education’s value tends to focus on the so-called college wage premium. That premium has always been massive—college graduates earn much more than people without a degree, on average—but it doesn’t take into account the cost of getting a degree. So the St. Louis Fed researchers devised a new metric, the college wealth premium, to try to get a more complete picture. They compared the wealth premium of people born in the 1980s with that enjoyed by earlier cohorts. Because those earlier generations have been alive longer and thus have had more time to build wealth, the researchers projected out the future earnings of the younger cohort. They found that the lifetime wealth premium will be lower for people born in the 1980s than for any previous generation.

[Annie Lowrey: Why you have to care about these 12 colleges]

That analysis, however, suffers from a key oversight. In estimating the lifetime earnings for people who are now in their 30s and early 40s, the researchers assumed that the college wage premium will stay constant throughout their life. In fact, it almost surely will not. For Baby Boomers, Gen Xers, and older Millennials, the college wage premium has more than doubled between the ages of 25 and 50, from less than 40 percent to nearly 80 percent. Likewise, the college wealth premium for past generations was initially very small but grew rapidly after age 40. History tells us that the best is yet to come for today’s recent graduates.

Wages grow faster for more-educated workers because college is a gateway to professional occupations, such as business and engineering, in which workers learn new skills, get promoted, and gain managerial experience. Most noncollege workers, in contrast, end up in personal services and blue-collar occupations, for which wages tend to stagnate over time.

For example, truck drivers in the U.S. earn an average annual salary of about $48,700, according to my analysis of data from the American Community Survey. (Full-time unionized drivers can make much more, but they’re in the minority.) That’s close to the average annual income for four-year college graduates working full-time at age 24. It’s easy to see why some young people might look at those numbers and opt against borrowing money to attend a four-year college. Yet the math will be very different a decade later. For example, average earnings in business occupations, where almost everyone has a four-year degree, are about $50,000 at age 24, but double to $100,000 by age 50. Average earnings for truck drivers grow from about $36,000 to only about $51,000 over the same period. The earnings advantage for college graduates increases steadily with work experience, until eventually they are earning nearly twice as much as workers with only a high-school degree.

The debt timeline is basically the reverse. Most federal student loans have a repayment period of only 10 years, which begins shortly after graduation. (The exception is income-based and income-driven repayment loans, which charge a share of borrowers’ discretionary income for 20 to 25 years. These are about a quarter of all loans today and were less common several years ago. Private loans vary in term length, but most are about 10 years.) This means that the typical college graduate must completely repay their loans by their mid-30s. In other words, the earnings premium from a bachelor’s degree is smallest in the years when graduates are also paying down their debts. We are effectively asking a 17-year-old high-school student to delay gratification until age 35 or later—longer than they have been alive. But the rewards are worth it.
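
For readers who want to see the arithmetic, here is a rough back-of-the-envelope sketch (in Python) using the figures quoted above: roughly $50,000 at age 24 rising to $100,000 at age 50 in business occupations, versus about $36,000 rising to $51,000 for truck drivers. The straight-line growth between those endpoints and the flat $400-a-month loan payment over the first ten years are illustrative assumptions of mine, not numbers from the article.

# Back-of-the-envelope comparison of cumulative earnings, college graduate
# (business occupations) vs. truck driver, with a hypothetical loan payment
# deducted during the assumed 10-year repayment window.

def salary(age, start, end, a0=24, a1=50):
    """Linearly interpolate annual salary between two quoted ages (assumption)."""
    t = (age - a0) / (a1 - a0)
    return start + t * (end - start)

cumulative_grad, cumulative_driver = 0.0, 0.0
for age in range(24, 51):
    grad = salary(age, 50_000, 100_000)
    driver = salary(age, 36_000, 51_000)
    if age < 34:                 # assumed 10-year repayment window
        grad -= 400 * 12         # hypothetical annual loan payment
    cumulative_grad += grad
    cumulative_driver += driver
    if age in (30, 40, 50):
        print(f"age {age}: cumulative gap = ${cumulative_grad - cumulative_driver:,.0f}")

Even with the loan payment dragging on the early years, the cumulative gap keeps widening with age, which is the pattern the paragraph above describes.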

Of course, we cannot know for certain whether today’s college graduates will experience the same earnings growth as past generations. The tightness of the post-pandemic labor market has created upward pressure on wages in sectors such as retail and hospitality, leading to especially strong wage growth for less-educated workers. As a result, the college wage premium has been falling since 2020, after three decades of growth. Could this time be different?

In 1976, the Harvard economist Richard Freeman published a book called The Overeducated American. The labor market for young college graduates had been particularly depressed in the early 1970s, with the wage premium falling by more than 10 percentage points in less than a decade. As a result, college enrollment began to decline, reversing what had been a steady positive trend. Freeman argued that society was investing too much in higher education, and that college was no longer worth it for the marginal student.

His timing was impeccably bad. Shortly after the book’s publication, the college wage premium rose rapidly, increasing by more than 20 percentage points from 1976 to 1988. At age 50, the college graduates who entered the labor market in the 1970s were earning about 70 percent more than their less-educated contemporaries. College had been worth it after all; it just felt riskier and took longer to pay off.

We appear to be in a similar moment today. Despite the bad vibes around higher education, the fastest-growing occupations that do not require a college degree are mostly low-wage service jobs that offer little opportunity for advancement. Negative public sentiment might dissuade some people from going to college when it is in their long-run interest to do so. The potential harm is greatest for low- and middle-income students, for whom college costs are most salient. Wealthy families will continue to send their kids to four-year colleges, footing the bill and setting their children up for long-term success.

[Sanjay Sarma and Luke Yoquinto: The Toyota Corolla theory of college]

Indeed, highly educated elites in journalism, business, and academia are among those most likely to question the value of a four-year degree, even if their life choices don’t reflect that skepticism. In a recent New America poll, only 38 percent of respondents with household incomes greater than $100,000 said a bachelor’s degree was necessary for adults in the U.S. to be financially secure. When asked about their own family members, however, that number jumped to 58 percent.

As a labor economist, I have argued elsewhere that the U.S. should invest more in workforce development to increase economic mobility for people who don’t have a four-year degree. At the same time, public investment in higher education, including making public-college tuition free, would help more students afford getting a degree. Until that happens, however, young people must play the cards they’ve been dealt. Taking on debt to go to college can feel risky, especially for first-generation students who don’t have examples from their own family, or for any young person without generational wealth. But the long-term value of a bachelor’s degree is much greater than it initially appears. If a college professor or pundit tries to convince you otherwise, ask them what they would choose for their own children.

The Trumpy Marriage of the UFC and WWE

The Atlantic

www.theatlantic.com/books/archive/2023/10/ufc-wwe-merger-trump-mcmahon-wrestling/675528

With the news last month that the Ultimate Fighting Championship (brand: authentic, highly skilled violence) has merged, in a deal worth billions upon billions, with World Wrestling Entertainment (brand: fabulously stylized, highly skilled violence), it appears to be time to reset the reality levels. Again. What new form of narrative, what gory amalgam of truth and spectacle, what double-talking rough beast approaches? In other words: Are you ready to rumble?

If you doubt the importance of this bit of business, consider that braided into the corporate histories of both the UFC and WWE, and into their respective anthropologies (we’ll be coming back to this), is the rise of Donald Trump. Trumpism has expressed and explored itself through both of these entities. And as they coalesce, and as Trumpism itself further coalesces, we are surely heading into—as the great New Hampshire metal band Scissorfight once put it—the “high tide of the big grotesque.”

For a primer on the UFC side of things, you won’t do better than Michael Thomsen’s new book, Cage Kings: How an Unlikely Group of Moguls, Champions & Hustlers Transformed the UFC Into a $10 Billion Industry. A fine writer and a very good reporter, Thomsen tracks in detail the journey from the primal chaos of the promotion’s first event—1993’s UFC 1, where grapplers fought thumpers, bone-breakers fought chokers, and a Dutch kickboxer, with a blow of his foot, sent a Samoan sumo wrestler’s tooth flying into the crowd—to the streamlined pay-per-view mercilessness of today’s UFC.

The rise of the UFC is checkered but, in hindsight, unstoppable. Along the way, as if by accident, in response to the pressures brought to bear by various regulatory bodies, the converging impulses and skill sets of the fighters themselves, the demands of a just-discovered audience, and an ambient societal sense of what might be gotten away with, a new style of fighting was invented: mixed martial arts (MMA). The rules were hammered out at a multiparty meeting in April 2001, with consulting physicians in attendance, where (as Thomsen writes) “soccer kicks, head butts, knees to the head of a grounded opponent” were outlawed.

And then there's pro wrestling. The UFC exerts a grim, surface-level fascination, but pro wrestling is deep. Ringmaster: Vince McMahon and the Unmaking of America, by Abraham Josephine Riesman, published earlier this year, will help you get your mind around it. And you need help. Pro wrestling is a thunderdome of images, the human comedy at near-celestial scale. Its lingo, its carny slang, expresses some kind of hierarchy of awareness, but where wrestling begins and where it ends, no one can say. If you’re a “mark,” you’re way down there: You’re taken in by the “kayfabe,” the fakery, and you think it’s all real. If you’re a “smart,” you’re higher up the great chain: You know what’s going on, you can tell a “work” (something prefabricated) from a “shoot” (an improvisation), and you can take an ironist’s or an aesthete’s pleasure in the pageantry and the bombast and the medieval moral drama.

But is anybody really a mark? And is anybody really a smart? “When you start to think about it,” Riesman muses, “the existence of marks in great numbers starts to seem unlikely. It’s possible that the majority of wrestling fans may have always been smarts. It’s possible that the illusion at the heart of wrestling was not that fans believed wrestling was real, but that wrestlers believed that fans believed it.” (This is an irresistible idea: the puffed and strutting wrestlers, maintained in their dreamworld by the gallantry of the fans.)

Both books have a focal strongman character. For Thomsen, it’s Dana White, once a penniless personal trainer who listened to Tony Robbins for inspiration, now the UFC’s abrasively charismatic caudillo and defining personality. For Riesman, it’s Vince McMahon, the demonically transformative former chairman and CEO of WWE (previously the World Wrestling Federation). White bought the UFC in 2000, with his partners and moneymen the Fertitta brothers, when the promotion was at a low ebb. McMahon inherited the WWF from his father, Vince Sr. But in some respects—on a business level, at least—the story is the same: the aggressive absorption of smaller fighting promoters; the wooing of legislatures and athletic commissions; the territorial expansions and TV deals; the escalation of hubris, razzmatazz, blood.

Where the two men differ is in their nature. White is a farseeing brawler-businessman. McMahon, unclassifiably but undeniably, is an artist—a creator/destroyer. And in 1998, having “booked”—that is, written narratives—for WWF for years, building up and blowing down characters according to his own uniquely despotic dramaturgical whim, Vince McMahon, at the age of 52, wildly entered his own creation. He became a character. As pumped and glistening as any of his “boys,” with a proper wrestler’s physique and carriage—he’d been building muscle, under those baggy suits, for years—he climbed into the ring as “Mr. McMahon.” A villain. A heel. What Riesman calls, in another context, a “sizzling” heel. “He’s a horrible human being,” says McMahon of this version of himself, “uncaring, a powermonger, manipulative, very manipulative.” “Ass-hole! Ass-hole!” chants the delirious crowd. Mr. McMahon’s antics, over the succeeding years, will include making out with the wrestler Trish Stratus as his wife, Linda, watches on, and peeing himself with fear while “Stone Cold” Steve Austin holds a gun to his head. (“Mr. McMahon looked up,” Riesman writes rather beautifully. “He saw what the viewers saw: his own tear-stained visage. His face looked like a kabuki mask of weeping terror.”)

Over both of these books, and both of these organizations, looms—I was going to write “the shadow of Trump,” but Trump has no shadow. No secret darkness, no buried awareness: Every inch of him is lit up. Better perhaps to say that the Trumpiness of all this is baked in. The story of the world as told by the UFC and WWE—it’s not exactly a liberal’s vision. Booming characters preen and dominate; nuance is banished. This is heavy-metal America. Trump is a longtime wrestling fan, and playing himself (who else?) he feuded publicly with Mr. McMahon, at one point shaving the character’s head at ringside. Much of Trump’s most appalling public behavior—say, that impression of a disabled reporter—is in the repertoire of a classic heel: To loudly deplore it, to boo and hiss, only reassures his fans. He was also an early supporter of the UFC, and Dana White has repaid him with many loud pledges of fealty, most notably in a speech at the 2016 Republican National Convention: “Let me tell you something! I’ve been in the fight business my whole life. I know fighters. Ladies and gentlemen, Donald Trump is a fighter, and I know he will fight for this country!” Always close to the McMahons, Trump in 2017 appointed Linda McMahon as the head of the Small Business Administration.

So now what? High theater, high narrative, has merged with what Kipling called “the undoctored incident.” The kayfabe has merged with the fist in the face. Is some kind of grotesque UFC-WWE blend in the cards? White has pooh-poohed the idea: “If you look at the WWE,” he said last week, “they have an entertainment value, and they have these guys that are incredible athletes that go in there and do their thing. It’s well known that it’s scripted. When you look at the UFC, this is as real as it gets. That’s our tagline.” But there’s life after the UFC: Former MMA stars such as Ronda Rousey and Brock Lesnar have already found that they can cash in as wrestlers for WWE. Will this process, this pipeline of talent, now be accelerated? Conor McGregor—the most wrestlerlike, in his self-presentation, of all the UFC champions—is surely watching these developments carefully. As are, from the stands, the howling wrestling fans, the bloodthirsty UFC fans, and the rest of us with an interest in the American evolution.

Telling the Truth About Taiwan

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 10 › big-lie-about-taiwan › 675523

For some 50 years, American policy toward Taiwan has been based on the assertion that people on both sides of the Taiwan Strait believe that they are part of the same country and merely dispute who should run it and precisely how and when the island and the continent should be reunified. It is a falsehood so widely stated and so often repeated that officials sometimes forget that it is simply untrue. Indeed, they—and other members of the foreign-policy establishment—get anxious if you call it a lie.

It may have been a necessary lie when the United States recognized the People’s Republic of China, although it is more likely that the United States got snookered by Chinese diplomats in the mid-1970s, when they needed us far more than we needed them. It may even be necessary now, but a lie it remains. Acknowledging this fact is not merely a matter of intellectual hygiene but an imperative if we are to prevent China from attempting to gobble up this island nation of 24 million, thereby unhinging the international order in Asia and beyond.

On a recent visit to Taiwan, I had the chance to talk with the president, the candidates to replace her, senior ministers, academic experts, diplomats, and soldiers. Those conversations brought home to me just how pernicious the falsehood has been. Taiwan is an independent country. Its people have (on the evidence of repeated polling) little interest in becoming part of the mainland, and by substantial majorities consider themselves more Taiwanese than Chinese. It has its own currency, a thriving economy, lively democratic politics, sizable armed forces, a more and more desperate foreign policy—everything that makes a country independent.

[From the January/February 2023 issue: I went to Taiwan to say goodbye]

The reflexive reaction of American officials and experts today when one mentions Taiwan is, as in the past, a red-faced insistence that Taiwan had better not go for independence. Those officials rarely produce evidence that the Taiwanese are about to declare independence. They do not even seem to realize that Taiwan already is independent in every meaningful sense. They are just conditioned to fulminate, grimly or histrionically, depending on their nature.

This finger-wagging is a pompous assertion of hegemony over a protectorate that we have yet to say unambiguously we will protect. When President Joe Biden repeatedly lets slip that we would do so with force, his aides, in a bureaucratic reflex created by years of unthinking habit, insist that the president does not mean it. As a result, once again Americans have set up a minor ally for failure and then blamed it for our shortsightedness.

In this case, 50 years of being told, in effect, to sit in a corner and not disturb the grown-ups has made Taiwan more difficult for the United States to defend, and less able to defend itself. Because of Taiwan’s military isolation, its armed forces are literally insular, inexperienced, and deprived of all the benefits that countries like South Korea or Japan get from regular, routine training and operation with the U.S. armed forces. Because the United States, in a superfluity of cleverness and caution, continues to refuse to say whether it would fight for Taiwan, the Taiwanese themselves are not sure that they would adopt the New Hampshire motto “Live free or die.” And honestly, who can blame them?

Lie follows lie. A president of Taiwan cannot visit the United States—rather, they are “in transit” somewhere else, usually one of the few Caribbean countries (St. Kitts, for example) that China has not yet coerced into cutting diplomatic recognition. The United States has an “American Institute in Taiwan,” not an embassy. The deputy assistant secretaries of state responsible for Taiwan (and the same goes for those in the Defense Department, of course) cannot visit the country. The handful of American service personnel there cannot go about in uniform. The U.S. does not openly conduct exercises with the forces with which it would—maybe—fight side by side. All this when Washington needs, more than ever, intimate connections with Taiwan.

It is a comfortable lie, which the government is unwilling to acknowledge as such, let alone confront, because the United States has allowed China’s Communist rulers to shape how we understand this part of the world. And while China prepares its forces for a bloody invasion of the island, its real strategy is more that of the constant squeeze, in multiple dimensions simultaneously—bribing countries to drop their recognition of Taiwan, pushing it out of international forums, seducing and suborning Taiwanese surrogates and likely collaborators, and intensely ramping up military operations around the island to unnerve its population and exhaust its defenders. In the past, the Chinese armed forces rarely crossed the “median line” between the island and the mainland—now they do so routinely. China periodically fires missiles near the island. It has violated the Taiwanese air-defense identification zone so far this year several times more frequently than it did in 2020. And, of course, it maintains a drumbeat of threats directed as much against the United States as against Taipei.

[From the December 2022 issue: Taiwan prepares to be invaded]

The Chinese are masters at the art of incremental and psychological pressure, which suggests the counter—namely, doing the same to them. Why not let American military personnel operate on the island in uniform? Why not let senior diplomats and defense officials visit? Why not conduct open training? Why not, come to think of it, maintain a knowing silence the next time President Biden slips and says that we would defend the island? The Chinese would react—but then again, American inaction over Chinese base-building in the South China Sea merely encouraged more such activity. Passivity is the greater danger here.

And the United States should be prepared to fight for Taiwan. Should the island fall to China, America’s most potent geopolitical rival will have gained the world’s 21st-largest economy, roughly equivalent to Switzerland’s or Poland’s. China would also gain a dense clot of advanced technology, particularly in the area of computer chips. A key piece of the so-called first island chain in the Pacific would be in hostile hands, endangering the sea lanes of our closest Pacific allies, particularly Japan. American credibility would take a brutal blow, and our allies would have to wonder whether they should accommodate China or resort to the development of their own nuclear arsenals to substitute for the guarantees of an unreliable superpower.

And not least important: Another liberal democratic state would be snuffed out, in a world in which free government, liberty, and rule of law are already under pressure.

Eighty-five years ago, the leader of the world’s greatest global power shrugged off interest in a “quarrel in a faraway country, between people of whom we know nothing.” There are those who would say the same today of Ukraine. Neville Chamberlain’s foolish words were seared in the memory of an earlier generation, who learned the hard way that the stakes in such places can be far larger and far graver than domestically obsessed politicians might imagine.

It is now generally acknowledged that history is very far from being at an end and that, pressing as they are, issues of transnational significance such as climate change and environmental degradation are not the only urgent ones. We are back in a world of great-power politics, and to deal with it, those who make policy need to do the simplest, if sometimes the hardest, thing: start with the truth, and take prudent but firm steps to undo the effects of falsehood.