
AI Doomerism Is a Decoy

The Atlantic

https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/

On Tuesday morning, the merchants of artificial intelligence warned once again about the existential might of their products. Hundreds of AI executives, researchers, and other tech and business figures, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement written by the Center for AI Safety declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. They spoke before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their product’s harms—even as their companies continue to invest billions in the technology. Several prominent AI researchers and critics told me that they’re skeptical of the rhetoric, and that Big Tech’s proposed regulations appear defanged and self-serving.

Silicon Valley has shown little regard for years of research demonstrating that AI’s harms are not speculative but material; only now, after the launch of OpenAI’s ChatGPT and a cascade of funding, does there seem to be much interest in appearing to care about safety. “This seems like really sophisticated PR from a company that is going full speed ahead with building the very technology that their team is flagging as risks to humanity,” Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates against mass surveillance, told me.

The unstated assumption underlying the “extinction” fear is that AI is destined to become terrifyingly capable, turning these companies’ work into a kind of eschatology. “It makes the product seem more powerful,” Emily Bender, a computational linguist at the University of Washington, told me, “so powerful it might eliminate humanity.” That assumption provides a tacit advertisement: The CEOs, like demigods, are wielding a technology as transformative as fire, electricity, nuclear fission, or a pandemic-inducing virus. You’d be a fool not to invest. It’s also a posture that aims to inoculate them against criticism, copying the crisis communications of tobacco companies, oil magnates, and Facebook before: Hey, don’t get mad at us; we begged them to regulate our product.

Yet the supposed AI apocalypse remains science fiction. “A fantastical, adrenalizing ghost story is being used to hijack attention around what is the problem that regulation needs to solve,” Meredith Whittaker, a co-founder of the AI Now Institute and the president of Signal, told me. Programs such as GPT-4 have improved on their previous iterations, but only incrementally. AI may well transform important aspects of everyday life—perhaps advancing medicine, already replacing jobs—but there’s no reason to believe that anything on offer from the likes of Microsoft and Google would lead to the end of civilization. “It’s just more data and parameters; what’s not happening is fundamental step changes in how these systems work,” Whittaker said.

Two weeks before signing the AI-extinction warning, Altman, who has compared his company to the Manhattan Project and himself to Robert Oppenheimer, delivered to Congress a toned-down version of the extinction statement’s prophecy: The kinds of AI products his company develops will improve rapidly, and thus potentially be dangerous. Testifying before a Senate panel, he said that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” Both Altman and the senators treated that increasing power as inevitable, and associated risks as yet-unrealized “potential downsides.”

But many of the experts I spoke with were skeptical of how much AI will progress from its current abilities, and they were adamant that it need not advance at all to hurt people—indeed, many applications already do. The divide, then, is not over whether AI is harmful, but over which harm is most concerning—a future AI cataclysm only its architects are warning about and claim they can uniquely avert, or a more quotidian violence that governments, researchers, and the public have long been living through and fighting against—as well as who is at risk and how best to prevent that harm.

[Read: It’s a weird time to be a doomsday prepper]

Take, for example, the reality that many existing AI products are discriminatory—racist and misgendering facial recognition, biased medical diagnoses, and sexist recruiting algorithms are among the best-known examples. Cahn said that AI should be assumed prejudiced until proven otherwise. Moreover, advanced models are regularly accused of copyright infringement when it comes to their data sets, and labor violations when it comes to their production. Synthetic media is filling the internet with financial scams and nonconsensual pornography. The “sci-fi narrative” about AI, put forward in the extinction statement and elsewhere, “distracts us from those tractable areas that we could start working on today,” Deborah Raji, a Mozilla fellow who studies algorithmic bias, told me. And whereas algorithmic harms today principally wound marginalized communities and are thus easier to ignore, a supposed civilizational collapse would hurt the privileged too. “When Sam Altman says something, even though it’s so disassociated from the real way in which these harms actually play out, people are listening,” Raji said.

Even if people listen, the words can appear empty. Only days after Altman’s Senate testimony, he told reporters in London that if the EU’s new AI regulations are too stringent, his company could “cease operating” on the continent. The apparent about-face led to a backlash, and Altman then tweeted that OpenAI had “no plans to leave” Europe. “It sounds like some of the actual, sensible regulation is threatening the business model,” the University of Washington’s Bender said. In an emailed response to a request for comment about Altman’s remarks and his company’s stance on regulation, a spokesperson for OpenAI wrote, “Achieving our mission requires that we work to mitigate both current and longer-term risks” and that the company is “collaborating with policymakers, researchers and users” to do so.

The regulatory charade is a well-established part of the Silicon Valley playbook. In 2018, after Facebook was rocked by misinformation and privacy scandals, Mark Zuckerberg told Congress that his company has “a responsibility to not just build tools, but to make sure that they’re used for good” and that he would welcome “the right regulation.” Meta’s platforms have since failed miserably to limit election and pandemic misinformation. In early 2022, Sam Bankman-Fried told Congress that the federal government needs to establish “clear and consistent regulatory guidelines” for cryptocurrencies. By the end of the year, his own crypto firm had proved to be a sham, and he was arrested for financial fraud on the scale of the Enron scandal. “We see a really savvy attempt to avoid getting lumped in with tech platforms like Facebook and Twitter, which have drawn increasingly searching scrutiny from regulators about the harms they inflict,” Cahn told me.

At least some of the extinction statement’s signatories do seem to earnestly believe that superintelligent machines could end humanity. Yoshua Bengio, who signed the statement and is sometimes called a “godfather” of AI, told me he believes that the technologies have become so capable that they risk triggering a world-ending catastrophe, whether as rogue sentient entities or in the hands of a human. “If it’s an existential risk, we may have one chance, and that’s it,” he said.

[Read: Here’s how AI will come for your job]

Dan Hendrycks, the director of the Center for AI Safety, told me he thinks similarly about these risks. He added that the public needs to end the current “AI arms race between these corporations, where they’re basically prioritizing the development of AI technologies over their safety.” That leaders from Google, Microsoft, OpenAI, DeepMind, Anthropic, and Stability AI signed his center’s warning, Hendrycks said, could be a sign of genuine concern. Altman wrote about this threat even before the founding of OpenAI. Yet “even under that charitable interpretation,” Bender told me, “you have to wonder: If you think this is so dangerous, why are you still building it?”

The solutions these companies have proposed for both the empirical and fantastical harms of their products are vague, filled with platitudes that stray from an established body of work on what experts told me regulating AI would actually require. In his testimony, Altman emphasized the need to create a new government agency focused on AI. Microsoft has done the same. “This is warmed-up leftovers,” Signal’s Whittaker said. “I was in conversations in 2015 where the topic was ‘Do we need a new agency?’ This is an old ship that usually high-level people in a Davos-y environment speculate on before they go to cocktails.” And a new agency, or any exploratory policy initiative, “is a very long-term objective that would take many, many decades to even get close to realizing,” Raji said. During that time, AI could not only harm countless people but also become so entrenched in various companies and institutions as to make meaningful regulation much harder.

For about a decade, experts have rigorously studied the harms done by AI and proposed more realistic ways to prevent them. Possible interventions could involve public documentation of training data and model design; clear mechanisms for holding companies accountable when their products put out medical misinformation, libel, and other harmful content; antitrust legislation; or just enforcing existing laws related to civil rights, intellectual property, and consumer protection. “If a store is systematically targeting Black customers through human decision making, that’s a violation of civil-rights law,” Cahn said. “And to me, it’s no different when an algorithm does it.” Similarly, if a chatbot writes a racist legal brief or gives incorrect medical advice, was trained on copyrighted writing, or scams people for money, current laws should apply.

Doomsday prognostications and calls for a new AI agency amount to “an attempt at regulatory sabotage,” Whittaker said, because the very people selling and profiting from this technology would “shape, hollow out, and effectively sabotage” the agency and its powers. Just look at Altman testifying before Congress, or the recent “responsible”-AI meeting between various CEOs and President Joe Biden: The people developing and profiting from the software are the ones telling the government how to approach it—an early glimpse of regulatory capture. “There’s decades’ worth of very specific kinds of regulations people are calling for about equity, fairness, and justice,” Safiya Noble, an internet-studies scholar at UCLA and the author of Algorithms of Oppression, told me. “And the kinds of regulations I see [AI companies] talking about are ones that are favorable to their interests.” These companies also spent many millions of dollars lobbying Congress in just the first three months of this year.

All that has really changed from the years-old conversations around regulating AI is ChatGPT—a program that, because it spits out human-esque language, has captivated consumers and investors, granting Silicon Valley a Promethean aura. Beneath that myth, though, much about AI’s harms is unchanged. The technology depends on surveillance and data collection, exploits creative work and physical labor, amplifies bias, and is not sentient. The ideas and tools needed for regulation, which would require addressing those problems and perhaps reducing corporate profits, are around for anybody who might care to look. The 22-word warning is a tweet, not scripture; a matter of faith, not evidence. That an algorithm is harming somebody right now would have been a fact if you read this sentence a decade ago, and it remains one today.

How Europe Won the Gas War With Russia

The Atlantic

https://www.theatlantic.com/ideas/archive/2023/06/russia-ukraine-natural-gas-europe/674268/

The most significant defeat in Russia’s war on Ukraine was suffered not on a battlefield but in the marketplace.

The Russian aggressors had expected to use natural gas as a weapon to bend Western Europe to their will. The weapon failed. Why? And will the failure continue?

Unlike oil, which is easily transported by ocean tanker, gas moves most efficiently and economically through fixed pipelines. Pipelines are time-consuming and expensive to build. Once the pipeline is laid, over land or underwater, the buyer at one end is bound to the seller on the other end. Gas can move by tanker, too, but first it must be liquefied. Liquefying gas is expensive and technologically demanding. In the 2010s, European consumers preferred to rely on cheaper and supposedly reliable pipeline gas from Russia. Then, in 2021, the year before the Russian attack on Ukraine, Europeans abruptly discovered the limits of Russian-energy reliability.

The Russian pipeline network can carry only so much gas at a time. In winter, Europe consumes more than the network can convey, so Europe prepares for shortages by building big inventories of gas in the summertime, when it uses less.

Russian actions in the summer of 2021 thwarted European inventory building. A shortage loomed—and prices spiked. I wrote for The Atlantic on January 5, 2022:

In a normal year, Europe would enter the winter with something like 100 billion cubic meters of gas on hand. This December began with reserves 13 percent lower than usual. Thin inventories have triggered fearful speculation. Gas is selling on European commodity markets for 10 times the price it goes for in the United States.

These high prices have offered windfall opportunities for people with gas to sell. Yet Russia has refused those opportunities. Through August, when European utilities import surplus gas to accumulate for winter use, deliveries via the main Russian pipeline to Germany flowed at only one-quarter their normal rate. Meanwhile, Russia has been boycotting altogether the large and sophisticated pipeline that crosses Ukraine en route to more southerly parts of Europe.

I added a warning: “By design or default, the shortfalls have put a powerful weapon in [Russian President Vladimir] Putin’s hands.”

A month later, the world learned what Putin’s gas weapon was meant to do. Russian armored columns lunged toward the Ukrainian capital, Kyiv, on February 24. Putin’s gas cutoffs appear to have been intended to deter Western Europe from coming to Ukraine’s aid.

The day before the invasion, I tried to communicate the mood of fear that then gripped gas markets and European capitals:

In 2017, 2018, and 2019, Russia’s dominance over its gas customers in Western Europe was weaker, and its financial resources to endure market disruption were fewer. In 2022, Russia’s power over its gas customers is at a zenith—and its financial resources are enormous … One gas-industry insider, speaking on the condition of anonymity in order to talk candidly, predicted that if gas prices stay high, European economies will shrink—and Russia’s could grow—to the point where Putin’s economy will overtake at least Italy’s and perhaps France’s to stand second in Europe only to Germany’s.

That fear was mercifully not realized. Instead, European economies proved much more resilient—and Russia’s gas weapon much less formidable—than feared. The lights did not go out.

The story of this success is one of much ingenuity, solidarity, sacrifice, and some luck. If Putin’s war continues into its second winter and into Europe’s third winter of gas shortages, Western countries will need even more ingenuity, solidarity, sacrifice, and luck.

Over 12 months, European countries achieved a remarkable energy pivot. First, they reduced their demand for gas. European natural-gas consumption in 2022 was estimated to be 12 percent lower than the average for the years 2019–21. More consumption cuts are forecast for 2023.

Weather helped. Europe’s winter of 2022–23 was, for the most part, a mild one. Energy substitution made a difference too. Germany produced 12 percent more coal-generated electricity in 2022 than in 2021. The slow recovery from the coronavirus pandemic in China helped as well. Chinese purchases of liquefied natural gas on world markets actually dropped by nearly 20 percent in 2022 from their 2021 level.

[David Frum: Putin’s big chill in Europe]

Second, European countries looked out for consumers, and for one another. European Union governments spent close to 800 billion euros ($860 billion) to subsidize fuel bills in 2022. The United Kingdom distributed an emergency grant of £400 ($500) a household to help with fuel costs. Germany normally reexports almost half of the gas it imports, and despite shortfalls at home through the crisis, it continued to reexport a similar proportion to EU partners.

Third, as European countries cut their consumption, they also switched their sources of supply. The star of this part of the story is Norway, which replaced Russia as Europe’s single largest gas supplier. During a recent visit to Oslo, I learned from energy experts that Norway rejiggered its offshore fields to produce less oil and more gas.

Norwegians also made sacrifices for their neighbors. Norway has an abundance of cheap hydroelectricity, and exports much of that power. During the 2022 energy crisis, those export commitments pushed up Norwegian households’ power bills and helped push down the approval ratings of Norway’s governing Labor Party by more than a quarter from their level at the beginning of that year. Nevertheless, the government steadfastly honored its electricity-export commitments (although it has now moved to place some restrictions on future exports).

Shipments of liquefied natural gas from the United States, the Persian Gulf, and West Africa, redirected from Asian markets, also contributed to European energy security. In December 2022, Germany opened a new gas-receiving terminal in Wilhelmshaven, near Bremen, which was completed at record speed, in fewer than 200 days. Two more terminals will begin operating in 2023.

The net result is that Russian gas exports fell by 25 percent in 2022. And since the painful record prices set in the months before the February 2022 invasion, the cost of gas in Europe has steeply declined.

Russian leaders had assumed that their pipelines to Europe would make the continent dependent on Russia. They apparently did not consider that the same pipelines also made Russia dependent on Europe. By contrast, only a single pipeline connects Russia to the whole of China, and it is less valuable to Putin—according to a study conducted by the Carnegie Endowment for International Peace, the gas it carries commands prices much lower than the gas Russia pipes to Europe.

To reach world markets, Russia will have to undertake the costly business of liquefying its gas. A regasification plant like the one swiftly constructed in Wilhelmshaven costs about $500 million. Germany’s three newly built terminals to receive liquefied natural gas will cost more than $3 billion. But the outbound terminals that liquefy the gas cost even more: $10.5 billion is the latest estimate for the next big project on the U.S. Gulf Coast. Russia depended on foreign investment and technology to compete in the liquefied-natural-gas market. Under Western sanctions, the flow of both investment and technology to Russia has been cut.

[Eliot A. Cohen: It’s not enough for Ukraine to win. Russia has to lose.]

Russia lacks the economic and technological oomph to keep pace with the big competitors in the liquefied-gas market, such as the U.S. and Qatar. In April, CNBC reported on a study by gas-industry consultants that projected growth of 50 percent for the liquefied-natural-gas market by 2030. The Russian share of that market will, according to the same study, shrink to 5 percent (from about 7 percent), even as the American share rises to 25 percent (from about 20 percent).

If the war in Ukraine continues through the next winter, Europe will have to overcome renewed difficulties. For example, Germany’s nuclear-power plants, which eased the shock last year, went offline forever in April. And this time, the winter might be colder. But gas production by non-Russian producers keeps rising, outpacing demand in the rest of the world. The Chinese economy continues its slow recovery from COVID; India lags as a gas buyer.

Risks are everywhere—but so are possibilities. When this war comes to an end, the lesson will be clear: We have to hasten the planet’s transition to a post-fossil-fuel future—not only to preserve our environment but to defend world peace against aggressors who use oil and gas as weapons. Yet perhaps the most enduring lesson is political. Through the energy shock, Europe discovered a new resource: the power of wisely led cooperation to meet and overcome a common danger.

What It Takes to Win a War

The Atlantic

https://www.theatlantic.com/books/archive/2023/06/ernie-pyle-world-war-ii-soldiers/674271/

Most war correspondents don’t become household names, but as the Second World War raged, every American knew Ernie Pyle. His great subject was not the politics of the war, or its strategy, but rather the men who were fighting it. At the height of his column’s popularity, more than 400 daily newspapers and 300 weeklies syndicated Pyle’s dispatches from the front. His grinning face graced the cover of Time magazine. An early collection of his columns, Here Is Your War, became a best seller. It was followed by Brave Men, rereleased this week by Penguin Classics with an introduction by David Chrisinger, the author of the recent Pyle biography The Soldier’s Truth.

Pyle was one of many journalists who flocked to cover the Second World War. But he was not in search of scoops or special access to power brokers; in fact, he avoided the generals and admirals he called “the brass hats.” What Pyle looked for, and then conveyed, was a sense of what the war was really like. His columns connected those on the home front to the experiences of loved ones on the battlefield in Africa, Europe, and the Pacific. For readers in uniform, Pyle’s columns sanctified their daily sacrifices in the grinding, dirty, bloody business of war. Twelve million Americans would read about what it took for sailors to offload supplies under fire on a beachhead in Anzio, or how gunners could shoot enough artillery rounds to burn through a howitzer’s barrel. Pyle wrote about what he often referred to as “brave men.” And his idea of courage wasn’t a grand gesture but rather the accumulation of mundane, achievable, unglamorous tasks: digging a foxhole, sleeping in the mud, surviving on cold rations for weeks, piloting an aircraft through flak day after day after day.

We’ve become skeptical of heroic narratives. Critics who dismiss Pyle as a real-time hagiographer of the Greatest Generation miss the point. Pyle was a cartographer, meticulously mapping the character of the Americans who chose to fight. If a person’s character becomes their destiny, the destiny of the American war effort depended on the collective character of Americans in uniform. Pyle barely touched on tactics or battle plans in his columns, but he wrote word after word about the plight of the average frontline soldier because he understood that the war would be won, or lost, in their realm of steel, dirt, and blood.

In the following passage, Pyle describes a company of American infantrymen advancing into a French town against German resistance:

They seemed terribly pathetic to me. They weren’t warriors. They were American boys who by mere chance of fate had wound up with guns in their hands, sneaking up a death-laden street in a strange and shattered city in a faraway country in a driving rain. They were afraid, but it was beyond their power to quit. They had no choice. They were good boys. I talked with them all afternoon as we sneaked slowly forward along the mysterious and rubbled street, and I know they were good boys. And even though they weren’t warriors born to the kill, they won their battles. That’s the point.

I imagine that when those words hit the U.S. in 1944, shortly after D-Day, readers found reassurance in the idea that those “good boys” had what it took to win the war, despite being afraid, and despite not really being warriors. However, today Pyle’s words hold a different meaning. They read more like a question, one now being asked about America’s character in an ever more dangerous world.

[Read: Notes from a cemetery]

The past two years have delivered a dizzying array of national-security challenges, including the U.S.’s decision to abandon Afghanistan to the Taliban, Russia’s war in Ukraine, and the possibility of a Chinese invasion of Taiwan. A rising authoritarian axis threatens the West-led liberal world order birthed after the Second World War. Much like when Pyle wrote 80 years ago, the character of a society—whether it contains “brave men” and “good boys” willing to defend democratic values—will prove determinative to the outcomes of these challenges.

The collapse of Afghanistan’s military and government came as a surprise to many Americans. That result cannot be fully explained by any lack of dollars, time, or resources expended. Only someone who understood the human side of war—as Pyle certainly did—could have predicted that collapse, in which the majority of Afghan soldiers surrendered to the Taliban. Conversely, in Ukraine, where most experts predicted a speedy Russian victory, the Ukrainians have defied expectations. The character of the Ukrainian people, which most observers didn’t fully appreciate, has been the driving factor.

Pyle often wrote in anecdotes, but his writing’s impact was anything but anecdotal. His style of combat realism, which eschews the macro and strategic for the micro and human, can be seen in today’s combat reporting from Ukraine. A new documentary film, Slava Ukraini, made by one of France’s most famous public intellectuals, Bernard-Henri Lévy, takes a Pyle-esque approach to last fall’s Ukrainian counteroffensive against the Russians. The film focuses on everyday Ukrainians and the courage they display for the sake of their cause. “And I’m amazed,” Lévy says, walking through a trench in eastern Ukraine, “that while weapons were not always their craft, these men are transformed into the bravest soldiers.”

Ernie Pyle at the front in 1944. (Bettmann / CORBIS / Getty)

War correspondents such as Thomas Gibbons-Neff at The New York Times and James Marson at The Wall Street Journal take a similar approach, with reporting that’s grounded in those specifics, which must inform any real understanding of strategy. The result is a style that’s indebted to Pyle and his concern with the soldiers’ morale and commitment to the cause, and reveals more than any high-level analyses could.

Pyle wasn’t the first to search for strategic truths about war in the granular reality of individual experiences. Ernest Hemingway, who didn’t cover the First World War as a correspondent but later reflected on it as a novelist, wrote in A Farewell to Arms:

There were many words that you could not stand to hear and finally only the names of the places had dignity. Certain numbers were the same way and certain dates and these with the names of the places were all you could say and have them mean anything. Abstract words such as glory, honor, courage, or hallow were obscene beside the concrete names of villages, the numbers of roads, the names of rivers, the numbers of regiments and the dates.

Pyle took this advice to heart when introducing characters in his columns. He would not only tell you a bit about a soldier, their rank, their job, and what they looked like; he would also make sure to give the reader their home address. “Here are the names of just a few of my company mates in that little escapade that afternoon,” he writes, after describing heavy combat in France. “Sergeant Joseph Palajsa, of 187 I Street, Pittsburgh. Pfc. Arthur Greene, of 618 Oxford Street, Auburn, Massachusetts …” He goes on to list more than a half dozen others. Pyle knew that “only the names of the places had dignity.” And sometimes those places were home.

As a combat reporter, Pyle surpassed all others working during the Second World War, outwriting his contemporaries, Hemingway included. This achievement was one of both style and commitment. Was there any reporter who saw more of the war than Pyle? He first shipped overseas in 1940, to cover the Battle of Britain. He returned to the war in 1942, to north Africa, and he went on to Italy, to France, and finally to the Pacific. On April 18, 1945, while accompanying a patrol on the island of Iejima, near Okinawa, Pyle was shot in the head by a sniper and killed instantly. His subject, war, finally consumed him.

[Read: The two Stalingrads]

Reading the final chapters of Brave Men, it seems as though Pyle’s subject was consuming him even before he left for Okinawa. “For some of us the war has already gone on too long,” he writes. “Our feelings have been wrung and drained.” Brave Men ends shortly after the liberation of Paris. The invasion of western Europe—which we often forget was an enormous gamble—had paid off. Berlin stood within striking distance. The war in Europe would soon be over. Pyle, however, remains far from sanguine.

“We have won this war because our men are brave, and because of many other things.” He goes on to list the contributions of America’s allies and the roles played by luck, by geography, and even by the passage of time. He cautions against hubris in victory and warns about the challenges of homecoming for veterans. “And all of us together will have to learn how to reassemble our broken world into a pattern so firm and so fair that another great war cannot soon be possible … Submersion in war does not necessarily qualify a man to be the master of the peace. All we can do is fumble and try once more—try out of the memory of our anguish—and be as tolerant with each other as we can.”

America’s Dysfunction Has Two Main Causes

The Atlantic

https://www.theatlantic.com/ideas/archive/2023/06/us-societal-trends-institutional-trust-economy/674260/

How has America slid into its current age of discord? Why has our trust in institutions collapsed, and why have our democratic norms unraveled?

All human societies experience recurrent waves of political crisis, such as the one we face today. My research team built a database of hundreds of societies across 10,000 years to try to find out what causes them. We examined dozens of variables, including population numbers, measures of well-being, forms of governance, and the frequency with which rulers are overthrown. We found that the precise mix of events that leads to crisis varies, but two drivers of instability loom large. The first is popular immiseration—when the economic fortunes of broad swaths of a population decline. The second, and more significant, is elite overproduction—when a society produces too many superrich and ultra-educated people, and not enough elite positions to satisfy their ambitions.

These forces have played a key role in our current crisis. In the past 50 years, despite overall economic growth, the quality of life for most Americans has declined. The wealthy have become wealthier, while the incomes and wages of the median American family have stagnated. As a result, our social pyramid has become top-heavy. At the same time, the U.S. began overproducing graduates with advanced degrees. More and more people aspiring to positions of power began fighting over a relatively fixed number of spots. The competition among them has corroded the social norms and institutions that govern society.

The U.S. has gone through this twice before. The first time ended in civil war. But the second led to a period of unusually broad-based prosperity. Both offer lessons about today’s dysfunction and, more important, how to fix it.

To understand the root causes of the current crisis, let’s start by looking at how the number of über-wealthy Americans has grown. Back in 1983, 66,000 American households were worth at least $10 million. That may sound like a lot, but by 2019, controlling for inflation, the number had increased tenfold. A similar, if smaller, upsurge happened lower on the food chain. The number of households worth $5 million or more increased sevenfold, and the number of mere millionaires went up fourfold.

On its surface, having more wealthy people doesn’t sound like such a bad thing. But at whose expense did elites’ wealth swell in recent years?

Starting in the 1970s, although the overall economy continued to grow, the share of that growth going to average workers began to shrink, and real wages leveled off. (It’s no coincidence that Americans’ average height—a useful proxy for well-being, economic and otherwise—stopped increasing around then too, even as average heights in much of Europe continued climbing.) By 2010, the relative wage (wage divided by GDP per capita) of an unskilled worker had nearly halved compared with mid-century. For the 64 percent of Americans who didn’t have a four-year college degree, real wages shrank in the 40 years before 2016.
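Turchin’s “relative wage” is defined here in passing: wage divided by GDP per capita. To make the “nearly halved” claim concrete, here is a minimal Python sketch; the wage and GDP figures in it are hypothetical, invented purely for illustration rather than drawn from the article or Turchin’s data.

```python
# Relative wage, as defined in the text: annual wage / GDP per capita.
# All numbers below are hypothetical, chosen only to show how the ratio
# can halve even while nominal wages rise roughly tenfold.

def relative_wage(annual_wage: float, gdp_per_capita: float) -> float:
    """Wage expressed as a fraction of GDP per capita."""
    return annual_wage / gdp_per_capita

mid_century = relative_wage(3_000, 2_000)   # hypothetical ~1950 economy: 1.50
recent = relative_wage(30_000, 40_000)      # hypothetical ~2010 economy: 0.75

print(f"mid-century: {mid_century:.2f}")
print(f"2010:        {recent:.2f}")
print(f"decline:     {1 - recent / mid_century:.0%}")  # 50% -- "nearly halved"
```

The point the ratio captures is distributional: a wage can grow tenfold in nominal terms and still lose ground against the economy as a whole.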

[From the December 2020 issue: The next decade could be even worse]

As wages diminished, the costs of owning a home and going to college soared. To afford an average house, a worker earning the median wage in 2016 had to log 40 percent more hours than she would have in 1976. And parents without a college degree had to work four times longer to pay for their children’s college.

Even college-educated Americans aren’t doing well across the board. They made out well in the 1950s, when fewer than 15 percent of 18-to-24-year-olds went to college, but not today, when more than 60 percent of high-school grads immediately enroll. To get ahead of the competition, more college graduates have sought out advanced degrees. From 1955 to 1975, the number of students enrolled in law school tripled, and from 1960 to 1970, the number of doctoral degrees granted at U.S. universities more than tripled. This was manageable in the post–World War II period, when the number of professions requiring advanced degrees shot up. But when the demand eventually subsided, the supply didn’t. By the 2000s, degree holders greatly outnumbered the positions available to them. The imbalance is most acute in the social sciences and humanities, but the U.S. hugely overproduces degrees even in STEM fields.

This is part of a broader trend. Compared with 50 years ago, far more Americans today have either the financial means or the academic credentials to pursue positions of power, especially in politics. But the number of those positions hasn’t increased, which has led to fierce competition.

Competition is healthy for society, in moderation. But the competition we are witnessing among America’s elites has been anything but moderate. It has created very few winners and masses of resentful losers. It has brought out the dark side of meritocracy, encouraging rule-breaking instead of hard work.

All of this has left us with a large and growing class of frustrated elite aspirants, and a large and growing class of workers who can’t make better lives for themselves.

The decades that have led to our present-day dysfunction share important similarities with the decades leading to the Civil War. Then as now, a growing economy served to make the rich richer and the poor poorer. The number of millionaires per capita quadrupled from 1800 to 1850, while the relative wage declined by nearly 50 percent from the 1820s to the 1860s, just as it has in recent decades. Biological data from the time suggest that the average American’s quality of life declined significantly. From 1830 to the end of the century, the average height of Americans fell by nearly two inches, and average life expectancy at age 10 decreased by eight years during approximately the same period.

This popular immiseration stirred up social strife, which could be seen in urban riots. From 1820 to 1825, when times were good, only one riot occurred in which at least one person was killed. But in the five years before the Civil War, 1855 to 1860, American cities experienced no fewer than 38 such riots. We see a similar pattern today. In the run-up to the Civil War, this frustration manifested politically, in part as anti-immigrant populism, epitomized by the Know-Nothing Party. Today this strain of populism has been resurrected by Donald Trump.

[From the January/February 2022 issue: Beware prophecies of civil war]

Strife grew among elites too. The newly minted millionaires of the 19th century, who made their money in manufacturing rather than through plantations or overseas trade, chafed under the rule of the southern aristocracy, as their economic interests diverged. To protect their budding industries, the new elites favored high tariffs and state support for infrastructure projects. The established elites—who grew and exported cotton, and imported manufactured goods from overseas—strongly opposed these measures. The southern slaveholders’ grip on the federal government, the new elites argued, prevented necessary reforms in the banking and transportation systems, which threatened their economic well-being.

As the elite class expanded, the supply of desirable government posts flattened. Although the number of U.S. representatives grew fourfold from 1789 to 1835, it had shrunk by mid-century, just as more and more elite aspirants received legal training—then, as now, the chief route to political office. Competition for political power intensified, as it has today.

Those were cruder times, and intra-elite conflict took very violent forms. In Congress, incidents and threats of violence peaked in the 1850s. The brutal caning that Representative Preston Brooks of South Carolina gave to Senator Charles Sumner of Massachusetts on the Senate floor in 1856 is the best-known such episode, but it was not the only one. In 1842, after Representative Thomas Arnold of Tennessee “reprimanded a pro-slavery member of his own party, two Southern Democrats stalked toward him, at least one of whom was armed with a bowie knife,” the historian Joanne Freeman recounts. In 1850, Senator Henry Foote of Mississippi pulled a pistol on Senator Thomas Hart Benton of Missouri. In another bitter debate, a pistol fell out of a New York representative’s pocket, nearly precipitating a shoot-out on the floor of Congress.

This intra-elite violence presaged popular violence, and the deadliest conflict in American history.

The victory of the North in the Civil War decimated the wealth and power of the southern ruling class, temporarily reversing the problem of elite overproduction. But workers’ wages continued to lag behind overall economic growth, and the “wealth pump” that redistributed their income to the elites never stopped. By the late 19th century, elite overproduction was back, new millionaires had replaced the defeated slave-owning class, and America had entered the Gilded Age. Economic inequality exploded, eventually peaking in the early 20th century. By 1912, the nation’s top wealth holder, John D. Rockefeller, had $1 billion, the equivalent of 2.6 million annual wages—100 times the corresponding figure for the top wealth holder in 1790.

Then came the New York Stock Exchange collapse of 1929 and the Great Depression, which had an effect similar to that of the Civil War: Thousands of economic elites were plunged into the commoner class. In 1925, there were 1,600 millionaires, but by 1950, fewer than 900 remained. The size of America’s top fortune remained stuck at $1 billion for decades, inflation notwithstanding. By 1982, the richest American had $2 billion, which was equivalent to “only” 93,000 annual wages.
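The “annual wages” yardstick in the two preceding paragraphs can be checked with a few lines of arithmetic. The Python sketch below backs out the average annual wage implied by each fortune; the fortune sizes and wage multiples come from the text, while the derived wages and the final ratio are back-of-the-envelope computations of mine, not figures from the article.

```python
# Top fortunes measured in average annual wages, per the text.
# Format: era -> (fortune in dollars, fortune expressed in annual wages)
fortunes = {
    "1912 (Rockefeller)": (1e9, 2_600_000),
    "1982 (richest American)": (2e9, 93_000),
}

for era, (dollars, wage_multiple) in fortunes.items():
    implied_wage = dollars / wage_multiple  # average annual wage the figures imply
    print(f"{era}: implied average annual wage ~ ${implied_wage:,.0f}")

# Output: roughly $385 for 1912 and $21,505 for 1982.
# The ratio of the two multiples, 2_600_000 / 93_000 (about 28), is how much
# smaller the 1982 top fortune was relative to ordinary incomes.
```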

[From the December 2019 issue: How America ends]

But here is where the two eras differed. Unlike in the post–Civil War period, real wages grew steadily in the mid-20th century. And high taxes on the richest Americans helped reverse the wealth pump. The tax rate on top incomes, which peaked during World War II at 94 percent, stayed above 90 percent all the way until the mid-1960s. Average height increased by a whopping three inches over roughly the first half of the 20th century. Life expectancy at age 10 increased by nearly a decade. By the 1960s, America had achieved a broad-based prosperity that was virtually unprecedented in human history.

The New Deal elites learned an important lesson from the disaster of the Civil War. The reversal of elite overproduction in both eras was similar in magnitude, but only after the Great Depression was it accomplished through entirely nonviolent means. The ruling class itself was an important part of this—or, at least, a prosocial faction of the ruling class, which persuaded enough of its peers to acquiesce to the era’s progressive reforms.

As the historian Kim Phillips-Fein wrote in Invisible Hands, executives and stockholders mounted an enormous resistance to the New Deal policies regulating labor–corporate relations. But by mid-century, a sufficient number of them had consented to the new economic order for it to become entrenched. They bargained regularly with labor unions. They accepted the idea that the state would have a role to play in guiding economic life and helping the nation cope with downturns. In 1943, the president of the U.S. Chamber of Commerce—which today pushes for the most extreme forms of neoliberal market fundamentalism—said, “Only the willfully blind can fail to see that the old-style capitalism of a primitive, free-shooting period is gone forever.” President Dwight Eisenhower, considered a fiscal conservative for his time, wrote to his brother:

Should any political party attempt to abolish social security, unemployment insurance, and eliminate labor laws and farm programs, you would not hear of that party again in our political history. There is a tiny splinter group, of course, that believes you can do these things … Their number is negligible and they are stupid.

Barry Goldwater ran against Lyndon Johnson in 1964 on a platform of low taxes and anti-union rhetoric. By today’s standards, Goldwater was a middle-of-the-road conservative. But he was regarded as radical at the time, too radical even for many business leaders, who abandoned his campaign and helped bring about his landslide defeat.

The foundations of this broad-based postwar prosperity—and of the ruling elite’s eventual acquiescence to it—were established during the Progressive era and buttressed by the New Deal. In particular, new legislation guaranteed unions’ right to collective bargaining, introduced a minimum wage, and established Social Security. American elites entered into a “fragile, unwritten compact” with the working classes, as the United Auto Workers president Douglas Fraser later described it. This implicit contract included the promise that the fruits of economic growth would be distributed more equitably among both workers and owners. In return, the fundamentals of the political-economic system would not be challenged. Avoiding revolution was one of the most important reasons for this compact (although not the only one). As Fraser wrote in his famous resignation letter from the Labor Management Group in 1978, when the compact was about to be abandoned, “The acceptance of the labor movement, such as it has been, came because business feared the alternatives.”

We are still suffering the consequences of abandoning that compact. The long history of human society compiled in our database suggests that America’s current economy is so lucrative for the ruling elites that achieving fundamental reform might require a violent revolution. But we have reason for hope. It is not unprecedented for a ruling class—with adequate pressure from below—to allow for the nonviolent reversal of elite overproduction. But such an outcome requires elites to sacrifice their near-term self-interest for our long-term collective interests. At the moment, they don’t seem prepared to do that.

This article has been adapted from Peter Turchin’s forthcoming book, End Times: Elites, Counter-Elites, and the Path of Political Disintegration.