Itemoids

Alabama Is Defying the Supreme Court on Voting Rights

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 07 › alabama-defies-voting-rights-act-supreme-court › 674850

Supreme Court rulings are meant to be the law of the land, but Alabama is taking its recent opinion on the Voting Rights Act as a mere recommendation. In an echo of mid-century southern defiance of school desegregation, the Yellowhammer State’s Republican-controlled legislature defied the conservative-dominated Court’s directive to redraw its congressional map with an additional Black-majority district.

Openly defying a Supreme Court order is rare—almost as rare as conservative justices recognizing that the Fifteenth Amendment outlaws racial discrimination in voting. Under Section 2 of the Voting Rights Act, states are sometimes required to draw districts with majority-minority populations. This requirement exists because after Reconstruction, one of the methods southern states used to disenfranchise their Black populations was racially gerrymandering congressional districts so that Black voters could not affect the outcome of congressional elections. Earlier this year, Alabama asked the Supreme Court to further weaken the Voting Rights Act so as to preserve its racial gerrymander.

[Read: A Supreme Court ruling that could tip the House]

More than a quarter of Alabama’s population is Black, but the state’s Republican majority has racially gerrymandered that population into a single district out of seven because it fears those voters might elect Democrats. The partisan motive is no excuse for racial discrimination—1870s Democrats also had a partisan interest in disenfranchising Black voters, who were then reliably Republican. After failing to get the Supreme Court to overturn Section 2, Alabama decided that following the law was optional.

Alabama’s open rejection of a Supreme Court ruling comes in the midst of a conservative campaign accusing liberals of “delegitimizing” the Court by criticizing its lurch to the right and the coziness of the Republican-appointed justices with billionaire political donors who have interests before the Court.

“This is another front in the political campaign to delegitimize the Supreme Court, with a goal of tarnishing its rulings and subjecting it to more political control,” The Wall Street Journal editorialized in May about Democratic hearings on potential ethics legislation. “Most of all, the Court is no longer a backstop legislature for progressives to impose policies they can’t get through Congress.”

Whatever else this Court may be, it can now be fairly described as a backstop legislature for conservatives to impose policies they cannot get through Congress. Also, the Court hasn’t had a liberal majority since the Nixon era, so conservative complaints that the Court was a “backstop legislature for progressives” are not an expression of opposition to “political control” over the Court, but a lament that Republican appointees possessed only a slim one-vote majority for most of that time, which meant they didn’t get their preferred outcomes as often as they wanted. And the way that the conservative movement seized the Court was precisely by “tarnishing its rulings” for more than a half century. At one point, the right-wing legal martyr and originalist Robert Bork was so frustrated by the Court being insufficiently conservative that he declared, “As our institutional arrangements now stand, the Court can never be made a legitimate element of a basically democratic polity.” In the right’s view, the judiciary was an “imperial judiciary,” an “out of control branch of government.”

Indeed, although it now accuses the Court’s liberal critics of “delegitimization,” the Journal defends the current Court by saying it is merely undoing the “legal mistakes of recent decades.” What the Roberts Court’s defenders truly fear is the political strength of a critique of the Court as overreaching and out of touch with the majority of the electorate, because as conservatives well understand, that is a critique that has the power to influence elections and ultimately shape the Court itself. They understand this because that is one reason the 6–3 right-wing majority on the Court came to be in the first place. This is why questioning the Court’s legal reasoning and sweeping power is a privilege that must be exclusively reserved for conservatives.

The fear is clearly not that rogue actors will ignore the Court’s rulings. If the pervasive right-wing alarm over liberal criticism of the Court as “delegitimizing” has been deafening, the conservative response to Alabama openly flouting the Court’s ruling has been muted. The Wall Street Journal’s editorial page, for example, so protective of the Court’s “legitimacy” when it comes to substantive public criticism, did not view Alabama’s refusal to obey the justices as an event worthy of comment.

One would think that verbal criticism of powerful institutions, an essential part of life in any democracy, would be less an act of “delegitimization” than an open challenge to the rule of law. But Alabama is defying the rule of law in pursuit of conservative causes—more Republicans in Congress; voiding constitutional prohibitions on racial discrimination—and so it’s fine.

[From the October 2022 issue: John Roberts’s long game]

All of this renders the Journal’s hand-wringing rather ironic: It is clearly the right that views the Court as a political instrument for imposing conservative policy, and when the Court fails to heed its obligation to do so, the right can simply ignore it. This is consistent with the movement’s Trumpist turn toward the belief that the legitimacy of any practice or institution—elections, fundamental freedoms, the state itself—is conferred not by the consent of the governed but by the consent of the right. You have inalienable access to the franchise as long as you vote Republican. You have free speech as long as you say conservative things. The free market is free only when it leads to conservative outcomes. The Supreme Court’s rulings are the law of the land, except if those rulings are not what conservatives want.

Alabama’s maps will likely be challenged in court. But one reason the state’s Republican leadership may feel comfortable with ignoring the justices in the first place is that Brett Kavanaugh and John Roberts were so clearly holding their noses in overturning a clear act of racial discrimination in voting that they might not be inclined to do it a second time. As Matt Ford reminds us, in striking down part of the Voting Rights Act in 2013, Roberts argued that “things have changed dramatically” in the South, and so those protections could be disregarded. That was naive at best then; Alabama is intent on illustrating why now.

Maybe Alabama is bluffing. Or maybe it simply doesn’t believe that someone like Roberts, who has been dreaming of gutting the Voting Rights Act since he was in his 20s, really means it. Or perhaps Alabama is reminding the Republican-appointed justices that the Court’s legitimacy depends on its obedience to the conservative movement, whose view is that the only legitimate outcomes—or laws, or governments, or presidents, or Supreme Court rulings—are conservative ones.

It is that position, and the Court’s reliable adherence to it, that has precipitated its loss of legitimacy. No liberal criticism could be as devastating to the Court’s credibility as the justices’ own actions, or the expectations of their defenders.

America Already Has an AI Underclass

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 07 › ai-chatbot-human-evaluator-feedback › 674805

On weekdays, between homeschooling her two children, Michelle Curtis logs on to her computer to squeeze in a few hours of work. Her screen flashes with Google Search results, the writings of a Google chatbot, and the outputs of other algorithms, and she has a few minutes to respond to each—judging the usefulness of the blue links she’s been provided, checking the accuracy of an AI’s description of a praying mantis, or deciding which of two chatbot-written birthday poems is better. She never knows in advance what she will have to assess, and for the AI-related tasks, which have formed the bulk of her work since February, she says she has little guidance and not enough time to do a thorough job.

Curtis is an AI rater. She works for the data company Appen, which is subcontracted by Google to evaluate the outputs of the tech giant’s AI products and search algorithm. Countless people do similar work around the world for Google; the ChatGPT-maker, OpenAI; and other tech firms. Their human feedback plays a crucial role in developing chatbots, search engines, social-media feeds, and targeted-advertising systems—the most important parts of the digital economy.

Curtis told me that the job is grueling, underpaid, and poorly defined. Whereas Google has a 176-page guide for search evaluations, the instructions for AI tasks are relatively sparse, she said. For every task she performs that involves rating AI outputs, she is given a few sentences or paragraphs of vague, even convoluted instructions and as little as a few minutes to fully absorb them before the time allotted to complete the task is up. Unlike a page of Google results, chatbots promise authoritative answers—offering the final, rather than first, step of inquiry—which Curtis said makes her feel a heightened moral responsibility to assess AI responses as accurately as possible. She dreads these timed tasks for the very same reason: “It’s just not humanly possible to do in the amount of time that we’re given.” On Sundays, she works a full eight hours. “Those long days can really wear on you,” she said.

Armughan Ahmad, Appen’s CEO, told me through a spokesperson that the company “complies with minimum wages” and is investing in improved training and benefits for its workers; a Google spokesperson said Appen is solely responsible for raters’ working conditions and job training. For Google to mention these people at all is notable. Despite their importance to the generative-AI boom and tech economy more generally, these workers are almost never referenced in tech companies’ prophecies about the ascendance of intelligent machines. AI moguls describe their products as forces akin to electricity or nuclear fission, like facts of nature waiting to be discovered, and speak of “maximally curious” machines that learn and grow on their own, like children. The human side of sculpting algorithms tends to be relegated to opaque descriptions of “human annotations” and “quality tests,” evacuated of the time and energy powering those annotations.

[Read: Google’s new search tool could eat the internet alive]

The tech industry has a history of veiling the difficult, exploitative, and sometimes dangerous work needed to clean up its platforms and programs. But as AI rapidly infiltrates our daily lives, tensions between tech companies framing their software as self-propelling and the AI raters and other people actually pushing those products along have started to surface. In 2021, Appen raters began organizing with the Alphabet Workers Union-Communications Workers of America to push for greater recognition and compensation; Curtis joined its ranks last year. At the center of the fight is a big question: In the coming era of AI, can the people doing the tech industry’s grunt work ever be seen and treated not as tireless machines but simply as what they are—human?  

The technical name for the use of such ratings to improve AI models is reinforcement learning with human feedback, or RLHF. OpenAI, Google, Anthropic, and other companies all use the technique. After a chatbot has processed massive amounts of text, human feedback helps fine-tune it. ChatGPT is impressive because using it feels like chatting with a human, but that pastiche does not naturally arise through ingesting data from something like the entire internet, an amalgam of recipes and patents and blogs and novels. Although AI programs are set up to be effective at pattern detection, they “don’t have any sense of contextual understanding, no ability to parse whether AI-generated text looks more or less like what a human would have written,” Sarah Myers West, the managing director of the AI Now Institute, an independent research organization, told me. Only an actual person can make that call.

The program might write multiple recipes for chocolate cake, which a rater ranks and edits. Those evaluations and examples will inform the chatbot’s statistical model of language and next-word predictions, which should make the program better at writing recipes in the style of a human, for chocolate cake and beyond. A person might check a chatbot’s response for factual accuracy, rate how well it fits the prompt, or flag toxic outputs; subject experts can be particularly helpful, and they tend to be paid more.

Using human evaluations to improve algorithmic products is a fairly old practice at this point: Google and Facebook have been using them for almost a decade, if not more, to develop search engines, targeted ads, and other products, Sasha Luccioni, an AI researcher at the machine-learning company Hugging Face, told me. The extent to which human ratings have shaped today’s algorithms depends on who you ask, however. Major tech companies that design and profit from search engines, chatbots, and other algorithmic products tend to characterize the raters’ work as only one among many important aspects of building cutting-edge AI products. Courtenay Mencini, a Google spokesperson, told me that “ratings do not directly impact or solely train our algorithms. Rather, they’re one data point … taken in aggregate with extensive internal development and testing.” OpenAI has emphasized that training on huge amounts of text, rather than RLHF, accounts for most of GPT-4’s capabilities.

[From the September 2023 issue: Does Sam Altman know what he’s creating?]

AI experts I spoke with outside these companies took a different stance. Targeted human feedback has been “the single most impactful change that made [current] AI models as good as they are,” allowing the leap from GPT-2’s half-baked emails to GPT-4’s convincing essays, Luccioni said. She and others argue that tech companies intentionally downplay the importance of human feedback. Such obfuscation “sockets away some of the most unseemly elements of these technologies,” such as hateful content and misinformation that humans have to identify, Myers West told me—not to mention the conditions the people work under. Even setting aside those elements, describing the extent of human intervention would risk dispelling the magical and marketable illusion of intelligent machines—a “Wizard of Oz effect,” Luccioni said.

Despite tech companies’ stated positions, digging into their own press statements and research papers about AI reveals that they frequently do acknowledge the value of this human labor, if in broad terms. A Google blog post promoting a new chatbot last year, for instance, said that “to create safer dialogue agents, we need to be able to learn from human feedback.” Google has similarly described human evaluations as necessary to its search engine. The company touts RLHF as “particularly useful” for applying its AI services to industries such as health care and finance. Two lead researchers at OpenAI similarly described human evaluations as vital to training ChatGPT in an interview with MIT Technology Review. The company stated elsewhere that GPT-4 exhibited “large improvements” in accuracy after RLHF training and that human feedback was crucial to fine-tuning it. Meta’s most recent language model, released this week, relies on “over 1 million new human annotations,” according to the company.

To some extent, the significance of humans’ AI ratings is evident in the money pouring into them. One company that hires people to do RLHF and data annotation was valued at more than $7 billion in 2021, and its CEO recently predicted that AI companies will soon spend billions of dollars on RLHF, similar to their investment in computing power. The global market for labeling data used to train these models (such as tagging an image of a cat with the label “cat”), another part of the “ghost work” powering AI, could reach nearly $14 billion by 2030, according to an estimate from April 2022, months before the ChatGPT gold rush began.

All of that money, however, rarely seems to be reaching the actual people doing the ghostly labor. The contours of the work are starting to materialize, and the few public investigations into it are alarming: Workers in Africa are paid as little as $1.50 an hour to check outputs for disturbing content that has reportedly left some of them with PTSD. Some contractors in the U.S. can earn only a couple of dollars above the minimum wage for repetitive, exhausting, and rudderless work. The pattern is similar to that of social-media content moderators, who can be paid a tenth as much as software engineers to scan traumatic content for hours every day. “The poor working conditions directly impact data quality,” Krystal Kauffman, a fellow at the Distributed AI Research Institute and an organizer of raters and data labelers on Amazon Mechanical Turk, a crowdsourcing platform, told me.

Stress, low pay, minimal instructions, inconsistent tasks, and tight deadlines—the sheer volume of data needed to train AI models almost necessitates a rush job—are a recipe for human error, according to Appen raters affiliated with the Alphabet Workers Union-Communications Workers of America and multiple independent experts. Documents obtained by Bloomberg, for instance, show that AI raters at Google have as little as three minutes to complete some tasks, and that they evaluate high-stakes responses, such as how to safely dose medication. Even OpenAI has written, in the technical report accompanying GPT-4, that “undesired behaviors [in AI systems] can arise when instructions to labelers were underspecified” during RLHF.

Tech companies have at times responded to these issues by stating that ratings are not the only way they check accuracy, that humans doing those ratings are paid adequately based on their location and afforded proper training, and that viewing traumatic materials is not a typical experience. Mencini, the Google spokesperson, told me that Google’s wages and benefits standards for contractors do not apply to raters, because they “work part-time from home, can be assigned to multiple companies’ accounts at a time, and do not have access to Google’s systems or campuses.” In response to allegations of raters seeing offensive materials, she said that workers “select to opt into reviewing sensitive content, and can opt out freely at any time.” The companies also tend to shift blame to their vendors—Mencini, for instance, told me that “Google is simply not the employer of any Appen workers.”  

[Read: The coming humanist renaissance]

Appen’s raters told me that their working conditions do not align with various tech companies’ assurances—and that they hold Appen and Google responsible, because both profit from their work. Over the past year, Michelle Curtis and other raters have demanded more time to complete AI evaluations, benefits, better compensation, and the right to organize. The job’s flexibility does have advantages, they told me. Curtis has been able to navigate her children’s medical issues; another Appen rater I spoke with, Ed Stackhouse, said the adjustable hours afford him time to deal with a heart condition. But flexibility does not justify low pay and a lack of benefits, Shannon Wait, an organizer with the AWU-CWA, told me; there’s nothing flexible about precarity.

The group made headway at the start of the year, when Curtis and her fellow raters received their first-ever raise. She now makes $14.50 an hour, up from $12.75—still below the minimum of $15 an hour that Google has promised to its vendors, temporary staff, and contractors. The union continued raising concerns about working conditions; Stackhouse wrote a letter to Congress about these issues in May. Then, just over two weeks later, Curtis, Stackhouse, and several other raters received an email from Appen stating, “Your employment is being terminated due to business conditions.”

The AWU-CWA suspected that Appen and Google were punishing the raters for speaking out.  “The raters that were let go all had one thing in common, which was that they were vocal about working conditions or involved in organizing,” Stackhouse told me. Although Appen did suffer a drop in revenue during the broader tech downturn last year, the company also had, and has, open job postings. Four weeks before the termination, Appen had sent an email offering cash incentives to work more hours and meet “a significant spike in jobs available since the beginning of year,” when the generative-AI boom was in full swing; just six days before the layoffs, Appen sent another email lauding “record-high production levels” and re-upping the bonus-pay offer. On June 14, the union filed a complaint with the National Labor Relations Board alleging that Appen and Google had retaliated against raters “by terminating six employees who were engaged in protected [labor] activity.”

Less than two weeks after the complaint was filed, Appen reversed its decision to fire Curtis, Stackhouse, and the others; their positions were reinstated with back pay. Ahmad, Appen’s CEO, told me in an email that his company bases “employment decisions on business requirements” and is “happy that our business needs changed and we were able to hire back the laid off contributors.” He added that “our policy is not to discriminate against employees due to any protected labor activities” and that “we’ve been actively investing in workplace enhancements like smarter training, and improved benefits.”

Mencini, the Google spokesperson, told me that “only Appen, as the employer, determines their employees’ working conditions,” and that “Appen provides job training for their employees.” As with compensation and training, Mencini deflected responsibility for the treatment of organizing workers as well: “We, of course, respect the labor rights of Appen employees to join a union, but it’s a matter between them and their employer, Appen.”

That AI purveyors would obscure the human labor undergirding their products is predictable. Much of the data that train AI models is labeled by people making poverty wages, many of them located in the global South. Amazon deliveries are cheap in part because working conditions in the company’s warehouses subsidize them. Social media is usable and desirable because of armies of content moderators also largely in the global South. “Cloud” computing, a cornerstone of Amazon’s and Microsoft’s businesses, takes place in giant data centers.

AI raters might be understood as an extension of that cloud, treated not as laborers with human needs so much as productive units, carbon transistors on a series of fleshly microchips—objects, not people. Yet even microchips take up space; they require not just electricity but also ventilation to keep from overheating. The Appen raters’ termination and reinstatement is part of “a more generalized pattern within the tech industry of engaging in very swift retaliation against workers” when they organize for better pay or against ethical concerns about the products they work on, Myers West, of the AI Now Institute, told me.

Ironically, one crucial bit of human labor that AI programs have proved unable to automate is their own training. Human subjectivity and prejudice have long made their way into algorithms, and those flaws mean machines may not be able to perfect themselves. Various attempts to train AI models with other AI models have bred further bias and worsened performance, though a few have shown limited success. “I can’t imagine that we will be able to replicate [human intervention] with current AI approaches,” Hugging Face’s Luccioni told me in an email; Ahmad said that “using AI to train AI can have dire consequences as it pertains to the viability and credibility of this technology.” The tech industry has so far failed to purge the ghosts haunting its many other machines and services—the people organizing on warehouse floors, walking out of corporate headquarters, unionizing overseas, and leaking classified documents. Appen’s raters are proving that, even amid the generative-AI boom, humanity may not be so easily exorcized.

Can Plants and Animals Lie?

The Atlantic

www.theatlantic.com › books › archive › 2023 › 07 › evolution-liars-of-nature-lixing-sun-review › 674808

Analyzing animals to better understand Homo sapiens has become a sub-genre in the field of science writing. Sometimes this works well, as when Sabrina Imbler uses a purple octopus, starving to death while incubating her eggs, as a metaphor for disordered eating in How Far the Light Reaches. But other times, the enterprise feels downright Procrustean: An author amasses a wide swath of animal activities, even down to the level of the cell, and describes them in ways that might give the impression that some human quality has an analogue out there in the biosphere.

The compression is most strained when the activity being explained is complex and quintessentially human, such as deception. This makes the enterprise of writing The Liars of Nature and the Nature of Liars, a recent book by Lixing Sun, especially difficult. Sun, a professor of animal behavior, ecology, and evolution at Central Washington University, presents a string of entertaining facts about the many ways plants and animals use trickery to survive. But the language is too casually anthropomorphic, undermining the points he’s trying to make about cheating in the natural world. At the outset, Sun acknowledges that ascribing intentionality to animal behavior is “neither easy nor necessary,” and admits that deceptive adaptations arise from the unplanned march of evolution. “Cheating flourishes in nature as a direct result of natural selection,” he writes. It also “serves as a potent selective force that drives evolution on its own.” But his word choices and awkward phrasing often leave the reader with only a partial understanding of how flora and fauna actually “lie” and “deceive.”

The behavior we think of as lying requires a more sophisticated kind of cognition than telling the truth does—and certainly a more sophisticated kind of cognition than a bird, flower, or fungus can muster. Years ago, when I was reporting an article about the science of deception, the psychologist Paul Ekman told me that a liar needs three qualities to succeed: the ability to think strategically, like a good chess player; the ability to read the needs of others, like a talented therapist; and the ability to manage emotions, like a grown-up. In other words, a good liar needs a high level of both cognitive and emotional intelligence.

And though lying—at least in humans—might be an ingenious skill, the evolution through which cheating arises in the natural world is not smart in the least. The way Sun describes each example makes it seem that plants and animals develop particular traits of fakery or mimicry out of some sort of mischievous, cunning impulse. But there’s nothing deliberate about evolutionary adaptation, which is the perpetuation of traits, arising randomly, that improve an organism’s chance of staying alive, reproducing, and passing along the same advantageous traits to the next generation. It’s a stupid process, lacking not only intentionality but also any kind of end goal beyond helping a particular organism survive and reproduce.      

Sun knows this; he’s a biology professor, after all. But the diction he deploys suggests otherwise. Sun, like others, falls into the science-writing trap of stretching biological phenomena to fit the contours of human understanding. As a result, the reader can be forgiven for coming away with the impression that cheating fauna and flora—right down to the level of bacteria—are clever enough to always have, as he puts it in one of his many awkward attempts to be engaging, a few new “ruses up their sleeves.”

According to Sun, nature’s cheats come in two varieties: liars and deceivers. Lying adheres to what he calls the First Law of Cheating: “falsifying messages in signaling.” Deceiving falls under his Second Law: “exploitation of biases, weaknesses, or deficits in the cognitive systems of another animal.”

But to illustrate these laws—indeed, even to define them—Sun uses terms that suggest intentionality; how else would you expect a lay audience to interpret active verbs like falsifying and exploiting? But such an interpretation would be misleading. As Sun himself notes, evolution is a mindless process, with no plan, no direction, nothing except the fluke of a selective advantage generated by a completely arbitrary genetic blip.

His colorful word choices—no doubt employed to make his writing more accessible and fun—seem to elide this randomness altogether. He uses terms such as imposters, hustlers, and con artists to describe the book’s animals, including Pacific tree frogs, and talks of the “tricks” that were “invented” by the duplicitous cuckoo. He uses words like these to characterize cheats that don’t even have nervous systems, such as the South American passion flower. It has evolved spots on its leaves that resemble the eggs of the Heliconius butterfly, which cause the butterfly to avoid laying its clutch there and instead move on to a plant that looks unoccupied—a handy deflection, what Sun calls a “most peculiar form of plant mimicry,” because newly hatched Heliconius caterpillars can be quite destructive to the passion flower. As Sun chooses to describe it, those yellow spots arose because “these plants can ‘read the mind’ of Heliconius butterflies.” But in truth, no plant ESP is involved; those fake-out butterfly eggs are the result of a random mutation in leaf coloration that turned out to have a benefit for the mutant plant. And using scare quotes around “read the mind” doesn’t get the author off the hook. (Nor do scare quotes help when he describes the offspring of the normally monogamous dark-eyed junco as “birds born out of ‘wedlock.’”)

If you’re looking for a compilation of weird facts about the splendors of the natural world—something that’s been called the “gee-whiz” approach to science writing—this book delivers. It is an often-enlightening collection of some stunning acts of animal trickery that have evolved in the brutal Darwinian struggle for survival. Dropping some of the tidbits you pick up here would make you the hit of the cocktail party. Did you know, for instance, that the stripes of a zebra create a confusing effect called “razzle dazzle,” and that their advantage may be in keeping away tsetse flies? Then there’s the cuckoo, one of the most notorious fakes on the planet, which deposits its own egg in the nest of one of as many as a dozen other bird species, tossing out one of the eggs that belong there and forcing the deceived mother to hatch and nurture the cuckoo’s newborn instead. And the sphinx caterpillar emits a whistle that mimics the alarm call of one of its predators, the chickadee, causing the chickadee to abandon its would-be meal and dive for cover.

In the last chapters of the book, Sun tries to draw a straight line from these many organisms to humans. We can, he writes, “use our newly acquired evolutionary understanding of how cheaters operate to design novel strategies to combat cheating in our society.” But that seems a fool’s errand, missing much of the sophistication that deception in our own species entails. Yes, Sun acknowledges that cheating in humans is much richer in “scale, variety, intricacy, [and] novelty” than it is in other animals. And yes, he attributes this richness to our unique language ability, greater intelligence, and complex social structure. But as he attempts to cover a wide range of human deception—including Ponzi schemes, lies on social media, and infidelity—he doesn’t actually have anything fresh or illuminating to offer.

Sun’s desire to draw lessons from the natural world about human lies has a subtler danger: Other living things are not just furry or feathery little proto-people, and comparing their behavior to ours risks underappreciating them as the amazing organisms they are. The astounding variety of the biosphere, evolved through random mutations over the course of eons, is reason enough to marvel at all the ways that certain surprising conduct has helped plants and animals thrive. So my recommendation is to read The Liars of Nature and the Nature of Liars for its great cocktail-party fodder, and to leave lessons about human deception to the psychologists, neuroscientists, and philosophers.

A New Kind of Fascism

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 07 › trump-second-term-isolationist-fascism › 674791

For some years, a variety of news commentators and academics have called Donald Trump a fascist. I was one of those who resisted using that term. I thought it had long been abused by casual, imprecise applications, and as a historian of Nazi Germany, I did not think Trumpism was anywhere close to crossing the threshold of that comparison. I still deny that Trump’s presidency was fascist; but I’m concerned that if he wins another trip to the White House, he could earn the label.

Fascism was most fully exemplified by the regimes of Benito Mussolini and Adolf Hitler. These regimes combined totalitarian dictatorship, wars of imperial conquest, and outright genocide in the case of Hitler (of Jews, Slavs, Roma) or ethnic mass murder in Mussolini’s case (of Libyans, Ethiopians, Slovenes). Placing Trumpism in the same category seemed to me trivializing and misleading.

I argued instead that Trump was more like Hungary’s Viktor Orbán or Turkey’s Recep Tayyip Erdoğan than Hitler or Mussolini, and should be categorized as an “illiberal populist” rather than a fascist. And in one very important respect, Trump differed sharply from the European fascists of the interwar period.

They were ardent militarists and imperialists. War was the crucible in which the new fascist man was to be forged; territorial expansion was both the means and the end of fascist power and triumph. Trump has shown little ambition to pursue such aims. In his first term, he shamelessly abased himself before Russian President Vladimir Putin, exchanged “love letters” with North Korea’s Kim Jong Un, signed the Doha Agreement with the Taliban committing the U.S. to withdrawal from Afghanistan, and petulantly sought to downgrade U.S. treaty obligations to NATO and South Korean allies that he deemed to be “delinquent” and getting a “free ride.”

Trump has continued in the same isolationist vein in recent interviews and speeches. He has railed against “globalists.” He has promised to settle the Russian-Ukrainian conflict in 24 hours by cutting off aid to Kyiv if President Volodymyr Zelensky does not reach an immediate settlement with Moscow—that is, capitulate to Putin. He has disparaged Taiwan as a predator nation that stole microchip manufacturing from the U.S. (That Chinese President Xi Jinping would construe the simultaneous abandonment of Ukraine and dismissal of Taiwan as anything other than a green light to invade the latter seems improbable.)

[Christopher R. Browning: How Hitler’s enablers undid democracy in Germany]

No question, Trump inflicted grave damage on our country’s political culture, stoking toxic polarization and reveling in dishonesty. And Trumpism did exhibit distinct elements of the fascist style of politics: the inflammatory rallies; the incessant mongering of fear, grievance, and victimization; the casual endorsement of violence; the pervasive embrace of conspiracy theories; the performative cruelty; the feral instinct for targeting marginalized and vulnerable minorities; and the cult of personality. But the Trump presidency lacked any warlike, expansionist interest, and that made it decisively unlike 20th-century fascism.

Thankfully, also, Trump himself was too lazy, inexperienced, and unprepared to set about systematically constructing a true dictatorship. The main focus of the Trump presidency was less plans and programs and more the theatrics of satisfying his constant, insatiable need for attention and adulation. Everything—whether the state of the economy or the chocolate cake served to China’s Xi Jinping at Mar-a-Lago—had to be extolled as “the greatest ever.”

Until the final weeks of Trump’s term, the guardrails of American democracy seemed to hold firm. The institutions of the federal government remained relatively intact, and civil servants largely secure and uncorrupted. The United States experienced democratic backsliding but not democratic collapse.

In a second term, however, a newly emboldened Trump could well attack democracy itself. The MAGA Republican Party of his making has openly explored ways to transform states where Republicans control all branches of government. States that were once pluralistic democracies with at least some chance of a transfer of power are coming to resemble one-party regimes directed by a minority of the population. (Anne Applebaum’s report from Tennessee is a case in point.)

In Florida, Governor Ron DeSantis, Trump’s putative rival for the 2024 Republican nomination, has turned his state into a laboratory for testing how a determined, calculating, uninhibited authoritarian can maximize executive power. In many respects, he has already accomplished at the state level what Trump did not have the discipline and focus to do at the federal level. And DeSantis has created a blueprint for other Republican state leaders to follow.

[Shadi Hamid: Americans are losing sight of what fascism means]

Just as state Republicans have become more ruthlessly autocratic in their methods, a new Trump presidency would be much more efficiently goal-oriented at the federal level. A huge transformation of the administrative state is being deliberately planned. The government agencies and civil service he has decried as the “deep state” would be purged or politicized, and the “retribution” he has promised against his enemies would also be carried out. The “unitary executive” theory long promoted by some Republicans would become the reality of an unabashed authoritarianism.

The very last months of the Trump presidency foreshadowed what a second term would entail. When formerly loyal vassals such as Attorney General William Barr and Defense Secretary Mark Esper demonstrated that they would not cross the line into unconstitutional insurgency, Trump sought sycophants for whom no such line existed. In a new Trump administration, total devotion to the leader would be the sole qualification for appointment.

Unlike previous fascist leaders with their cult of war, Trump still offers appeasement to dictators abroad, but he now promises something much closer to dictatorship at home. For me, what Trump is offering for his second presidency will meet the threshold, and the label I’d choose to describe it would be “isolationist fascism.” Until now, such a concept would have been an oxymoron, a historical phenomenon without precedent. Trump continues to break every mold.