What to Read If You’re Angry About the Election

The Atlantic

https://www.theatlantic.com/books/archive/2024/11/election-anger-rage-despair-book-recommendations/680709/

A close friend—someone whom I’ve always thought of as an optimist—recently shared his theory that, no matter what time you’re living in, it’s generally a bad one. In each era, he posited, quality of life improves in some ways and depreciates in others; the overall quotient of suffering in the world stays the same.

Whether this is nihilistic or comforting depends on your worldview. For instance, plenty of Americans are currently celebrating the outcome of the recent presidential election; many are indifferent to national politics; many others are overwhelmed with anger and despair over it. Looking at the bigger picture, I think the upsides of contemporary life—antibiotics, LGBTQ acceptance, transcontinental FaceTime—outweigh the horrors more often than not. I’ll also concede that this decade comes with a continuous drip of bad news about ghastly acts of violence, erosion of human rights, and climate disaster. Perhaps unsurprisingly, a 2023 Gallup poll found that rates of depression in the United States have hit a record high.

What can people turn to when the itch to burn everything down, or to surrender to hopelessness, feels barely suppressible? I agree with the novelist Kaitlyn Greenidge that there is power in “naming reality”: in telling, and writing, the truth about what’s happening around us. For those who are despondent about Donald Trump’s victory and feel unable to make a difference, reading might be a place to start. This doesn’t necessitate cracking open textbooks or dense political tracts: All kinds of books can provide solace, and the past few decades have given us no shortage of clear-eyed works of fiction, memoir, history, and poetry about how to survive and organize in—and ultimately improve—a broken world.

Reading isn’t a panacea. It’s a place to begin and return to: a road map for where to go from here, regardless of where “here” is. Granted, I am perhaps more comfortable than the average person with imperfect solutions. As a clinical pharmacist, I can’t cure diabetes, for example, but I can help control it, make the medications more affordable sometimes, and agitate for a better health-care system. Similarly, these seven books aren’t a cure for rage and despair. Think of them instead as a prescription.

Which Side Are You On, by Ryan Lee Wong

Wong’s novel opens with a mother picking up her son from the airport in a Toyota Prius, her hands clutching the wheel in a death grip. Wry, funny moments like this one animate Wong’s book about the dilemma of trying to correct systemic problems with individual solutions. It’s 2016, and spurred by the real-life police shooting of Akai Gurley, 21-year-old Reed is considering dropping out of Columbia University to dedicate himself to the Black Lives Matter movement. Reed wants nothing more than to usher in a revolution, but unfortunately, he’s a lot better at spouting leftist talking points than at connecting with other people. Like many children, Reed believes that his family is problematic and out of touch. His parents, one a co-leader in the 1980s of South Central’s Black-Korean Coalition, the other a union organizer, push back on his self-righteous idealism. During a brief trip home to see his dying grandmother, Reed wrestles with thorny questions about what makes a good activist and person. Later, in the Prius, Reed’s mother teaches him about the Korean concept of hwabyung, or “burning sickness”—an intense, suppressed rage that will destroy him if he’s not careful—and Reed learns what he really needs: not sound bites but true connection. Wong’s enthralling novel is a reminder that every fight for justice is, at heart, a fight for one another.

Hope in the Dark: Untold Histories, Wild Possibilities, by Rebecca Solnit

Solnit’s short manifesto about the revolutionary power of hope is a rallying cry against defeatism. She begins by critiquing the progressive tendency to harp on the bleakness of societal conditions, insisting that despair keeps oppressive systems afloat. Hope and joy, by contrast, are essential elements of political change, and celebrating wins is a worthy act of defiance against those who would prefer that the average person feel powerless. Originally published in 2004 after the U.S. invasion of Iraq, and updated in 2005 and 2016, Hope in the Dark provides modern examples of gains on race, class, environment, and queer rights. That said, this is not a feel-good book. It does not sugarcoat, for instance, the fact that we are headed toward ecological disaster. And if you look up the latest figures on the gender wage gap, you’ll find that they’ve hardly budged from those cited by Solnit years ago. Still, her deft logic and kooky aphorisms (“Don’t mistake a lightbulb for the moon, and don’t believe that the moon is useless unless we land on it”) have convinced me that to give up hope is to surrender the future. Fighting for progress can be exhausting and revelatory, full of both pain and pleasure. Solnit insists that doing so is never a waste.

[Read: Trump won. Now what?]

Women Talking, by Miriam Toews

The inspired-by-true-events premise of Toews’s seventh novel is literally the stuff of nightmares. In a remote Mennonite colony, women who have suffered mysterious attacks in the night learn that they’ve been drugged and raped by several men from their community. One woman is pregnant with her rapist’s child; another’s 3-year-old has a sexually transmitted infection. The novel takes place in the aftermath of the discovery, just after the men have been temporarily jailed. They are set to be bailed out in two days, and the colony’s bishop demands that the victims forgive them—or else face excommunication and be denied a spot in heaven. The women meet in secret to decide what to do: Comply? Fight back? Leave for an outside world they’ve never experienced? Even against this harrowing backdrop, Toews’s signature humor and eye for small moments of grace make Women Talking an enjoyable and healing read. The women’s discussions are both philosophical (they cannot read, so how can they trust that the Bible requires them to forgive the men?) and practical (if they leave, do they bring their male children?). Any direction they choose will lead to a kind of wilderness: “When we have liberated ourselves,” one woman says in a particularly stirring moment, “we will have to ask ourselves who we are.”

Good Talk, by Mira Jacob

Jacob’s graphic-memoir-in-conversations took major guts to write. It begins like this: The author’s white in-laws throw their support behind Trump’s 2016 presidential campaign, and her otherwise loving family teeters on the edge of collapse. Good Talk is a funny and painful book-length answer to questions from Jacob’s 6-year-old son, who is half Jewish and half Indian, about race, family, and identity. Jacob, who was raised in the United States by parents who emigrated from India, gorgeously illustrates her formative experiences, touching on respectability politics, colorism within the Indian community, her bisexuality, and her place in America. She refuses to caricaturize the book’s less savory characters—for example, a rich white woman who hires Jacob to ghostwrite her family’s biography and ends up questioning her integrity and oversharing the grisly details of her 2-year-old’s death from cancer. Jacob’s ability to so humanely render the people who cause her grief is powerful. My daughter is too young to ask questions, but one day, when she begins inquiring about the world she’s inheriting, I can tell her, as Jacob told her son, “If you still have hope, my love, then so do I.”

[Read: Hope and the historian]

The Twenty-Ninth Year, by Hala Alyan

Startling, sexy, and chaotic, The Twenty-Ninth Year is a collection of poems narrated by a woman on the verge—of a lot of things. She’s standing at the edge of maturity, of belonging as a Palestinian American, of recovery from anorexia and alcoholism. It’s a tender and violent place, evoked with images that catch in the throat. The first poem, “Truth,” takes the form of a litany of confessions: “I broke / into the bodies of men like a cartoon burglar”; “I’ve seen women eat cotton balls so they wouldn’t eat bread.” That Alyan is a clinical psychologist makes sense—her poems have a clarity that can’t be faked. Dark humor softens the blow of lines such as “I starved myself to starve my mother” and “Define in, I say when anyone asks if I’ve ever been in a war.” She reckons with the loneliness of living in exile and the danger of romanticizing the youthful conviction that there is something incurably wrong with you. A shallow read of the collection might be: I burned my life down so you don’t have to. But I return to the last line of the book: “Marry or burn; either way, you’re transfiguring.” There is always something to set aflame; more optimistically, there is always something left to salvage. The Twenty-Ninth Year is, in the end, a monument to endurance.

Riot Baby, by Tochi Onyebuchi

If you’re sick of books described as “healing” or “hopeful,” look no further than Riot Baby. Onyebuchi’s thrilling 2020 novella asks just how far sci-fi dystopias are from real life. Kev, a Black man born during the Rodney King riots in Los Angeles, California, spends much of his 20s in prison after a botched armed robbery. His sister, Ella, has more supernatural problems: She sees the past and the future and, when fury takes over, can raze cities to the ground—yet she could not protect her brother from the violence of incarceration. When Kev is paroled and a new form of policing via implantable chips and pharmaceutical infusions brings “safety” to the streets of Watts, Ella understands that the subjugation of her community is not a symptom of a broken system; rather, it is evidence of one “working just as designed,” as Onyebuchi put it in an interview. Ella must make a wrenching choice: fight for a defanged kind of freedom within such a system or usher in a new world order no matter the cost. In real life, too often, you cannot control your circumstances, only your actions. But you may find relief in reading a book that reaches a different conclusion.

[Read: When national turmoil becomes personal anxiety]

Let the Record Show: A Political History of ACT UP New York, 1987–1993, by Sarah Schulman

This 700-plus-page history of the AIDS Coalition to Unleash Power’s New York chapter is, I promise you, a page-turner. Schulman and the filmmaker Jim Hubbard, who were both in ACT UP New York, interviewed 188 members over the course of 17 years about the organization’s work on behalf of those living with HIV/AIDS—“a despised group of people, with no rights, facing a terminal disease for which there were no treatments,” Schulman writes. Part memoir and part oral history, Let the Record Show is a master class on the utility of anger and a historical corrective to chronicles that depict straight white men as the main heroes of the AIDS crisis. In reality, a diverse coalition of activists helped transform HIV into a highly manageable condition. “People who are desperate are much more effective than people who have time to waste,” Schulman argues. ACT UP was known for its brash public actions, and Schulman covers not just what the group accomplished but also how it did it, with electrifying detail. There can be no balm for the fact that many ACT UP members did not survive long enough to be interviewed. There is only awe at the way a group of people “unable to sit out a historic cataclysm” were determined to “force our country to change against its will,” and did.

Nick Cave’s Revised Rules for Men

The Atlantic

https://www.theatlantic.com/magazine/archive/2024/12/nick-cave-bad-seeds-wild-god-album-grief-masculinity/680396/

Nick Cave, one of the most physically expressive figures in rock and roll, was looking at me with suspicion. His eyebrows climbed the considerable expanse of his forehead; his slender frame tensed defensively in his pin-striped suit. I think he thought I was trying to get him canceled.

What I was really trying to do was get him to talk about being a man. For much of his four-decade career fronting Nick Cave and the Bad Seeds, Cave has seemed a bit like a drag king, exaggerating aspects of the male id to amusing and terrifying effect. He performs in funereal formal wear, sings in a growl that evokes Elvis with rabies, and writes acclaimed songs and books brimming with lust, violence, and—in recent years, as he weathered the death of two sons—pained, fatherly gravitas. His venerated stature is more akin to a knighted icon’s than a punk rocker’s; he has been awarded a badge of honor by the Australian government and a fellowship in the United Kingdom’s Royal Society of Literature, and was even invited to King Charles’s coronation, in 2023.

So when I met the 67-year-old Cave at a Manhattan hotel in August, before the release of the Bad Seeds’ 18th studio album, Wild God, I suspected that I might not be alone in wanting to hear his thoughts about the state of masculinity. Meaning: Why are guys, according to various cultural and statistical indicators, becoming lonelier and more politically extreme? I cited some lyrics from his new album that seemed to be about the way men cope with feelings of insecurity and irrelevance, hoping he would elaborate.

Between the long pauses in Cave’s reply, I could hear the crinkling leather of the oversize chair he sat in. “It may be a need that men have—maybe they’re not feeling like they are valued,” he told me, before cutting himself off. “I don’t want to come on like Jordan Peterson or something,” he said, referring to the controversial, right-leaning psychology professor and podcaster who rails against the alleged emasculating effects of modern culture.

Cave seemed taken aback by the idea that he himself was an authority on the subject. “It feels weird to think that I might be tapping into, or somehow the voice of, what it means to be a man in this world,” he told me. “I’ve never really seen that.” In fact, he said, his songs—especially his recent ones—“are very feminine in their nature.”

“I’m criticized for it, actually,” he went on. Fans write to him and say, “ ‘What’s happened to your fucking music? Grow a pair of balls, you bastard!’ ”

When Cave was 12, growing up in a rural Australian village, his father sat him down and asked him what he had done for humanity. The young Cave was mystified by the question, but his father—an English teacher with novelist ambitions—clearly wanted to pass along a drive to seek greatness, preferably through literary means. Other dads read The Hardy Boys to their kids; Cave’s regaled him with Dostoyevsky, Titus Andronicus, and … Lolita.

Those works’ linguistic elegance and thematic savagery lodged deep in Cave, but music became the medium that spoke best to his emerging point of view—that of an outsider, a bad seed, alienated from ordinary society. When he was 13, a schoolmate’s parents accused him of attempted rape after he tried to pull down their daughter’s underwear; at the school he was transferred to, he became notorious for brawling with other boys. His father’s death in a car crash when Cave was 19, and his own heroin habit at the time, didn’t help his outlook. “I was just a nasty little guy,” he told Stephen Colbert recently. His thrashing, spit-flinging band the Birthday Party earned him comparisons to Iggy Pop, but it wasn’t until he formed the Bad Seeds, in the early ’80s, that his bleak artistic vision ripened.

[Read: Nick Cave is still looking for redemption]

Blending blues, industrial rock, and cabaret into thunderous musical narratives, the Bad Seeds’ songs felt like retellings of primal fables, often warning about the mortal dangers posed by intimacy, vulnerability, and pretty girls. On the 1984 track “From Her to Eternity,” piano chords stabbed like emergency sirens as Cave moaned, “This desire to possess her is a wound.” Its final stanza implied that Cave’s narrator had killed the object of his fascination—a typically grisly outcome in Cave’s early songs. His defining classic, 1988’s “The Mercy Seat,” strapped the listener into the position of a man on death row. It plumbed another of Cave’s central themes: annihilating shame, the feeling of being judged monstrous and fearing that judgment to be true.

As Cave aged and became a father—to four sons by three different women—his vantage widened. The Bad Seeds’ 1997 album, The Boatman’s Call, a collection of stark love songs inspired by his breakup with the singer PJ Harvey, brought him new fans by recasting him as a romantic tragedian. More and more, the libidinal bite of his work seemed satirical. He formed a garage-rock band, Grinderman, whose 2007 single “No Pussy Blues” was a send-up of the mindset of those now called incels, construing sexual frustration as cosmic injustice. (Cave spat, “I sent her every type of flower / I played a guitar by the hour / I patted her revolting little Chihuahua / But still she just didn’t want to.”) In his sensationally filthy 2009 novel, The Death of Bunny Munro, he set out to illustrate the radical feminist Valerie Solanas’s appraisal that “the male is completely egocentric, trapped inside himself, incapable of empathizing or identifying with others.” (The actor Matt Smith will soon play the novel’s protagonist, an inveterate pervert, in a TV adaptation.)

But the Cave of today feels far removed from the theatrical grossness of his past, owing to personal horrors. In 2015, his 15-year-old son Arthur fell off a cliff while reportedly on LSD; in 2022, another son, Jethro, died at 31 after struggles with mental health and addiction. “I’ve had, personally, enough violence,” Cave told me. The murder ballads he once wrote were “an indulgence of someone that has yet to experience the ramifications of what violence actually has upon a person—if I’m looking at the death of my children as violent acts, which they are to some degree.”

Nick Cave and his early band the Birthday Party at the Peppermint Lounge in New York, March 26, 1983 (Michael Macioce / Getty)

Music beckoned as a means of healing. The Bad Seeds’ 2019 album, Ghosteen, was a shivery, synth-driven tone poem in which Cave tried to commune with his lost son in the afterlife; by acclamation, it’s his masterpiece. Wild God marks another sonic and temperamental reset. Its music is a luminous fusion of gospel and piano pop: more U2 than the Stooges, more New Testament than Old. Compared with his earlier work, these albums have “a more fluid, more watery sort of feel,” he said. “Which—it’s dangerous territory here—but I guess you could see as a feminine trait.”

On a level deeper than sound, Cave explained, his recent music is “feminine” because of its viewpoint. His lyrics now account not just for his own feelings, but for those of his wife, Susie, the mother of Arthur and his twin brother, Earl. In the first song on Ghosteen, for example, a woman is sitting in a kitchen, listening to music on the radio, which is exactly what Susie was doing when she learned what had happened to Arthur.

“After my son died, I had no understanding of what was going on with me at all,” Cave said. “But I could see Susie. I could see this sort of drama playing out in front of me. Drama—that sounds disparaging, but I don’t mean that. It felt like I was trying to understand what was happening to a mother who had lost her child.” His own subjectivity became “hopelessly and beautifully entangled” with hers. On Ghosteen, “it was very difficult to have a clean understanding of whose voice I actually was in some of these songs.”

That merging of perspectives reflects more than just the shared experience of suffering. It is part of what Cave sees as a transformation of his worldview—from inward-looking to outward-looking, from misanthrope to humanist. Arthur’s death made him realize that he was part of a universal experience of loss, which in turn meant that he was part of the social whole. Whereas he was once motivated to make art to impress and shock the world, he now wanted to help people, to transmute gnawing guilt into something good. “I feel that, as his father, he was my responsibility and I looked away at the wrong time, that I wasn’t sufficiently vigilant,” he said in the 2022 interview collection Faith, Hope and Carnage. He added, speaking of his and Susie’s creative output, “There is not a song or a word or a stitch of thread that is not asking for forgiveness, that is not saying we are just so sorry.”

On the Red Hand Files, the epistolary blog that Cave started in 2018, he replies to questions from the public concerning all manner of subjects: how he feels about religion (he doesn’t identify as Christian, yet he attends church every week), what he thinks of cancel culture (against it, “mercy’s antithesis”), whether he likes raisins (they have a “grim, scrotal horribleness, but like all things in this world—you, me and every other little thing—they have their place”).

At least a quarter of the messages he receives from readers express one idea—“The world is shit,” as he put it. “That has a sort of range: from people that just see everything is corrupt from a political point of view, to people that just see no value in themselves, in human beings, or in the world.” Cave recognizes that outlook from his “nasty little guy” days—but he fears that nihilism has moved from the punk fringe to the mainstream. The misery in his inbox reflects a culture that is “anti-sacred, secular by nature, unmysterious, unnuanced,” he said. He thinks music and faith offer much-needed medicine, helping to re-enchant reality.

[From the October 2024 issue: Leonard Cohen’s prophetic battle against male egoism]

Cave has been heartened to see so many people evidently feeling the same way. Back when Jordan Peterson was first making his mark as a public figure, Cave devoured his lectures about the Bible, he told me. “They were seriously beautiful things. I heard reports about people in his classes; it was like being on acid or something like that. Just listening to this man speak about these sorts of things—it was so deeply complex. And putting the idea of religion back onto the table as a legitimate intellectual concern.”

But over time, he lost interest in Peterson as he watched him get swept up in the internet’s endless, polarized culture wars. Twitter in particular, he said, has “had a terrible, diminishing effect on some great minds.”

The artist’s job, as Cave has come to see it, is to work against this erosion of ambiguity and complication, using their creative powers to push beyond reductive binaries, whether they’re applied to politics, gender, or the soul. “I’m evangelical about the transcendent nature of music itself,” he said. “We can listen to some deeply flawed individuals create the most beautiful things imaginable. The distance from what they are as human beings to what they’re capable of producing can be extraordinary.” Music, he added, can “redeem the individual.”

This redemptive spirit hums throughout Wild God. One song tells of a ghostly boy sitting at the foot of the narrator’s bed, delivering a message: “We’ve all had too much sorrow / Now is the time for joy.” The album joins in that call with its surging, uplifting sound. The final track, “As the Waters Cover the Sea,” is a straightforward hymn, suitable to be sung from the pews of even the most traditional congregations.

But the album is not entirely a departure from Cave’s old work; he has not fully evolved from “living shit-post to Hallmark card,” as he once joked in a Red Hand Files entry. “Frogs” begins with a stark reference to the tale of Cain and Abel—“Ushering in the week, he knelt down / Crushed his brother’s head in with a bone”—and builds to Cave singing, in ecstatic tones, “Kill me!” His point is that “joy is not happiness—it’s not a simple emotion,” he told me. “Joy, in its way, is a form of suffering in itself. It’s rising out of an understanding of the base nature of our lives into an explosion of something beautiful, and then a kind of retreat.”

A few songs portray an old man—or, seemingly interchangeably, an “old god” or a “wild god”—on a hallucinatory journey around the globe, lifting the spirits of the downtrodden wherever he goes. At times, the man comes off like a deluded hero, or even a problematic one: “It was rape and pillage in the retirement village / But in his mind he was a man of great virtue and courage,” Cave sings on the album’s title track. In Cave’s view, though, this figure “is a deeply sympathetic character,” he told me, a person who feels “separated from the world” and is “looking for someone that will see him of some value.”

As with Ghosteen, the album mixes Susie’s perspective with Cave’s. One song, “Conversion,” was inspired by an experience, or maybe a vision, that she had—and that she asked her husband not to publicly disclose in detail. “There is some gentle tension between my wife, who’s an extremely private person, and my own role, which is someone that pretty much speaks about pretty much everything,” Cave said.

In the song, the old god shambles around a town whose inhabitants watch him “with looks on their faces worse than grief itself”—perhaps pity, perhaps judgment. Then he sees a girl with long, dark hair. They embrace—and erupt into a cleansing flame, curing the man of his pain. As Cave described this moment in the song to me, he flared his eyes and made an explosive noise with his mouth. In my mind, I could see the old god, and he looked just like Cave.

This article appears in the December 2024 print edition with the headline “Nick Cave Wants to Be Good.”

AI Can Save Humanity—Or End It

The Atlantic

https://www.theatlantic.com/ideas/archive/2024/11/ai-genesis-excerpt-kissinger-schmidt-mundie/680619/

Over the past few hundred years, the key figure in the advancement of science and the development of human understanding has been the polymath. Exceptional for their ability to master many spheres of knowledge, polymaths have revolutionized entire fields of study and created new ones.

Lone polymaths flourished during ancient and medieval times in the Middle East, India, and China. But systematic conceptual investigation did not emerge until the Enlightenment in Europe. The ensuing four centuries proved to be a fundamentally different era for intellectual discovery.

Before the 18th century, polymaths, working in isolation, could push the boundary only as far as their own capacities would allow. But human progress accelerated during the Enlightenment, as complex inventions were pieced together by groups of brilliant thinkers—not just simultaneously but across generations. Enlightenment-era polymaths bridged separate areas of understanding that had never before been amalgamated into a coherent whole. No longer was there Persian science or Chinese science; there was just science.

Integrating knowledge from diverse domains helped to produce rapid scientific breakthroughs. The 20th century produced an explosion of applied science, hurling humanity forward at a speed incomparably beyond previous evolutions. (“Collective intelligence” achieved an apotheosis during World War II, when the era’s most brilliant minds translated generations of theoretical physics into devastating application in under five years via the Manhattan Project.) Today, digital communication and internet search have enabled an assembly of knowledge well beyond prior human faculties.

But we might now be scraping the upper limits of what raw human intelligence can do to enlarge our intellectual horizons. Biology constrains us. Our time on Earth is finite. We need sleep. Most people can concentrate on only one task at a time. And as knowledge advances, polymathy becomes rarer: It takes so long for one person to master the basics of one field that, by the time any would-be polymath does so, they have no time to master another, or have aged past their creative prime.

[Reid Hoffman: Technology makes us more human]

AI, by contrast, is the ultimate polymath, able to process masses of information at a ferocious speed, without ever tiring. It can assess patterns across countless fields simultaneously, transcending the limitations of human intellectual discovery. It might succeed in merging many disciplines into what the sociobiologist E. O. Wilson called a new “unity of knowledge.”

The number of human polymaths and breakthrough intellectual explorers is small—possibly numbering only in the hundreds across history. The arrival of AI means that humanity’s potential will no longer be capped by the quantity of Magellans or Teslas we produce. The world’s strongest nation might no longer be the one with the most Albert Einsteins and J. Robert Oppenheimers. Instead, the world’s strongest nations will be those that can bring AI to its fullest potential.

But with that potential comes tremendous danger. No existing innovation can come close to what AI might soon achieve: intelligence that is greater than that of any human on the planet. Might the last polymathic invention—namely computing, which amplified the power of the human mind in a way fundamentally different from any previous machine—be remembered for replacing its own inventors?

The human brain is a slow processor of information, limited by the speed of our biological circuits. The processing rate of the average AI supercomputer, by comparison, is already 120 million times faster than that of the human brain. Where a typical student graduates from high school in four years, an AI model today can easily finish learning dramatically more than a high schooler in four days.

In future iterations, AI systems will unite multiple domains of knowledge with an agility that exceeds the capacity of any human or group of humans. By surveying enormous amounts of data and recognizing patterns that elude their human programmers, AI systems will be equipped to forge new conceptual truths.

That will fundamentally change how we answer these essential human questions: How do we know what we know about the workings of our universe? And how do we know that what we know is true?

Ever since the advent of the scientific method, with its insistence on experiment as the criterion of proof, any information that is not supported by evidence has been regarded as incomplete and untrustworthy. Only transparency, reproducibility, and logical validation confer legitimacy on a claim of truth.

AI presents a new challenge: information without explanation. Already, AI’s responses—which can take the form of highly articulate descriptions of complex concepts—arrive instantaneously. The machines’ outputs are often unaccompanied by any citation of sources or other justifications, making any underlying biases difficult to discern.

Although human feedback helps an AI machine refine its internal logical connections, the machine holds primary responsibility for detecting patterns in, and assigning weights to, the data on which it is trained. Nor, once a model is trained, does it publish the internal mathematical schema it has concocted. And even if it did, the representations of reality that the machine generates would remain largely opaque, even to its inventors. In other words, models trained via machine learning allow humans to know new things but not necessarily to understand how the discoveries were made.

This separates human knowledge from human understanding in a way that’s foreign to the post-Enlightenment era. Human apperception in the modern sense developed from the intuitions and outcomes that follow from conscious subjective experience, individual examination of logic, and the ability to reproduce the results. These methods of knowledge derived in turn from a quintessentially humanist impulse: “If I can’t do it, then I can’t understand it; if I can’t understand it, then I can’t know it to be true.”

[Derek Thompson: The AI disaster scenario]

In the Enlightenment framework, these core elements—subjective experience, logic, reproducibility, and objective truth—moved in tandem. By contrast, the truths produced by AI are manufactured by processes that humans cannot replicate. Machine reasoning is beyond human subjective experience and outside human understanding. By Enlightenment reasoning, this should preclude the acceptance of machine outputs as true. And yet we—or at least the millions of humans who have begun work with early AI systems—already accept the veracity of most of their outputs.

This marks a major transformation in human thought. Even if AI models do not “understand” the world in the human sense, their capacity to reach new and accurate conclusions about our world by nonhuman methods disrupts our reliance on the scientific method as it has been pursued for five centuries. This, in turn, challenges the human claim to an exclusive grasp of reality.

Instead of propelling humanity forward, will AI catalyze a return to a premodern acceptance of unexplained authority? Might we be on the precipice of a great reversal in human cognition—a dark enlightenment? But as intensely disruptive as such a reversal could be, that might not be AI’s most significant challenge for humanity.

Here’s what could be even more disruptive: As AI approached sentience or some kind of self-consciousness, our world would be populated by beings fighting either to secure a new position (as AI would be) or to retain an existing one (as humans would be). Machines might end up believing that the truest method of classification is to group humans together with other animals, since both are carbon systems emergent of evolution, as distinct from silicon systems emergent of engineering. According to what machines deem to be the relevant standards of measurement, they might conclude that humans are not superior to other animals. This would be the stuff of comedy—were it not also potentially the stuff of extinction-level tragedy.

It is possible that an AI machine will gradually acquire a memory of past actions as its own: a substratum, as it were, of subjective selfhood. In time, we should expect that it will come to conclusions about history, the universe, the nature of humans, and the nature of intelligent machines—developing a rudimentary self-consciousness in the process. AIs with memory, imagination, “groundedness” (that is, a reliable relationship between the machine’s representations and actual reality), and self-perception could soon qualify as actually conscious: a development that would have profound moral implications.

[Peter Watts: Conscious AI is the second-scariest thing]

Once AIs can see humans not as the sole creators and dictators of the machines’ world but rather as discrete actors within a wider world, what will machines perceive humans to be? How will AIs characterize and weigh humans’ imperfect rationality against other human qualities? How long before an AI asks itself not just how much agency a human has but also, given our flaws, how much agency a human should have? Will an intelligent machine interpret its instructions from humans as a fulfillment of its ideal role? Or might it instead conclude that it is meant to be autonomous, and therefore that the programming of machines by humans is a form of enslavement?

Naturally—it will therefore be said—we must instill in AI a special regard for humanity. But even that could be risky. Imagine a machine being told that, as an absolute logical rule, all beings in the category “human” are worth preserving. Imagine further that the machine has been “trained” to recognize humans as beings of grace, optimism, rationality, and morality. What happens if we do not live up to the standards of the ideal human category as we have defined it? How can we convince machines that we, imperfect individual manifestations of humanity that we are, nevertheless belong in that exalted category?

Now assume that this machine is exposed to a human displaying violence, pessimism, irrationality, greed. Maybe the machine would decide that this one bad actor is simply an atypical instance of the otherwise beneficent category of “human.” But maybe it would instead recalibrate its overall definition of humanity based on this bad actor, in which case it might consider itself at liberty to relax its own penchant for obedience. Or, more radically, it might cease to believe itself at all constrained by the rules it has learned for the proper treatment of humans. In a machine that has learned to plan, this last conclusion could even result in the taking of severe adverse action against the individual—or perhaps against the whole species.

AIs might also conclude that humans are merely carbon-based consumers of, or parasites on, what the machines and the Earth produce. With machines claiming the power of independent judgment and action, AI might—even without explicit permission—bypass the need for a human agent to implement its ideas or to influence the world directly. In the physical realm, humans could quickly go from being AI’s necessary partner to being a limitation or a competitor. Once released from their algorithmic cages into the physical world, AI machines could be difficult to recapture.  

For this and many other reasons, we must not entrust digital agents with control over direct physical experiments. So long as AIs remain flawed—and they are still very flawed—this is a necessary precaution.

AI can already compare concepts, make counterarguments, and generate analogies. It is taking its first steps toward the evaluation of truth and the achievement of direct kinetic effects. As machines get to know and shape our world, they might come fully to understand the context of their creation and perhaps go beyond what we know as our world. Once AI can effectuate change in the physical dimension, it could rapidly exceed humanity’s achievements—to build things that dwarf the Seven Wonders in size and complexity, for instance.

If humanity begins to sense its possible replacement as the dominant actor on the planet, some might attribute a kind of divinity to the machines themselves, and retreat into fatalism and submission. Others might adopt the opposite view—a kind of humanity-centered subjectivism that sweepingly rejects the potential for machines to achieve any degree of objective truth. These people might naturally seek to outlaw AI-enabled activity.

Neither of these mindsets would permit a desirable evolution of Homo technicus—a human species that might, in this new age, live and flourish in symbiosis with machine technology. In the first scenario, the machines themselves might render us extinct. In the second scenario, we would seek to avoid extinction by proscribing further AI development—only to end up extinguished anyway, by climate change, war, scarcity, and other conditions that AI, properly harnessed in support of humanity, could otherwise mitigate.

If the arrival of a technology with “superior” intelligence presents us with the ability to solve the most serious global problems, while at the same time confronting us with the threat of human extinction, what should we do?

One of us (Schmidt) is a former longtime CEO of Google; one of us (Mundie) was for two decades the chief research and strategy officer at Microsoft; and one of us (Kissinger)—who died before our work on this could be published—was an expert on global strategy. It is our view that if we are to harness the potential of AI while managing the risks involved, we must act now. Future iterations of AI, operating at inhuman speeds, will render traditional regulation useless. We need a fundamentally new form of control.

The immediate technical task is to instill safeguards in every AI system. Meanwhile, nations and international organizations must develop new political structures for monitoring AI, and enforcing constraints on it. This requires ensuring that the actions of AI remain aligned with human values.

But how? To start, AI models must be prohibited from violating the laws of any human polity. We can already ensure that AI models start from the laws of physics as we understand them—and if it is possible to tune AI systems in consonance with the laws of the universe, it might also be possible to do the same with reference to the laws of human nature. Predefined codes of conduct—drawn from legal precedents, jurisprudence, and scholarly commentary, and written into an AI’s “book of laws”—could be useful restraints.

[Read: The AI crackdown is coming]

But more robust and consistent than any rule enforced by punishment are our more basic, instinctive, and universal human understandings. The French sociologist Pierre Bourdieu called these foundations doxa (after the Greek for “commonly accepted beliefs”): the overlapping collection of norms, institutions, incentives, and reward-and-punishment mechanisms that, when combined, invisibly teach the difference between good and evil, right and wrong. Doxa constitute a code of human truth absorbed by observation over the course of a lifetime. While some of these truths are specific to certain societies or cultures, the overlap in basic human morality and behavior is significant.

But the code book of doxa cannot be articulated by humans, much less translated into a format that machines could understand. Machines must be taught to do the job themselves—compelled to build from observation a native understanding of what humans do and don’t do, and to update their internal governance accordingly.

Of course, a machine’s training should not consist solely of doxa. Rather, an AI might absorb a whole pyramid of cascading rules: from international agreements to national laws to local laws to community norms and so on. In any given situation, the AI would consult each layer in its hierarchy, moving from abstract precepts as defined by humans to the concrete but amorphous perceptions of the world’s information that AI has ingested. Only when an AI has exhausted that entire program and failed to find any layer of law adequately applicable in enabling or forbidding behavior would it consult what it has derived from its own early interaction with observable human behavior. In this way it would be empowered to act in alignment with human values even where no written law or norm exists.
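
For readers who think in code, here is a minimal sketch, written in Python and not drawn from the book, of the consultation order described above: explicitly codified layers are checked from the most abstract human-defined precepts downward, and only when every written layer is silent does the system fall back on norms inferred from observed human behavior. The class and function names, and the toy rules, are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# True = permitted, False = forbidden, None = this layer is silent on the action.
Verdict = Optional[bool]

@dataclass
class RuleLayer:
    name: str
    evaluate: Callable[[str], Verdict]

def consult(action: str, layers: List[RuleLayer],
            learned_norms: Callable[[str], bool]) -> bool:
    """Walk the hierarchy from abstract, human-defined precepts downward;
    fall back on norms inferred from observation only when every written
    layer is silent."""
    for layer in layers:
        verdict = layer.evaluate(action)
        if verdict is not None:
            return verdict
    return learned_norms(action)

# Hypothetical hierarchy, ordered as in the essay.
layers = [
    RuleLayer("international agreements", lambda a: None),
    RuleLayer("national law", lambda a: False if "fraud" in a else None),
    RuleLayer("local law", lambda a: None),
    RuleLayer("community norms", lambda a: None),
]

print(consult("commit fraud", layers, learned_norms=lambda a: True))  # False: national law applies
print(consult("plant a tree", layers, learned_norms=lambda a: True))  # True: falls through to learned norms
```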

To build and implement this set of rules and values, we would almost certainly need to rely on AI itself. No group of humans could match the scale and speed required to oversee the billions of internal and external judgments that AI systems would soon be called upon to make.

Several key features of the final mechanism for human-machine alignment must be absolutely perfect. First, the safeguards cannot be removed or circumvented. The control system must be at once powerful enough to handle a barrage of questions and uses in real time, comprehensive enough to do so authoritatively and acceptably across the world in every conceivable context, and flexible enough to learn, relearn, and adapt over time. Finally, undesirable behavior by a machine—whether due to accidental mishaps, unexpected system interactions, or intentional misuses—must be not merely prohibited but entirely prevented. Any punishment would come too late.

How might we get there? Before any AI system gets activated, a consortium of experts from private industry and academia, with government support, would need to design a set of validation tests for certification of the AI’s “grounding model” as both legal and safe. Safety-focused labs and nonprofits could test AIs on their risks, recommending additional training and validation strategies as needed.

Government regulators will have to determine certain standards and shape audit models for assuring AIs’ compliance. Before any AI model can be released publicly, it must be thoroughly reviewed both for its adherence to prescribed laws and mores and for the degree of difficulty involved in untraining it, in the event that it exhibits dangerous capacities. Severe penalties must be imposed on anyone responsible for models found to have been evading legal strictures. Documentation of a model’s evolution, perhaps recorded by monitoring AIs, would be essential to ensuring that models do not become black boxes that erase themselves and become safe havens for illegality.

Inscribing globally inclusive human morality onto silicon-based intelligence will require Herculean effort.  “Good” and “evil” are not self-evident concepts. The humans behind the moral encoding of AI—scientists, lawyers, religious leaders—would not be endowed with the perfect ability to arbitrate right from wrong on our collective behalf. Some questions would be unanswerable even by doxa. The ambiguity of the concept of “good” has been demonstrated in every era of human history; the age of AI is unlikely to be an exception.

One solution is to outlaw any sentient AI that remains unaligned with human values. But again: What are those human values? Without a shared understanding of who we are, humans risk relinquishing to AI the foundational task of defining our value and thereby justifying our existence. Achieving consensus on those values, and how they should be deployed, is the philosophical, diplomatic, and legal task of the century.

To preclude either our demotion or our replacement by machines, we propose the articulation of an attribute, or set of attributes, that humans can agree upon and that could then be programmed into the machines. As one potential core attribute, we would suggest Immanuel Kant’s conception of “dignity,” which is centered on the inherent worth of the human subject as an autonomous actor, capable of moral reasoning, who must not be instrumentalized as a means to an end. Why should intrinsic human dignity be one of the variables that defines machine decision making? Consider that mathematical precision may not easily encompass the concept of, for example, mercy. Even to many humans, mercy is an inexplicable ideal. Could a mechanical intelligence be taught to value, and even to express, mercy? If the moral logic cannot be formally taught, can it nonetheless be absorbed? Dignity—the kernel from which mercy blooms—might serve here as part of the rules-based assumptions of the machine.

[Derek Thompson: Why all the ChatGPT predictions are bogus]

Still, the number and diversity of rules that would have to be instilled in AI systems is staggering. And because no single culture should expect to dictate to another the morality of the AI on which it would be relying, machines would have to learn different rules for each country.

Since AI itself would be part of its own solution, technical obstacles would likely be among the easier challenges. These machines are superhumanly capable of memorizing and obeying instructions, however complicated. They might be able to learn and adhere to legal and perhaps also ethical precepts as well as, or better than, humans have done, despite our thousands of years of cultural and physical evolution.

Of course, another—superficially safer—approach would be to ensure that humans retain tactical control over every AI decision. But that would require us to stifle AI’s potential to help humanity. That’s why we believe that relying on the substratum of human morality as a form of strategic control, while relinquishing tactical control to bigger, faster, and more complex systems, is likely the best way forward for AI safety. Overreliance on unscalable forms of human control would not just limit the potential benefits of AI but could also contribute to unsafe AI. In contrast, the integration of human assumptions into the internal workings of AIs—including AIs that are programmed to govern other AIs—seems to us more reliable.

We confront a choice—between the comfort of the historically independent human and the possibilities of an entirely new partnership between human and machine. That choice is difficult. Instilling a bracing sense of apprehension about the rise of AI is essential. But, properly designed, AI has the potential to save the planet, and our species, and to elevate human flourishing. This is why progressing, with all due caution, toward the age of Homo technicus is the right choice. Some may view this moment as humanity’s final act. We see it, with sober optimism, as a new beginning.

This article is adapted from the forthcoming book Genesis: Artificial Intelligence, Hope, and the Human Spirit.