Why Geneva Must Apply in Gaza

The Atlantic

www.theatlantic.com/ideas/archive/2023/10/israel-hamas-conflict-geneva-convention-compliance/675798

We are again witnessing what a world without pity looks like. The invasion of Ukraine; Russia’s murder of civilians in Bucha; Hamas killers filming themselves murdering women, children, and retirees in kibbutz gardens; the pulverization of Gaza and mass civilian casualties. We’ve been catapulted backwards into the lawless universe Bruegel painted centuries ago in his Massacre of the Innocents.

When the internationally sanctioned system of rules collapses, the legal and ethical norms that regulate individual conduct begin to founder. Justifications of violence are hurled around, with all the righteousness that goes with identity claims and group loyalty. Mere bystanders rush to judgment in service of their prior political certainties.

Amid this moral storm, we have one piece of wreckage to cling to. The Geneva Conventions still figure prominently on both sides of the propaganda battle over the current conflict in Gaza, suggesting that they retain some vestigial authority. Hamas’s supporters cite these laws of war to justify the group’s actions, and to claim that Israel violates the rules. For its part, Israel insists that it abides by them, because its military lawyers and commanders adhere to the principles of proportionality and distinction, and take measures to avoid civilian casualties.

If this is the shred of law that is left in a lawless world, the question is: Why should Israel obey it when its enemies do not?

[Franklin Foer: The horror of Bucha]

The four Geneva Conventions, ratified in 1949 after the world’s last excursion into wholesale violence, established a de minimis code that accepted violent combat as a normal instrument in human affairs but sought at least to limit its horror.

The Geneva Conventions are law made for hell, the work of Swiss and European lawyers who’d watched the worst that human beings had done in World War II. Their conventions—especially the fourth, on the protection of civilians—command combatants to observe the principle of distinction, which confines fighting to soldiers and keeps civilians out of it. The conventions also require that violence be kept proportional to a military objective. And they prohibit starving civilians or depriving them of water; attacking hospitals or civilian medical workers; taking hostages; raping women; expelling populations from conquered territory; and destroying homes, churches, synagogues, mosques, and schools without an overriding military purpose. To do any of these things is to commit a war crime.

The conventions evoke an idea as old as the chivalric values of the European Middle Ages, the Bushido code of the Japanese samurai, and the rules of lawful holy war in Islam. Warriors are not supposed to be butchers or brigands. Soldiers are unworthy of their uniform if they violate women, steal from civilians, brutalize prisoners, or use gratuitous violence in the exercise of arms. From the creation of the Red Cross after the Battle of Solferino in 1859 to the Lieber Code governing the conduct of the Union Army in the American Civil War to the Hague Convention of 1907, lawyers working for states on all sides have codified the ideals of a warrior’s honor. That legacy has turned the Geneva Conventions into the most universally ratified body of international law we have.

[From the May 2005 issue: Fighting terrorism with torture]

The conventions give no one an alibi. The Geneva laws clearly distinguish between jus ad bellum (a country’s right to fight in self-defense, for example) and jus in bello (how a country conducts that fighting). The crux of this is that however legitimate self-defense may be, it can never justify barbarism.

A Palestinian may argue that Israel’s unjust blockade of Gaza and previous military actions, which give grounds for armed resistance, also justify the massacre of civilians at a music festival and in their homes and the taking of hostages. An Israeli may argue that Hamas’s atrocities in the October 7 attack justify the flattening of Gaza, despite the inevitable civilian deaths this entails. The Geneva Conventions say both positions are wrong. Nothing justifies the infliction of violence on noncombatants, neither a gruesome massacre in the desert nor the cruel confinement of civilians in Gaza.

When states ratify the conventions, it means, in principle, that superior officers face a charge of command responsibility for war crimes committed pursuant to their orders. The drafters of the law recognized that there are terrorists, insurgents, and irregular forces who don’t wear uniforms and don’t answer to a state or its laws. They were aware that terrorists will place weapons and forces close to civilian facilities such as hospitals in an attempt to exploit the reluctance of law-abiding forces to strike such targets. But the fact that one side games the rules does not relieve the other of the obligation to obey them. The Geneva Conventions are not voided in the absence of reciprocity between combatants.

Which brings us to the nightmare of Gaza. Both Israel and Palestine (an entity with nonmember observer status in the United Nations that includes the West Bank, Gaza, and East Jerusalem) have ratified the four conventions; as the governing authority in Gaza, Hamas is bound to observe them. For its part, Israel has a legion of lawyers well versed in these rules, and it has always distinguished itself from adversaries by its status as a democracy. Because of that democracy, Israel’s army is politically accountable to its citizens, its military lawyers are accountable to a civilian Supreme Court, and its soldiers are accountable for war crimes in a court of law. Israel’s image as a bastion of democracy in a region of autocracies is intimately linked to its claim that its armed forces obey the conventions.

Under both the UN charter and the Geneva Conventions, Israel has a legitimate ad bellum goal: It was attacked by Hamas, and it has the right to defend itself, including by going into Gaza to prevent the enemy from attacking again. Israel’s difficulties begin with the in bello stipulations of the conventions. Besides discrimination in targeting, the conventions demand proportionality, which requires Israelis to minimize collateral damage to hospitals, schools, and civilian infrastructure. But because Hamas is likely to co-locate its men and matériel near what the conventions call “protected” objects, the conventions do allow Israel to strike civilian targets—but only when there is no other way to achieve a necessary military objective.

Israel has allowed aid convoys into Gaza; it has warned civilians of impending air strikes and urged mass evacuation from combat zones. Despite these gestures of compliance, the watching world can see on its screens, every hour of the day, the flattened streets and houses, the rescue workers and ambulances, the bloodied civilians borne into overcrowded hospitals. What we do not know is the extent to which Israel is successfully targeting Hamas military personnel and assets.

[David Lapan: War-crimes pardons dishonor fallen heroes]

Respecting Geneva goes beyond good intentions and the mere gestures of legal compliance; it means ensuring that the actual results of military action comply with the conventions. Judging those results will depend on how many of the dead will turn out to have been Hamas fighters and how many were noncombatant civilians.

Some Israeli citizens, shocked and wounded to their very core by the October 7 attack, dismiss the law of Geneva as absurd in the face of an enemy whose stated aim is to destroy their state and murder Jews. Responding to its people’s rage, the Israeli government has demanded the resignation of the UN secretary-general after he said that Hamas’s attack “did not happen in a vacuum” and called on Israel to stick to Geneva’s rules despite “the appalling attacks by Hamas.” But that higher standard is what the Israeli government has signed up for, and what its military says it lives by.

For Israel, the war is a battle against a deadly and unprincipled enemy. But it is also a test of its values as a nation. Israel’s physical survival as a state is at stake, but so is its moral identity and its political reputation in the world. It can throw Geneva out the window in its attempt to destroy Hamas, but if it abandons the law, it may do lasting damage to its national ethos and international standing.

Adhering to Geneva is essential if Israel is to achieve its strategic objectives—which are to eliminate Hamas, not the Palestinian people, and to establish security on its borders. Israel’s leaders cannot achieve these objectives through military means alone. At some point, war must be followed by politics. If Israel adheres to the Geneva rules, that will help it attain its long-term political goals. If at least some Palestinians can see restraint in action, that will show them that Israel makes a crucial distinction between Hamas and the population it rules in Gaza.

Israel needs to send this signal, because it will need partners to help it rebuild Gaza after the shooting stops: It will need the Saudis and the Gulf States if it is to have any chance of securing peace on its borders. Not just the crushing of Hamas but the conduct of the war itself will determine what kind of peace is possible.

The law of Geneva is the sole remaining framework of moral universalism in which two peoples can acknowledge that they are both human beings. Holding on to Geneva, despite all the temptations to do otherwise, is a vital element of the politics that could lead, eventually, to peace.

What If There’s a Secret Benefit to Getting Asian Glow?

The Atlantic

www.theatlantic.com/science/archive/2023/10/alcohol-flush-asian-genetic-mutation-cause/675759

At every party, no matter the occasion, my drink of choice is soda water with lime. I have never, not once, been drunk—or even finished a full serving of alcohol. The single time I came close to doing so (thanks to half a serving of mulled wine), my heart rate soared, the room spun, and my face turned stop-sign red … all before I collapsed in front of a college professor at an academic event.

The blame for my alcohol aversion falls fully on my genetics: Like an estimated 500 million other people, most of them of East Asian descent, I carry a genetic mutation called ALDH2*2 that causes me to produce broken versions of an enzyme called aldehyde dehydrogenase 2, preventing my body from properly breaking down the toxic components of alcohol. And so, whenever I drink, all sorts of poisons known as aldehydes build up in my body—a predicament that my face announces to everyone around me.  

By one line of evolutionary logic, I and the other sufferers of so-called alcohol flush (also known as Asian glow) shouldn’t exist. Alcohol isn’t the only source of aldehydes in the body. Our own cells also naturally produce the compounds, and they can wreak all sorts of havoc on our DNA and proteins if they aren’t promptly cleared. So even at baseline, flushers are toting around extra toxins, leaving them at higher risk for a host of health issues, including esophageal cancer and heart disease. And yet, somehow, our cohort of people, with its intense genetic baggage, has grown to half a billion people in potentially as little as 2,000 years.

The reason might hew to a different line of evolutionary logic—one driven not by the dangers of aldehydes to us but by the dangers of aldehydes to some of our smallest enemies, according to Heran Darwin, a microbiologist at New York University. As Darwin and her colleagues reported at a conference last week, people with the ALDH2*2 mutation might be especially good at fighting off certain pathogens—among them the bug that causes tuberculosis, or TB, one of the greatest infectious killers in recent history.

The research, currently under review for publication at the journal Science, hasn’t yet been fully vetted by other scientists. And truly nailing TB, or any other pathogen, as the evolutionary catalyst for the rise of ALDH2*2 will likely be tough. But if infectious disease can even partly explain the staggering size of the flushing cohort—as several experts told me is likely the case—the mystery of one of the most common mutations in the human population will be one step closer to being solved.

[Read: Tuberculosis got to South America through … seals?]

Scientists have long been aware of aldehydes’ nasty effects on DNA and proteins; the compounds are carcinogens that literally “damage the fabric of life,” says Ketan J. Patel, a molecular biologist at the University of Oxford who studies the ALDH2*2 mutation and is reviewing the new research for publication in Science. For years, though, many researchers dismissed the chemicals as the annoying refuse of the body’s daily chores. Our bodies produce them as part of run-of-the-mill metabolism; the compounds also build up during infection or inflammation, as byproducts of some of the noxious chemicals we churn out. But then aldehydes are generally swept away by our molecular cleanup systems like so much microscopic trash.

Darwin and her colleagues are now convinced that the chemicals deserve more credit. Dosed into laboratory cultures, aldehydes can kill TB within days. In previous research, Darwin’s team also found that aldehydes—including ones produced by the bacteria themselves—can make TB ultra sensitive to nitric oxide, a defensive compound that humans produce during infections, as well as copper, a metal that destroys many microbes on contact. (For what it’s worth, the aldehydes found in our bodies after we consume alcohol don’t seem to much bother TB, Darwin told me. Drinking has actually been linked to worse outcomes with the disease.)

The team is still tabulating the many ways in which aldehydes are exerting their antimicrobial effects. But Darwin suspects that the bugs that are vulnerable to the chemicals are dying “a death by a thousand cuts,” she told me at the conference. Which makes aldehydes more than worthless waste. Maybe our ancestors’ bodies wised up to the molecules’ universally destructive powers—and began to purposefully deploy them in their defensive arsenal. “It’s the immune system capitalizing on the toxicity,” says Joshua Woodward, a microbiologist at the University of Washington who has been studying the antibacterial effects of aldehydes.

Specific cells show hints that they’ve caught on to aldehydes’ potency. Sarah Stanley, a microbiologist and an immunologist at UC Berkeley, who has been co-leading the research with Darwin, has found that when immune cells receive certain chemical signals signifying infection, they’ll ramp up some of the metabolic pathways that produce aldehydes. Those same signals, the researchers recently found, can also prompt immune cells to tamp down their levels of aldehyde dehydrogenase 2—the very aldehyde-detoxifying enzyme that the mutant gene in people like me fails to make.

If holstering that enzyme is a way for cells to up their supply of toxins and brace for inevitable attack, that could be good news for ALDH2*2 carriers, who already struggle to make enough of it. When, in an extreme imitation of human flushers, the researchers purged the ALDH2 gene from a strain of mice, then infected them with TB, they found that the rodents accumulated fewer bacteria in their lungs.

The buildup of aldehydes in the mutant mice wasn’t enough to, say, render them totally immune to TB. But even a small defensive bump can make for a massive advantage when combating such a deadly disease, Russell Vance, an immunologist at UC Berkeley who’s been collaborating with Darwin and Stanley on the project, told me. Darwin is now curious as to whether TB’s distaste for aldehyde could be leveraged during infections, she told me—by, for instance, supplementing antibiotic regimens with a side of Antabuse, a medication that blocks aldehyde dehydrogenase, mimicking the effects of ALDH2*2.

Tying those results to the existence of ALDH2*2 in half a billion people is a larger leap, several experts told me. There are clues of a relationship: Darwin and Stanley’s team found, for instance, that in a cohort from Vietnam and Singapore, people carrying the mutation were less likely to have active cases of TB—echoing patterns documented by at least one other study from Korea. But Daniela Brites, an evolutionary geneticist at the Swiss Tropical and Public Health Institute, told me that the connection still feels a little shaky. Other studies that have searched for genetic predispositions to TB, or resistance to it, she pointed out, haven’t hit on ALDH2*2—a sign that any link might be weak.

The team’s general idea could still pan out. “They are definitely on the right track,” Patel told me. Throughout most of human history, infectious diseases have been among the most dramatic influences over who lives and who dies—a pressure so immense that it’s left obvious scars on the human genome. A mutation that can cause sickle cell anemia has become very common in parts of the African continent because it helps guard people against malaria.

[Read: A history of humanity in which humans are secondary]

The story with ALDH2*2 is probably similar, Patel said. He’s confident that some infectious agent—perhaps several of them—has played a major role in keeping the mutation around. TB, with its devastating track record, could be among the candidates, but it wouldn’t have to be. A few years ago, work from Woodward’s lab showed that aldehydes can also do a number on the bacterial pathogens Staphylococcus aureus and Francisella novicida. (Darwin and Stanley’s team has now shown that mice lacking ALDH2 also fare better against the closely related Francisella tularensis.) Che-Hong Chen, a geneticist at Stanford who’s been studying ALDH2*2 for years, suspects that the culprit might not be a bacterium at all. He favors the idea that it’s, once again, malaria, acting on a different part of our genome, in a different region of the world.

Other tiny perks of ALDH2*2 may have helped the mutation proliferate. As Chen points out, it’s a pretty big disincentive to drink—and people who abstain (which, of course, isn’t all of us) do spare themselves a lot of potential liver problems. Which is another way in which the consequences of my genetic anomaly might not be so bad, even if at first flush it seems more trouble than it’s worth.

Madonna Forever

The Atlantic

www.theatlantic.com/magazine/archive/2023/11/madonna-hung-up-video-age-sexuality/675441

We like our female icons, as they age, to go quietly—to tiptoe backwards into semi-reclusion, away from our relentless curiosity and our unforgiving gaze. Tina Turner managed this arguably better than anyone else, holed up for the last decade of her life in a gated Swiss château with an adoring husband and a consulting role on the hit musical about her life, watching a younger performer step nimbly into her gold tassels. Joni Mitchell retreated to her Los Angeles and British Columbia properties for so long that when she reappeared for a full set at the Newport Folk Festival last year, it was as though God herself were suddenly present, ensconced in a gilded armchair, her voice still so sonorous that practically every single person onstage with her wept.

If you age in private, the deal goes, you can reemerge triumphantly as royalty in your silver era. But Madonna never signed up for dignified placating. At 47, as sinewy as an impala in a hot-pink leotard and fishnets, she moved with such controlled, physical sensuality in the video for “Hung Up” that the 20-something dancers around her seemed bland by comparison. At 53, she headlined a Super Bowl halftime show—part gladiatorial circus, part intergalactic ancient-Egyptian cheerleading meet—while 114 million people watched. At 65, Madonna regularly uploads videos of herself to TikTok, her face plumped into uncanny, doll-like smoothness, strutting to snippets of obscure dialogue or electronica in psychedelic outfits categorized by one commenter as “colorful granny.”

What’s most striking to me about the videos is how Madonna retains the power to scandalize each generation anew—even teenagers nourished on a cultural diet of Euphoria and hard-core pornography—with her adamantly sexual self-presentation. “Lost her mind,” one TikTok commenter wrote as Madonna, wearing a black lace fetish mask, simply stared confrontationally at the camera. About a clip of her waving her arms in a diamanté cowboy hat, her chest festooned with chains, a cheerful-looking boy posted, “Someone come get Nana she’s wandering again.”

[Read: The dark teen show that pushes the edge of provocation]

This is, mark you, almost 40 years after Madonna rolled around on the floor at the MTV Video Music Awards in a corseted wedding dress, her white underwear and garters fully visible to the cameras, in an early TV appearance that an outraged Annie Lennox called “very, very whorish … It was like she was fucking the music industry.” At the time, Madonna’s manager, Freddy DeMann, told her she’d ruined her career. One of the few who approved was Cyndi Lauper, perpetually compared to Madonna in those days. Lauper seemed to recognize what her contemporary was trying to do, and what she’s been doing ever since, often operating just beyond the frequency of comprehension. “I loved that,” Lauper said. “It was performance art.”

People have argued about Madonna from the very beginning. That people are still arguing about her—over whether she’s too old, too brazen, too narcissistic, too sexual, too deluded, too Botoxed, too shameless—underscores the scope and endurance of Madonna’s oeuvre. She makes music, but she’s not a musician. She’s not an actor either, or a director, or a children’s-book author, even though she’s embodied each of these roles (with varying degrees of success). She is, rather, an artist. More than that, she’s a living, breathing, constantly metamorphosing work of art, a Gesamtkunstwerk—her life, her physical self, her sexuality, her presence in the media interweaving and coalescing into the totality of the spectacle that is Madonna. “My sister is her own masterpiece,” Christopher Ciccone told Vanity Fair in 1991, the year Madonna: Truth or Dare, a movie capturing her Blond Ambition tour, became the then-highest-grossing documentary in history.

In her reverent, 800-page Madonna: A Rebel Life, the writer Mary Gabriel offers the argument that Madonna’s entire biography is an exercise in reinventing female power. She crystallizes this mission of masterful defiance in a chapter about Madonna’s Sex, a 1992 coffee-table collection of photographic erotica that sold more than 1.5 million copies and almost torched her career. A decade into her stardom, Madonna had already

inhabited all the stereotypes that patriarchal society concocted for women—dutiful daughter, gamine, blond bombshell, adoring wife, bitch—in her pursuit of a new woman, a person who exercised her power freely, joyously, even wantonly, if that’s what she wanted. Her quest was what the French philosopher Hélène Cixous described as the search for a “feminine imaginary … an ego no longer given over to an image defined by the masculine.”

Before long, Madonna had broken multiple records for a female solo artist, having sold more than 150 million albums around the world. She had also “transformed the traditional pop-rock concert format into a full-scale theatrical experience,” Gabriel writes, “raised music video from a sales tool to an art form, and put a woman—herself—in control of her own music, from creation to development to distribution.”

All of this is true, and yet the volume of evidence that Gabriel amasses reveals something even greater: not just a cultural phenomenon, or even a postmodern artist transforming herself into the ultimate commodity, but a woman who intuits and manifests social change so far ahead of everyone else that she makes people profoundly uncomfortable. We may not understand her in the moment, but rarely is she wrong about what’s coming.

[Read: What we talk about when we talk about ‘unruly’ women]

To try to write about Madonna is to stare into an abyss of content: the music, the videos, the movies, the books, the fashion, but also the responses that those things generated, a corpus almost as significant to the construction of Madonna as the work itself. More than 60 books have been devoted to her, encompassing biography, critical analysis, comic books, sleazy profiteering, and even a collection of women’s dreams about her. “With the possible exception of Elvis, Madonna is without peer in having inscribed herself with such intensity on the public consciousness in multiple and contradictory ways,” Cathy Schwichtenberg wrote in The Madonna Connection, a 1993 book of essays summarizing the growing academic field known as Madonna Studies.

Gabriel’s biography is astonishingly granular in its attention to biographical detail, and also to historical context. You could, if you wanted, read the book as a kind of late-20th-century history of women’s ongoing fight for liberation, filtered through the lens of someone whom Joni Mitchell variously derided as “manufactured,” “a living Barbie doll,” and “death to all things real” and Norman Mailer described as “our greatest living female artist.” More often, A Rebel Life reads like a Walter Isaacson biography of a Great Man, a thorough life-and-times synthesis of a world-changing, civilization-defining genius—only with a lot of cone bras and syncopated beats.

Gabriel’s attention to context is key, because trying to understand Madonna as a flesh-and-blood person—the biographer’s traditional endeavor—is a trap. Self-exposure, for her, is about obfuscation more than revelation. Every new identity she disseminates into the world is just a different layer; the more you see of her, the more the “truth” of her is obscured. Truth or Dare famously includes a contretemps between Madonna and her boyfriend at the time, the actor Warren Beatty, while Madonna is having her throat examined by a doctor mid-tour. “Do you want to talk at all off camera?” the doctor asks. “She doesn’t want to live off camera, much less talk,” Beatty interjects. “Why would you say something if it’s off camera? What point is there of existing?”

Beatty was then the embodiment of Old Hollywood, square-jawed and restrained, while the considerably younger Madonna supposedly represented the MTV generation, coarse and venal, willing to trade even her most intimate moments for hard profit. (Truth or Dare premiered a full year before The Real World ushered in a new realm of “reality” entertainment.) What Beatty, along with many others, missed was that exposure wasn’t about selling out in any conventional sense. For Madonna, the construction of her public-facing persona was about spinning masquerade, fantasy, and fragments of self-disclosure into mass-media magic that confounded, again and again, efforts to categorize her.

She teased ideas about gender fluidity and bisexuality; she declared herself to be a “gay man”; she played up her friendship with the comedian Sandra Bernhard as rumors flew that the two were sleeping together. The main constant through her kaleidoscopic permutations was the response they elicited: As the cultural theorist John Fiske once put it, her sexuality was perceived as a new caliber of threat—“not the traditional and easily contained one of woman as whore, but the more radical one of woman as independent of masculinity.” (No wonder Beatty, the most masculine of screen stars, chafed at it.)

And yet, believe it or not, Madonna is human, and she was born—to a woman also named Madonna and a man named Silvio “Tony” Ciccone—in Bay City, Michigan, in 1958. When she was 5 years old, her mother died, a fact that seems as fundamental to the arc of her career as music or sex or religion. Tony, Gabriel writes, struggling alone with a houseful of unruly children, simply raised Madonna in the same way that he raised her two older brothers. (At the time of her mother’s death, Madonna had three younger siblings; two more followed when Tony married the family’s housekeeper.) She played as they played; she fought and bit and belched and yelled just as they did. When we think about Madonna later, effortlessly disrupting conventions of feminine sexual presentation and power dynamics, this upbringing makes perfect sense. (In one of my favorite photos from Sex, Madonna stands by a window, facing outward, wearing just a white tank top, motorcycle boots, and no underwear, her buttocks exposed as she appears to scratch an imaginary pair of balls.)

Gabriel, from the start, is alert to signs of Madonna’s self-transfiguring urges: how, in elementary school, she put wires in her braids to make them stick up like those of her young Black friends; how, in eighth grade, she scandalized her junior-high-school audience with a risqué, psychedelic dance sequence set to the Who’s “Baba O’Riley”; how, at 15, she first presented herself to her dance teacher and mentor, Christopher Flynn, as a childlike figure carrying a doll under her arm, as if to signal that she was a blank slate for him to work on.

But the years that seem most crucial are the ones she spent in New York City trying to make it as a modern dancer after dropping out of the University of Michigan. In 1978, when she arrived, the city was experiencing ungovernable urban blight and a simultaneous creative renaissance. Modes of artistic expression were becoming ever more fluid; the Warholian creation of a persona, and the postmodern appropriation of original ideas and images into new art forms, expanded performance possibilities. After quickly realizing her limitations as a dancer, Madonna did a stint as a drummer in a New Wave band called the Breakfast Club. She did nude modeling to pay for a series of truly scuzzy apartments. When her father begged her to come home, she’d say, “You don’t get it, Dad. I don’t want to be a doctor. I don’t want to be a lawyer. I want to be an artist.”

Her desire to make art was tied up with her ferocious ambition, her early comprehension that celebrity could be its own kind of art form. A friend of Madonna’s recalls to Gabriel that when she first met her, in a club in New York in the early ’80s, Madonna said, “I’m going to be the most famous woman in the world.” By 1982, she had redirected her focus toward music and become embedded in what Gabriel describes as “a radical art kingdom” that melded high and low culture, where punk kids and street artists were suddenly the new creative aristocracy. The previous year, MTV had transformed music into a visual medium. Madonna started writing songs, and seems right from the start to have had a sweeping conception of what pop music could provide: not the kind of plastic, bubblegum stardom that jeering critics believed she was after, but a global canvas on which she aimed to project her vision.

Kim Gordon, of the band Sonic Youth, once wrote that “people pay to see others believe in themselves.” Madonna’s earliest fans were girls, gay men, queer teenagers of color who found community in the same spaces where her own sense of self was honed. In the video for her first single, “Everybody,” in 1982, Madonna dances onstage at a nightclub in a strikingly unsexy, punk-esque outfit: brown leather vest, plaid shirt, tapered khaki pants, theatrical makeup. The camera keeps its distance; you can hardly see her face. But by the video for her second, “Burning Up,” a year later, she’s unmistakably Madonna, with teased blond hair, armfuls of rubber bracelets, the mole above her lip and the slight gap between her teeth underscoring her confrontational, intent gaze. This was the moment when the product of Madonna seems to have coalesced. She wasn’t just making music (one critic famously described her vocals on her early albums as “Minnie Mouse on helium”). Provocation was part of her act—her second record, 1984’s Like a Virgin, was clear on that front—but not the point of it.

Rather, what her fans immediately recognized in Madonna was the animating spirit of her work: complete certainty in her worth, and a pathological unwillingness to give credence to anyone other than herself. Everything else about Madonna may change, but this fundamental self-conviction is always there. And for anyone who’s been raised to be or to feel like a modified, shamed, incomplete version of themselves, it’s intoxicating. At 7, in 1990, I wore out my cassette tape of I’m Breathless—the concept album Madonna recorded to accompany her role in Dick Tracy—thrilled by the unthinkable bravado, the cockiness of “Sooner or Later.” At 40, I keep coming back to her “Hung Up” video, stunned at the visual evidence that a middle-aged mother of young children could be so strong, so strange and charismatic and compelling.

This kind of power is unnerving to observe in women; instinctively, we’re either drawn to it or driven to destroy it. A Rebel Life sometimes feels excessively boosterish, noting and then brushing over criticism of Madonna’s more questionable acts over the years—her decision to forcibly kiss Drake at Coachella in 2015, to his apparent distress, among them. But Gabriel’s useful goal is perhaps to get beyond a debate that’s been stoked by an extraordinary amount of vilification. Madonna, the most successful female artist of all time, is also indubitably the most loathed. And her haters often respond to the same quality in her self-presentation that her most ardent fans do: her confidently incisive mockery of the way culture prefers women to be portrayed. People reacted to Sex—a work that constantly identifies and then undercuts how people want to see her—with the pearl-clutching faux horror that tends to accompany Madonna’s provocations, as though she had done something utterly novel and irredeemably graceless.

[Read: Madonna’s kamikaze kiss]

In fact, the book was right in step with contemporaneous art-world forays into hard-core erotica. Sex scandalized a mainstream audience that had presumably never seen Cindy Sherman’s Sex Pictures (the artist was one of Madonna’s inspirations) or Jeff Koons’s Made in Heaven series, in which the artist created explicit renderings of himself having sexual intercourse with the porn performer Ilona Staller, who was briefly his wife. Madonna has said she intended her book to be funny (in more than one photo, she outright laughs). But Sex also asserts her engagement with a lineage of artists who helped shape her, and highlights her determination to unsettle the conventional gaze.

Madonna’s videos and live shows, Gabriel argues, tend to be where you get the most complete sense of her vision, “a new kind of feminism, a lived liberation” that pointed the way for a woman to be captivating “not because she was so ‘pretty’ but because she was so free.” In her 1986 video for “Open Your Heart,” which features a giant Art Deco nude by the Polish painter Tamara de Lempicka, Madonna struts in a black corset in front of an audience that watches her—sneeringly, or with feigned lack of interest—but doesn’t see anything more than surface-level sexuality. At the video’s end, Madonna (dressed now in a suit and a bowler cap, with cropped hair) dances away with a preteen boy who’s been waiting for her outside. The spectators in the club want to possess and objectify Madonna; the boy wants to be her, recognizing her as an artistic kindred spirit, not just a sex object. (The video has long been interpreted by Madonna’s queer and trans fans as a gesture of affirmation.)

Three years later, in “Express Yourself,” directed by David Fincher, Madonna stages a riff on the 1927 Fritz Lang movie Metropolis, in which she rides a stone swan through a dystopian cityscape. She’s a kind of Ayn Randian femme fatale in a green silk gown, holding a cat; later, dressed in an oversize suit, she flexes her muscles and grabs her crotch; in another scene, she lies naked, in chains, on a bed. (“I have chained myself,” she later clarified in an interview with Nightline. “There wasn’t a man that put that chain on me.”) Madonna moves fluidly from subject to object, man to woman, captor to captive, skewering misogynistic Hollywood tropes. Her potent allure, whatever her guise, is unexpectedly disconcerting.

The video also has almost nothing whatsoever to do with the song, which is a totally generic, upbeat pop confection encouraging women to pick men who validate their mind and their self-worth. The discrepancy is, I think, purposeful: It begs us to notice the different registers her work is operating in, and to observe how “pop star,” for her, is just another chameleonic guise. I love Madonna’s music, which functions at a level that enables her to be stupendously successful, ridiculously wealthy, a public figure of a sort no one has ever seen before. But those accomplishments are so much less interesting than everything else her music allows her to do through the performance she choreographs around it: blast through boundaries of sexuality and presentation; explore the permeability of gender; expose the hypocrisy of a music-video landscape in which, as she said in that same Nightline interview, violence against women is readily portrayed but sex gets you banned from MTV.

Thirty years later, in a culture where bombastic, sexless superhero movies now dominate mass entertainment and where erotica—as opposed to porn—has been all but banished to the nonvisual realm of fiction, her explorations of sexuality feel as radical as ever. And we continue to resist them, to reflexively recoil. When I told people I was writing about Madonna, they invariably responded with some dismayed version of “Her face!!!” It’s easy to assume that she’s just another woman navigating the horror of aging in plain sight via an overreliance on cosmetic enhancements, just another former bombshell who won’t concede that her time as the ultimate sex object has ended.

But Madonna has never seemed to think of herself as a sex object. An objectifier who greedily prioritizes her own pleasure, yes; an alpha, absolutely; but never a sop to someone else’s fantasy. And the AI-esque strangeness of her appearance now suggests something else, too. I keep thinking about bell hooks’s argument, in a 1992 essay, that Madonna “deconstructs the myth of ‘natural’ white girl beauty” by exposing how artificial it is, how unnatural. She bends every effort, hooks notes, to embody an aesthetic that she herself is simultaneously satirizing. One might deduce that Madonna senses better than anyone where female beauty standards are heading, in an era of Facetune, Ozempic, livestreamed TikTok surgeries, and Instagram face. And that she knows what she’s doing: Her current mode of self-presentation is Madonna supplying yet another dose of what the media want from women—sexiness, youth, erasure of maturity—distorted just enough to make us flinch.

This article appears in the November 2023 print edition with the headline “Madonna Forever.”

Nothing Defines America’s Social Divide Like a College Education

The Atlantic

www.theatlantic.com/ideas/archive/2023/10/education-inequality-economic-opportunities-college/675536

Inequality is one of the great constants. But what sets those at the top of society apart from those at the bottom has varied greatly. In some times and places, it was race; in others, “noble” birth. In some, physical strength; in others, manual dexterity. In America today, most of these factors still matter. The country is racially unequal. Some people inherit great wealth; others become celebrities through sporting prowess.

But much of America’s transformation in recent decades—including many of the country’s problems—can be ascribed to the ascendancy of a different marker of distinction: education. Whether or not you have graduated from college is especially important. This single social marker now determines, far more than it did in the past, what sort of economic opportunities you are likely to have and even how likely you are to get married.

Educational status doesn’t only influence how Americans live, though. As a new set of papers from the economists Anne Case and Angus Deaton shows, educational status has now overtaken other metrics, including race, in predicting one of the most important socioeconomic outcomes you can imagine: how long you get to live.

The rise of educational attainment as an indicator of social differentiation can be traced all the way back to the origins of modern democracy. The chief architects of the French Revolution were highly preoccupied with the obstacles to social mobility that had defined the ancien régime, a system in which prominent positions were reserved for members of the aristocracy and public offices such as judgeships were openly purchased. The French republicans founded public schools and universities that selected their students on the basis of competitive examinations, and furnished the upper echelons of French society with engineers, architects, civil servants, and other luminaries. Reflecting on his life from exile in St. Helena, Napoleon claimed that the revolutionary maxim of a “career open to talents” had always guided him.

The Founders of the American republic worried about education for another reason: They saw an educated populace as a prerequisite for political stability. It would be a particular priority to attend to “the education of the common people,” Thomas Jefferson wrote, for “on their good sense we may rely with the most security for the preservation of a due degree of liberty.”

[Read: Poor Americans really are in despair]

Although democracy and education have always been closely intertwined, the degree to which formal educational qualifications are a prerequisite for political or societal influence is relatively new. In the past, many people could—and did—rise to the pinnacles of politics and society without graduating from college. Neither Harry Truman nor Winston Churchill, for example, had any formal qualifications beyond high school. In the year after World War II, almost half of U.S. congressmen and a quarter of U.S. senators did not have a bachelor’s or graduate degree; today, this holds true for only 6 percent of congressmen and a single senator. In all but exceptional circumstances, an undergraduate degree, preferably from a famous school, has become a necessary passport to the upper echelons of American life. As a result, educational status is now one of the strongest predictors of lifetime earnings, outstripping race or gender.

The “college bonus” is the wage advantage enjoyed by those who hold a college degree. In the 1970s, this bonus was modest: A worker over the age of 25 with a college degree earned only about 10 percent more than an otherwise similar worker without one. Four decades later, that small gap had grown into a wide chasm. By the mid-2010s, a worker with a bachelor’s degree could expect to outearn an otherwise similar worker without one by about 70 percent. (Other studies find the same effect even if its magnitude varies: The college wage bonus has kept growing.)

Since 1980, differences in educational attainment have started to predict even the most personal outcomes. Americans without a bachelor’s degree are now much more likely to experience extreme mental distress. They are much more likely to suffer from physical pain. And they are much more likely to report that they are lonely or have difficulty socializing.

Even the chances of sustaining a successful relationship now strongly depend on educational status. Beginning in 1980, “the likelihood of divorce among college-educated Americans plummeted,” as Eli J. Finkel wrote in The Atlantic. Americans without college degrees, by contrast, are now far more likely to get divorced—and far less likely to get married in the first place. As a result, college-educated Americans are much more likely to be in a stable marriage than their compatriots who did not go to college.

All of these findings have convinced me that the gap between the educational haves and have-nots is now a defining cleavage in American life. Even so, I was genuinely shocked by Case and Deaton’s latest research, which demonstrates how far this difference now goes, explaining why Americans die so much younger than the inhabitants of other affluent countries.

Case and Deaton made headlines nearly a decade ago by uncovering the startling fact that adult life expectancy in the United States had started to decline—the first time in the country’s history that this had occurred for reasons other than war or pestilence. Much of this trend was driven by what Case and Deaton named “deaths of despair.” These included the hundreds of thousands of Americans felled by the opioid epidemic that has ravaged the country since the late 1990s. Other deaths of despair involve the consequences of alcoholism and a very high rate of gun suicide.

[Read: Is economic despair what’s killing middle-aged white Americans?]

The increase of this type of mortality makes America an extreme outlier. It is now virtually the only rich nation in the world where adult life expectancy began to fall well before the coronavirus pandemic (Scotland being the other exception).

The more closely Case and Deaton looked at the data for the U.S., the more struck they were by who was, and who wasn’t, suffering a premature death. Nearly all of the victims of deaths of despair did not have a bachelor’s degree; those who did were practically immune.

The trend held true when Case and Deaton expanded their search beyond deaths of despair. As they show in a new paper presented last week at the Brookings Institution, the chances of an American dying prematurely from a range of other diseases not obviously related to “despair,” including most forms of cancer and cardiovascular disease, also depend heavily on educational status.

These correlations help explain what underpinned Case and Deaton’s original finding about the divergence between the U.S. and other rich democracies. Until the pandemic, longevity for Americans with degrees continued to increase in step with the world’s wealthiest countries; even after COVID-19 increased mortality in rich countries, this demographic group suffered only a modest decrease in adult life expectancy. But Americans without a bachelor’s degree had a starkly different trajectory. They had already begun to suffer serious declines in longevity before the pandemic; when COVID hit, their adult life expectancy plummeted. (Case and Deaton mostly use a metric of adult life expectancy, which shows how many years people can expect to live once they have reached their 25th birthday.)

Today, the adult life expectancy of Americans with a college degree is comparable to that of residents of any other successful country. The adult life expectancy of Americans without a college degree, by contrast, is much lower. The gap between the two groups is now so large that Americans without a college degree have an adult life expectancy closer to that of residents of many developing countries than to that of the Japanese or the Swiss. The highly educated and the “poorly educated,” as Donald Trump famously called them, now practically live in two different countries.

Case and Deaton’s findings also suggest that, at least in one crucial respect, America’s educational divide now surpasses the gap that has historically been most significant: race. As recently as 1990, race still trumped educational status as a determinant of life span in the United States. White Americans without a four-year college degree could expect to live longer than Black Americans with one.

This has changed. The adult life expectancy of Black Americans with a bachelor’s degree has increased markedly over the past three decades. As a result, they can now expect to live much longer than whites without a bachelor’s degree: “Black men and women with a BA, who used to have fewer expected years from 25 to 75 than White people without a BA, now have more expected years,” Case and Deaton write. “As a result, Black people with a BA are currently closer to White people with a BA than to Black people without a BA, in sharp contrast to the situation in 1990.” (For this set of calculations, Case and Deaton use a specific metric for adult life expectancy that calculates the number of years that people can expect to live between their 25th and their 75th birthdays.)

Racial disparities do persist. But the difference in adult life expectancy between Americans with and without a bachelor’s degree is now starker than that between white and Black Americans. In 1992, an average white American could expect to live six years longer than an average Black American, a gap that fell to three years by 2018. Over the same period, the educational gap widened just as dramatically: In 1992, an average college graduate could expect to live three years longer than an average non–college graduate, a difference that had grown to six years by 2018.

A natural question to ask about these findings is what drives this dramatic divergence in the outcomes between the most educated Americans and everybody else. According to one theory, Americans who go to college acquire skills that allow them to excel in a range of professions; the rewards of a degree might reflect their greater ability to contribute to public life and our collective prosperity. According to another theory, important traits such as the capacity to avoid self-destructive behaviors have a strong bearing both on whether somebody gains a college degree and on whether they’re able to live a healthy and successful life. In this case, the difference between these two groups might be mostly “compositional” in nature, simply reflecting the fact that different kinds of people are likely to end up in each group.

[Read: Is it better to be poor in Bangladesh or the Mississippi Delta?]

Case and Deaton, who prefer describing trends to explaining their causes, caution that scholars have yet to come up with a definitive answer to this question. But they mistrust explanations that rationalize the chasm between Americans with and Americans without a college degree as an accurate reflection of each group’s respective choices or skill sets. “We have increasingly come to believe,” they conclude in their new paper, that a college degree “works through often arbitrary assignation of status, so that jobs are allocated, not by matching necessary or useful skills, but by the use of the BA as screen.” In an email to me, Deaton was more blunt: Both he and Case believe that the college degree is most important as “a route to social standing.”

Regardless of the reasons for this divide, in a just society, holding a college degree should not be nearly so predictive of one’s life trajectory as it now is in the United States. “If some Nero or Domitian was to require a hundred persons to run a race for their lives,” the great liberal philosopher John Stuart Mill pointed out, that race would not be any more just because “the strongest or nimblest would, except through some untoward accident, be certain to escape.” The same, Mill pointed out, is true in societies that award a more humane existence to those who outcompete others: “To assert as a mitigation of the evil that those who thus suffer are the weaker members of the community, morally or physically, is to add insult to misfortune.”