
No One Knows Exactly What Social Media Is Doing to Teens

The Atlantic

www.theatlantic.com/technology/archive/2023/06/social-media-teen-mental-health-crisis-research-limitations/674371

Late last month, the U.S. surgeon general issued an advisory—a format reserved for public-health issues that demand the nation’s immediate attention. “Nearly every teenager in America uses social media,” the report read, “and yet we do not have enough evidence to conclude that it is sufficiently safe for them.” In response, the Biden administration announced a new interagency task force that has been given a year to come up with a slate of policy recommendations that will help “safeguard” children online.

This may be a legislative problem for Big Tech, and it’s certainly a public-relations problem. Over the past several years, cigarettes have become the dominant metaphor in the discourse about social media: Everyone seems to think that these sites are dangerous and addictive, like cigarettes. Young people get hooked. At a congressional hearing on Facebook’s impact on teenagers in 2021, Senator Ed Markey tossed the comparison at Antigone Davis, a vice president and the global head of safety for Meta, Instagram’s parent company. “Facebook is just like Big Tobacco, pushing a product that they know is harmful to the health of young people, pushing it to them early,” Markey, a Democrat, said. Now the metaphor is even more compelling, because it can also evoke the surgeon general’s famous 1964 report on the scientific evidence that cigarettes cause lung cancer.

But the two are obviously very different. As a previous surgeon general pointed out: Cigarettes kill people through deadly disease. Social media is being blamed for something just as alarming but far less direct: a sharp increase in teen depression and suicide attempts over the past decade and a half that has been labeled a “national state of emergency” by the American Academy of Pediatrics and other prominent medical associations. The CDC’s latest trend report shows the percentage of high-school students who “experienced persistent feelings of sadness or hopelessness” jumping from 28 percent in 2011 to 42 percent in 2021, and the numbers for girls and LGBTQ students are even worse (57 and 69 percent, respectively, in 2021). Understandably, social media has been one of the places that parents have looked for an explanation. Last year, a Pew Research Center study found that more than half of American parents are at least somewhat worried that social media could lead their teenagers to develop mental-health problems—28 percent were “extremely” or “very” worried. Teens themselves are worried, at least about one another. About a third of them told Pew that social media is mostly negative for people their age, compared with about a quarter who say the effect has been mostly positive—although only a tenth said social media is mostly bad for them personally.

Compelling evidence suggests that social-media platforms are contributing to the crisis, but it’s also true that the horror stories and the headlines have gotten out in front of the science, which is not as settled as many would think. A decade of work and hundreds of studies have produced a mixture of results, in part because they’ve used a mixture of methods and in part because they’re trying to get at something elusive and complicated. Rather than coalescing into a unified message that social-media use is an awful, indisputably destructive force—tobacco with a “Like” button—the research instead has been building toward a more nuanced, and perhaps more intuitive, takeaway.

Social media’s effects seem to depend a lot on the person using it. It may play a different role for different demographics, and the role it plays may also change for people at different stages of life. It surely doesn’t affect everyone in the same way. This makes informed intervention extremely difficult. “Probably a lot of [the problem] comes down to the science not being precise enough,” says Amy Orben, a researcher at the University of Cambridge who studies the relationship between social media and well-being and whose work has been central to the ongoing debate. The field has not yet produced “precise enough measurements and precise enough hypotheses to merit a precise answer.”

This complicates a rapid succession of actions against social-media platforms in recent months. Last month, the governor of Arkansas signed a bill making it illegal for a minor to have a social-media account without parental consent and requiring social-media companies to verify user ages with government-issued ID; a similar one was signed by the governor of Utah in March. Other age-gating measures are being considered in at least 10 more states and at the national level.

Then there are the lawsuits. In January, the Seattle public-school district sued Facebook, Instagram, Snap, TikTok, and YouTube for violation of a state “public-nuisance law,” arguing that the social-media companies were known to “exploit the neurophysiology of the brain’s reward system” and that their “manipulative conduct” had created a mental-health crisis in the school system. Meanwhile, several major law firms have taken on personal-injury lawsuits on behalf of parents who believe that these platforms have caused problems in their kids’ lives, such as body dysmorphia, depression, anxiety, and suicide. Chris Seeger, of the New Jersey–based Seeger Weiss, told me his firm currently has more than 1,000 such cases.

These cases hinge on novel arguments that will have to carefully circumvent a lot of precedent of failed litigation against social-media companies. And new laws may run up against First Amendment issues and be difficult to enforce. (Critics have also pointed out that Arkansas Governor Sarah Huckabee Sanders’s expression of concern about exploitation of children is a bit confusing, given that she recently signed a bill undoing a number of child-labor protections in her state, including the requirement that employers get parental permission to employ children under the age of 16.)  

This is a crucial moment, Orben told me: “I think the key question is, in 20 years’ time, will we look back at this conversation and be like, We were worried about technology in excess, when we should have been worried about raising our kids? It’ll probably be somewhere halfway between the two.” Legislation that removes teenagers from social media likely won’t solve the mental-health crisis; teens will find ways around it, and for the ones who don’t, being displaced from their online communities may lead to different problems. The science, as it stands right now, provides reason to be concerned about social media. It also suggests the need for a far more sophisticated understanding of the effects of social media on young people, and the presence of much deeper problems that we could overlook if we aren’t careful.

This latest surge in concern about kids and the internet was exacerbated by the Facebook Papers, a collection of documents leaked by the former Facebook employee Frances Haugen and shared with journalists in fall 2021. Included were several studies conducted internally, asking groups of young Instagram users how the platform made them feel. “We make body image issues worse for one in three teen girls,” read the summary of one such study. Another: “Teens blame Instagram for increases in the rate of anxiety and depression. This reaction was unprompted and consistent across all groups.”

These were among the most widely discussed of the disclosures, and by the time the files had been covered in every major national publication, they could be referred to with the shorthand “Facebook knew.” Appearing on The Daily Show With Trevor Noah, Haugen agreed with the host’s suggestion that Facebook had behaved similarly to (you guessed it) tobacco and fossil-fuel companies by conducting self-damning research and opting not to share the findings. Facebook responded to the uproar by publishing annotated versions of the research, which emphasized how unscientific the studies were.

But what of the actual science? It’s been nearly six years since The Atlantic published the psychologist Jean Twenge’s blockbuster report “Have Smartphones Destroyed a Generation?” The generation she was talking about was born from 1995 to 2012—roughly Gen Z, though she called it “iGen.” These kids grew up with smartphones and made Instagram accounts before they started high school. “It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades,” Twenge wrote. “Much of this deterioration can be traced to their phones.” She made this argument by citing early studies and by simply connecting the dots—kids were getting more anxious and depressed, and the trend started around the time they began using smartphones and social media and living life through screens.

Since then, scores of researchers have built a large body of work looking into the effects of screen time generally. But the results have continually been mixed: Screens are ubiquitous, and they’re personal. In a 2019 study, Orben and her research partner Andrew Przybylski found that screen time could not be correlated with well-being among adolescents in any coherent way. Screen time—the bogeyman of the 2010s—was simply too broad to be examined as one single phenomenon, they argued. The study was covered widely with a snappy takeaway: “Screens Might Be as Bad for Mental Health as … Potatoes.” Orben and Przybylski had contextualized their core finding by comparing screen time with other behaviors that could be similarly correlated with well-being, such as eating extra starch or wearing glasses. This helped the researchers make their point that the questions many had been asking about technology were not specific enough. “‘Screen time’ is a nonsense topic,” Orben told me last fall. “It brings everything together from yoga videos to watching self-harm content on Instagram.”

The study marked a shift in the research, which for the past several years has been more tightly focused on social-media use, as well as other, more specific ways people use the internet, and on the experiences of teenage girls in particular. Many of these studies found correlations between social-media use and bad outcomes such as anxiety, depression, and negative body image. But tech companies can easily defend themselves from correlative claims by arguing—reasonably—that such correlations establish only that two things tend to happen at the same time, and not that one of those things is causing the other. The challenge for public-health researchers, then, is to find novel ways to prove (or disprove) a direct causal relationship as well—a very difficult thing to do.

In passing its new social-media restrictions for minors, the state government of Utah cited a 2022 review paper that summarized many correlative findings in the research. Utah also cited a buzzy paper from 2022 written by three economists that tried to get around the correlation conundrum with a creative attempt at a quasi-experiment. They followed Facebook’s staggered rollout across college campuses in the mid-aughts, matching up the timeline with increased rates of depression on the same campuses. Their “back-of-envelope calculation” was that 24 percent of the “increased prevalence of severe depression among college students over the last two decades can be explained by the introduction of Facebook.”

This approach has its own problems, Laurence Steinberg, a psychology and neuroscience professor at Temple University and an expert on adolescence, told me in an email. “I would tread very cautiously here,” he wrote after reading the economists’ paper. “The results are subject to what is referred to as the ecological fallacy—drawing inferences about individuals from aggregate data. As the authors note, they have no idea whether the students who reported mental-health problems were those that were using Facebook.”

This science is less straightforward—and slower-moving—than many realize. Researchers face a number of technical difficulties. For example, when the millions of people you want to study are teenagers, there are ethical hoops to jump through, prolonging the process and sometimes making research feel out-of-date before it’s even finished. And researchers have also struggled to come up with reliable methods for measuring what they’re interested in. To illustrate, Jeff Hancock, the founding director of the Stanford Social Media Lab, asked me a rhetorical question: “Did you use social media a lot or a little today, on a scale of 1 to 7?” How do you even answer that?

There is now a huge amount of research, but experts can look at the findings and draw disparate conclusions. In a 2022 umbrella review (a review of reviews of the research), scholars from the University of Amsterdam pointed out that different people had described similar effects from social-media use in dramatically different terms, from “weak” and “inconsistent” to “substantial” and “deleterious.” And in a 2020 review of the research, Orben found a slight negative correlation between social-media use and well-being (social-media use goes up; well-being goes down). Yet it is “still unclear what such a small effect can tell us about well-being outcomes as social media use is inherently linked in complex ways with other aspects of life,” she concluded.

Jonathan Haidt, a social psychologist at the NYU Stern School of Business and a regular contributor to The Atlantic, has been reading the research for years and has become one of the best-known commentators on the subject. He maintains a massive public Google Doc in which he collects, sorts, and analyzes all of the papers pertaining to the question of whether social media contributes to the rise of depression and anxiety in teenagers. Haidt agrees with Orben and other researchers that findings on screen time tend to be mixed. “But if you make it ‘social media,’ it’s very consistent,” he told me. “The next question is, what’s the population? Are we talking about all kids, or are we talking about girls?” In his review of all available work, including the data that Orben and Przybylski analyzed in 2019, he found a positive correlation between depression and anxiety and social-media use for teenage girls (depression and anxiety go up when social-media use goes up). “No person in their right mind would let their daughter be engaged in an activity” with such a clear connection to depression and anxiety, he said.

At this point, scientists at least agree that the relationship between depression and anxiety and social-media use is supported by enough evidence to demand attention. Orben’s latest paper argues for greater attention on young girls as well, showing a relationship between social-media use and a decline in different forms of life satisfaction. The question is: What kind of attention should we be paying? “If the correlations are worse for girls, then that’s really important and good to know,” Hancock told me. “We need to talk about that, but I guarantee you that social media is not bad for all teenage girls all the time.”

If we want solutions that are more delicate and precise than the legislation proposed so far, we need a lot of delicate and precise information. If social media isn’t bad for all teenage girls, we need to know which ones it is bad for, and what makes a specific girl susceptible to the risks. Some girls are suffering, and social media is exacerbating their pain. Some girls use the internet to find community that they don’t have offline, or to express creative impulses and questions about their identity that their families aren’t open to. We also need to know which aspects of social media are riskiest. Is it harmful because it cuts into sleep hours or IRL friend time and exposure to sunlight, or is it the envy-inducing images that invite comparison and self-doubt? Is it bullying we should worry most about, or the more ambient dread of being liked but not liked enough?

Right now, we have handfuls of numbers and no clear way to arrange them; social media might affect different people in different ways for any number of reasons. It could matter how they use social media. It could even matter how they think they’re using social media.

Angela Lee, a Ph.D. student at Stanford who works with Hancock, is one of the first researchers to break ground on the latter distinction. During her first psychology lecture as an undergraduate, Lee learned about “mindsets” in the context of education. Research had shown that the mindset you have about your own intelligence has a significant impact on the course of your intellectual life. If you believe that intelligence is something that can grow and improve, then you might take actions to grow and improve it. That “ends up being really powerful,” Lee told me. It would “affect their motivation—like, How hard am I going to try on this assignment?—or their behaviors—Do I go ask for help?” She wondered whether this would also be relevant to social media. In other words, did it matter how people answered the question when they asked themselves: Am I in control of this technology, or is it exerting control and influence over me? Studies showed that social-media use increased well-being for some adolescents, harmed other adolescents, and didn’t affect still others at all, so Lee had a feeling that some of these differences could be explained by the teens’ mindsets.

In the resulting paper, which has recently been published as a preprint and is under review at the Journal of Personality and Social Psychology, Lee and Hancock built on previous technology-use research showing that feeling a lack of control is “related to worse well-being, including depression, anxiety, and loneliness.” Logically, they found that a feeling of control was associated with “better well-being,” and “more social support and less psychological distress.” People who viewed social media more positively “also reported better outcomes than those who believed the effects of social media were harmful.” These effects were not limited to those who spent little time on social media, as those who felt in control of their use still “reported less distress” than those who didn’t feel in control, even when they were using social media for above-average amounts of time. (Facebook quickly conducted its own version of Hancock and Lee’s study after it was presented to the American Psychological Association in May 2019; the results were similar, though Facebook obviously had access to far better data.)

In their paper, which focused on adults rather than adolescents, Lee and Hancock noted their findings’ relevance to the current policy debate and its heavy reliance on tobacco metaphors. Feeling in control of your social-media use might be hard “if people are constantly exposed to messages about how it is addictive,” they argued. It might not be helpful to tell everyone that they’re helpless in the face of alluring images and sticky incentives, the same way that they could become helplessly beholden to nicotine. We might try to critique powerful and popular technologies without accidentally making the case that human beings have no ability to resist them. Bringing the concept of agency into the debate is compelling in part because it appeals to common sense. We know we’re not actually constantly coerced by the algorithms, the notifications, and the feed—we have to be more complicated than that.

But, of course, the agency insight is still up for debate. For one thing, the participants in Hancock and Lee’s study were not teenagers—they were mostly in their 20s and 30s. When I asked Frances Haugen about it, she said it would be “unreasonable to say that a 14-year-old is the one who should be responsible for modulating their social-media usage.” And I noticed a page of notes tacked onto the version of the paper that Lee had emailed to me. A fellow grad student had written, “Should we be telling people that they should think that they have control over platforms with algorithms that even the companies themselves don’t understand?”

Wanting to use social media does not mean that you’ve surrendered control of your emotions and life to a machine. In fact, for a lot of people, it could mean the opposite. “The use of digital media creates a forum that may allow for the development of rapid and nuanced communication skills,” Mitchell Prinstein, a psychologist, wrote in The Journal of Child Psychology and Psychiatry just as the pandemic began. He also noted the internet’s possibilities for identity exploration, creativity, connection, and acceptance. “Adolescents who feel ostracized or stigmatized within their offline social contexts, such as members of ethnic, racial, gender, and sexual minority groups, often report access to online companionship, resource sharing, and emotional validation that is much harder to access otherwise.” Other researchers have found that social media can be useful for young people who are dealing with chronic illness—sometimes even helping them stay on track with their treatment plans.

In all of this, we would do well to remember that we’re not aggregate numbers—we’re individuals making decisions about how to spend our time and pursue happiness. In a recently published advisory of its own, the American Psychological Association suggested that teens ought to be trained to use social media in productive ways and that parents should strive to be involved in their kids’ online lives—they should notice when the apps start to interfere with school or with time spent in other ways (including sleep and physical activity). Based on the available scientific evidence, the association argued, “using social media is not inherently beneficial or harmful to young people.” The surgeon general’s advisory also emphasized the incompleteness of the picture in a section of the report about “known evidence gaps” and the “urgent need” for further research.

Laurence Steinberg, the adolescence expert, argues that teenage depression and anxiety were already ticking up before social media became as popular as it is; the upward trend in the percentage of high-school students who “experienced persistent feelings of sadness or hopelessness” has been visible since at least 2009, after the rise of Facebook and YouTube but before the ubiquity of smartphones, which made social media accessible on the go. (According to other CDC data, suicide rates started increasing in 2003.) That doesn’t mean that social media hasn’t exacerbated the problem, he acknowledged. It just means that it’s too easy an answer. “I think that our tendency as human beings is to search for the simplest possible explanation of things,” he said. “You know, maybe it’s a combination of eight different things, each of which is contributing a little bit, but none of which is the culprit—people would rather just say ‘We found what the culprit is.’”

Under public pressure, some platforms have started to make changes. Though Instagram’s critics often talk as if it has done nothing at all, remaining laser-focused in pursuit of pure profit, the company has experimented quite a bit. Some changes are meant to reduce bullying and doomscrolling. It has also added content warnings on posts and search results that encourage eating disorders, and reduced those posts’ visibility in feeds. Before Haugen’s leaks, Instagram tried hiding “like” counts under photos (it didn’t help); since the leaks, it has implemented bedtime prompts and more robust parental controls.

I don’t bring this up to defend the company (which has found itself in a political situation that all but compels some effort on its part), but to ground us in reality. We’re not going back to a time before Instagram. Social media is central to the way that young people understand the world and their relationships—how to be attentive, how to be creative, how to be a friend, how to think and react and learn. This is probably true for the worse, but it’s also true for the better (and the neutral!), and to untangle it completely would be impossible. So, knowing that we’ll never know precisely everything, we should be careful to describe the situation as accurately as we can. “We need to find a way to make sure the online world is safe for young people,” Orben told me. “And if we want to go down the route and do an experimental intervention without a really secure evidence base, I think we would need to invest a lot of money into figuring out whether it worked and then be ready to pivot if necessary. But I don’t know if the policy landscape allows that at the moment.”

It’s not comfortable to accept that our understanding of social media is still so limited or that the best path forward is to keep plodding along toward whatever clarity there might be to find. But removing millions of teenagers from social media is a dramatic, even draconian intervention. For many, it would feel good. It would feel like doing something, and doing something big. And it would be. We should bear in mind that, even as we resent the “experiment” that tech companies have performed on the young population of the country, we would be meeting their wild experiment with another wild experiment. This one would have unintended consequences too.

A Zany Nightlife Comedy

The Atlantic

www.theatlantic.com/newsletters/archive/2023/06/nightlife-comedy-party-girl-parker-posey/674365


This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Good morning, and welcome back to The Daily’s Sunday culture edition, in which one Atlantic writer reveals what’s keeping them entertained.

Today’s special guest is Kelli María Korducki, a senior editor on The Atlantic’s newsletters team (and a frequent editor of this Sunday culture newsletter). Kelli has written about the Goopification of AI, America’s adult-ADHD problem, and what happened when tax season came for the crypto bros. Kelli is awaiting the release of a “very Salvadoran American” comedy from the director Julio Torres—her self-proclaimed “diasporic ambassador”—and rediscovering the pure joys of social media (but not TikTok).

First, here are three Sunday reads from The Atlantic:

Why is everyone watching TV with the subtitles on?
Moneyball broke baseball.
The immortal Mel Brooks

The Culture Survey: Kelli María Korducki

The upcoming event I’m most looking forward to: I can’t wait to see Problemista, Julio Torres’s forthcoming (and very Salvadoran American) comedy from the premier cool-kid production studio A24, co-starring Tilda Swinton and Greta Lee. I can hardly believe I just used “Salvadoran American” and “forthcoming comedy” in the same sentence.

I share Torres’s Salvadoran heritage—my mom and her immediate family immigrated to the U.S. in the 1980s and ’90s—and the only Salvadoran character I can remember from any semi-recent pop-culture product is Cher’s maid, Lucy, in Clueless (1995), whose brief appearance ends with her declaration “I’m not a Mexican!”; I remember first seeing that scene and thinking, Right on! In true minority-group-within-a-minority-group fashion, I’ve made peace with the reality that even if an average, non-Salvadoran American has heard of my familial homeland and can place it on a map, there’s still a nonzero chance that their associations with the country will be limited to gangs, civil war, Bitcoin, and pupusas. Which is darkly hilarious in and of itself, I suppose. Anyway, I’m so proud to claim Torres as my diasporic ambassador. I think he’s a genius.

The best novel I’ve recently read, and the best work of nonfiction: I’ve been using the word fun to describe Catherine Lacey’s Biography of X, which might be a puzzling word choice to others who have read it. It’s a fictional biography, an alternate history, and a mishmash of decontextualized (or rather, recontextualized) cultural ephemera that piece together the story of a deceased artist’s secret life. Reading it, though, I cared less about the characters and their motivations than about how the story would come together; Lacey’s exhaustively footnoted meta-narrative appeals to my own journalistic urge to catalog and go down rabbit holes. It seems like it was a blast to write. [Related: This novelist is pushing all the buttons at the same time.]

As for nonfiction, I’m currently enjoying Darryl Pinckney’s Come Back in September; I just pilfered a review copy from The Atlantic’s New York office (I’ll return it, I swear). It ticks off a lot of my personal, maybe-pretentious boxes: an autobiography of creative life, intellectual mentorship, the coming-together of the right people at the right moment, 1970s New York. I love reading about bygone cultural scenes, kismet frozen in amber. I’m a sucker for nostalgia. [Related: The writer’s most sacred relationship]

Something I recently revisited: Not too long ago, I rewatched Ghost World, the 2001 film adapted from the Daniel Clowes graphic novel. I loved the movie in high school and strongly identified with Enid, its nonconformist teen protagonist—for a time, I even wore my hair in Enid’s bottle-black bob and had similar vintage cat’s-eye glasses frames fitted with my prescription. I remembered the movie for its humor and world-building. Twenty-odd years later, I noticed, for the first time, its poignancy. What younger me saw as an offbeat coming-of-age story was now a parable about misfits aching for connection in a world they can’t help but chafe against. The teenage rule-bucker within me still relates, but the 30-something understands the stakes. [Related: Ghost World endures for its cynicism—and pathos. (From 2017)]

My favorite way of wasting time on my phone: I’ve always had a love-hate relationship with social media (very unique, I know). I was a relatively early adopter of Twitter, a very early adopter of Facebook, and a somewhat late Instagram joiner. From the get-go, I’ve vacillated between anxious overuse and total neglect of these three platforms. But lately, I’ve found a groove—I’m remembering that Instagram isn’t just a place for stalking acquaintances and feeling terrible about my comparably boring life; it’s also a legitimately useful tool for sharing what I’m up to and keeping in touch with my geographically scattered friends and family in a reciprocal way. Being an older Millennial does have its perks; we still have a genuinely social web. The youths are missing out!

As for Twitter: The product bugginess that followed Elon Musk’s takeover of the company (and which continues, despite his recent passing of the CEO torch) has, in my opinion, rekindled some of the chaos that made early Twitter so fun. Although I wouldn’t go so far as to suggest that Musk’s Twitter era has been great for society—you can read more about that bigger picture here—my feed has somehow become more pleasant, albeit in a slightly unhinged way. I see fewer posts that are clearly written for the purpose of maximizing engagement (so lame) and more stream-of-consciousness riffing. There’s more interaction for its own sake, versus pure performance. I’ve been enjoying the platform more lately than I had been for years.

And TikTok? No offense to theater-kid energy, but that’ll be a nope for me. I prefer to keep enjoying the occasional video in the sensible old-person way—through other people’s curation on the platforms I actually use.

The last thing that made me snort with laughter: Last month, I attended a brunch screening of the new 4K restoration of Party Girl, the 1995 cult comedy starring the ’90s’ “queen of the indies” Parker Posey as an aimless lower-Manhattan scenester who gets a job as a library clerk, drinks the proverbial Dewey Decimal Kool-Aid, falls for a Lebanese schoolteacher turned falafel-cart peddler, and decides to turn her life around and become a librarian. I love everything about this movie—the fashion is divine, and its glimpses of New York’s then-gentrifying downtown capture a moment lost in time. Apparently, the film is beloved, by those in the know, for its incredibly accurate portrayal of the library-science field. But ultimately, this is Parker Posey’s star vehicle. Her face has perfect comedic timing. I dare you to watch this scene (or this one) without letting out a snort or two.

Read past editions of the Culture Survey with Emma Sarappo, Adam Harris, Saahil Desai, Yasmin Tayag, Damon Beres, Julie Beck, and Faith Hill.

The Week Ahead

Reproduction, a new novel by Louisa Hall, examines the surreality and hazard of childbirth through the perspective of a novelist-protagonist attempting to write a book about Mary Shelley (on sale Tuesday).
The Flash, a DC superhero film that—despite the “mountain of disturbing allegations against its star”—manages to be “breezy and charming,” our critic writes (in theaters Friday).
Swiping America, an eight-episode “romantic documentary” dating series that follows four New York City singles on blind dates across eight American cities (first two episodes begin streaming Thursday on Max).

Essay

Illustration by Lucas Burtin

Inside Frank Bascombe’s Head, Again

By Adam Begley

Half a century ago, at the 1974 Adelaide Festival of Arts, in South Australia, John Updike delivered a muscular manifesto: “We must write where we stand,” he said. “An imitation of the life we know, however narrow, is our only ground.” His call for accurate and specific witness, for a realism dedicated to the here and now, was surely in part an apology for the repeat appearances of Harry “Rabbit” Angstrom, the former high-school-basketball star Updike called his “ticket to the America all around me.” Already the hero of Rabbit, Run (1960) and Rabbit Redux (1971), Harry was destined to star in two more alliterative Rabbit novels, Rabbit Is Rich (1981) and Rabbit at Rest (1990), as well as the postmortem novella Rabbit Remembered (2000). Restless and hungry, open to experience and eager to learn, as fallible as the rest of us, and a staunch, often dismayed patriot, Harry is Updike’s everyman.

Read the full article.

Culture Break

How parking ruined everything
The reality show that’s tackling the toxic workplace
Movies are best before noon.
The novelist who truly understood the South
Six books that feel like puzzles
“Hell welcomes all.”
Poem: “A Room of One’s Own”

Catch Up on The Atlantic

The stupidest crimes imaginable
It’s 5 a.m. somewhere.
The golf merger may be dead on arrival.

Photo Album

An aerial view of Gilleleje Labyrinth, in Gilleleje, Denmark, on June 6, 2023 (Ritzau Scanpix / Reuters)

Behold a hedge labyrinth in Denmark, a thousand-musician performance in Madrid, and more in our editor’s selection of the week’s best photos.

Explore all of our newsletters.

Is Gen Z Coming for the GOP?

The Atlantic

www.theatlantic.com/ideas/archive/2023/06/gen-z-millennials-vote-republican/674328

Gen Z is poised to massively expand its influence in the 2024 election. But its impact may be more complex than typically assumed.

As many as 7 million to 9 million more members of the racially and culturally diverse Gen Z could cast ballots in 2024 than did in 2020, while the number of the predominantly white Baby Boomers and older generations voting may decline by a corresponding amount, according to nonpartisan forecasts. As a result, for the first time, Gen Z and Millennials combined could account for as many votes next year as the Baby Boomers and their elders—the groups that have made up a majority of voters for decades.

That generational transition represents a clear opportunity for Democrats, who have consistently amassed solid, sometimes overwhelming, margins among both Millennials and Gen Z voters. But an analysis of previously unpublished election data from Catalist, a Democratic targeting firm, by Michael Podhorzer, the former political director for the AFL-CIO, shows that even the emergence of these new voters may not break the larger political stalemate that has partitioned the country into seemingly immovable blocks of red and blue states.

Podhorzer’s analysis of the Catalist data, shared exclusively with The Atlantic, found that over the past four elections, Gen Z voters have broken heavily for Democrats in blue states, and provided the party solid margins in closely contested swing states. But in red states, with a few prominent exceptions, Podhorzer found, surprisingly, that even Gen Z voters mostly support Republicans.

The generation’s strong Democratic lean in blue and purple states may create growing challenges for Republicans trying to amass the 270 Electoral College votes needed to win the White House. But the Republican tilt of younger voters in red states could frustrate Democrats trying to loosen the GOP’s hold on those places. That seemingly unbreakable Republican grip has made it difficult for Democrats to win majorities in the U.S. Senate and House of Representatives, and has allowed the GOP to impose a sweepingly conservative social agenda across nearly half of the country.

Republicans remain dubious that young voters will show up in large numbers anywhere next year for President Joe Biden, the oldest U.S. president, who did not run well among them in the 2020 Democratic primaries and whose approval ratings with them remain anemic. As Kristen Soltis Anderson, a GOP pollster who has extensively studied younger voters, told me, “I don’t think there is a lot of focus in Republican world” about the potential risk to the party of a big surge of new Generation Z voters in 2024, “in part because a lot of Republicans believe that there is just no way young voters will turn out for Joe Biden.”

But other analysts point out that despite their equivocal feelings about Biden, young people voted in very large numbers in 2020 and maintained relatively high turnout in 2022. A lack of enthusiasm about Biden personally “didn’t really dissuade the generation from coming out and voting for Democrats” in either of the past two elections, says John Della Volpe, the director of polling at the Harvard Kennedy School Institute of Politics, which conducts a twice-yearly national survey of youth attitudes. “They knew the stakes in the election. They knew what life was like under more Republican control versus more Democratic control.”

Whatever they think about Biden, the influence of Gen Z, generally defined as young people born from 1997 to 2012, is certain to rise next year simply because so many of them will age into the electorate. William Frey, a demographer at Brookings Metro, estimates that about 15.4 million eligible young people will have turned 18 between the 2020 election and Election Day next year.

In 2016, the first presidential election when any members of the generation were old enough to participate, Gen Z accounted for just 2 percent of voters, according to an analysis of census data by Frey for the nonpartisan States of Change project. In 2020, Gen Z rose to 7.5 percent of all voters, Frey calculates. Frey projects that the generation will increase its share of the electorate to 13 percent in 2024. Depending on turnout, that could mean about 8 million more Gen Z voters next year, increasing the total to about 20 million in all.

Millennials, generally described as younger adults born from 1981 to 1996, have also increased their share of the electorate. In Frey’s analysis of census data, they rose from about one in seven voters in 2008 to just under one in four in 2020. Frey predicts that in 2024, the two generations combined will make up about 37 percent of the electorate.

That could mark a historic tipping point. Frey projects that in 2024, the Baby Boomers and their elders—the last members of the Greatest and Silent Generations still voting—will also constitute 37 percent of voters. If that forecast holds up, it will end decades during which those Republican-leaning older cohorts were the biggest generations in the electorate. Meanwhile, Generation X, defined as those born from 1965 to 1980, will remain stable over this period at about one-fourth of the electorate.

Another fundamental shift in American politics over the past half century is magnifying the impact of this generational evolution: Voters now divide between the parties more along lines of cultural identity than class interest. And on every important cultural and demographic dividing line between the two parties, the younger generations exhibit characteristics that predict support for Democrats.

More than 70 percent of Baby Boomers are white. But just 55 percent of Millennials are white and only slightly more than half of Gen Z are. Millennials and Gen Z are far less likely than older generations to identify with any organized religion and far more likely (especially in Gen Z) to identify as LGBTQ. Younger generations are also more likely than older ones to hold a college degree.

“What sets Gen Z apart is … they are growing up in a much more racially and ethnically diverse cohort, which really is driving them to more progressive positions,” Melissa Deckman, the chief executive officer of the nonpartisan Public Religion Research Institute and the author of a forthcoming book on the generation, told me.

Overall, these new voters are behaving almost exactly as those attributes would predict. Before 2004, as I’ve written, exit polls and other sources found little difference between the voting preferences of younger and older voters. But since Millennials and then Gen Z entered the electorate in large numbers, Democrats have established a durable advantage among the young. Catalist’s data, for instance, show that Democrats have carried almost exactly 60 percent of the two-party vote among Millennials and Gen Z in each of the past three presidential elections and in three of the past four congressional elections; the one exception came when the party’s vote among them hit 66 percent in the 2018 congressional races. (One New York Times analyst, citing unpublished polling data, recently claimed that Millennials, though still supporting Democrats, are moving to the right as they age, a view also held by some Republican pollsters. But skeptics quickly noted that other data sources, such as results from the large-sample Cooperative Election Survey, do not show such a shift.)

The key insight that Podhorzer’s analysis adds is that even this strong overall Democratic advantage remains subject to substantial geographic variation that tends to reinforce, rather than reconfigure, the nation’s electoral divisions.

Using Catalist data, he found that Democrats in the four elections from 2016 through 2022 have consistently amassed imposing margins of 20 to as much as 40 percentage points among Gen Z voters in the 18 states he identifies as already leaning reliably Democratic, such as California, Oregon, Washington, Hawaii, Illinois, Minnesota, and the Eastern Seaboard states from Maryland to Maine.

Gen Z voters over those four elections have also provided Democrats solid margins of roughly 15 to 25 percentage points in the eight purple states: Arizona, Georgia, North Carolina, and Nevada across the Sun Belt, and Michigan, Pennsylvania, Wisconsin, and New Hampshire in the Rust Belt.

But the story in the remaining two dozen Republican-leaning states is more complex. Podhorzer found that Democrats performed better in the red states among Gen Z than they did among older generations—but not well enough to actually win those youngest voters. Republicans still carried a majority of Gen Z voters in most of the red states. Even in red states where Democrats have won most Gen Z voters in recent elections—including Texas, Florida, Iowa, Kansas, and Montana—the party’s margins among them are typically slim. That means Democrats in red states are not generating nearly enough advantage from younger generations to overcome the lopsided GOP edge among older cohorts.

Podhorzer told me this regional variation is “only surprising to the extent you believe that age explains almost everything about voters’ partisanship. But if you understand that the neighborhood you grew up in, the parents you have, the schools you went to, and the general politics that you are introduced into is a big factor, it shouldn’t be surprising at all. Because if you grow up in Brooklyn, no matter how old you are, you are swimming in blue water … and the same goes for those growing up in red America.”

For Democrats, the most important of the trends Podhorzer cataloged may be their persistent strength among Gen Z voters in the battleground swing states that decide who wins the White House. In all, Podhorzer calculates that Gen Z voters in the swing states who cast their first ballot in the 2018 election or after have preferred Democrats by nearly 20 percentage points. (Democrats also hold a strong 15-point edge among Millennials in those states who voted for the first time in 2018 or after.) To Podhorzer, the clear lesson of these trends is that Democrats are more likely to win the battleground states by investing in turning out these new voters than by trying to lure back the mostly blue-collar whites who have abandoned the party to support Donald Trump.

Podhorzer says the Democratic advantage among younger voters in the purple and blue states has been driven largely by an unusual dynamic. Typically, he points out, young voters gravitate toward a party because of a positive association with the president in office as they entered the electorate: John F. Kennedy or Ronald Reagan or Barack Obama. But in this case, Podhorzer argues, the most powerful force moving Gen Z toward Democrats is not so much excitement about the party (or Biden), but negative views of Trump. “They are coming of age at a time when everybody around them, as well as the popular culture, loathe and ridicule” Trump, he told me. “Especially in the blue states, where MAGA candidates have hijacked the nominating process, there is no exemplar of a reasonable Republican anywhere to be seen.”

Some GOP strategists aren’t particularly concerned about the party’s poor performance with young voters, Anderson, the GOP pollster, says, because inside the GOP coalition, Trump is strongest among the youngest generations. “So if you are hanging out in Republican land only, you can easily convince yourself that Donald Trump is actually very popular with young voters, because he is irreverent and edgy or whatever your rationale would be,” she told me. The problem is that too many in the GOP don’t realize “that the young people in the past who might have liked Mitt Romney aren’t in our rooms anymore” and that instead we “have boiled the youth of the party down to this very Trumpist core.”

In many red states, Republicans appear to be taking no chances with the unfolding generational transition: Several GOP-controlled states, such as Texas, Georgia, and Arizona, where the ascending younger generations are much more racially diverse than older voters, have imposed the toughest restrictions on voting.

In every state, influence in the coming years will flow from those mostly white older generations to more diverse younger ones. By 2028, Frey projects, the Boomers and their elders will fall to slightly below a third of voters nationwide, while Millennials and Gen Z will soar well past two-fifths. By 2032, when all of Gen Z is eligible, Americans born after 1980 will cast almost exactly half of all votes.

Deckman said she expects Gen Z to continue to lean left over this period—in part because, more than any previous generation, these young people are consuming media that they themselves create, on TikTok and similar platforms. “Their news is generated by themselves, and because they are more progressive, I think many Gen Zers are consuming information that reinforces those viewpoints,” she told me.

As Podhorzer’s analysis shows, this transition isn’t yet threatening Republicans in most red states. And in the swing states, Republicans can probably offset the growing presence of Gen Z and Millennials in 2024 by running better with older voters, many of whom are unhappy with Biden’s performance.

But the Democratic advantage with Gen Z is like an investment whose value compounds over time—in this case, as their share of the electorate expands. If Republicans can’t regain at least some ground with younger voters, especially in the battleground states, the party will need to squeeze bigger margins out of shrinking groups. In any given election, as Trump demonstrated in 2016, Republicans might meet that test. But making that math add up will only get tougher for the GOP as the generational transition inexorably rolls on.

The Missing Piece of the Foraging Renaissance

The Atlantic

www.theatlantic.com/health/archive/2023/06/foraging-tours-medicinal-plants-popularity/674307

Harvesting wild local produce in Brooklyn’s Prospect Park may not seem like the best idea. And yet, on a foraging tour of the lively public park last month, a straw-hatted forager named “Wildman” Steve Brill and his teenage daughter, Violet, led roughly 40 of us amateurs into the grassy areas beyond the park’s paved footpaths for a four-hour tromp. Among plastic wrappers and bottle caps we found edible roots, fragrant herbs, and sturdy greens, all ripe for experimentation in the adventurous cook’s kitchen.

At least in theory. There was food here, for sure, but hardly of the practical variety. We recovered fallen pods from the Kentucky coffeetree, whose seeds can be used to brew a caffeine-free alternative to a morning cup. That is, if one is willing to harvest enough of them, wash them of green toxic goo, and roast them for hours—though even then, it won’t really be coffee. I stuffed a few pods in a canvas bag alongside sassafras root, once used to make root beer the old-fashioned way, and a handful of lettuce-flavored violet leaves that could, in the right quantities, constitute a small salad. Two weeks later, I’m still wondering what, if anything, I’ll actually make with these odd new ingredients.

What I didn’t anticipate were all the medicinal plants. Just a few minutes into the tour, we came across enough wild analgesics and anti-inflammatories to insure a casual hike. Here among the cigarette butts was broadleaf plantain, an easy-to-miss herb (unrelated to the bananalike fruit) known for calming mosquito bites. Over near the urinating puppy was jewelweed, which soothes poison-ivy and stinging-nettle rashes. Twigs snapped from a black birch tree exuded wintergreen oil, also known as methyl salicylate, a relative of aspirin that powers pain-killing ointments such as Bengay and Icy Hot.

Interest in foraging for food has taken off in recent years, owing in part to the gourmet-ification of eating locally and in part to its popularity on social media, where influencers make chips out of stinging nettles and add fir needles to granitas. Foraged ramps and morel mushrooms have become so well known that they now appear on restaurant menus and in high-end grocery stores. But the foraging boom has largely left behind what has historically been a big draw of scrounging for plants—finding treatments for minor ailments. To be clear, medicinal plants aren’t likely to save the casual forager’s life, and they lack the robust clinical data that back up pharmaceuticals. But even some scientists believe they can be handy in a pinch. In a way, being able to find a jewelweed stem is more useful than identifying a handful of leaves that can substitute for lettuce.

That has definitely been the case for Marla Emery, a scientific adviser to the Norwegian Institute for Natural Research and a former research geographer for the U.S. Forest Service who studies community foraging. Several years ago, when huge, oozing blisters formed on her legs after a run-in with poison ivy on a hunting trip, Emery visited an herbalist in Scotland who applied lobelia, an herb with pale-violet flowers, and slippery elm, a tree with mucilaginous properties, to her calf. Soon, she felt a tingling sensation—“as if someone had poured seltzer over the area”—and within an hour the blisters had healed, Emery told me.

Both plants, traditionally used to treat skin conditions, “are supportive of health and have medicinal value,” she said, and they’re especially useful because “you’re highly unlikely to poison yourself” with them. Such anecdotes illustrating the profound utility of medicinal plants are common among botanist types. “If you get a cut and put [broadleaf] plantain on it, you can see it close up,” Alex McAlvay, an ethnobotanist at the New York Botanical Garden, told me. At least for some species, he said, “the proof is in the pudding.”

Though foraging has long been a medicinal practice, and many modern drugs are derived from plants, medicinal plants have largely been relegated in the West to “traditional” or “folk remedy” status. Still, their use lives on in many communities, including immigrant groups that “come with medicinal-plant uses from their homelands and seek to continue them,” Emery said. People in Chinese, Russian, and certain Latin communities in the U.S. commonly forage dandelion, a weed with diuretic properties, to support kidney and urinary-tract health, she added.

Along the concrete footpaths of Prospect Park, the Brills pointed out stands of burdock; its roots, in addition to being a tasty potato dupe, are used in some cultures to detoxify the body. Pineapple weed, found in baseball diamonds and sidewalk cracks, can calm an upset stomach, Steve told me later. Scientific data for such claims are scant, much as they are for other foraged plants, and using the plants for health inevitably raises questions about scientific credibility. Many medicinal plants that a casual forager will encounter in the wild have not been studied through the kind of rigorous clinical trials that any prescription drug has been. Whether people ultimately embrace foraging for medicinal plants depends on how they believe “we make evidence and truth,” McAlvay said. “A lot of people are like, ‘If there’s no clinical research, it’s not legit.’ Other people are like, ‘My grandma did it; it’s legit.’” Nothing beats clinical research, though clearly some plants share valuable properties with certain drugs. Lamb’s quarters, a dupe for spinach, is so packed with vitamin C that it was traditionally used to prevent scurvy; stinging nettle, traditionally used for urination issues, may have effects similar to those of finasteride, a prostate medication.

Naturally, the experts I spoke with unanimously recommended using foraged medicinal plants only for minor ailments. Just as foraging for food comes with some risks—what looks like a delicious mushroom can make you sick—the same is true of medicinal foraging. Take established, reputable classes and use books and apps to correctly identify plants, many of which have dangerous look-alikes; the edible angelica plant, for example, is easily confused with poisonous water hemlock, of Socrates-killing notoriety. Learning about dosage is important too. A benign plant can become poisonous if too large a dose is used, warned Emery. When working with medicinal plants, she said, “you’ve got to know what you’re doing, and that doesn’t lend itself to the casual TikTok post.” Beginner foragers should stick to “gentle but definitely powerful, easy-to-identify herbs,” such as dandelion and violet, said McAlvay.

As the Brills instructed, when I got home I submerged a foraged jewelweed stem in witch hazel to make a soothing skin tincture. Days later, when I dabbed some onto a patch of sunburn on my arm, I felt, or maybe imagined, a wave of relief. Whatever the case, my delight was real. When I had asked both tour-goers and experts why foraged medical plants mattered in a world where drugs that accomplish the same things could be easily bought at a pharmacy, some said it was “empowering” or “satisfying,” but the description that resonated with me most came from McAlvay, who called it “magic”: the power to wield nature, in nature, in order to heal.

When I got home from the tour and opened my bag of foraged goods, I found a black birch twig, still redolent of wintergreen. Coincidentally, that is the one smell I have craved throughout 38 weeks (and counting) of pregnancy, but moms-to-be are advised to avoid the medicinal ointments containing the oil. I sniffed the twig deeply, again and again, recalling that it might become useful in the months to come. When teething infants are given black birch twigs to chew, Brill had said, the gently analgesic qualities of the low-dose wintergreen oil help soothe their pain. All of a sudden, their crying stops. What’s more magical than that?

Why Is Everyone Watching TV With the Subtitles On?

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 06 › watching-movies-tv-with-subtitles › 674301

The first time it happened, I assumed it was a Millennial thing. Our younger neighbors had come over with their kids and a projector for backyard movie night—Clueless, I think, or maybe The Goonies.

“Oh,” I said as the opening scene began, “you left the subtitles on.”

“Oh,” the husband said, “we always leave the subtitles on.”

Now, I don’t like to think of myself as a snob—snobs never do—but in that moment, I felt something gurgling up my windpipe that can only be described as snobbery, a need to express my aesthetic horror at the needless gashing of all those scenes. All that came out, though, was: Why? They don’t like missing any of the dialogue, he said, and sometimes it’s hard to hear, or someone is trying to sleep, or they’re only half paying attention, and the subtitles are right there waiting to be flipped on, so … why not?

Because now I’m reading TV, not watching it. Because now, instead of focusing my attention on the performances, the costumes, the cinematography, the painstakingly mixed sound, and how it all works together to tell a story and transport me into an alternate world, my eyes keep getting yanked downward to read words I can already hear. My soul can’t bear the notion of someone watching The Sopranos for the first time and, as Tony wades into the pool, looking down to the bottom of the screen to read [ducks quack]. Subtitles serve an important purpose for people with hearing or cognitive impairments, or for translation from a foreign language. They’re not for fluent English speakers watching something in fluent English.

This monologue was all internal, though, because I’m in my mid-40s and don’t want to sound like an old man shouting at a cloud. We left the subtitles on that night, and I noticed that even though I knew every word of Clueless (or maybe it was The Goonies), I was still reading along. For the life of me I couldn’t understand how this didn’t drive everyone else crazy too. I said nothing, though. Millennials! What’ll they think of next?

Then, a couple of months later, over New Year’s Eve, my wife and I were about to start watching Don’t Look Up with another couple, Ken Leung and Nancy Bulalacao, when Nancy asked if we minded her turning on the subtitles. Ken is a cast member on the HBO series Industry, and Nancy works in New York theater production, and they’re both a bit older than us—squarely Gen X. They watch almost everything now with the subtitles on, she told us, even Ken’s own show, which is full of rapid-fire financial jargon coming at you in about a dozen languages and a riot of accents. She said it almost like a confession, as if bracing for judgment. But I was too stunned to judge.

Both of them have spent their entire adult lives working in movies, television, theater—the visual arts, where voice and imagery are sacrosanct tools of communication with the audience. Surely a screen actor like Ken would be aghast at the notion of so many people choosing to miss so much of the detail and nuance that he builds into his performances?

Nah. Following the story is the most important thing, he told me recently when I asked him about it for this article, and if you’re getting knocked out of the story because you can’t follow the dialogue, then by all means turn on the subtitles. It’s fine. You have his permission.

I grew alarmed by the way subtitles seem to be creeping into our homes—an addictive substance like TikTok, which, by the way, deserves some blame for this shift, conditioning multiple generations to watch content with text plastered all over it. A war is raging in living rooms and bedrooms across America—a Great Subtitle War. On one side: the bombastic visual effects of post–Game of Thrones mega-budget TV. On the other side: hearing the words. On one side: people like me, the purists and refuseniks. On the other: our friends and spouses, people who just want to follow the plot. The widespread use of subtitles felt, to me, like a lurch backward toward the silent-film era. But I didn’t want to be too doctrinaire. Maybe some exceptions could be made.

Then one night a few weeks ago, I walked into the bedroom to find my wife watching Abbott Elementary with the subtitles on. I’d lost her too.

Just three years ago, the South Korean filmmaker Bong Joon Ho took the stage at the Golden Globes to accept the Best Foreign Language Film award for Parasite and made a heartfelt speech urging us all to watch more stuff with subtitles.

“Once you overcome the one-inch-tall barrier of subtitles,” he said, “you will be introduced to so many more amazing films.”

A month later, Parasite won the Oscar for Best Picture. About a month after that, the World Health Organization declared the novel coronavirus a pandemic, and much of the world went into quarantine. Cooped-up Americans were primed for a little experimental viewing; demand for Asian-language content spiked in the U.S. in the months after Parasite became available for streaming, according to data from Parrot Analytics, an entertainment-analytics company. It spiked again, a Parrot spokesman told me, after the September 2021 premiere of Squid Game, another South Korean export, which probably did more than any other single work of culture to bring down Bong’s one-inch wall. The words at the bottom of the screen don’t appear to have distracted anyone from all that arterial spray.

Now subtitles are everywhere, and in fact, they may already be our default mode. According to Preston Smalley, Roku’s vice president of viewer product, a 2022 internal survey revealed that 58 percent of subscribers use subtitles: 36 percent of them switch the subtitles on because of a diagnosed hearing impairment; 32 percent do it out of force of habit. (The remaining third cite a stew of situational issues, such as kids sleeping nearby, other people in the room, and poor audio quality.) Many of the people using subtitles, in other words, do not need them.

And as it turns out, it is a Millennial thing, or at least Millennials are leading the way. A full two-thirds of Roku’s Millennial customers use subtitles, more than any other generation, including seniors, though Smalley attributes that in part to technical hurdles, which is a polite way of saying that older users don’t always know how to turn them on.

Watching a Korean-language film such as Parasite with subtitles, of course, isn’t the same as leaving them on for Abbott Elementary. One experience requires them for most English speakers; the other super does not. But they’re also exactly the same thing. You’re still reading words at the bottom of the screen; it’s the same eye movement, the same mental-conditioning process—so what’s the difference if the actual language being spoken is English or Korean or some distant alien tongue from the Marvel Cinematic Universe? Subtitles, in other words, are a door that swings both ways. They can usher you into a rich new cultural experience, only to flick you in the ear during the experience itself.

Once the subtitles are on the screen, my friend Ken said, you feel, subconsciously, that “there’s somebody else in the room. There’s a third person, and they’re telling you what’s being said—they’re being very quiet, they’re minding their own business, but they’re here.” And of course that affects the experience. Imagine, he said during our Zoom call, “if our conversation right now was being subtitled live.”

I get it; not everything is art. Most things we watch don’t require or deserve such reverence. You don’t need spotless mise-en-scène to get the full experience of Is It Cake? But what if it’s The Sopranos? My wife and I recently watched Dead Ringers, which was so visually clever and twisted and sumptuous that I can only imagine how great it would’ve been without subtitles. If you ask me, there’s no defense for anything that requires us to take our eyes off Rachel Weisz, let alone two Rachel Weiszes.

Set aside the qualitative debate over whether this cultural shift is better or worse—let’s at least agree that it’s different. Ken says he appreciates the way subtitles help him and his wife follow along, but he also now finds himself doing something he calls “lazy listening”: “You begin to rely on the subtitles,” he said, “and then without them, you’re suddenly like, I never had an issue hearing things before. How come I do now?

The writer-director Hannah Fidell—whose Hulu series, A Teacher, starring Kate Mara as a predatory high-school teacher, was based on her 2013 indie film on the same subject—is likewise worried that subtitles are changing viewers’ habits. I assumed that a filmmaker would feel most violated on behalf of their camera shots, but Fidell was, if anything, more aghast at the trouncing of her sound mix. Subtitles make you literal-minded, she says, and oftentimes, the scripted words transcribed on the screen say one thing while the actor’s performance of them says another. I asked Fidell how she would feel if a friend turned on the subtitles while watching the pilot episode of A Teacher.

She went quiet for a moment. “I would be so pissed,” she said.

Game of Thrones, which premiered in 2011 and ended in 2019, shifted the home-viewing paradigm in any number of ways, but it was also the tipping point in this struggle between the audio and the visual. Game of Thrones, Andrew Miano, a longtime film producer, told me, is when we all started turning up the volume to hear the dialogue. Miano made The Farewell, starring Awkwafina, about three-quarters of which is in Mandarin with English subtitles. His issue isn’t with subtitles; it’s with the swelling ranks of always-on-ers, a group that now includes his wife. “It drives me crazy,” he said.

House of the Dragon—last summer’s Game of Thrones prequel—is what broke me. How was anyone supposed to follow that show without subtitles? House Targaryen. House Velaryon. Rhaenys. Rhaena. Rhaenyra. The Sea Snake. The Crabfeeder. Now three years have passed. Now 10 years have passed. Now a different actor is playing Rhaenyra, but the same actor is playing Rhaenys. Dragons shrieking and throwing flames over all of it. What the hell is going on here?

I still didn’t turn subtitles on, though, until halfway through the season, when the cast reshuffled after the second time jump and my options were either (a) turn on the subtitles or (b) pause and rewatch every scene multiple times, requiring an average of three viewing hours per episode. Besides, all of the shots on that show are too dark anyway.

The good news, according to Onnalee Blank, the four-time Emmy Award–winning sound mixer on Game of Thrones, is that it’s not your fault that you can’t hear well enough to follow this stuff. It’s not your TV’s fault either, or your speakers—your sound system might be lousy, but that’s not why you can’t hear the dialogue. “It has everything to do with the streaming services and how they’re choosing to air these shows,” Blank told me.

Specifically, it has everything to do with LKFS, which stands for “Loudness, K-weighted, relative to full scale” and which, for the sake of simplicity, is a unit for measuring loudness. Traditionally it’s been anchored to the dialogue. For years, going back to the golden age of broadcast television and into the pay-cable era, audio engineers had to deliver sound levels within an industry-standard LKFS, or their work would get kicked back to them. That all changed when streaming companies seized control of the industry, a period of time that rather neatly matches Game of Thrones’ run on HBO. According to Blank, Game of Thrones sounded fantastic for years, and she’s got the Emmys to prove it. Then, in 2018, just prior to the show’s final season, AT&T bought HBO’s parent company and overlaid its own uniform loudness spec, which was flatter and simpler to scale across a large library of content. But it was also, crucially, un-anchored to the dialogue.

“So instead of this algorithm analyzing the loudness of the dialogue coming out of people’s mouths,” Blank explained to me, “it analyzes the whole show as loudness. So if you have a loud music cue, that’s gonna be your loud point. And then, when the dialogue comes, you can’t hear it.” Blank remembers noticing the difference from the moment AT&T took the reins at Time Warner; overnight, she said, HBO’s sound went from best-in-class to worst. During the last season of Game of Thrones, she said, “we had to beg [AT&T] to keep our old spec every single time we delivered an episode.” (Because AT&T spun off HBO’s parent company in 2022, a spokesperson for AT&T said they weren’t able to comment on the matter.)
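
To make Blank’s distinction concrete, here is a minimal, illustrative sketch (in Python, with made-up numbers) of the difference between a dialogue-anchored loudness spec and a whole-program one. It is not any streamer’s actual pipeline: real LKFS measurement uses K-weighted, gated power averaging, and the target value and segment levels below are assumptions chosen only to show how a loud music cue can drag normalized dialogue below the target.

```python
# Hypothetical segment loudness values, in LKFS-like dB units.
# A plain arithmetic mean stands in for the real K-weighted, gated measurement;
# the point is only the anchoring difference, not the measurement itself.
TARGET = -24.0  # assumed delivery target

segments = [
    {"kind": "dialogue", "loudness": -27.0},
    {"kind": "music",    "loudness": -16.0},  # loud music cue
    {"kind": "dialogue", "loudness": -28.0},
    {"kind": "effects",  "loudness": -20.0},
]

def average(values):
    values = list(values)
    return sum(values) / len(values)

# Dialogue-anchored spec: measure only the dialogue, then apply one gain to the
# whole mix so the dialogue lands on the target, right where the mixers put it.
dialogue_level = average(s["loudness"] for s in segments if s["kind"] == "dialogue")
gain_dialogue_anchored = TARGET - dialogue_level

# Program-anchored spec: measure everything at once. The loud music cue pulls
# the measured level up, less gain gets applied, and the dialogue ends up
# quieter than the target -- "when the dialogue comes, you can't hear it."
program_level = average(s["loudness"] for s in segments)
gain_program_anchored = TARGET - program_level

print(f"dialogue after dialogue-anchored normalization: {dialogue_level + gain_dialogue_anchored:.2f}")  # -24.00
print(f"dialogue after program-anchored normalization:  {dialogue_level + gain_program_anchored:.2f}")   # -28.75
```

With these hypothetical numbers, dialogue lands exactly on the -24 target under the dialogue-anchored spec but almost 5 dB below it under the program-anchored one, and that gap is what viewers end up compensating for with the volume button or the subtitles.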

Netflix still uses a dialogue-anchor spec, she said, which is why shows on Netflix sound (to her) noticeably crisper and clearer: “If you watch a Netflix show now and then immediately you turn on an HBO show, you’re gonna have to raise your volume.” Amazon Prime Video’s spec, meanwhile, “is pretty gnarly.” But what really galls her about Amazon is its new “dialogue boost” function, which viewers can select to “increase the volume of dialogue relative to background music and effects.” In other words, she said, it purports to fix a problem of Amazon’s own creation. Instead, she suggested, “why don’t you just air it the way we mixed it?”

The silver lining of tech companies trying to fix problems of their own creation is that, every so often, they stumble onto an ingenious solution. Roku offers a replay feature in which the subtitles show up when you press the 20-second rewind button. It saved Miano’s marriage, and it might save yours. Roku also offers an “automatic speech clarity” feature, though Roku is more akin to an operating system for your television than a streaming platform—it’s just the middle man, sonically speaking—so the option is more of a bandage than a cure. Home-theater providers such as Sonos, meanwhile, offer their own dialogue-boost capabilities, in case you want to pay a second tech company to fix what the first one broke.

Or you can just turn on the subtitles. In any version of our streaming future, subtitles will be the simplest, most cost-effective solution, so maybe what the snobs among us should hope for is that the creators themselves will seize back some creative license over exactly how those words look on the screen. Brett Pawlak, the director of photography for Disney+’s new television series American Born Chinese, told me that although he doesn’t compose shots to leave room for words at the bottom of the screen, the rising ubiquity of subtitles reminds him of the creative hurdle presented about a decade ago, when some directors started incorporating characters’ text messages. The visual challenge, in other words, requires a visual solution.

The appearance of the subtitles on your screen also varies widely by platform—the streamers control that dial too—and some of them put more effort into the task than others. But their default typefaces are all clunky and robotic and bear no connection to the content. If they can beam Severance into our homes and invent dialogue-boost features, surely they can figure out how to let us pick our own typeface, or shrink the font size, or move the words to a different spot on the screen. You know who’d really benefit from that? Deaf people! Non-English speakers. Anyone who finds that subtitles make them feel included in the culture, rather than shut out of it. And maybe the ubiquity of words at the bottom of the screen will inspire filmmakers and showrunners to craft their own subtitles as a viewing option—you can watch this Jordan Peele art-house horror series with Hulu’s charmless sans serif or with Peele’s signature typeface.

Or, to echo Blank, you could just air it the way she mixed it. Her home still frowns on unnecessary subtitles, but that might change as streaming platforms continue to wreak havoc with her sound mixes. “The world is getting louder,” she said. And if subtitles offer us a way to turn down the volume a little bit, maybe that’s not so terrible. She knows a losing battle when she hears one.

In Defense of Humanity

The Atlantic

www.theatlantic.com › magazine › archive › 2023 › 07 › generative-ai-human-culture-philosophy › 674165

On July 13, 1833, during a visit to the Cabinet of Natural History at the Jardin des Plantes, in Paris, Ralph Waldo Emerson had an epiphany. Peering at the museum’s specimens—butterflies, hunks of amber and marble, carved seashells—he felt overwhelmed by the interconnectedness of nature, and humankind’s place within it.

The experience inspired him to write “The Uses of Natural History,” and to articulate a philosophy that put naturalism at the center of intellectual life in a technologically chaotic age—guiding him, along with the collective of writers and radical thinkers known as transcendentalists, to a new spiritual belief system. Through empirical observation of the natural world, Emerson believed, anyone could become “a definer and map-maker of the latitudes and longitudes of our condition”—finding agency, individuality, and wonder in a mechanized age.

America was crackling with invention in those years, and everything seemed to be speeding up as a result. Factories and sugar mills popped up like dandelions, steamships raced to and from American ports, locomotives tore across the land, the telegraph connected people as never before, and the first photograph was taken, forever altering humanity’s view of itself. The national mood was a mix of exuberance, anxiety, and dread.

[From the June 2018 issue: Henry A. Kissinger on AI and how the Enlightenment ends]

The flash of vision Emerson experienced in Paris was not a rejection of change but a way of reimagining human potential as the world seemed to spin off its axis. Emerson’s reaction to the technological renaissance of the 19th century is worth revisiting as we contemplate the great technological revolution of our own century: the rise of artificial superintelligence.

Even before its recent leaps, artificial intelligence has for years roiled the informational seas in which we swim. Early disturbances arose from the ranking algorithms that have come to define the modern web—that is, the opaque code that tells Google which results to show you, and that organizes and personalizes your feeds on social platforms like Facebook, Instagram, and TikTok by slurping up data about you as a way to assess what to spit back out.

Now imagine this same internet infrastructure but with programs that communicate with a veneer of authority on any subject, with the ability to generate sophisticated, original text, audio, and video, and the power to mimic individuals in a manner so convincing that people will not know what is real. These self-teaching AI models are being designed to become better at what they do with every single interaction. But they also sometimes hallucinate, and manipulate, and fabricate. And you cannot predict what they’ll do or why they’ll do it. If Google’s search engine is the modern-day Library of Alexandria, the new AI will be a mercurial prophet.

[From the May 2018 issue: The era of fake video begins]

Generative artificial intelligence is advancing with unbelievable speed, and will be applied across nearly every discipline and industry. Tech giants—including Alphabet (which owns Google), Amazon, Meta (which owns Facebook), and Microsoft—are locked in a race to weave AI into existing products, such as maps, email, social platforms, and photo software.

The technocultural norms and habits that have seized us during the triple revolution of the internet, smartphones, and the social web are themselves in need of a thorough correction. Too many people have allowed these technologies to simply wash over them. We would be wise to rectify the errors of the recent past, but also to anticipate—and proactively shape—what the far more radical technology now emerging will mean for our lives, and how it will come to remake our civilization.

Corporations that stand to profit off this new technology are already memorizing the platitudes necessary to wave away the critics. They’ll use sunny jargon like “human augmentation” and “human-centered artificial intelligence.” But these terms are as shallow as they are abstract. What’s coming stands to dwarf every technological creation in living memory: the internet, the personal computer, the atom bomb. It may well be the most consequential technology in all of human history.

People are notoriously terrible at predicting the future, and often slow to recognize a revolution—even when it is already under way. But the span of time between when new technology emerges and when standards and norms are hardened is often short. The Wild West, in other words, only lasts for so long. Eventually, the railroads standardize time; incandescent bulbs beat out arc lamps; the dream of the open web dies.

The window for effecting change in the realm of AI is still open. Yet many of those who have worked longest to establish guardrails for this new technology are despairing that the window is nearly closed.

Generative AI, just like search engines, telephones, and locomotives before it, will allow us to do things with levels of efficiency so profound, it will seem like magic. We may see whole categories of labor, and in some cases entire industries, wiped away with startling speed. The utopians among us will view this revolution as an opportunity to outsource busywork to machines for the higher purpose of human self-actualization. This new magic could indeed create more time to be spent on matters more deserving of our attention—deeper quests for knowledge, faster routes to scientific discovery, extra time for leisure and with loved ones. It may also lead to widespread unemployment and the loss of professional confidence as a more competent AI looks over our shoulder.

[Annie Lowrey: Before AI takes over, make plans to give everyone money]

Government officials, along with other well-intentioned leaders, are groping toward ethical principles for artificial intelligence—see, for example, the White House’s “Blueprint for an AI Bill of Rights.” (Despite the clunky title, the intention is for principles that will protect human rights, though the question of civil rights for machines will eventually arise.) These efforts are necessary but not enough to meet the moment.

We should know by now that neither the government’s understanding of new technologies nor self-regulation by tech behemoths can adequately keep pace with the speed of technological change or Silicon Valley’s capacity to seek profit and scale at the expense of societal and democratic health. What defines this next phase of human history must begin with the individual.

Just as the Industrial Revolution sparked transcendentalism in the U.S. and romanticism in Europe—both movements that challenged conformity and prioritized truth, nature, and individualism—today we need a cultural and philosophical revolution of our own. This new movement should prioritize humans above machines and reimagine human relationships with nature and with technology, while still advancing what this technology can do at its best. Artificial intelligence will, unquestionably, help us make miraculous, lifesaving discoveries. The danger lies in outsourcing our humanity to this technology without discipline, especially as it eclipses us in apperception. We need a human renaissance in the age of intelligent machines.

In the face of world-altering invention, with the power of today’s tech barons so concentrated, it can seem as though ordinary people have no hope of influencing the machines that will soon be cognitively superior to us all. But there is tremendous power in defining ideals, even if they ultimately remain out of reach. Considering all that is at stake, we have to at least try.

[From the June 2023 issue: Never give artificial intelligence the nuclear codes]

Transparency should be a core tenet in the new human exchange of ideas—people ought to disclose whenever an artificial intelligence is present or has been used in communication. This ground rule could prompt discipline in creating more-human (and human-only) spaces, as well as a less anonymous web. Any journalist can tell you that anonymity should be used only as a last resort and in rare scenarios for the public good. We would benefit from cultural norms that expect people to assert not just their opinions but their actual names too.

Now is the time, as well, to recommit to making deeper connections with other people. Live videochat can collapse time and distance, but such technologies are a poor substitute for face-to-face communication, especially in settings where creative collaboration or learning is paramount. The pandemic made this painfully clear. Relationships cannot and should not be sustained in the digital realm alone, especially as AI further erodes our understanding of what is real. Tapping a “Like” button is not friendship; it’s a data point. And a conversation with an artificial intelligence is one-sided—an illusion of connection.

Someday soon, a child may not have just one AI “friend,” but more AI friends than human ones. These companions will not only be built to surveil the humans who use them; they will be tied inexorably to commerce—meaning that they will be designed to encourage engagement and profit. Such incentives warp what relationships ought to be.

Writers of fiction—Fyodor Dostoyevsky, Rod Serling, José Saramago—have for generations warned of doppelgängers that might sap our humanity by stealing a person’s likeness. Our new world is a wormhole to that uncanny valley.

Whereas the first algorithmic revolution involved using people’s personal data to reorder the world for them, the next will involve our personal data being used not just to splinter our shared sense of reality, but to invent synthetic replicas. The profit-minded music-studio exec will thrill to the notion of an AI-generated voice with AI-generated songs, not attached to a human with intellectual-property rights. Artists, writers, and musicians should anticipate widespread impostor efforts and fight against them. So should all of us. One computer scientist recently told me she’s planning to create a secret code word that only she and her elderly parents know, so that if they ever hear her voice on the other end of the phone pleading for help or money, they’ll know whether it’s been generated by an AI trained on her publicly available lectures to sound exactly like her and scam them.

Today’s elementary-school children are already learning not to trust that anything they see or hear through a screen is real. But they deserve a modern technological and informational environment built on Enlightenment values: reason, human autonomy, and the respectful exchange of ideas. Not everything should be recorded or shared; there is individual freedom in embracing ephemerality. More human interactions should take place only between the people involved; privacy is key to preserving our humanity.

Finally, a more existential consideration requires our attention, and that is the degree to which the pursuit of knowledge orients us inward or outward. The artificial intelligence of the near future will supercharge our empirical abilities, but it may also dampen our curiosity. We are at risk of becoming so enamored of the synthetic worlds that we create—all data sets, duplicates, and feedback loops—that we cease to peer into the unknown with any degree of true wonder or originality.

We should trust human ingenuity and creative intuition, and resist overreliance on tools that dull the wisdom of our own aesthetics and intellect. Emerson once wrote that Isaac Newton “used the same wit to weigh the moon that he used to buckle his shoes.” Newton, I’ll point out, also used that wit to invent a reflecting telescope, the beginnings of a powerful technology that has allowed humankind to squint at the origins of the universe. But the spirit of Emerson’s idea remains crucial: Observing the world, taking it in using our senses, is an essential exercise on the path to knowledge. We can and should layer on technological tools that will aid us in this endeavor, but never at the expense of seeing, feeling, and ultimately knowing for ourselves.

A future in which overconfident machines seem to hold the answers to all of life’s cosmic questions is not only dangerously misguided, but takes away that which makes us human. In an age of anger, and snap reactions, and seemingly all-knowing AI, we should put more emphasis on contemplation as a way of being. We should embrace an unfinished state of thinking, the constant work of challenging our preconceived notions, seeking out those with whom we disagree, and sometimes still not knowing. We are mortal beings, driven to know more than we ever will or ever can.

The passage of time has the capacity to erase human knowledge: Whole languages disappear; explorers lose their feel for crossing the oceans by gazing at the stars. Technology continually reshapes our intellectual capacities. What remains is the fact that we are on this planet to seek knowledge, truth, and beauty—and that we only get so much time to do it.

As a small child in Concord, Massachusetts, I could see Emerson’s home from my bedroom window. Recently, I went back for a visit. Emerson’s house has always captured my imagination. He lived there for 47 years until his death, in 1882. Today, it is maintained by his descendants and a small staff dedicated to his legacy. The house is some 200 years old, and shows its age in creaks and stains. But it also possesses a quality that is extraordinarily rare for a structure of such historic importance: 141 years after his death, Emerson’s house still feels like his. His books are on the shelves. One of his hats hangs on a hook by the door. The original William Morris wallpaper is bright green in the carriage entryway. A rendering of Francesco Salviati’s The Three Fates, holding the thread of destiny, stands watch over the mantel in his study. This is the room in which Emerson wrote Nature. The table where he sat to write it is still there, next to the fireplace.

[From the October 1883 issue: Ralph Waldo Emerson’s ‘Historic Notes of Life and Letters in Massachusetts’]

Standing in Emerson’s study, I thought about how no technology is as good as going to the place, whatever the destination. No book, no photograph, no television broadcast, no tweet, no meme, no augmented reality, no hologram, no AI-generated blueprint or fever dream can replace what we as humans experience. This is why you make the trip, you cross the ocean, you watch the sunset, you hear the crickets, you notice the phase of the moon. It is why you touch the arm of the person beside you as you laugh. And it is why you stand in awe at the Jardin des Plantes, floored by the universe as it reveals its hidden code to you.

This article appears in the July/August 2023 print edition with the headline “In Defense of Humanity.”

The Perfect Escapist Sci-Fi Series

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 06 › the-perfect-escapist-sci-fi-series › 674289

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Good morning, and welcome back to The Daily’s Sunday culture edition, in which one Atlantic writer reveals what’s keeping them entertained.

Today’s special guest is Emma Sarappo, an associate editor on The Atlantic’s Books team. Emma is also a frequent contributor to our Books Briefing newsletter, having recently written about books for a changing planet and making sense of the divide between technology and humanity. Right now Emma is looking forward to a once-in-a-lifetime cross-country concert trip, scratching her brain with the Two Dots smartphone puzzle game, and gearing up for the 60th-anniversary special of Doctor Who.

First, here are three Sunday reads from The Atlantic:

The Succession plot point that explained the whole series
Fans’ expectations of Taylor Swift are chafing against reality.
The blue-strawberry problem

The Culture Survey: Emma Sarappo

The upcoming event I’m most looking forward to: I’m going to see Joni Mitchell, plus Brandi Carlile, play in Washington State next weekend. It’s a bit of a wild trip—I’m heading all the way to the West Coast from Washington, D.C., and only staying for three days—but my best friend and I figured this might be a once-in-our-lifetime opportunity, so we agreed we had to do it. [Related: The unknowable Joni Mitchell (from 2017)]

Something delightful introduced to me by a kid in my life: Last year, my teenage cousin got me to watch Heartstopper, Netflix’s adaptation of the webcomic and graphic-novel series by Alice Oseman, which is so delightful and fun. My cousin is Norwegian but apparently adores the books so much that she buys and reads them in English in order to get them sooner. [Related: Heartstopper and the era of feel-good, queer-teen romances]

Something I loved as a teenager and still love: Sometimes I feel like I carry my teenage self around in my front pocket; her tastes are still so influential to me today. She loved Doctor Who, and she was right—it’s still perfect sci-fi escapism—and we are so excited for the forthcoming Doctor Who special that’ll bring back the actors David Tennant and Catherine Tate, plus Yasmin Finney (whom I loved in Heartstopper)! Then we’re due for a series with Ncuti Gatwa (whom I loved in Sex Education). [Related: How Doctor Who survived 50 years (from 2013)]

The last museum or gallery show that I loved: I was at the Philadelphia Museum of Art the other week and made a point of spending time in the room that holds Cy Twombly’s Fifty Days at Iliam, a series of 10 paintings that evoke the Iliad and the Trojan War through gesture, color, and writing. They inspire really strong responses, because they’re so large and so surprising—at first glance, they appear scribbled or imprecise. If you stay long enough, you’ll hear some gasps, or laughs. I loved that experience.

A painting, sculpture, or other piece of visual art that I cherish: So many, but one of the first that genuinely changed my life as a young adult is Félix González-Torres’s “Untitled” (Portrait of Ross in L.A.), on view at the Art Institute of Chicago. I hear that teenagers are talking a lot about it on TikTok, which is sweet. When I was younger, we were all reblogging González-Torres’s work “Untitled” (Perfect Lovers) on Tumblr.

Something I recently rewatched, reread, or otherwise revisited: I started listening to the Smiths again after their bassist, Andy Rourke, died last month. They’re another formative teenage band for me—two generations deep, because I got the CDs from my dad, who also found them formative in his youth. Today, lead singer Morrissey’s racist rhetoric casts a pall over the band for me, but listening to the music, I understand entirely why I was so obsessed with it long before I’d ever read anything about the band. Rourke was a huge part of that. This video of the guitarist Johnny Marr inviting a kid onstage, basically daring him to play “This Charming Man,” a crucial Rourke song—and the kid suddenly, improbably, nailing the riff—is one of my favorite things on the internet.

A piece of journalism that recently changed my perspective on a topic: Katie Engelhart’s “The Mother Who Changed: A Story of Dementia” from The New York Times Magazine last month. There are no easy answers here, so it didn’t have me reverse any of my positions, but it opened my eyes to questions about autonomy and aging that I’d never considered.

A favorite story I’ve read in The Atlantic: Painful to pick just a few. Patricia Lockwood on To the Lighthouse was tailor-made for me. I just sent someone Dara Mathis’s story on the Black-liberation movement she grew up in. I read William Langewiesche’s story on Flight MH370 exactly once and haven’t stopped thinking about it, but I will never read it again (too frightening).

My favorite way of wasting time on my phone: Two Dots. It frees me from the social web and scratches my brain perfectly.

An online creator that I’m a fan of: My TikTok is basically all cooking and jokes, which is ideal. I especially love videos from Bettina Makalintal (@bettinamak) and Chuck Cruz (@chuckischarles).

The last debate I had about culture: Less a debate than a round of cooperative overlapping about why Taylor Swift refuses to make her best songs the singles from her albums (justice for “Cruel Summer”).

A good recommendation I recently received: I finally gave in to my best friend’s multiyear urging that I watch The Americans, and, after finishing the series, I must demand that you all watch The Americans. [Related: The Americans is the realest, scariest spy show on TV. (From 2014)]

A poem, or line of poetry, that I return to: I just saw my sister graduate from college with an engineering degree; she was telling me about a humanities class on German culture and literature that she had to take. Her class had read this poem about some old statue, she said, and the abrupt turn at the end knocked them all out—they laughed, and they made memes, because the suddenness of the speaker’s realization felt so dramatic. She couldn’t remember it verbatim, so I finished the line automatically: “You must change your life,” from Rainer Maria Rilke’s “Archaic Torso of Apollo.” I know I’m old now, because that kind of lightning-flash epiphany inspired by art was so strange to a class of undergraduates, but so familiar—and so moving—to me. [Related: ‘To work is to live without dying.’ (From 1996)]

Read past editions of the Culture Survey with Adam Harris, Saahil Desai, Yasmin Tayag, Damon Beres, Julie Beck, Faith Hill, and Derek Thompson.

The Week Ahead

The Idol, the buzzy (and contentious) new series from the Euphoria creator Sam Levinson, Abel “The Weeknd” Tesfaye, and Reza Fahim, starring Tesfaye and Lily-Rose Depp (premieres on HBO and Max tonight at 9 p.m. ET)

Countries of Origin, the debut novel by Javier Fuentes, which tells the story of a blossoming romance between two young men from very different worlds (on sale Tuesday)

Transformers: Rise of the Beasts, a reboot of the live-action film franchise based on the popular Hasbro toys and animated series, starring the In the Heights actor Anthony Ramos (in theaters Friday)

Essay

Illustration by The Atlantic. Sources: Matt Squire / Lookout Point / AMC.

The Most Compelling Female Character on Television

By Sophie Gilbert

The last time we saw Happy Valley’s Catherine Cawood, she was trying—and quite magnificently failing—to capture one of her police-force colleagues, the nebbishy John Wadsworth, who’d finally been implicated in the murder of his lover. The pursuit is a bleak comedy of errors: Directed by her superiors not to pursue John down train tracks, Catherine mutters “bollocks” and follows him anyway. The pair end up on a bridge in relentless rain. Catherine, who says that she’s never trained in negotiation, asks John—who’s successfully talked down 17 people from various ledges—what to say to compel him not to jump. She has to keep him talking, John says. “You’ve got to be assertive. Reassuring. Empathetic and kind. And you’ve got to listen.” Catherine tells John to take his time, that she’ll be there. His face discernibly changes. “I love my kids,” he tells her; he propels himself backward.

Read the full article.

More in Culture

The filmmaker who knows what’s wrong with your relationships
The indignity of grocery shopping
Usher knows what it means to burn.
Cynthia Ozick on the link between beauty and purity
Short story: “Late-Night-Radio Talk-Show Host Tells All”
Barry finally gave up its delusions.
Seven tips from Susan Sontag for independent thinking
The trees don’t care about us.
The key to America’s victory in the Second World War

Catch Up on The Atlantic

The aspects of manifestation we shouldn’t discount
Lordy, there are tapes.
Semi-retirees know the key to work-life balance.

Photo Album

Hugo Hu / Getty

Browse snapshots of Manhattanhenge in New York City, dune climbing in China, and more in our editor’s selection of the week’s best photos.

Explore all of our newsletters.