Tennis Explains Everything

The Atlantic

www.theatlantic.com/culture/archive/2024/05/challengers-tennis-metaphors/678444

Tennis is an elegant and simple sport. Players stand on opposite sides of a rectangle, divided by a net that can’t be crossed. The gameplay is full of invisible geometry: Viewers might trace parabolas, angles, and lines depending on how the players move and where they hit the ball. It’s an ideal representation of conflict, a perfect stage for pitting one competitor against another, so it’s no wonder that the game comes to stand in for all sorts of different things off the court. Google tennis metaphor and you’ll learn how marriage is like the call and response of a rally; how business is like trying to find the best angle on your opponent; how in life it’s sometimes important to “come to the net.”

Naturally, the protagonists of Luca Guadagnino’s film Challengers, whose entire existence revolves around tennis, also make sense of themselves through the rules of the game. To hear them speak to one another is to experience their monomania: Everything they really mean is hidden beneath layers of tennis puns and analogies, and the lines between life and the game become as imperceptible as those on a well-used clay court. If this is a movie about love or desire or anything else, it’s only by way of tennis.

The film’s story unfolds during the final of the fictional Phil’s Tire Town Challenger tennis tournament, held in New Rochelle, New York. Via flashbacks interspersed throughout the match, we learn about the rivalry between the prim champion Art Donaldson (Mike Faist) and the scruffy, down-on-his-luck Patrick Zweig (Josh O’Connor)—as well as their relationships with Tashi Duncan (Zendaya), a once-promising player whose career fell apart due to injury. Art and Tashi are now married, but the film slowly reveals how all of these relationships evolved. We see how they all met at a sponsor party during the U.S. Open Junior tournament, where Tashi promised her phone number to the winner of a match between the two boys, who at the time were best friends, declaring her desire to watch some “good fucking tennis.” We see how Patrick and Tashi were a short-lived couple and had an affair long after they broke up, and how Art’s irrepressible flirtation with Tashi led to a career-defining romantic and coaching partnership between the two of them. As we realize how much of their lives are tied up in the Phil’s Tire Town final, every glance, serve, and motion becomes fraught with meaning.

The narrative progresses in a way not unlike that of John McPhee’s 1969 book, Levels of the Game, which recounts a single match played between two American players, Arthur Ashe and Clark Graebner, in the semifinals of the 1968 U.S. Open. Between McPhee’s descriptions of various points played during the match, he travels back to key moments in each competitor’s life, narrating the personal and social conditions that shaped their respective playing styles and dispositions on the court—and how the two rivals see each other.

For McPhee, “a person’s tennis game begins with his nature and background and comes out through his motor mechanisms into shot patterns and characteristics of play.” Graebner sees Ashe’s short strokes and risk taking as an extension of his “loose” lifestyle, equating his confidence on the court with the rising social position of Black Americans. To Ashe, Graebner’s cautious and predictable play style is indicative of his traditional values and conservative, family-oriented life: He calls it “Republican tennis.” Although in some ways it was just another meeting between two longtime rivals, the match comes to stand in for competing cultural currents in America, the civil-rights struggles of the ’50s and ’60s looming in the background.

A few years later, another match took on post-1960s gender politics in a famously theatrical showdown. The “Battle of the Sexes” match in 1973, between Billie Jean King and then-retired Bobby Riggs, has since been mythologized as a turning point for women’s sports. If the social allegory of the Ashe-Graebner match was subtextual, the one in this spectacle—which ended in a decisive victory for King over the cartoonishly chauvinistic Riggs—was glaringly explicit. At a time when women’s liberation was becoming a force that threw all sorts of conventions into question, and people were sharply divided over the movement’s gains, seeing the debate represented by a game of tennis surely had a comforting appeal. For those with more regressive beliefs, rooting for Bobby was certainly easier than really articulating a justification for maintaining massive pay disparities between men and women, both within and outside of professional tennis.

[Read: Tennis temperament]

In Challengers, the topic of tennis plays a similar orienting role for three players whose “only skill in life is hitting a ball with a racket,” according to Tashi. Talking with Patrick and Art after she meets them, Tashi describes tennis as a “relationship.” On the court, she understands her opponent—and the crowd understands them both, watching them almost fall in love as they battle back and forth. For Tashi, who has nothing but tennis to talk about, the tennis metaphor works because seeing things as a game based on one-on-one competition, long-standing rivalries, and extended strategic play makes intuitive sense. Although pretty much everything else in her life might be complicated, tennis is not.

But this assured confidence doesn’t follow the players off the court. Within their love triangle, tension arises with the dawning recognition that in a one-on-one sport, there’s always another person who doesn’t have a place on the court. Save for the night they meet, when Tashi induces Art and Patrick to kiss each other for her entertainment, the three of them rarely engage with one another at the same time: Someone is always watching from the stands, whether literally or metaphorically. Tashi’s solution to Patrick and Art’s competing interests—giving her number to the winner of their match—doesn’t stop the loser from wanting to keep playing, of course. Life isn’t that simple.

Nor are the boundaries between sport and life so neatly defined. During Patrick and Tashi’s brief romance, a post-coital conversation seamlessly transitions into a discussion about Patrick’s poor performance as a pro, and eventually becomes a referendum on why their relationship doesn’t work. Confused, and trying to make sense of it all as their banter swiftly changes definitions, Patrick asks: “Are we still talking about tennis?” “We’re always talking about tennis,” Tashi replies. Frustrated, Patrick tersely retorts: “Can we not?”

What would it be for them to not talk about tennis? As the linguists George Lakoff and Mark Johnson argue in their 1980 book, Metaphors We Live By, “Our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature.” In other words, we’re always talking about things in terms of other things—even if it’s not always as obvious as it is in Challengers. Metaphors are more than just a poetic device; they’re fundamental to the way language is structured. Complex ideas almost always elude easy explanation, so we reach for metaphors, either consciously or not. When tennis represents these various concepts—love, gender, race—they become easier to discuss due to the sport’s inherent legibility. No matter what issue is at stake, or how grand it may be, it can always be reduced to an individual’s performance on the court.

And as a sport, tennis is versatile enough to be a playful and rich metaphor in Challengers. While Patrick is still dating Tashi, and Art is transparently trying to steal his best friend’s girl, Patrick playfully accuses Art of playing “percentage tennis”—a patient strategy of hitting low-risk shots and waiting for your opponent to mess up. It’s something unique to the game, as it wouldn’t really make sense in the context of other individual sports like boxing, track, or bowling. As we learn, it’s also not a good strategy for love—because although Art does make his move once Patrick inevitably screws up, his unflagging commitment isn’t enough to make Tashi genuinely love him.

On the night before the Phil’s Tire Town final, Art asks for Tashi’s permission to retire once the season is over. Art knows that this would be the end of their professional relationship—he would no longer be able to play dutiful pupil to Tashi. But it might also be the end of whatever spark animated their love in the first place, as you can’t play “good fucking tennis” in retirement. Tashi says she will leave Art if he doesn’t beat Patrick in the final. Tired of playing, but unable to escape the game, Art curls up in his wife’s lap and cries.

The next day, as the final nears its conclusion, tensions run high. Art has just discovered the truth about Patrick and Tashi’s affair, and the match goes into a tiebreaker to decide the final set. After an intense rally, Art jumps for a smash and falls over the net, landing in Patrick’s arms. As she watches her two lovers embrace, Tashi stands up and screams “Come on!” with a passion not seen since early in her career. It doesn’t matter who wins. Lost in a moment of catharsis, they’re finally not talking about tennis anymore.

The 248th Anniversary of America’s Jewish Golden Age

The Atlantic

www.theatlantic.com/magazine/archive/2024/06/the-commons/678204

The End of the Golden Age

Anti-Semitism on the right and the left threatens to end an unprecedented period of safety and prosperity for Jewish Americans—and demolish the liberal order they helped establish, Franklin Foer wrote in the April 2024 issue.

Franklin Foer’s article on the end of the Golden Age for American Jews makes an excellent and painful connection between the rise of anti-Semitism and the decline of democratic institutions throughout history. I was a child in Communist Romania in 1973 at the outbreak of the Yom Kippur War. Some of my teachers made my life miserable in school simply because I was Jewish. My parents had to bribe them with American cigarettes to stop them from tormenting me. Three years later, my family and I defected to the United States. The U.S. was known around the world for its democratic institutions, and we wanted to get away from a country where anti-Semitism ran rampant.

No one born here can imagine what it was like to be free, to be Jewish and dare to admit it. But that was America in the 1970s and ’80s. Today’s America frightens me: I’ve lived in an authoritarian state before; I understand viscerally what’s at stake in this year’s election. For the first time in 48 years, I think twice before telling people I’m Jewish.

Monica Friedlander
Cambria, Calif.

I am a 96-year-old Holocaust survivor. I was born in Berlin in 1928 and observed the rise of anti-Semitism in Germany. There is a world of difference between those days and the United States today. In Germany, anti-Semitism was sanctioned, even encouraged, by the authorities. Police officers stood by laughing when boys beat us on our way to school. The government passed laws forbidding us from owning radios, newspapers, telephones, even pets. The world knows how that ended: I was liberated from Bergen-Belsen on April 15, 1945. I think Franklin Foer’s article is a bit over the top.

Walter L. Lachman
Laguna Niguel, Calif.

Although an interesting review of 20th-century Jewish entertainers and intellectuals, Franklin Foer’s assessment ignores the street reality.

I was born and raised during the Franklin D. Roosevelt years. Growing up, I was given a bloody nose by other kids more than once on my way home from school. They shouted anti-Semitic slurs and attacked me for “killing their God.” When I served in the military, my roommate asked whether I had horns, and if it “had hurt when they took them off.” When I applied for a job at a prestigious law firm, I was told, “We do not hire your kind.”

I went on to enjoy a successful career. But the underlying prejudice has always been present. The fact that we Jews have been entertaining and creative does nothing to eliminate the basic prejudice against us as “the other.”

Benjamin Levine
Roseland, N.J.

The night before I read Franklin Foer’s article, a stranger tore my mezuzah off my doorframe. I was upset—but so was my non-Jewish roommate. In that, he was part of a broader American tradition: At the founding of our country, George Washington promised the Jews of Rhode Island, “To bigotry no sanction, to persecution no assistance.”

The Jewish American Golden Age predates the 20th century, and has outlasted it. Not only has America been the best place in the diaspora to be a Jew, but the scale of Jewish participation and inclusion is larger than many realize. The highest-ranking American armor officer to die in combat was the legendary Maurice Rose—a Jewish major general who died fighting the Nazis in Germany. Foer quotes Thomas Friedman saying that the Six-Day War made American Jews realize they could be tank commanders—but Jews have been tank commanders as long as America has had tanks.

In Columbus, Georgia, where I live, shortly after the October 7 attacks, the mayor and city-council members attended my synagogue. People from all over the country reached out to express their sympathy and support. A friend stationed in Syria checked in after Iran launched missiles toward Israel, concerned about my Israeli family and how I was dealing with American anti-Semitism. America’s continuing warm welcome isn’t just anecdotal: The Pew Research Center recently found that Jews are viewed more positively than any other U.S. religious group.

Anti-Semitism may be on the rise, but it is and remains un-American. My great-great-grandfather, a Jewish refugee, arrived in New York on the Fourth of July. According to family lore, he saw the fireworks and thought they were for him. In a way, they were. This July, I look forward to celebrating the Golden Age’s 248th anniversary.

Jacob Foster
Columbus, Ga.

I was disappointed reading “The End of the Golden Age.” I think the Golden Age is now, as so many American Jews rise up to say “Not in our name.” We are recognizing the difference between anti-Semitism and anti-Zionism. It’s time for everyone to recognize it too. Criticism of Israel’s actions in Gaza is not anti-Semitism. American Jews and Israeli Jews will be safe when we can recognize the resilience and survival of both Palestinians and Jews and see how our struggles are interconnected.

R. Toran Ailisheva
Oakland, Calif.

Franklin Foer interprets a survey—“nearly one in five non-Jewish students said they ‘wouldn’t want to be friends with someone who supports the existence of Israel as a Jewish state’ ”—to mean that they were saying they wouldn’t be friends with most Jews. I would challenge this interpretation.

As a Columbia graduate, and as someone who can actually read the Yiddish on The Atlantic’s cover, I do not question the Zionist dream of a haven for Jews. But I question the need for a predominantly religious state, which I fear will inevitably lead to a theocracy, intolerant even of Jews deemed insufficiently Orthodox. Israel is headed in that direction.

Elliott B. Urdang
Providence, R.I.

We were surprised and dismayed that The Atlantic would publish Franklin Foer’s article about the rise of anti-Semitism without any accompanying articles discussing the concurrent rise in anti-Palestinian racism. Students who protest the brutal war crimes committed in Gaza or advocate for the freedom and dignity of the Palestinian people are being silenced and persecuted. We hope The Atlantic will publish stories that highlight efforts seeking peace and justice for all. Right now, we need solutions. We need voices supportive of our shared humanity, not inflammatory rhetoric that will lead to further polarization and alienation.

Samar Salman
Ann Arbor, Mich.

Christina Kappaz
Evanston, Ill.

Franklin Foer replies:

A writer’s deeply ingrained instinct is to want their stories to prove prophetic. In this instance, I desperately hope that I will be proved wrong. Sadly, in the aftermath of publishing this article, I have heard too many stories like Jacob Foster’s, of mezuzahs ripped from doors in the night. One of the most ubiquitous critiques of my story, echoed in R. Toran Ailisheva’s letter, is that my argument equates anti-Zionism with anti-Semitism. Many mainstream Jewish groups take that stance, but it is not my contention. I explicitly stated that there are strains of anti-Zionism that paint a vision of life in a binational state, where Palestinians and Jews peacefully coexist. That vision strikes me as hopelessly quixotic, but it isn’t anti-Semitic. Unfortunately, criticisms of Zionism are rarely so idealistic. They are usually cast in ugly terms, depicting a dangerous Jewish cabal guilty of dual loyalties, betraying the hallmarks of classical anti-Semitism.

Behind the Cover

In this month’s cover story, “Democracy Is Losing the Propaganda War,” Anne Applebaum examines how autocrats in China, Russia, and other places have sought to discredit liberal democracy—and how they’ve found unlikely allies on the American far right. Our cover draws inspiration from constructivist propaganda artists such as Alexander Rodchenko and Gustav Klutsis. The angled imagery and ascending lines evoke the style of a Soviet propaganda poster, updated with liberalism’s new rivals.

Paul Spella, Senior Art Director

This article appears in the June 2024 print edition with the headline “The Commons.”

American Beauty

The Atlantic

www.theatlantic.com/magazine/archive/2024/06/jennifer-emerling-see-america-first/678208

Photographs by Jennifer Emerling

By the time Jennifer Emerling was 12, she had been to 22 national parks. In an interview with her local newspaper that year, the California middle schooler said that in addition to collecting shirts and stuffed animals from the parks, “I take lots of pictures.” Asked what she would do when she’d exhausted the list of parks to visit, Emerling answered, “Go see them again.”

Top: Norris Geyser Basin, in Yellowstone National Park. Bottom: A bear-safety demonstration at Yellowstone. (Jennifer Emerling)

Emerling, now a professional photographer, never stopped taking pictures of national parks. For her series “See America First!” she retraced her family’s summer road trips. The resulting images convey a spirit of adventure and childlike wonder. Emerling’s compositions juxtapose the ordinariness of smartphones and sun hats with the majesty of the natural landscape. In one photo, visitors pause on the Old Faithful boardwalk, in Yellowstone, to capture the geyser’s eruption; in another, a woman holds a camera, but her gaze is fixed on the view across a crystal lake in Grand Teton National Park.

Top: Jenny Lake, in Grand Teton National Park. Bottom: Glacier Point, in Yosemite National Park. (Jennifer Emerling)

For all their whimsy and nostalgia, the photographs also invite serious reflection on the complexities of American tourism and its fantasies of an unspoiled West. The series takes its title from an early-20th-century marketing campaign to promote domestic travel among the wealthy via the railroads (the original, longer slogan was “See Europe if you will, but see America first”). “See America First!” can be read straight, as intended by the railroad boosters—or with an ironic twist, through the hindsight of history. To acknowledge the many contradictions of our national parks—areas that were touted as examples of “undisturbed creation” at the expense of Native American territorial sovereignty; places that cultivate an appreciation of nature even though they have long been commercialized—is not to negate their beauty or power.

This article appears in the June 2024 print edition with the headline “American Beauty.”

Higher Education Isn’t the Enemy

The Atlantic

www.theatlantic.com/ideas/archive/2024/05/higher-education-isnt-the-enemy/678434

I’ve spent more than five decades making difficult decisions in finance, government, business, and politics. Looking back, what most prepared me for the life I’ve led was the open exchange of ideas that I experienced in college and law school, supported by a society-wide understanding that universities and their faculty should be allowed to pursue areas of study as they see fit, without undue political or financial pressure. More broadly, throughout my career, I have seen firsthand the way America’s higher-education system strengthens our nation.

I cannot recall a time when the country’s colleges and universities, and the wide range of benefits they bring, have faced such numerous or serious threats. Protests over Gaza, Israel, Hamas, and anti-Semitism—and the attempt by certain elected officials and donors to capitalize on these protests and push a broader anti-higher-education agenda—have been the stuff of daily headlines for months. But the challenges facing colleges and universities have been building for years, revealed in conflicts over everything from climate change and curriculum to ideological diversity and academic governance.

But there is a threat that is being ignored, one that goes beyond any single issue or political controversy. Transfixed by images of colleges and universities in turmoil, we risk overlooking the foundational role that higher education plays in American life. With its underlying principles of free expression and academic freedom, the university system is one of the nation’s great strengths. It is not to be taken for granted. Undermining higher education would harm all Americans, weakening our country and making us less able to confront the many challenges we face.

The most recent upheavals on American campuses—and the threat posed to the underlying principles of higher education—have been well documented.

In some cases, individuals have been silenced or suppressed, not because they were threatening anyone’s physical safety or disrupting the functioning of the university environment, but rather, it seems, because of their opinions. The University of Southern California, for example, recently canceled its valedictorian’s speech at graduation. Although administrators cited safety concerns, many on campus, including the student herself, said they believe that the true cause lay in the speaker’s pro-Palestinian, anti-Israel views. One does not have to agree with the sentiments being expressed by a speaker in order to be troubled by the idea that they would be suppressed because of their content.

[Conor Friedersdorf: Columbia University’s impossible position]

In other cases, it is the demonstrators themselves who have sought to force their views on others—by breaking university policies regarding shared spaces, occupying buildings, and reportedly imposing ideological litmus tests on students seeking to enter public areas of campus. Some activists have advocated violence against those with whom they disagree. Even before the unrest of recent weeks, I had heard for many years from students and professors that they felt a chilling effect on campuses that rendered true discussion—including exchanges of ideas that might make others uncomfortable—very difficult.

Even as free speech faces serious threats from inside the campus, academic freedom is under assault from outside. To an unprecedented degree, donors have involved themselves in pressure campaigns, explicitly linking financial support to views expressed on campus and the scholarship undertaken by students and faculty. At the University of Pennsylvania, one such effort pressed donors to reduce their annual contribution to $1 to protest the university’s decision to host a Palestinian literary conference. At Yale, Beverly Gage, the head of the prestigious Brady-Johnson Program in Grand Strategy, felt compelled to resign after the program came under increasing pressure from its donors. Among other things, the donors objected to an op-ed by an instructor in the program headlined “How to Protect America From the Next Donald Trump.”

It’s not just donors. Elected officials and candidates for office are also attacking academic freedom. On a Zoom call whose content was subsequently leaked, a Republican member of Congress, Jim Banks of Indiana, characterized recent hearings with the presidents of Harvard, MIT, Penn, and Columbia—along with upcoming ones with the presidents of Rutgers, UCLA, and Northwestern—as part of a strategy to “defund these universities.” In a recent campaign video, former President Trump asserted that colleges are “turning our students into Communists and terrorists and sympathizers,” and promised to retaliate by taxing, fining, and suing private universities if he wins a second term. Senator J. D. Vance of Ohio, a close ally of Trump’s, has introduced a bill that would punish schools that don’t crack down on demonstrators. The bill would tax the endowment of such schools heavily and curb their access to federal funds.

The methods of these donors and politicians—politically motivated subpoenas and hearings, social-media pressure campaigns, campaign-trail threats—may not violate the First Amendment. They do, however, seek to produce a chilling effect on free speech. The goal of these efforts is to force universities to bow to outside pressure and curtail the range of ideas they allow—not because scholars at universities believe those ideas lack merit, but because the ideas are at odds with the political views of those bringing the pressure.

All of this needs to be seen against a foreboding backdrop. At a time when trust in many American institutions is at an all-time low, skepticism about higher education is on the rise. Earlier this year, a noteworthy essay by Douglas Belkin in The Wall Street Journal explored “Why Americans Have Lost Faith in the Value of College.” The New York Times wondered last fall whether college might be a “risky bet.” According to Gallup, confidence in higher education has fallen dramatically—from 57 percent in 2015 to 36 percent in 2023. The attacks on free expression and academic freedom on campus are both causes and symptoms of this declining confidence.

It is ironic that, at a moment when higher education faces unprecedented assaults, more Americans than ever have a college diploma. When I graduated from college, in 1960, only 8 percent of Americans held a four-year degree. Today, that number has increased almost fivefold, to 38 percent. Even so, I suspect that many Americans don’t realize just how exceptional the country’s university system actually is. Although the United States can claim less than 5 percent of the world’s population, it is home to 65 percent of the world’s 20 highest-ranked universities (and 28 percent of the world’s top-200 universities). Americans can get a quality education at thousands of academic institutions throughout the country.

Despite the skepticism in some quarters about whether a college degree is really worth it, the financial benefits of obtaining a degree remain clear. At 25, college graduates may earn only about 27 percent more than high-school-diploma holders. However, the college wage premium doubles over the course of their lifetime, jumping to 60 percent by the time they reach age 55. Looking solely at an individual’s financial prospects, the case for attending college remains strong.

[David Deming: The college backlash is going too far]

But the societal benefits we gain from higher education are far greater—and that’s the larger point. Colleges and universities don’t receive tax exemptions and public funds because of the help they give to specific individuals. We invest in higher education because there’s a broad public purpose.

Our colleges and universities are seen, rightly, as centers of learning, but they are also engines of economic growth. Higher graduation rates among our young people lead to a better-educated workforce for businesses and a larger tax base for the country as a whole. Institutions of higher education spur early-stage research of all kinds, create environments for commercializing that research, provide a base for start-up and technology hubs, and serve as a mentoring incubator for new generations of entrepreneurs and business leaders. In many communities, especially smaller towns and rural areas, campuses also create jobs that would be difficult to replace.

The importance of colleges and universities to the American economy will grow in the coming decades. As the list of industries that can be automated with AI becomes longer, the liberal-arts values and critical-thinking skills taught by colleges and universities will become only more valuable. Machine learning can aid in decision making. It cannot fully replace thoughtfulness and judgment.

Colleges and universities also help the United States maintain a geopolitical edge. We continue to attract the best and brightest from around the world to study here. Although many of these students stay and strengthen the country, many more return home, bringing with them a lifelong positive association with the United States. When I served as Treasury secretary, I found it extremely advantageous that so many of my foreign counterparts had spent their formative years in the U.S. That’s just as true today. In many instances, even the leadership class in unfriendly countries aspires to send its children to study here. In a multipolar world, this kind of soft-power advantage matters more than ever.

At home, higher education helps create the kind of citizenry that is central to a democracy’s ability to function and perhaps even to survive. This impact may be hard to quantify, but that doesn’t make it any less real.

It is not just lawmakers and executives who must make difficult decisions in the face of uncertainty. All of us—from those running civil-society groups that seek to influence policy to the voters who put elected leaders in office in the first place—are called upon to make hard choices as we live our civic lives. All of us are aware that the country is not in its best condition—this is hardly news. Imagine what that condition might be if we set out to undermine the very institutions that nurture rigorous and disciplined thinking and the free exchange of ideas.

Of course, there is much about higher education that needs fixing. Precisely because colleges and universities are so valuable to society, they should do more to engage with it. Bringing down costs can help ensure that talented, qualified young people are not denied higher education for financial reasons. Being clear about the principles and policies regarding the open expression of views—even as we recognize that applying them may require judgment calls, and that it is crucial to protect student safety and maintain an environment where learning and research can be conducted—would help blunt the criticism, not always made in good faith, that universities have an ideological agenda. Communicating more effectively with the public would help more Americans understand what is truly at stake.

But the fact that universities can do more does not change a basic fact: It is harmful to society to put constraints on open discussion or to attack universities for purposes of short-term political gain. Perhaps some of those trying to discourage the open exchange of ideas at universities believe that we can maintain their quality while attacking the culture of academic independence. I disagree. Unfettered discussion and freedom of thought and expression are the foundation upon which the greatness of our higher-education system is built. You cannot undermine the former without damaging the latter. To take one recent example: After Governor Ron DeSantis reshaped Florida’s New College along ideological lines, one-third of the faculty left within a year. This included scholars not only in fields such as gender studies, which many conservatives view with distaste, but in areas such as neuroscience as well.

We can have the world’s greatest higher-education system, with all of the benefits it brings to our country, or we can have colleges and universities in which the open exchange of views is undermined by pressure campaigns from many directions. We can’t have both.

The Big AI Risk Not Enough People Are Seeing

The Atlantic

www.theatlantic.com › ideas › archive › 2024 › 05 › ai-dating-algorithms-relationships › 678422

“Our focus with AI is to help create more healthy and equitable relationships.” Whitney Wolfe Herd, the founder and executive chair of the dating app Bumble, leans in toward her Bloomberg Live interviewer. “How can we actually teach you how to date?”

When her interviewer, apparently bemused, asks for an example of what this means, Herd launches into a mind-bending disquisition on the future of AI-abetted dating: “Okay, so for example, you could in the near future be talking to your AI dating concierge, and you could share your insecurities. ‘I just came out of a breakup. I have commitment issues.’ And it could help you train yourself into a better way of thinking about yourself. And then it could give you productive tips for communicating with other people. If you want to get really out there, there is a world where your dating concierge could go and date for you with other dating concierges.” When her audience lets out a peal of uneasy laughter, the CEO continues undeterred, heart-shape earrings bouncing with each sweep of her hands. “No, no, truly. And then you don’t have to talk to 600 people. It will then scan all of San Francisco for you and say, These are the three people you really ought to meet.”

What Herd provides here is much more than a darkly whimsical peek into a dystopian future of online dating. It’s a window into a future in which people require layer upon layer of algorithmic mediation between them in order to carry out the most basic of human interactions: those involving romance, sex, friendship, comfort, food. Implicit in Herd’s proclamation—that her app will “teach you how to date”—is the assumption that AI will soon understand proper human behavior in ways that human beings do not. Despite Herd’s insistence that such a service would empower us, what she’s actually describing is the replacement of human courtship rituals: Your digital proxy will go on innumerable dates for you, so you don’t have to practice anything so pesky as flirting and socializing.

[Read: America is sick of swiping]

Hypothetical AI dating concierges sound silly, and they are not exactly humanity’s greatest threat. But we might do well to think of the Bumble founder’s bubbly sales pitch as a canary in the coal mine, a harbinger of a world of algorithms that leave people struggling to be people without assistance. The new AI products coming to market are gate-crashing spheres of activity that were previously the sole province of human beings. Responding to these often disturbing developments requires a principled way of disentangling uses of AI that are legitimately beneficial and prosocial from those that threaten to atrophy our life skills and independence. And that requires us to have a clear idea of what makes human beings human in the first place.

In 1977, Ivan Illich, an Austrian-born philosopher, vagabond priest, and ruthless critic of metastatic bureaucracies, declared that we had entered “the age of Disabling Professions.” Modernity was characterized, in Illich’s view, by the standardization and professionalization of everyday life. Activities that were once understood to be within the competencies of laypeople—say, raising children or bandaging the wounded—were suddenly brought under the purview of technical experts who claimed to possess “secret knowledge,” bestowed by training and elite education, that was beyond the ken of the untutored masses. The licensed physician displaced the local healer. Child psychologists and their “cutting edge” research superseded parents and their instincts. Data-grubbing nutritionists replaced the culinary wisdom of grandmothers.

Illich’s singular insight was that the march of professional reason—the transformation of Western civilization into a technocratic enterprise ruled by what we now call “best practices”—promised to empower us but actually made us incompetent, dependent on certified experts to make decisions that were once the jurisdiction of the common man. “In any area where a human need can be imagined,” Illich wrote, “these new professions, dominant, authoritative, monopolistic, legalized—and, at the same time, debilitating and effectively disabling the individual—have become exclusive experts of the public good.” Modern professions inculcate the belief not only that their credentialed representatives can solve your problems for you, but also that you are incapable of solving said problems for yourself. In the case of some industries, like medicine, this is plainly a positive development. Other examples, like the ballooning wellness industry, are far more dubious.

If the entrenchment of specialists in science, schooling, child-rearing, and so on is among the pivotal developments of the 20th century, the rise of online dating is among the most significant of the 21st. But one key difference between this more recent advancement and those of yesteryear is that websites such as Tinder and Hinge are defined not by disabling professionals with fancy degrees, but by disabling algorithms. The white-coated expert has been replaced by digital services that cut out the human middleman and replace him with an (allegedly) even smarter machine, one that promises to know you better than you know yourself.

[Faith Hill: ‘Nostalgia for a dating experience they’ve never had’]

And it’s not just dating apps. Supposed innovations including machine-learning-enhanced meal-kit companies such as HelloFresh, Spotify recommendations, and ChatGPT suggest that we have entered the Age of Disabling Algorithms as tech companies simultaneously sell us on our existing anxieties and help nurture new ones. At the heart of it all is the kind of AI bait-and-switch peddled by the Bumble CEO. Algorithms are now tooled to help you develop basic life skills that decades ago might have been taken as a given: How to date. How to cook a meal. How to appreciate new music. How to write and reflect. Like an episode out of Black Mirror, the machines have arrived to teach us how to be human even as they strip us of our humanity. We have reason to be worried.

As conversations over the dangers of artificial intelligence have heated up over the past 18 months—largely thanks to the meteoric rise of large language models like ChatGPT—the focus of both the media and Silicon Valley has been on Skynet scenarios. The primary fear is that chat models may experience an “intelligence explosion” as they are scaled up, meaning that LLMs might proceed rapidly from artificial intelligence to artificial general intelligence to artificial superintelligence (ASI) that is both smarter and more powerful than even the smartest human beings. This is often called the “fast takeoff” scenario, and the concern is that if ASI slips out of humanity’s control—and how could it not—it might choose to wipe out our species, or even enslave us.

These AI “existential risk” debates—at least the ones being waged in public—have taken on a zero-sum quality: They are almost exclusively between those who believe that the aforementioned Terminator-style dangers are real, and others who believe that these are Hollywood-esque fantasies that distract the public from more sublunary AI-related problems, like algorithmic discrimination, autonomous weapons systems, or ChatGPT-facilitated cheating. But this is a false binary, one that excludes another possibility: Artificial intelligence could significantly diminish humanity, even if machines never ascend to superintelligence, by sapping the ability of human beings to do human things.

The epochal impact of online dating is there for all to see in a simple line graph from a 2019 study. It shows the explosive growth of online dating since 1995, the year that Match.com, the world’s first online-dating site, was launched. That year, only 2 percent of heterosexual couples reported meeting online. By 2017, that figure had jumped to 39 percent as other ways of meeting—through friends or family, at work or in church—declined precipitously.

Besides online dating, the only way of meeting that increased during this period was meeting at a bar or restaurant. However, the authors of the study noted that this ostensible increase was a mirage: The “apparent post-2010 rise in meeting through bars and restaurants for heterosexual couples is due entirely to couples who met online and subsequently had a first in-person meeting at a bar or restaurant or other establishment where people gather and socialize. If we exclude the couples who first met online from the bar/restaurant category, the bar/restaurant category was significantly declining after 1995 as a venue for heterosexual couples to meet.” In other words, online dating has become hegemonic. The wingman is out. Digital matchmaking is in.

But even those selling online-dating services seem to know there’s something unsettling about the idea that algorithms, rather than human beings, are now spearheading human romance. A bizarre Tinder ad from last fall featured the rapper Coi Leray playing the role of Cupid, perched on an ominously pink stage, tasked with finding a date for a young woman. A coterie of associates, dressed in Hunger Games chic, grilled a series of potential suitors as Cupid swiped left until the perfect match was found. These characters put human faces on an inhuman process.

Leif Weatherby, an expert on the history of AI development and the author of a forthcoming book on large language models, told me that ads like this are a neat distillation of Silicon Valley’s marketing playbook. “We’re seeing a general trend of selling AI as ‘empowering,’ a way to extend your ability to do something, whether that’s writing, making investments, or dating,” Weatherby explained. “But what really happens is that we become so reliant on algorithmic decisions that we lose oversight over our own thought processes and even social relationships. The rhetoric of AI empowerment is sheep’s clothing for Silicon Valley wolves who are deliberately nurturing the public’s dependence on their platforms.” Curtailing human independence, then, is not a bug, but a feature of the AI gold rush.

Of course, there is an extent to which this nurtured dependence isn’t unique to AI, but is an inevitable by-product of innovation. The broad uptake of any new technology generally atrophies the human skills for the processes that said technology makes more efficient or replaces outright. The advent of the vacuum was no doubt accompanied by a corresponding decline in the average American’s deftness with a broom. The difference between technologies of convenience, like the vacuum or the washing machine, and platforms like Tinder or ChatGPT is that the latter are concerned with atrophying competencies, like romantic socializing or thinking and reflection, that are fundamental to what it is to be a human being.

[Read: AI has lost its magic]

The response to our algorithmically remade world can’t simply be that algorithms are bad, sensu stricto. Such a stance isn’t just untenable at a practical level—algorithms aren’t going anywhere—but it also undermines unimpeachably positive use cases, such as the employment of AI in cancer diagnosis. Instead, we need to adopt a more sophisticated approach to artificial intelligence, one that allows us to distinguish between uses of AI that legitimately empower human beings and those—like hypothetical AI dating concierges—that wrest core human activities from human control. But making these distinctions requires us to re-embrace an old idea that tends to leave those of us on the left rather squeamish: human nature.

Both Western intellectuals and the progressive public tend to be hostile to the idea that there is a universal “human nature,” a phrase that now has right-wing echoes. Instead, those on the left prefer to emphasize the diversity, and equality, of varying human cultural traditions. But this discomfort with adopting a strong definition of human nature compromises our ability to draw red lines in a world where AI encroaches on human territory. If human nature doesn’t exist, and if there is no core set of fundamental human activities, desires, or traits, on what basis can we argue against the outsourcing of those once-human endeavors to machines? We can’t take a stand against the infiltration of algorithms into the human estate if we don’t have a well-developed sense of which activities make humans human, and which activities—like sweeping the floor or detecting pancreatic cancer—can be outsourced to nonhuman surrogates without diminishing our agency.

One potential way out of this impasse is offered by the so-called capability approach to human flourishing developed by the philosopher Martha Nussbaum and others. In rejection of the kind of knee-jerk cultural relativism that often prevails in progressive political thought, Nussbaum’s work insists that advocating for the poor or marginalized, at home or abroad, requires us to agree on universal “basic human capabilities” that citizens should be able to develop. Nussbaum includes among these basic capabilities “being able to imagine, to think, and to reason” and “to engage in various forms of familial and social interaction.” A good society, according to the capability approach, is one in which human beings are not just theoretically free to engage in these basic human endeavors, but are actually capable of doing so.

As AI is built into an ever-expanding roster of products and services, covering dating, essay writing, and music and recipe recommendations, we need to be able to make granular, rational decisions about which uses of artificial intelligence expand our basic human capabilities, and which cultivate incompetence and incapacity under the guise of empowerment. Disabling algorithms are disabling precisely because they leave us less capable of, and more anxious about, carrying out essential human behaviors.

Of course, some will object to the idea that there is any such thing as fundamental human activities. They may even argue that describing behaviors like dating and making friends, critical thinking, or cooking as central to the human condition is ableist or otherwise bigoted. After all, some people are asexual or introverted. Others with mental disabilities might not be adept at reflection, or at written or oral communication. Some folks simply do not want to cook, an activity that is historically gendered besides. But this objection relies on a sleight of hand. Identifying certain activities as fundamental to the human enterprise does not require you to believe that those who don’t or can’t engage in them are inhuman, just as embracing the idea that the human species is bipedal does not require you to believe that people born without legs lack full personhood. It only asks that you acknowledge that there are some endeavors that are vital aspects of the human condition, taken in the aggregate, and that a society where people broadly lack these capacities is not a good one.

Without some minimal agreement as to what those basic human capabilities are—what activities belong to the jurisdiction of our species, not to be usurped by machines—it becomes difficult to pin down why some uses of artificial intelligence delight and excite, while others leave many of us feeling queasy.

What makes many applications of artificial intelligence so disturbing is that they don’t expand our mind’s capacity to think, but outsource it. AI dating concierges would not enhance our ability to make romantic connections with other humans, but obviate it. In this case, technology diminishes us, and that diminishment may well become permanent if left unchecked. Over the long term, human beings in a world suffused with AI-enablers will likely prove less capable of engaging in fundamental human activities: analyzing ideas and communicating them, forging spontaneous connections with others, and the like. While this may not be the terrifying, robot-warring future imagined by the Terminator movies, it would represent another kind of existential catastrophe for humanity.

Whether or not the Bumble founder’s dream of artificial-intelligence-induced dalliances ever comes to fruition is an open question, but it is also somewhat beside the point. What should give us real pause is the understanding of AI, now ubiquitous in Big Tech, that underlies her dystopian prognostications. Silicon Valley leaders have helped make a world in which people feel that everyday social interactions, whether dating or making simple phone calls, require expert advice and algorithmic assistance. AI threatens to turbocharge this process. Even if your personalized dating concierge is not here yet, the sales pitch for them has already arrived, and that sales pitch is almost as dangerous as the technology itself: AI will teach you how to be a human.