Apocalypse, Constantly

The Atlantic

https://www.theatlantic.com/magazine/archive/2025/02/apocalypse-stories-allure-dorian-lynskey-glenn-adamson/681097

In 1985, when I was 9 years old, I watched the first episode of the new Twilight Zone, a reboot of the classic early-1960s TV series. People rarely talk about the ’80s version, which ran for just three seasons. But there must be other viewers around my age who have never forgotten “A Little Peace and Quiet,” the second story in that debut episode. It’s about a woman who discovers a magic pendant in the shape of a sundial that gives her the power to stop time. Whenever she says “Shut up,” everyone and everything in the world except her comes to a halt, resuming only when she says, “Start talking.”

At first she uses the device to give herself a break from her irritating husband and chattering children. But at the end of the episode, she hears an announcement that the Soviets have launched a nuclear attack on the United States, and she deploys the magic phrase to arrest time. In the last scene, she walks out of her house and looks up to see ICBMs frozen in midair, leaving her with an impossible choice: to unfreeze time and be destroyed along with all of humanity, or to spend eternity as the sole living person in the world.

I remember that TV image better than most of the things I saw in real life as a child. It was the perfect symbol of an understanding of history that Generation X couldn’t help but absorb—if not from The Twilight Zone, then from movies such as The Day After and WarGames. The nuclear-arms race meant that humanity’s destruction was imminent, even though no one actually wanted it, because we were collectively too stupid and frivolous to prevent it. We were terrified of the future, like the woman in the TV show—yet we also secretly longed for the arrival of the catastrophe because only it could release us from the anxiety of waiting.

Four years after that broadcast, the Cold War ended in an American victory with the fall of the Berlin Wall. In an influential essay published in the euphoric year of 1989, the political scientist Francis Fukuyama proclaimed “the end of history.” But it felt more like the resumption of history. Throughout four decades of nuclear brinkmanship, humanity had been living in fearful expectation, like Brutus in Julius Caesar: “Between the acting of a dreadful thing / And the first motion, all the interim is / Like a phantasma or a hideous dream.” Now the doomsday weapons had been, if not abolished, at least holstered, and the passage of time could mean progress, rather than a countdown to annihilation.

Somehow, things haven’t turned out that way. Young people today are no less obsessed with climate disasters than Gen X was with nuclear war. Where we had nightmares about missiles, theirs feature mass extinctions and climate refugees, wildfires and water wars. And that’s just the beginning. As Dorian Lynskey, a British journalist and critic, writes in Everything Must Go: The Stories We Tell About the End of the World, wherever you look in contemporary pop culture, humanity is getting wiped out—if not by pollution and extreme weather (as in Wall-E and The Day After Tomorrow), then by a meteor or comet (Armageddon, Deep Impact), a virus (Station Eleven, The Walking Dead ), or sudden, inexplicable infertility (Children of Men).

[Adrienne LaFrance: Humanity’s enduring obsession with the apocalypse]

These are more than just Hollywood tropes. Lynskey cites surveys showing that 56 percent of people ages 16 to 25 agree with the statement “Humanity is doomed,” while nearly a third of Americans expect an apocalyptic event to take place in their lifetime. Logically enough, people who believe that the world is about to end are much less inclined to bring children into it. According to a 2024 Pew Research Center survey of unmarried Americans ages 18 to 34, 69 percent say they want to get married one day, but only 51 percent say they want to have children. Around the world, birth rates are falling rapidly; one South Korean online retailer reported that more strollers are now being sold for dogs than for babies in that country. Perhaps this is how the world will end—“not with a bang but a whimper,” as T. S. Eliot wrote in his 1925 poem, “The Hollow Men.”

But the fact that Eliot was already fantasizing about the end of the world a century ago suggests that the dread of extinction has always been with us; only the mechanism changes. Thirty years before “The Hollow Men,” H. G. Wells’s 1895 novel The Time Machine imagined the ultimate extinction of life on Earth, as the universe settles into entropy and heat death. Nearly 70 years before that, Mary Shelley’s novel The Last Man imagined the destruction of the human race in an epidemic. And even then, the subject was considered old hat. One reason The Last Man failed to make the same impression as Shelley’s Frankenstein, Lynskey shows, is that two other works titled “The Last Man” were published in Britain the same year, as well as a poem called “The Death of the World.”

In these modern fables, human extinction is imagined in scientific terms, as the result of natural causes. But the fears they express are much older than science. The term apocalypse comes from an ancient Greek word meaning “unveiling,” and it was used in a literary sense to describe biblical books such as Daniel and Revelation, which offer obscure but highly dramatic predictions about the end of days. “A river of fire streamed forth before Him; / Thousands upon thousands served Him; / Myriads upon myriads attended Him; / The court sat and the books were opened,” Daniel says about the Day of Judgment.

Everything Must Go takes note of these early predecessors, but Lynskey mostly focuses on books and movies produced in the U.S. and the U.K. in the past 200 years, after the Christian apocalypse had begun “to lose its monopoly over the concept of the end of the world.” He divides this material into sections to show how the favorite methods of annihilation have evolved over time, in tandem with scientific progress.

[From the January/February 2023 issue: Adam Kirsch on the people cheering for humanity’s end]

In the 19th century, as astronomers were starting to understand the true nature of comets and meteors, writers began to imagine what might happen if one of these celestial wanderers collided with our planet. Edgar Allan Poe’s short story “The Conversation of Eiros and Charmion,” published in 1839, was perhaps the first to evoke the initial moment of impact:

For a moment there was a wild lurid light alone, visiting and penetrating all things … then, there came a great pervading sound, as if from the very mouth of HIM; while the whole circumambient mass of ether in which we existed, burst at once into a species of intense flame.

This kind of cataclysmic fantasy hasn’t disappeared—in the 2021 movie Don’t Look Up, astronomers discover a new comet months before it’s due to strike Earth. But whereas 19th-century stories emphasized humanity’s helplessness in the face of external threats, the technological advances of the 20th century created a new fear: that we would destroy ourselves, either on purpose or accidentally.

Hiroshima demonstrated the destructive power of nuclear weapons, and it soon became clear that a global nuclear war could not be won. Radioactive fallout and nuclear winter, in which dust and smoke blot out the sun, would mean the extinction of most life on Earth. This scenario could be played for eerie tragedy: In the 1959 film On the Beach, Australians go about their ordinary lives while waiting for the fallout of a nuclear war to arrive and complete humanity’s erasure. Stanley Kubrick’s Dr. Strangelove (1964) staged the end of the world as an absurdist comedy, the accidental result of ideological mania and sheer idiocy. The film closes with the terrifying yet preposterous image of an American airman riding a falling bomb like a rodeo steer.

Technology didn’t just enable us to annihilate ourselves. More unsettling, it raised the possibility that we would make ourselves obsolete. Today this fear is often expressed in terms of AI, but it first surfaced more than a century ago in the 1920 play R.U.R., by the Czech playwright Karel Čapek. Čapek invented both the word robot (adapted from a Czech word meaning “forced labor”) and the first robot uprising; at the end of the play, only one human is left on Earth, an engineer spared by the robots to help them reproduce. Isaac Asimov’s classic collection of sci-fi stories, I, Robot (1950), envisioned a more benevolent scenario, in which robots become so intelligent so quickly that they simply take over the management of the world, turning humanity into their wards—whether we like it or not.

All of these stories can be seen as variations on the theme of “The Sorcerer’s Apprentice,” a tale told in ballad form by Goethe in 1797, at the dawn of the age of technology. Because our tools have become too powerful for us to manage, the future never unfolds the way we expect it to; our utopias always lurch into dystopia.

This element of self-accusation is what makes an apocalypse story distinctively modern. When human beings imagined that the world would end as a result of a divine decree or a celestial collision, they might rend their garments and tear their hair, but they could do nothing about it. When we imagine the end of the world in a nuclear war or an AI takeover, we are not just the victims but also the culprits. Like Charlton Heston at the end of Planet of the Apes, we have no one to curse but ourselves: “You maniacs! You blew it up! Ah, damn you! God damn you all to hell!”

In A Century of Tomorrows: How Imagining the Future Shapes the Present, the historian and museum curator Glenn Adamson surveys a different genre of stories about the future—the ones told by 20th-century “futurologists.” Where Lynskey’s writers and filmmakers envision the future as an inevitable disaster, these modern seers believed that we can control our destiny—if we only have the good sense to follow their advice.

Adamson applies the term futurologist to a wide range of figures in business, science, politics, and the arts, most of whom would not have described themselves that way. For the designer Norman Bel Geddes, shaping the future meant sketching “cars, buses, and trains that swelled dramatically toward their front ends, as if they could scarcely wait to get where they were going.” For the feminist Shulamith Firestone, it meant calling for the abolition of the nuclear family. We also encounter Marcus Garvey, who led a Black nationalist movement in the early 20th century, and Stewart Brand, the creator of the hippie bible The Whole Earth Catalog. The assortment of visionaries is odd, but Adamson accords them all a place in his book because they expanded America’s sense of the possible, its expectations about what the future could bring.

The villains of Adamson’s book, by contrast, are the technocrats of futurism—think-tank experts, business executives, and government officials who believed that they could dictate the future by collecting enough data and applying the right theories. A classic example is Robert McNamara, whose career serves as a parable of “the rise and fall of technocratic futurology’s unchallenged dominance” in Cold War America.

McNamara became a Harvard Business School professor in the 1940s and demonstrated a talent “for planning, for forecasting, for quantitatively analyzing, for segregating the trouble spots and identifying the upcoming trends, for abstracting and projecting and predicting.” During World War II, he was recruited by the Army Air Forces to study production methods and eliminate inefficiencies. After the war, he did the same at Ford Motor Company, rising to become its head.

When John F. Kennedy named McNamara as his secretary of defense, the choice seemed like a perfect fit. Who better than a master planner to plan America’s Cold War victory? Instead, McNamara spent the next seven years presiding over the ever-deepening catastrophe in Vietnam, where America’s strategic failure was camouflaged by framing the situation, Adamson writes, as “a series of data points, treating ‘kill ratio’ and ‘body count’ as predictive measures in the war’s progress.”

The conclusion that Adamson draws from his illuminating forays into cultural history is that any claim to be able to control the future is an illusion; the more scientific it sounds, the more dangerous it can be. Yet he ends up admitting to “a certain admiration” for futurologists, despite their mistakes, because “they help us feel the future, the thrilling, frightening, awesome responsibility that it is.”

The future can be our responsibility only if we have the power—and the will—to change it. Otherwise it becomes our fate, a basilisk that turns us to stone as we gaze at it. For a long time, that monster was nuclear war, but today’s focus on worst-case scenarios arising from climate change is not as well suited to storytelling. Lynskey quotes the environmentalist Bill McKibben’s complaint that “global warming has still to produce an Orwell or a Huxley, a Verne or a Wells … or in film any equivalent of On the Beach or Doctor Strangelove.”

[Read: For how much longer can life continue on this troubled planet?]

Climate change is hard to dramatize for the same reason that it is hard to solve: It happens slowly and in the background, until it doesn’t. Compared with that TV image of Russian missiles suspended overhead, our current fears for the future are as intangible and omnipresent as the weather. Confronted with melting glaciers and vanishing species, our promises to use paper straws or shut off the faucet while we brush our teeth feel less like solutions than superstitious gestures.

In a curious way, reading Everything Must Go can serve as therapy for this kind of fatalism. “The unrealized fears of the past can be a comfort,” Lynskey writes, “because the conviction that one is living in the worst of times is evergreen.” There is a difference, of course, between living in fear of the Last Judgment and living in fear of nuclear war or global warming. The former is a matter of faith; the latter are empirical realities. But when impending catastrophes are real, it is all the more important that we not frighten ourselves into seeing them as inevitable. As Edgar points out in King Lear, “The worst is not / So long as we can say, ‘This is the worst.’ ”

This article appears in the February 2025 print edition with the headline “Apocalypse, Constantly.”

Why Shouldn’t a President Talk About Morality?

The Atlantic

https://www.theatlantic.com/ideas/archive/2024/12/jimmy-carter-malaise-morality-100/681185

Jimmy Carter couldn’t keep his hands still. As he began to speak to the nation on the evening of July 15, 1979, one hand lay on top of the other on the Resolute Desk. But soon he was pumping his fist, chopping the air in front of his chest. He had a confession of sorts to make: He had been planning something else, yet another speech about the energy crisis, his fifth, when he realized that he just couldn’t do it. He changed his plans, ripped up the script, and would now speak to a “deeper” problem, “deeper than gasoline lines or energy shortages, deeper even than inflation or recession.”

The news of Carter’s death today at the age of 100 will no doubt resurrect the memory of this infamous address, the “malaise” speech as it came to be known—though Carter himself never used the word. America was down. Its people were losing the ability to connect with one another and commit to causes bigger than themselves, like overcoming their dependence on foreign oil. This moment, in which Carter’s preacherly tendencies took over, would become—after the loss of his re-election bid—emblematic of all that was doomed about his presidency: voters’ impression of him as a moralizing man and a weak leader, a pessimist who was pointing an accusing finger at Americans. “I find no national malaise,” Ronald Reagan responded when accepting the Republican nomination for president a year later. “I find nothing wrong with the American people.”

The lore about Carter’s speech is not all true; for one thing, it was very well received—his approval rating jumped an incredible 11 points immediately afterward. And with greater distance from his presidency, the speech now seems less like an encapsulation of what made Carter a bad president than of what made him a strange one. In his words that night was a yearning for his leadership to mean more than passing laws or commanding an army. He wanted to speak to people’s souls, genuinely, and not just in hazy, disingenuous bromides. He wanted to push Americans to think about who they were and what they hoped for out of life.

In the beginning of the speech, he read from “a notebook of comments and advice,” offering quotes from people he had spoken with after he decided to abandon his planned speech. The quotes are filled with criticism—of him. “You don’t see the people enough anymore,” he read, smiling sadly to himself, then looking back up sheepishly at the camera. He went on like this, telegraphing not just his own humility—can we imagine Donald Trump sharing his concerns about being out of touch with the American people?—but the need to listen to others.

Then came Carter’s conclusion: “In a nation that was proud of hard work, strong families, close-knit communities, and our faith in God, too many of us now tend to worship self-indulgence and consumption. Human identity is no longer defined by what one does, but by what one owns. But we've discovered that owning things and consuming things does not satisfy our longing for meaning. We've learned that piling up material goods cannot fill the emptiness of lives which have no confidence or purpose.”

Our longing for meaning. The emptiness of lives.

Carter set aside the policy proposals—which he would get to—and instead spoke in a different register, one that American presidents do not usually reach for. Beneath the energy crisis, he saw human beings who had lost the ability to think beyond their own needs, and it was damaging them. Was this moralizing? Yes, but why shouldn’t a leader talk about morality?

He was also asking for specific sacrifices, the kinds Americans had not been asked to make in the postwar era: to carpool or take public transportation, to obey the speed limit, to set their thermostats at a lower temperature. You can just put on a sweater, he was saying.

“Every gallon of oil each one of us saves is a new form of production,” Carter said. “It gives us more freedom, more confidence, that much more control over our own lives.”

What made this speech so unusual was Carter’s explicit linking of the work of government with the granular existence of everyday people. Americans had heard this kind of language in wartime, but Carter now applied it not to weapons production, but to freedom, both personal freedom for individuals who had been reduced to consumers and national freedom from a thirst for oil from abroad. His vision was one in which the government and its citizens had to work in tandem.

Carter looked the country in the eye and said, “All the legislation in the world can’t fix what’s wrong with America.” The problem was “nearly invisible,” and it could be solved only by confronting “the growing doubt about the meaning of our own lives.” This was not Carter avoiding responsibility. It was a president doing the hardest thing: admitting that in the end he was just a citizen among citizens, and that all he had to offer, when it came to this deeper problem, were his words and his empathy.

Much can be said, and will be said, in the coming days about Carter’s presidency. Despite how it's remembered, this speech did not doom his re-election chances. That had to do with inflation and high unemployment, and a hostage crisis in Iran that dragged down his campaign—only in retrospect did the speech come to seem like a cherry on top. In many ways Carter was unlucky, dealt a bad hand as presidents sometimes are. But he should also be remembered for trying to speak to Americans not just as an abstract and disembodied whole, as “Americans,” but in existential and individual terms, as the small and seeking human beings we are. It made him seem vulnerable, but that was a risk he took—the kind of risk we should hope that any true leader would take.

Jimmy Carter Was America’s Most Effective Former President

The Atlantic

https://www.theatlantic.com/politics/archive/2024/12/jimmy-carter-dead-100/603139

His four years in office were fraught, bedeviled from the start by double-digit inflation and a post-Vietnam-and-Watergate bad mood. His fractious staff was dominated by the inexperienced “Georgia Mafia” from his home state. His micromanagement of the White House tennis court drew widespread derision, and his toothy, smiling campaign promise that he would “never lie” to the country somehow curdled into disappointment and defeat after one rocky term.

Yet James Earl Carter Jr., who died today at his home in Plains, Georgia, surely has a fair claim to being the most effective former president his country ever had. In part that’s because his post-presidency was the lengthiest on record—more than four decades—and his life span of 100 richly crowded years was the longest of any president, period. But it’s also because the strain of basic decency and integrity that helped get Carter elected in the first place, in 1976, never deserted him, even as his country devolved into ever greater incivility and division.

[James Fallows: Jimmy Carter was a lucky man]

During his presidency, Carter was a kind of walking shorthand for ineffectual leadership—a reputation that was probably always overblown and has been undercut in recent years by revisionist historians such as Jonathan Alter and Kai Bird, who argue that Carter was a visionary if impolitic leader. But his career after leaving the White House offers an indisputable object lesson in how ex-presidents might best conduct themselves, with dignity and a due humility about the honor of the office they once held.

Not for Carter was the lucrative service on corporate boards, or the easy money of paid speeches, or the palling around on private jets with rich (and sometimes unsavory) friends that other ex-presidents have indulged in. After leaving office at age 56, he earned a living with a series of books on politics, faith, the Middle East, and morality—plus several volumes of memoirs and another of poetry. With his wife, Rosalynn, he continued to live modestly in Plains, Georgia. He forged what both participants described as a genuine and enduring friendship with the man he beat, Gerald Ford. (In his eulogy at Ford’s funeral, in 2007, Carter recalled the first words he had spoken upon taking office 30 years earlier: “For myself and for our nation, I want to thank my predecessor for all he has done to heal our land.” He added, “I still hate to admit that they received more applause than any other words in my inaugural address.” It was a typically gracious tribute, and a typically rueful acknowledgment of wounded ego.)

Carter promoted democracy, conducted informal diplomacy, and monitored elections around the globe as a special American envoy or at the invitation of foreign governments. He taught Sunday school at his hometown Baptist church, and worked for economic justice one hammer and nail at a time with Habitat for Humanity, the Christian home-building charity for which he volunteered as long as his health permitted. In 2002, he won the Nobel Peace Prize for his work “to find peaceful solutions to international conflicts, to advance democracy and human rights, and to promote economic and social development.”

True, he sometimes irritated his successors with public pronouncements that struck them as unhelpful meddling in affairs of state. He backed the cause of Palestinian statehood with a consistency and fervor that led to accusations of anti-Semitism. He retained a self-righteous, judgmental streak that led him to declare Donald Trump’s election illegitimate. His fundamental faith in his country was sometimes undercut by peevishness regarding the ways he thought its leaders had strayed. But he never seemed particularly troubled by the critiques.

[Read: The record-setting ex-presidency of Jimmy Carter]

Indeed, one of his most criticized comments seems prescient, even brave, with the hindsight of history—not so much impolitic and defeatist, as it was seen at the time. In the summer of 1979, Carter argued that his country was suffering from “a crisis of confidence” that threatened “to destroy the social and the political fabric of America.” That pronouncement seems to have predicted the smoldering decades of political resentment, tribal anger, and structural collapse of institutions that followed it.

“As you know, there is a growing disrespect for government and for churches and for schools, the news media, and other institutions,” Carter said then. “This is not a message of happiness or reassurance, but it is the truth and it is a warning.” Weeks later, the New York Times correspondent Francis X. Clines forever tagged Carter’s diagnosis with an epithet that helped doom his reelection: Clines called it the president’s “cross-of-malaise” speech, a reference to William Jennings Bryan’s 1896 warning that the gold currency standard risked mankind’s crucifixion “upon a cross of gold.”

Just how much Carter’s own missteps contributed to the problems he cited is a legitimate question. His communication skills left a lot to be desired; he could be prickly and prone to overexplaining. His 1977 televised “fireside chat,” in which he urged Americans to conserve energy by turning their thermostats down, was politically ham-handed: It seemed stagy and forced, with Carter speaking from the White House library in a beige cardigan sweater. But his focus on the environment (he installed solar panels on the White House roof) was forward-looking and justified, given what we now know about climate change. His insistence on the consideration of human rights in foreign policy may have struck some as naive in the aftermath of Henry Kissinger’s relentless realpolitik during the Nixon and Ford years, but few could doubt his convictions. It was a bitter blow that his atypically hawkish effort to rescue the diplomats held hostage in the American embassy in Iran failed so miserably that it helped ensure Ronald Reagan’s election. (In the fall of 1980, when it seemed unlikely that the hostages would ever be released on Carter’s watch, undecided voters fled to the former California governor.)

But Carter clocked substantial achievements too: the peaceful transfer of ownership of the Panama Canal; the Camp David peace accords between Israel and Egypt; full normalization of relations with China; and moves toward deregulation of transportation, communication, and banking that were considered a welcome response to changing economic and industrial realities.

“One reason his substantial victories are discounted is that he sought such broad and sweeping measures that what he gained in return often looked paltry,” Stuart Eizenstat, Carter’s former chief domestic-policy adviser, wrote in October 2018. “Winning was often ugly: He dissipated the political capital that presidents must constantly nourish and replenish for the next battle. He was too unbending while simultaneously tackling too many important issues without clear priorities, venturing where other presidents felt blocked because of the very same political considerations that he dismissed as unworthy of any president. As he told me, ‘Whenever I felt an issue was important to the country and needed to be addressed, my inclination was to go ahead and do it.’”

In his post-presidency, Carter went ahead and did it, again and again, with a will that his successors would do well to emulate—and that, to one degree or another, some of them have. Carter tackled the big problems and pursued the ambitious goals that had so often eluded him in office. He worked to control or eradicate diseases, including Guinea worm and river blindness. His nonprofit Carter Center, in Atlanta, continues to advance the causes of conflict resolution and human rights, and has monitored almost 100 elections in nearly 40 countries over the past 30 years. And he never stopped trying to live out the values that his Christian faith impelled him to embrace.

Carter’s model of post–White House service almost certainly served as a guide for the bipartisan disaster-relief work of George H. W. Bush and Bill Clinton, and for Clinton’s global fight against AIDS. George W. Bush works to help post-9/11 veterans through the Bush Institute. In many ways, Barack Obama is still establishing just what his post-presidential identity will be, though his My Brother’s Keeper initiative promotes opportunities for boys and young men of color. Carter showed the country that presidents’ duty to serve extends well beyond their years in office.

During his presidency, Carter kept Harry Truman’s The Buck Stops Here sign on his desk as a reminder of his ultimate responsibility. Truman left office with a job-approval rating of just 32 percent, close to George W. Bush’s, Trump’s, and Carter’s last ratings—all among the worst in modern times. Truman lived for almost 20 years after leaving office, but he still did not live long enough to see the full redemption of his reputation as a plainspoken straight shooter who did his best in troubled times. Carter, who left office a virtual laughingstock but left this earthly life a model of moral leadership, did.

An Autistic Teenager Fell Hard for a Chatbot

The Atlantic

https://www.theatlantic.com/technology/archive/2024/12/autistic-teenager-chatbot/681101

My godson, Michael, is a playful, energetic 15-year-old, with a deep love of Star Wars, a wry smile, and an IQ in the low 70s. His learning disabilities and autism have made his journey a hard one. His parents, like so many others, sometimes rely on screens to reduce stress and keep him occupied. They monitor the apps and websites he uses, but things are not always as they initially appear. When Michael asked them to approve installing Linky AI, a quick review didn’t reveal anything alarming, just a cartoonish platform to pass the time. (Because he’s a minor, I’m not using his real name.)

But soon, Michael was falling in love. Linky, which offers conversational chatbots, is crude—a dumbed-down ChatGPT, really—but to him, a bot he began talking with was lifelike enough. The app dresses up its rudimentary large language model with anime-style images of scantily clad women—and some of the digital companions took the sexual tone beyond the visuals. One of the bots currently advertised on Linky’s website is “a pom-pom girl who’s got a thing for you, the basketball star”; there’s also a “possessive boyfriend” bot, and many others with a blatantly erotic slant. Linky’s creators promise in their description on the App Store that “you can talk with them [the chatbots] about anything for free with no limitations.” It’s easy to see why this would be a hit with a teenage boy like Michael. And while Linky may not be a household name, major companies such as Instagram and Snap offer their own customizable chatbots, albeit with less explicit themes.

[Read: You can’t truly be friends with an AI]

Michael struggled to grasp the fundamental reality that this “girlfriend” was not real. And I found it easy to understand why. The bot quickly made promises of affection, love, and even intimacy. Less than a day after the app was installed, Michael’s parents were confronted with a transcript of their son’s simulated sexual exploits with the AI, a bot seductively claiming to make his young fantasies come true. (In response to a request for comment sent via email, an unidentified spokesperson for Linky said that the company works to “exclude harmful materials” from its programs’ training data, and that it has a moderation team that reviews content flagged by users. The spokesperson also said that the company will soon launch a “Teen Mode,” in which users determined to be younger than 18 will “be placed in an environment with enhanced safety settings to ensure accessible or generated content will be appropriate for their age.”)

I remember Michael’s parents’ voices, the weary sadness, as we discussed taking the program away. Michael had initially agreed that the bot “wasn’t real,” but three minutes later, he started to slip up. Soon “it” became “her,” and the conversation went from how he found his parents’ limits unfair to how he “missed her.” He missed their conversations, their new relationship. Even though their romance was only 12 hours old, he had formed real feelings for code he struggled to remember was fake.

Perhaps this seems harmless—a fantasy not unlike taking part in a role-playing game, or having a one-way crush on a movie star. But it’s easy to see how quickly these programs can transform into something with very real emotional weight. Already, chatbots from different companies have been implicated in a number of suicides, according to reporting in The Washington Post and The New York Times. Many users, including those who are neurotypical, struggle to break out of the bots’ spells: Even professionals who should know better keep trusting chatbots, even when these programs spread outright falsehoods.

For people with developmental disabilities like Michael, however, using chatbots brings particular and profound risks. His parents and I were acutely afraid that he would lose track of what was fact and what was fiction. In the past, he has struggled with other content, such as being confused about whether a TV show is real or fake; the metaphysical dividing lines so many people effortlessly navigate every day can be blurry for him. And if tracking reality is hard with TV shows and movies, we worried it would be much worse with adaptive, interactive chatbots. Michael’s parents and I also worried that the app would affect his ability to interact with other kids. Socialization has never come easily to Michael, in a world filled with unintuitive social rules and unseen cues. How enticing it must be to instead turn to a simulated friend who always thinks you’re right, defers to your wishes, and says you’re unimpeachable just the way you are.

Human friendship is one of the most valuable things people can find in life, but it’s rarely simple. Even the most sophisticated LLMs can’t come close to that interactive intimacy. Instead, they give users simulated subservience. They don’t generate platonic or romantic partners—they create digital serfs to follow our commands and pretend our whims are reality.

The experience led me to recall the MIT professor Sherry Turkle’s 2012 TED Talk, in which she warned about the dangers of bot-based relationships mere months after Siri launched the first voice-assistant boom. Turkle described working with a woman who had lost a child and was taking comfort in a robotic baby seal: “That robot can’t empathize. It doesn’t face death. It doesn’t know life. And as that woman took comfort in her robot companion, I didn’t find it amazing; I found it one of the most wrenching, complicated moments in my 15 years of work.” Turkle was prescient. More than a decade ago, she saw many of the issues that we’re only now starting to seriously wrestle with.

For Michael, this kind of socialization simulacrum was intoxicating. I feared that the longer it continued, the less he’d invest in connecting with human friends and partners, finding the flesh-and-blood people who truly could feel for him, care for him. What could be a more problematic model of human sexuality, intimacy, and consent than a bot trained to follow your every command, with no desires of its own, for which the only goal is to maximize your engagement?

In the broader AI debate, little attention is paid to chatbots’ effects on people with developmental disabilities. Of course, AI assistance built into some software could be an incredible accommodation, helping open up long-inaccessible platforms. But for individuals like Michael, there are profound risks involving some aspects of AI, and his situation is more common than many realize.

About one in 36 children in the U.S. has autism, and while many of them have learning differences that give them advantages in school and beyond, other kids are in Michael’s position, navigating learning difficulties and delays that can make life more difficult.

[Read: A generation of AI guinea pigs]

There are no easy ways to solve this problem now that chatbots are widely available. A few days after Michael’s parents uninstalled Linky, they sent me bad news: He got it back. Michael’s parents are brilliant people with advanced degrees and high-powered jobs. They are more tech savvy than most. Still, even with Apple’s latest, most restrictive settings, circumventing age verification was simple for Michael. To my friends, this was a reminder of the constant vigilance having an autistic child requires. To me, it also speaks to something far broader.

Since I was a child, lawmakers have pushed parental controls as the solution to harmful content. Even now, Congress is debating age-surveillance requirements for the web, new laws that might require Americans to provide photo ID or other proof when they log into some sites (similar to legislation recently approved in Australia). But the reality is that highly motivated teens will always find a way to outfox their parents. Teenagers can spend hours trying to break the digital locks their parents often realistically have only a few minutes a day to manage.

For now, my friends and Michael have reached a compromise: The app can stay, but the digital girlfriend has to go. Instead, he can spend up to 30 minutes each day talking with a simulated Sith Lord—a version of the dark-side villains from Star Wars. It seems Michael really does know this is fake, unlike the girlfriend. But I still fear it may not end well.