
ElevenLabs Is Building an Army of Voice Clones

The Atlantic


Updated at 3:05 p.m. ET on May 4, 2024

My voice was ready. I’d been waiting, compulsively checking my inbox. I opened the email and scrolled until I saw a button that said, plainly, “Use voice.” I considered saying something aloud to mark the occasion, but that felt wrong. The computer would now speak for me.

I had thought it’d be fun, and uncanny, to clone my voice. I’d sought out the AI start-up ElevenLabs, paid $22 for a “creator” account, and uploaded some recordings of myself. A few hours later, I typed some words into a text box, hit “Enter,” and there I was: all the nasal lilts, hesitations, pauses, and mid-Atlantic-by-way-of-Ohio vowels that make my voice mine.

It was me, only more pompous. My voice clone speaks with the cadence of a pundit, no matter the subject. I type I like to eat pickles, and the voice spits it out as if I’m on Meet the Press. That’s not my voice’s fault; it is trained on just a few hours of me speaking into a microphone for various podcast appearances. The model likes to insert ums and ahs: In the recordings I gave it, I’m thinking through answers in real time and choosing my words carefully. It’s uncanny, yes, but also quite convincing—a part of my essence that’s been stripped, decoded, and reassembled by a little algorithmic model so as to no longer need my pesky brain and body.

[Audio: Listen to the author’s AI voice]

Using ElevenLabs, you can clone your voice like I did, or type in some words and hear them spoken by “Freya,” “Giovanni,” “Domi,” or hundreds of other fake voices, each with a different accent or intonation. Or you can dub a clip into any one of 29 languages while preserving the speaker’s voice. In each case, the technology is unnervingly good. The voice bots don’t just sound far more human than voice assistants such as Siri; they also sound better than any other widely available AI audio software right now. What’s different about the best ElevenLabs voices, trained on far more audio than what I fed into the machine, isn’t so much the quality of the voice but the way the software uses context clues to modulate delivery. If you feed it a news report, it speaks in a serious, declarative tone. Paste in a few paragraphs of Hamlet, and an ElevenLabs voice reads it with a dramatic storybook flair.

[Audio: Listen to ElevenLabs read Hamlet]
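The web interface I used sits on top of ElevenLabs’ public API, so the same generation can be triggered programmatically. Below is a minimal sketch in Python, not the company’s own code: the API key and voice ID are placeholders, and the endpoint and request fields follow the published text-to-speech documentation at the time of writing.

```python
# A minimal sketch of generating speech in a cloned voice via ElevenLabs'
# public REST API. API_KEY and VOICE_ID are placeholders.
import requests

API_KEY = "your-elevenlabs-api-key"   # from an ElevenLabs account
VOICE_ID = "your-cloned-voice-id"     # the ID of a cloned "creator" voice

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "I like to eat pickles.",
        # The multilingual model underpins the 29-language dubbing described above.
        "model_id": "eleven_multilingual_v2",
        # Knobs trading consistency against expressiveness; the delivery itself
        # (news-report vs. Hamlet) is inferred from the text.
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
)
response.raise_for_status()

# The endpoint returns MP3 bytes.
with open("output.mp3", "wb") as f:
    f.write(response.content)
```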

ElevenLabs launched an early version of its product a little over a year ago, but you might have listened to one of its voices without even knowing it. Nike used the software to create a clone of the NBA star Luka Dončić’s voice for a recent shoe campaign. New York City Mayor Eric Adams’s office cloned the politician’s voice so that it could deliver robocall messages in Spanish, Yiddish, Mandarin, Cantonese, and Haitian Creole. The technology has been used to re-create the voices of children killed in the Parkland school shooting, to lobby for gun reform. An ElevenLabs voice might be reading this article to you: The Atlantic uses the software to auto-generate audio versions of some stories, as does The Washington Post.

It’s easy, when you play around with the ElevenLabs software, to envision a world in which you can listen to all the text on the internet in voices as rich as those in any audiobook. But it’s just as easy to imagine the potential carnage: scammers targeting parents by using their children’s voice to ask for money, a nefarious October surprise from a dirty political trickster. I tested the tool to see how convincingly it could replicate my voice saying outrageous things. Soon, I had high-quality audio of my voice clone urging people not to vote, blaming “the globalists” for COVID, and confessing to all kinds of journalistic malpractice. It was enough to make me check with my bank to make sure any potential voice-authentication features were disabled.

I went to visit the ElevenLabs office and meet the people responsible for bringing this technology into the world. I wanted to better understand the AI revolution as it’s currently unfolding. But the more time I spent—with the company and the product—the less I found myself in the present. Perhaps more than any other AI company, ElevenLabs offers a window into the near future of this disruptive technology. The threat of deepfakes is real, but what ElevenLabs heralds may be far weirder. And nobody, not even its creators, seems ready for it.

In mid-November, I buzzed into a brick building on a side street and walked up to the second floor. The London office of ElevenLabs—a $1 billion company—is a single room with a few tables. No ping-pong or beanbag chairs—just a sad mini fridge and the din of dutiful typing from seven employees packed shoulder to shoulder. (Much of the company’s staff is remote, scattered around the world.) Mati Staniszewski, ElevenLabs’ 29-year-old CEO, got up from his seat in the corner to greet me. He beckoned for me to follow him back down the stairs to a windowless conference room ElevenLabs shares with a company that, I presume, is not worth $1 billion.

Staniszewski is tall, with a well-coiffed head of blond hair, and he speaks quickly in a Polish accent. Talking with him sometimes feels like trying to engage in conversation with an earnest chatbot trained on press releases. I started our conversation with a few broad questions: What is it like to work on AI during this moment of breathless hype, investor interest, and genuine technological progress? What’s it like to come in each day and try to manipulate such nascent technology? He said that it’s exciting.

We moved on to Staniszewski’s background. He and the company’s co-founder, Piotr Dabkowski, grew up together in Poland watching foreign movies that were all clumsily dubbed into a flat Polish voice. Man, woman, child—whoever was speaking, all of the dialogue was voiced in the same droning, affectless tone by male actors known as lektors.

They both left Poland for university in the U.K. and then settled into tech jobs (Staniszewski at Palantir and Dabkowski at Google). Then, in 2021, Dabkowski was watching a film with his girlfriend and realized that Polish films were still dubbed in the same monotone lektor style. He and Staniszewski did some research and discovered that markets outside Poland were also relying on lektor-esque dubbing.

Mati Staniszewski’s story as CEO of ElevenLabs begins in Poland, where he grew up watching foreign films clumsily dubbed into a flat voice. (Daniel Stier for The Atlantic)

The next year, they founded ElevenLabs. AI voices were everywhere—think Alexa, or a car’s GPS—but actually good AI voices, they thought, would finally put an end to lektors. The tech giants have hundreds or thousands of employees working on AI, yet ElevenLabs, with a research team of just seven people, built a voice tool that’s arguably better than anything its competitors have released. The company poached researchers from top AI companies, yes, but it also hired a college dropout who’d won coding competitions, and another “who worked in call centers while exploring audio research as a side gig,” Staniszewski told me. “The audio space is still in its breakthrough stage,” Alex Holt, the company’s vice president of engineering, told me. “Having more people doesn’t necessarily help. You need those few people that are incredible.”

ElevenLabs knew its model was special when it started spitting out audio that accurately represented the relationships between words, Staniszewski told me—pronunciation that changed based on the context (minute, the unit of time, instead of minute, the description of size) and emotion (an exclamatory phrase spoken with excitement or anger).

Much of what the model produces is unexpected—sometimes delightfully so. Early on, ElevenLabs’ model began randomly inserting applause breaks after pauses in its speech: It had been training on audio clips from people giving presentations in front of live audiences. Quickly, the model began to improve, becoming capable of ums and ahs. “We started seeing some of those human elements being replicated,” Staniszewski said. The big leap was when the model began to laugh like a person. (My voice clone, I should note, struggles to laugh, offering a machine-gun burst of “haha”s that sound jarringly inhuman.)

Compared with OpenAI and other major companies, which are trying to wrap their large language models around the entire world and ultimately build an artificial human intelligence, ElevenLabs has ambitions that are easier to grasp: a future in which ALS patients can still communicate in their voice after they lose their speech. Audiobooks that are ginned up in seconds by self-published authors, video games in which every character is capable of carrying on a dynamic conversation, movies and videos instantly dubbed into any language. A sort of Spotify of voices, where anyone can license clones of their voice for others to use—to the dismay of professional voice actors. The gig-ification of our vocal cords.

What Staniszewski also described when talking about ElevenLabs is a company that wants to eliminate language barriers entirely. The dubbing tool, he argued, is its first step toward that goal. A user can upload a video, and the model will translate the speaker’s voice into a different language. When we spoke, Staniszewski twice referred to the Babel fish from the science-fiction book The Hitchhiker’s Guide to the Galaxy—he described making a tool that immediately translates every sound around a person into a language they can understand.

Every ElevenLabs employee I spoke with perked up at the mention of this moonshot idea. Although ElevenLabs’ current product might be exciting, the people building it view current dubbing and voice cloning as a prelude to something much bigger. I struggled to separate the scope of Staniszewski’s ambition from the modesty of our surroundings: a shared conference room one floor beneath the company’s sparse office space. ElevenLabs may not achieve its lofty goals, but I was still left unmoored by the reality that such a small collection of people could build something so genuinely powerful and release it into the world, where the rest of us have to make sense of it.

ElevenLabs’ voice bots launched in beta in late January 2023. It took very little time for people to start abusing them. Trolls on 4chan used the tool to make deepfakes of celebrities saying awful things. They had Emma Watson reading Mein Kampf and the right-wing podcaster Ben Shapiro making racist comments about Representative Alexandria Ocasio-Cortez. In the tool’s first days, there appeared to be virtually no guardrails. “Crazy weekend,” the company tweeted, promising to crack down on misuse.

ElevenLabs added a verification process for cloning; when I uploaded recordings of my voice, I had to complete multiple voice CAPTCHAs, speaking phrases into my computer in a short window of time to confirm that the voice I was duplicating was my own. The company also decided to limit its voice cloning strictly to paid accounts and announced a tool that lets people upload audio to see if it is AI generated. But the safeguards from ElevenLabs were “half-assed,” Hany Farid, a deepfake expert at UC Berkeley, told me—an attempt to retroactively focus on safety only after the harm was done. And they left glaring holes. Over the past year, the deepfakes have not been rampant, but they also haven’t stopped.

I first started reporting on deepfakes in 2017, after a researcher came to me with a warning of a terrifying future where AI-generated audio and video would bring about an “infocalypse” of impersonation, spam, nonconsensual sexual imagery, and political chaos, where we would all fall into what he called “reality apathy.” Voice cloning already existed, but it was crude: I used an AI voice tool to try to fool my mom, and it worked only because I had the halting, robotic voice pretend I was losing cell service. Since then, fears of an infocalypse have lagged behind the technology’s ability to distort reality. But ElevenLabs has closed the gap.

The best deepfake I’ve seen was from the filmmaker Kenneth Lurt, who used ElevenLabs to clone Jill Biden’s voice for a fake advertisement where she’s made to look as if she’s criticizing her husband over his handling of the Israel-Gaza conflict. The footage, which deftly stitches video of the first lady giving a speech with an ElevenLabs voice-over, is incredibly convincing and has been viewed hundreds of thousands of times. The ElevenLabs technology on its own isn’t perfect. “It’s the creative filmmaking that actually makes it feel believable,” Lurt said in an interview in October, noting that it took him a week to make the clip.

“It will totally change how everyone interacts with the internet, and what is possible,” Nathan Lambert, a researcher at the Allen Institute for AI, told me in January. “It’s super easy to see how this will be used for nefarious purposes.” When I asked him if he was worried about the 2024 elections, he offered a warning: “People aren’t ready for how good this stuff is and what it could mean.” When I pressed him for hypothetical scenarios, he demurred, not wanting to give anyone ideas.


A few days after Lambert and I spoke, his intuitions became reality. The Sunday before the New Hampshire presidential primary, a deepfaked, AI-generated robocall went out to registered Democrats in the state. “What a bunch of malarkey,” the robocall began. The voice was grainy, its cadence stilted, but it was still immediately recognizable as Joe Biden’s drawl. “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” it said, telling voters to stay home. In terms of political sabotage, this particular deepfake was relatively low stakes, with limited potential to disrupt electoral outcomes (Biden still won in a landslide). But it was a trial run for an election season that could be flooded with reality-blurring synthetic information.

Researchers and government officials scrambled to locate the origin of the call. Weeks later, a New Orleans–based magician confessed that he’d been paid by a Democratic operative to create the robocall. Using ElevenLabs, he claimed, it took him less than 20 minutes and cost $1.

Afterward, ElevenLabs introduced a “no go”–voices policy, preventing users from uploading or cloning the voice of certain celebrities and politicians. But this safeguard, too, had holes. In March, a reporter for 404 Media managed to bypass the system and clone both Donald Trump’s and Joe Biden’s voices simply by adding a minute of silence to the beginning of the upload file. Last month, I tried to clone Biden’s voice, with varying results. ElevenLabs didn’t catch my first attempt, for which I uploaded low-quality sound files from YouTube videos of the president speaking. But the cloned voice sounded nothing like the president’s—more like a hoarse teenager’s. On my second attempt, ElevenLabs blocked the upload, suggesting that I was about to violate the company’s terms of service.

For Farid, the UC Berkeley researcher, ElevenLabs’ inability to control how people might abuse its technology is proof that voice cloning causes more harm than good. “They were reckless in the way they deployed the technology,” Farid said, “and I think they could have done it much safer, but I think it would have been less effective for them.”

The core problem of ElevenLabs—and the generative-AI revolution writ large—is that there is no way for this technology to exist and not be misused. Meta and OpenAI have built synthetic voice tools, too, but have so far declined to make them broadly available. Their rationale: They aren’t yet sure how to unleash their products responsibly. As a start-up, though, ElevenLabs doesn’t have the luxury of time. “The time that we have to get ahead of the big players is short,” Staniszewski said, referring to the company’s research efforts. “If we don’t do it in the next two to three years, it’s going to be very hard to compete.” Despite the new safeguards, ElevenLabs’ name is probably going to show up in the news again as the election season wears on. There are simply too many motivated people constantly searching for ways to use these tools in strange, unexpected, even dangerous ways.

In the basement of a Sri Lankan restaurant on a soggy afternoon in London, I pressed Staniszewski about what I’d been obliquely referring to as “the bad stuff.” He didn’t avert his gaze as I rattled off the ways ElevenLabs’ technology could be and has been abused. When it was his turn to speak, he did so thoughtfully, not dismissively; he appears to understand the risks of his products and other open-source AI tools. “It’s going to be a cat-and-mouse game,” he said. “We need to be quick.”

Later, over email, he cited the “no go”–voices initiative and told me that ElevenLabs is “testing new ways to counteract the creation of political content,” adding more human moderation and upgrading its detection software. The most important thing ElevenLabs is working on, Staniszewski said—what he called “the true solution”—is digitally watermarking synthetic voices at the point of creation so civilians can identify them. That will require cooperation across dozens of companies: ElevenLabs recently signed an accord with other AI companies, including Anthropic and OpenAI, to combat deepfakes in the upcoming elections, but so far, the partnership is mostly theoretical.

The uncomfortable reality is that there aren’t a lot of options to ensure bad actors don’t hijack these tools. “We need to brace the general public that the technology for this exists,” Staniszewski said. He’s right, yet my stomach sinks when I hear him say it. Mentioning media literacy, at a time when trolls on Telegram channels can flood social media with deepfakes, is a bit like showing up to an armed conflict in 2024 with only a musket.

The conversation went on like this for a half hour, followed by another session a few weeks later over the phone. A hard question, a genuine answer, my own palpable feeling of dissatisfaction. I can’t look at ElevenLabs and see beyond the risk: How can you build toward this future? Staniszewski seems unable to see beyond the opportunities: How can’t you build toward this future? I left our conversations with a distinct sense that the people behind ElevenLabs don’t want to watch the world burn. The question is whether, in an industry where everyone is racing to build AI tools with similar potential for harm, intentions matter at all.

To focus only on deepfakes elides how ElevenLabs and synthetic audio might reshape the internet in unpredictable ways. A few weeks before my visit, ElevenLabs held a hackathon, where programmers fused the company’s tech with hardware and other generative-AI tools. Staniszewski said that one team took an image-recognition AI model and connected it to both an Android device with a camera and ElevenLabs’ text-to-speech model. The result was a camera that could narrate what it was looking at. “If you’re a tourist, if you’re a blind person and want to see the world, you just find a camera,” Staniszewski said. “They deployed that in a weekend.”
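The article doesn’t name the models or code the hackathon team used; the following is a hypothetical sketch of the same three-part assembly in Python: a camera frame, an off-the-shelf image-captioning model (BLIP here, purely an assumption), and the text-to-speech call from the earlier sketch.

```python
# A rough sketch of a "narrating camera," not the hackathon team's code.
import cv2                          # pip install opencv-python
import requests
from PIL import Image               # pip install pillow
from transformers import pipeline   # pip install transformers torch

API_KEY = "your-elevenlabs-api-key"   # placeholder
VOICE_ID = "your-voice-id"            # placeholder

# BLIP is an assumption; the article doesn't say which vision model was used.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def speak(text: str) -> bytes:
    """Return MP3 bytes of `text` spoken by the chosen ElevenLabs voice."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY},
        json={"text": text, "model_id": "eleven_multilingual_v2"},
    )
    resp.raise_for_status()
    return resp.content

camera = cv2.VideoCapture(0)   # a webcam standing in for the Android device's camera
ok, frame = camera.read()
camera.release()
if ok:
    # OpenCV frames are BGR arrays; the captioner expects an RGB image.
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    caption = captioner(image)[0]["generated_text"]
    with open("narration.mp3", "wb") as f:
        f.write(speak(caption))   # audio narrating what the camera sees
```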

Repeatedly during my visit, ElevenLabs employees described these types of hybrid projects—enough that I began to see them as a helpful way to imagine the next few years of technology. Products that all hook into one another herald a future that’s a lot less recognizable. More machines talking to machines; an internet that writes itself; an exhausting, boundless commingling of human art and human speech with AI art and AI speech until, perhaps, the provenance ceases to matter.

I came to London to try to wrap my mind around the AI revolution. By staring at one piece of it, I thought, I would get at least a sliver of certainty about what we’re barreling toward. Turns out, you can travel across the world, meet the people building the future, find them to be kind and introspective, ask them all of your questions, and still experience a profound sense of disorientation about this new technological frontier. Disorientation. That’s the main sense of this era—that something is looming just over the horizon, but you can’t see it. You can only feel the pit in your stomach. People build because they can. The rest of us are forced to adapt.

This article previously misquoted Staniszewski as calling his background an "investor story."

Hypochondria Never Dies

The Atlantic


At breakfast the other week, I noticed a bulging lump on my son’s neck. Within minutes of anxious Googling, I’d convinced myself that he had a serious undiagnosed medical condition—and the more I looked, the more apprehensive I got. Was it internal jugular phlebectasia, which might require surgery? Or a sign of lymphoma, which my father had been diagnosed with before he died? A few hours and a visit to the pediatrician later, I returned home with my tired child in tow, embarrassed but also relieved: The “problem” was just a benignly protuberant jugular vein.

My experience was hardly unique. We live in an era of mounting health worries. The ease of online medical self-diagnosis has given rise to what’s called cyberchondria: concern, fueled by consulting “Dr. Google,” that escalates into full-blown anxiety. Our medical system features ever more powerful technologies and proliferating routine preventive exams—scans that peer inside us, promising to help prolong our lives; blood tests that spot destructive inflammation; genetic screenings that assess our chances of developing disease. Intensive vigilance about our health has become the norm, simultaneously unsettling and reassuring. Many of us have experienced periods of worry before or after a mammogram or colonoscopy, or bouts of panic like mine about my son’s neck. For some, such interludes become consuming and destabilizing. Today, at least 4 percent of Americans are known to be affected by what is now labeled “health anxiety,” and some estimates suggest that the prevalence is more like 12 percent.

And yet hypochondria, you may be surprised to learn, officially no longer exists. In 2013, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, the so-called bible of psychiatric conditions, eliminated hypochondriasis. The change reflected an overdue need to reconceive a diagnosis that people found stigmatizing because it implied that hypochondriacs are neurotic malingerers whose symptoms aren’t “real.” The DSM introduced two distinct new diagnoses, illness anxiety disorder and somatic symptom disorder, both of which aim to be neutrally clinical descriptions of people with “extensive worries about health.” What differentiates them is the presence or absence of physical symptoms accompanying those fears.

But the efforts to delineate the spectrum of health anxiety fall short of clarifying the murky nature of hypochondria. The ostensibly helpful terms are actually anything but. Although we know more than ever before about the diseases and mental illnesses that afflict us, the body’s most obdurate mysteries remain. Doctors and patients must navigate them together. The only way to do so is by setting aside any impulse to moralize and by embracing uncertainty—the very thing that modern medicine is least equipped to do. The abyss between patients’ subjective experience of symptoms and medicine’s desire for objectivity is hard to bridge, as the scholar Catherine Belling notes in A Condition of Doubt. This is the space where hypochondria still lives.

The timing of the writer Caroline Crampton’s new book, A Body Made of Glass: A Cultural History of Hypochondria, couldn’t be better. What her belletristic account of hypochondria’s long and twisting lineage sometimes lacks in authoritative rigor, it makes up for in vivid evocations of being a patient. Her youthful experience with cancer and the anxiety she has suffered ever since propel her undertaking: a tour that includes a sampling of evolving medical science about the condition, as well as literary reflections (from, among others, John Donne, Molière, Marcel Proust, Virginia Woolf, and Philip Larkin) on the doubt and fear that are inseparable from life in a body that gets sick.

[Read: The psychology of irrational fear]

Hypochondria, as Crampton highlights, is not just a lay term for a tendency to worry about illness that isn’t there. It’s a diagnosis that has existed for hundreds of years. The attendant symptoms and meanings have shifted continually, always in step with changing conceptions of wellness and disease. In that sense, the history of hypochondria reflects one constant: Each era’s ideas track its limited understanding of health, and demonstrate a desire for clarity about the body and illness that again and again proves elusive. Knowing this doesn’t stop Crampton from dreaming of a “definitive test for everything, including health anxiety itself.”

Hippocrates, known as the father of medicine, used the term hypochondrium in the fifth century B.C.E. to identify a physical location—the area beneath the ribs, where the spleen was known to lie. Hippocratic medicine held that health depended on a balance among four humors—blood, black bile, yellow bile, and phlegm—that affected both body and mind. An excess of black bile, thought to collect in the organs of the hypochondrium, where many people experienced unpleasant digestive symptoms, could also cause responses such as moodiness and sadness. The term hypochondria thus came to be associated, as the humoral theory persisted into the Renaissance, not only with symptoms like an upset stomach but also with sluggishness, anxiety, and melancholy—a convergence of “two seemingly unrelated processes within the body: digestive function and emotional disorder,” as Crampton notes.

By the 17th century, the notion of hypochondria as a fundamentally physical condition that also had mental symptoms had been firmly established. In The Anatomy of Melancholy (1621), the English writer and scholar Robert Burton described it as a subset of melancholia, noting a “splenetic hypochondriacal wind” accompanied by “sharp belchings” and “rumbling in the guts,” along with feeling “fearful, sad, suspicious”—an illness that, as he put it, “crucifies the body and mind.” Physicians in the 18th century began to investigate hypochondria as a disorder of the recently discovered nervous system, accounting for symptoms not just in the gut but in other parts of the body as well. According to this view, the cause wasn’t imbalanced humors but fatigue and debility of the nerves themselves.

The story of Charles Darwin, which Crampton tells in her book, illustrates the transition between the period when hypochondria was still seen primarily as a physical disease and the period when it began to look like a primarily psychological condition. Darwin, who was born in 1809, suffered from intense headaches, nausea, and gastric distress, as well as fatigue and anxiety, all of which he chronicled in a journal he called “The Diary of Health.” Although various posthumous diagnoses of organic diseases have been proposed—including systemic lactose intolerance—Crampton observes that Darwin’s need to follow strict health regimens and work routines could be interpreted as a manifestation of undue worry. This blurred line between intense (and possibly useful) self-scrutiny and mental disorder became a challenge for doctors and patients to address.

A fundamental shift had taken place by the late 19th century, thanks to the emergence of views that went on to shape modern psychology, including the idea that, as Crampton puts it, “the mind … controlled the body’s experiences and sensations, not the other way around.” Distinguished by what the neurologist George Beard, in the 1880s, called “delusions,” hypochondria was reconceived as a mental illness: It was a psychological state of unwarranted concern with one’s health.

In the 20th century, the prototypical hypochondriac became the kind of neurotic whom Woody Allen plays in Hannah and Her Sisters: someone who obsessively thinks they are sick when they’re not. Freud’s view that unexplained physical symptoms can be the body’s expression of inner conflict—meaning that those symptoms could be entirely psychological in origin—played an influential role. The idea that stress or anguish could manifest as bodily distress, in a process that came to be called “somatization,” spread. So did 20th-century medicine’s new capacity to test for and rule out specific conditions. Consider Allen’s character in that film, fretting about a brain tumor, only to have his worries assuaged by a brain scan. This newly psychologized anxiety, juxtaposed with medical science’s objective findings, helped solidify the modern image of the hypochondriac as a comedic figure, easily caricatured as a neurotic who could, and should, just “snap out of it.”

Unlike some other forms of anxiety, health worries are a problem that neither better labels nor improved treatments can hope to completely banish. Hypochondria, the writer Brian Dillon pointedly notes in his book The Hypochondriacs: Nine Tormented Lives, ultimately “makes dupes of us all, because life, or rather death, will have the last laugh.” In the meantime, we doubt, wait, anticipate, and try to identify: Is that stabbing headache a passing discomfort, or a sign of disease? Our bodies are subject to fluctuations, as the medical science of different eras has understood—and as today’s doctors underscore. The trick is to pay enough attention to those changes to catch problems without being devoured by the anxiety born of paying too much attention.

In retrospect, Crampton, as a high-school student in England, wasn’t anxious enough, overlooking for months a tennis-ball-size lump above her collarbone that turned out to be the result of Hodgkin’s lymphoma, a blood cancer. Her doctor told her there was a significant chance that treatment would leave her cancer-free. After chemo, radiation, one relapse, and a stem-cell transplant, she got better. But the experience left her hypervigilant about her body, anxious that she might miss a recurrence. As she reflects, “it took being cured of a life-threatening illness for me to become fixated on the idea that I might be sick.” Her conscientious self-monitoring gave way to panicked visits to urgent care and doctors’ offices, seeking relief from the thought that she was experiencing a telltale symptom—a behavior that she feels guilty about as a user of England’s overstretched National Health Service. “At some point,” she writes, “my responsible cancer survivor behavior had morphed into something else.”

[From the January/February 2014 issue: Scott Stossel on surviving anxiety]

What Crampton was suffering from—the “something else”—seems to be what the DSM now labels “illness anxiety disorder,” an “excessive” preoccupation with health that is not marked by intense physical symptoms. It applies both to people who are anxious without apparent cause or symptoms and to people like Crampton, who have survived a serious disease that might recur and are understandably, but debilitatingly, apprehensive.

It can be hard to distinguish this term, Crampton finds, from the DSM’s other one, somatic symptom disorder, which describes a disproportionate preoccupation that is accompanied by persistent physical symptoms. It applies to people who catastrophize—the person with heartburn who grows convinced that she has heart disease—as well as those with a serious disease who fixate, to their detriment, on their condition. The definition makes a point of endorsing the validity of a patient’s symptoms, whatever the cause may be; in this, it embodies a 21st-century spirit of nonjudgmental acceptance. Yet because it is a diagnosis of a mental “disorder,” it inevitably involves assessments—of, among other things, what counts as “excessive” anxiety; evaluations like these can be anything but clear-cut. Medicine’s distant and not so distant past—when multiple sclerosis was often misdiagnosed as hysteria, and cases of long COVID were dismissed as instances of pandemic anxiety—offers a caution against confidently differentiating between psychological pathology and poorly understood illness.

In Crampton’s view, the DSM’s revision has turned out to be “an extensive exercise in obfuscation.” Some physicians and researchers agree that the categories neither lump nor split groups of patients reliably or helpfully. A 2013 critique argued that somatic symptom disorder would pick up patients with “chronic pain conditions [and] patients worrying about the prognosis of a serious medical condition (e.g., diabetes, cancer),” not to mention people with undiagnosed diseases. A 2016 study failed to provide “empirical evidence for the validity of the new diagnoses,” concluding that the use of the labels won’t improve the clinical care of patients suffering from “high levels of health anxiety.”

“Hypochondria only has questions, never answers, and that makes us perpetually uneasy,” Crampton writes. Still, she finds that she almost mourns the old term. Its imperfections fit her messy experience of anxiety—and help her describe it to herself and doctors, giving “edges to a feeling of uncertainty” that she finds overwhelming. But her position, she acknowledges, is a privileged one: As a former adolescent cancer patient, she gets care when she seeks it, and doesn’t really have to worry about being stigmatized by doctors or friends.

Crampton’s concerns and her experience, that is, are legible to the medical system—to all of us. But that is not true for the millions of patients (many of them young women) suffering from fatigue or brain fog who struggle to get doctors to take their symptoms seriously, and turn out to have a condition such as myalgic encephalomyelitis/chronic fatigue syndrome or an autoimmune disease. They, too, are pulled into the story of hypochondria—yet the DSM’s labels largely fail to solve the problem these patients encounter: In the long shadow of Freud, we are still given to assuming that what clinicians call “medically unexplained symptoms” are psychological in origin. Fifteen-minute appointments in which doctors often reflexively dismiss such symptoms as indicators of anxiety don’t help. How can doctors usefully listen without time—or medical training that emphasizes the bounds of their own knowledge?

This omission is the real problem with the DSM’s revision: It pretends to have clarity we still don’t have, decisively categorizing patients rather than scrutinizing medicine’s limitations. The challenge remains: Even as evidence-based medicine laudably strives to nail down definitions and make ever-finer classifications, patients and practitioners alike need to recognize the existential uncertainty at the core of health anxiety. Only then will everyone who suffers from it be taken seriously. After all, in an era of pandemics and Dr. Google, what used to be called hypochondria is more understandable than ever.

Someday we might have the longed-for “definitive test” or a better set of labels, but right now we must acknowledge all that we still don’t know—a condition that literature, rather than medicine, diagnoses best. As John Donne memorably wrote, in the throes of an unknown illness, now suspected to have been typhus, “Variable, and therefore miserable condition of man! This minute I was well, and am ill, this minute.”

This article appears in the June 2024 print edition with the headline “Hypochondria Never Dies.”