Itemoids

Science

Trump Is Threatening to Unwind AI Progress

The Atlantic

www.theatlantic.com › technology › archive › 2024 › 10 › trump-ai-policy › 680476

If the presidential election has provided relief from anything, it has been the generative-AI boom. Neither Kamala Harris nor Donald Trump has made much of the technology in their public messaging, and they have not articulated particularly detailed AI platforms. Bots do not seem to rank among the economy, immigration, abortion rights, and other issues that can make or break campaigns.

But don’t be fooled. Americans are very invested in, and very worried about, the future of artificial intelligence. Polling consistently shows that a majority of adults from both major parties support government regulation of AI, and that demand for regulation might even be growing. Efforts to curb AI-enabled disinformation, fraud, and privacy violations, as well as to support private-sector innovation, are under way at the state and federal levels. Widespread AI policy is coming, and the next president may well steer its direction for years to come.

On the surface, the two candidates couldn’t be further apart on AI. When AI has come up on the campaign trail, the focus has not been on substantive issues, but instead on the technology’s place in a supposed culture war. At a rally last winter, Trump railed against the Biden administration’s purported “use of AI to censor the speech of American citizens” (a contorted reference, perhaps, to an interview that week in which Secretary of Homeland Security Alejandro Mayorkas denounced the “politicization” of public education around the dangers of AI, including misinformation). Trump also said he would overturn Joe Biden’s executive order on AI—a sprawling document aiming to preserve consumer and civil rights while also spurring innovation—“on day one.” Then, over the summer, the GOP platform lambasted the “dangerous” executive order as slowing innovation and imposing “Radical Leftwing ideas” on the technology, perhaps referring to the order’s stated “dedication to advancing equity.” Elon Musk, now the most powerful Trump surrogate in the world, recently invited his followers to “imagine an all-powerful woke AI.” Harris, for her part, hasn’t discussed AI much as a candidate, but she is leading many of Biden’s AI efforts as vice president, and her economic platform mentions furthering “the commitments set forth in the 2023 AI Executive Order.”

[Read: The real AI threat starts when the polls close]

Such rhetoric is par for the course this election cycle: Trump in particular has never been known for nuance or gravity, and tearing down Biden is obviously his default position. What no one seems to remember, though, is that Biden’s “dangerous” executive order echoes not one but two executive orders on AI that Trump himself signed. Many of the policies around AI that President Biden and Vice President Harris have supported extend principles and initiatives from Trump’s term—such as efforts to establish federal funding for AI research, prepare American workers for a changing economy, and set safety standards for the technology. The two most recent presidential administrations even agreed on ensuring that federal AI use is nondiscriminatory. Trump’s approach to the technology, in turn, built on foundations laid during Barack Obama’s presidency.

In other words, despite how AI has been approached by their campaigns (that is, barely, or only in the shallowest terms), both candidates have real track records on AI, and those records are largely aligned. The technology appeared to be a rare issue driven for years by substance rather than partisanship, perhaps because prior to the launch of ChatGPT, it wasn’t on many Americans’ minds. With AI now assuming national importance, Trump has promised to tear that consensus down.

Still, there’s a good chance he won’t be able to—that reason and precedent will prevail in the end, if only because there’s already so much momentum behind what began during his own administration. “To the extent that the Trump administration worked on issues of science and technology policy, it worked on AI,” Alondra Nelson, a professor at the Institute for Advanced Study who previously served as the acting director of Biden’s Office of Science and Technology Policy, told me. And in doing so, it was inheriting priorities set under a man Trump has called “the most ignorant president in our history.” Near the end of his second term, Obama directed several federal agencies to study and plan for the growing importance of “big data” and AI, which culminated at the end of 2016 with the publication of a report on the “future of artificial intelligence,” as well as a national strategic plan for AI research and development. Those included broad suggestions to grow the federal government’s AI expertise, support private-sector innovation, establish standards for the technology’s safety and reliability, lead international conversations on AI, and prepare the American workforce for potential automation.

A few years later, Trump began to deliver on those recommendations through his executive orders on AI, a 2019 update to that strategic plan, and his White House’s guidance to federal agencies on using AI. “The Trump administration made AI a national technology priority,” Michael Kratsios, who served as the country’s chief technology officer under Trump and helped design his AI strategy, told Congress last October. In that testimony, Kratsios, who is currently the managing director of the start-up Scale AI, lauded much of Obama’s previous and Biden’s current work on AI—even criticizing Biden for not doing enough to implement existing policies—and noted the continued importance of supporting “high-quality testing and evaluation” of AI products.

Biden and Harris have since taken the baton. Trump’s first executive order in particular did “have a lot of the ingredients that got much more developed in Biden’s EO,” Ellen Goodman, a professor at Rutgers Law School who has advised the National Telecommunications and Information Administration on the fair and responsible use of algorithms, told me. “So when Trump says he’s going to repeal it with a day-one action, one wonders, what is it exactly that’s so offensive?” Even specific policies and programs at the center of Biden and Harris’s work on AI, such as establishing national AI-research institutes and the National AI Initiative Office, were set in motion by the Trump administration. The National Artificial Intelligence Research Resource, which Harris’s economic plan touts by name, originated with AI legislation that passed near the end of Trump’s term. Innovation, supporting American workers, and beating China are goals Harris and Trump share. Bluster aside, the candidates’ records suggest “a lot of similarities when you get down to the brass tacks of priorities,” Alexandra Givens, the president of the Center for Democracy & Technology, a nonprofit that advocates for digital privacy and civil rights, told me.

[Read: The EV culture wars aren’t what they seem]

To be clear, substantive disputes on AI between Harris and Trump will exist, as with any pair of Democratic and Republican presidential candidates on most issues. Even with broad agreements on priorities and government programs, implementation will vary. Kratsios had emphasized a “light touch” approach to regulation. Some big names in Silicon Valley have come out against the Biden administration’s AI regulations, arguing that they put undue burdens on tech start-ups. Much of the Republican Party’s broader message involves dismantling the federal government’s regulatory authority, Goodman said, which would affect its ability to regulate AI in any domain.

And there is the “Radical Leftwing” rhetoric. The Biden-Harris administration made sure the “first piece of work out the public would see would be the Blueprint for an AI Bill of Rights,” Nelson said, which outlines various privacy and civil-rights protections that anyone building or deploying AI systems should prioritize. Republicans seem to have a particular resistance to these interventions, which are oriented around such concepts as “algorithmic discrimination,” or the idea that AI can perpetuate and worsen inequities based on race, gender, or other identifying characteristics.

But even here, the groundwork was actually laid by Trump. His first executive order emphasized “safety, security, privacy, and confidentiality protections,” and his second “protects privacy, civil rights, [and] civil liberties.” During his presidency, the National Institute of Standards and Technology issued a federal plan for developing AI standards that mentioned “minimizing bias” and ensuring “non-discriminatory” AI—the very reasons why the GOP platform lashed out at Biden’s executive order and why Senator Ted Cruz recently called its proposed safety standards “woke.” The reason that Trump and his opponents have in the past agreed on these issues, despite recent rhetoric suggesting otherwise, is that these initiatives are simply about making sure the technology actually functions consistently, with equal outcomes for users. “The ‘woke’ conversation can be misleading,” Givens said, “because really, what we’re talking about is AI systems that work and have reliable outputs … Of course these systems should actually work in a predictable way and treat users fairly, and that should be a nonpartisan, commonsense approach.”

In other words, the question is ultimately whether Trump will do a heel turn simply because the political winds have shifted. (The former president has been inconsistent even on major issues such as abortion and gun control in the past, so anything is possible.) The vitriol from Trump and other Republicans suggests they may simply oppose “anything that the Biden administration has put together” on AI, says Suresh Venkatasubramanian, a computer scientist at Brown University who previously advised the Biden White House on science and technology policy and co-authored the Blueprint for an AI Bill of Rights. Which, of course, means opposing much of what Trump’s own administration put together on AI.

But he may find more resistance than he expects. AI has become a household topic and common concern in the less than two years since ChatGPT was released. Perhaps the parties could tacitly agree on broad principles in the past because the technology was less advanced and didn’t matter much to the electorate. Now everybody is watching.

Americans broadly support Biden’s executive order. There is bipartisan momentum behind laws to regulate deepfake disinformation, combat nonconsensual AI sexual imagery, promote innovation that adheres to federal safety standards, protect consumer privacy, prevent the use of AI for fraud, and more. A number of the initiatives in Biden’s executive order have already been implemented. An AI bill of rights similar to the Biden-Harris blueprint passed Oklahoma’s House of Representatives, which has a Republican supermajority, earlier this year (the legislative session ended before the bill could make it out of committee in the state Senate). There is broad “industry support and civil-society support” for federal safety standards and research funding, Givens said. And every major AI company has entered voluntary agreements with and advised the government on AI regulation. “There’s going to be a different expectation of accountability from any administration around these issues and powerful tools,” Nelson said.

When Obama, Trump, and Biden were elected, few people could have predicted anything like the release of ChatGPT. The technology’s trajectory could shift even before the inauguration, and almost certainly will before 2028. The nation’s political divides might just be too old, and too calcified, to keep pace—which, for once, might be to the benefit of the American people.

Americans Are Hoarding Their Friends

The Atlantic

www.theatlantic.com › family › archive › 2024 › 10 › friend-hoarding-group-mixing-psychology › 680386

Hypothetically, introducing friends from different social circles shouldn’t be that hard. Two people you like—and who like you—probably have some things in common. If they like each other, you’ll have done them a service by connecting them. And then you can all hang out together. Fun!

Or, if you’re like me, you’ve heard a little voice in your head whispering: Not fun. What if you’re sweet with one friend and sardonic with another, and you don’t know who to be when you’re all in the same room? Or what if they don’t get along? Worst of all: What if they do—but better than they do with you? What if they leave you behind forever, friendless and alone?

That might sound paranoid, but in my defense, it turns out these thoughts are common. Danielle Bayard Jackson, the author of Fighting for Our Friendships: The Science and Art of Conflict and Connection in Women’s Relationships, told me that when she was a high-school teacher years ago, she’d often hear students airing anxieties: So-and-so’s befriending my friend or I think she’s trying to take her. She assumed it was a teenage issue—until she began working as a friendship coach and found that her “charismatic, high-achieving, successful” adult clients didn’t want to introduce friends either. The subject has been popping up online, too. A whole category of TikToks seems to consist of people just looking stressed, with a caption like “when your birthday is coming up and you gotta decide if u wanna mix the friend groups or not” or “POV mixing friendgroups and they’re about to watch you switch between personality 1 & 3.” In a recent Slate article, the writer Chason Gordon confessed to an “overwhelming horror at merging friend groups.”

Much of what can make linking friends scary—insecurity, envy, an instinct to hold tight to the people you love—isn’t new; it’s fundamentally human. But keeping your friends to yourself, what I call “friend hoarding,” is a modern practice. Before the Industrial Revolution, having different social circles was hardly possible: You were likely to eat, work, and pray with the same people day in and day out. Only once more people moved from close-knit farming villages to larger towns and cities did strangers begin coexisting in private bubbles and forming disconnected groups.

Today, this phenomenon has gone into “hyperdrive,” Katherine Stovel, a University of Washington sociologist, told me. With the internet and faster transportation, people can more easily maintain relationships from different parts of life; the more discrete the groups are, the harder it might be to integrate them.

But the thing is, many people want to benefit from the kinds of introductions they’re nervous to make. And ironically, though they might hoard friends out of fear of being abandoned, doing so could leave them feeling more lonely in the end. Marisa G. Franco, the author of Platonic: How Understanding Your Attachment Style Can Help You Make—And Keep—Friends, told me that people who have plenty of individual friends can still experience “collective loneliness,” or a yearning to be part of a group with common identity or purpose—something that a more connected, cohesive network could solve. Bayard Jackson mentioned something similar: “I’ve had people say to me how hungry they are to be a part of a friend group, this family feel,” she said. “And then in the same breath tell me they don’t want to introduce their friends to one another. And I’ll point out … do you understand how that doesn’t work?”

If Americans let their friends mingle, they might form the communities they’ve been hoping for. But first they need to stop standing in their own way.   

Before the late 18th century, most relationships were either familial or, at least to some degree, practical; they were rarely just about having fun or developing intimacy, as friendship is usually conceived of now. But after industrialization, people suddenly had far more options in life: what they’d do for work, where they’d live, whom they’d meet. As Reuben Thomas, a University of New Mexico sociologist, told me, it became possible to be the only person “who works as a hospital technician but is also in a Sherlock Holmes book club, and is also in a rock-climbing club, who goes to Renaissance fairs and is part of the Swedish Lutheran church and lives in Wichita.” Each pocket of life can yield more pals.

These days, people can socialize online with scattered friends who’ll never end up at the same bar or party—and who might not even know of one another’s existence. Even if friends live in the same area, today there are fewer so-called third spaces: free, public areas where big groups can hang out. Just as romance has become privatized, with more people dating strangers from apps than acquaintances from their network, researchers told me that there’s been a shift toward privatized friendship too. “Everybody has to have a play date rather than just going out into the neighborhood and playing with whoever's there,” Stovel said.

Keeping friends separate can have its benefits. It allows people to freely express certain sides of themselves in the safety of simpatico groups—say, earnest geekiness with the Renaissance stans and adventurousness with the climbers. Stovel told me this can be particularly important for young adults, who might be “trying on personas” to figure out who they are.

[Read: What adults forget about friendship]

A more primal motivation also keeps many folks from making introductions: They’re nervous that their friends will grow close and that they’ll be cast aside. People have argued for decades that feeling threatened by friends’ other bonds is immature, or, worse, that it reveals how capitalism has crept into relationships, driving us to compete, amass power, and treat one another like possessions, Jaimie Krems, a UCLA psychologist who studies friendship envy, told me. But the cold, hard fact, I’m sorry to report, is that friendship inherently does involve some competition. According to the “alliance theory,” humans have evolved to make friends because they’re in our corner—not someone else’s—in times of trouble, and we’re in theirs in return. Today, too, everyone has limited time, attention, and resources to share with the people they love, and more time with one friend inevitably means less time with another. Friendship envy is adaptive, Krems told me.

You can lose friends after introducing them; researchers have found that “friend poaching” is a very real phenomenon. But even if that worst-case scenario isn’t likely to happen, the thought of losing any closeness can be terrible. Bayard Jackson said that women in particular “really value feeling like we’re in this mutually exclusive private vault” with our besties. It’s cozy in there! And so many people already have a gnawing fear, she told me: “that I’ll be left behind, forgotten, that I don’t offer anything interesting enough.”

Being the person who introduces two friends—Stovel calls these people the “catalyst brokers”—nearly always involves some risk. Initially, the broker gains power because the two people she’s introduced are dependent on her for access; the friends are also, hopefully, grateful for the connection. At some point, though, the broker might become redundant, even disposable, the same way a matchmaker or a real-estate agent would be after a job well done.

People may have more to gain than they do to lose when mixing friends, though. Making those introductions might make you feel more whole, like the various versions of yourself are finally coming together. Combining circles could be the difference between sustaining friendships and letting them languish from neglect, given that finding time is a big obstacle to friendship today. Your friends may also be able to offer you more support together than they could individually, especially in a crisis; they can work together to care for you. And you might start feeling like part of something larger than yourself—a remedy for the “collective loneliness” that Franco described.

[Read: What if friendship, not marriage, was at the center of life?]

Drawing connections among people could even shift society as a whole, making it more equitable and less homogenous. For one thing, friend hoarding—however unintentionally—can lead to “opportunity hoarding,” in which privileged people circulate resources among themselves rather than distributing them to people with greater need outside their bubble. And if people all stay locked in the groups they formed in, say, high school, society is more likely to remain stubbornly segregated. The German philosopher and sociologist Georg Simmel believed that a society with separate but overlapping circles allows people to observe one another’s commonalities and differences, which, Stovel said, can “breed empathy, understanding, tolerance, and a richness of experience and curiosity.” It’s a sign, she said, of a “strong social fabric.”

This doesn’t mean that everybody needs to immediately invite all their buddies to the same place and keep the door locked until they’re ready to emerge as one mega-group. But maybe more people could start warming to the idea of being the broker. Bayard Jackson likes to remind people that friendships ebb and flow: Even if some of your friends do eventually get closer to one another than they are to you, that hierarchy isn’t static. And it might help to remember, too, that the reason this all can feel so hard is that friends mean so much. Krems believes that friend envy is functional in part because it motivates people to care for their relationships, to not take them for granted. In her research, she’s found that when people feel that their bond is threatened, they’ll take pains to protect it. This might involve telling a friend that you care about them—so much so that you fear them getting close to someone else, even if you know that reaction might seem silly.

The truth is that you probably can’t keep your friends separate even if you want to. You certainly can’t dictate whom they connect with. That’s the thing about friends: They’re not characters in your head but autonomous human beings with their own motivations and experiences. That’s why they’re interesting—and why they give us so much to lose.


Why People Itch and How to Stop It

The Atlantic

www.theatlantic.com › ideas › archive › 2024 › 10 › why-people-itch-and-how-to-stop-it › 680285


The twinge begins in the afternoon: toes. At my desk, toes, itching. Toes, toes, toes.

I don’t normally think about my toes. But as I commute home in a crowded subway car, my feet are burning, and I cannot reach them. Even if I could, what would I do with my sneakers? My ankles are itchy too. But I’m wearing jeans, which are difficult to scratch through, unless you have a fork or something similarly rigid and sharp. I contemplate getting off at the next stop, finding a spot on a bench, removing my shoes, and scratching for a while. But I need to get home. Growing desperate, I scrape at my scalp, which is not itchy. This somehow quiets things down.

I am full of these kinds of tricks. A lot of folks, if you tell them you’re itchy, will recommend a specific brand of lotion. I hate these people. My husband made me a T-shirt that reads yes, I have tried lotions. They do not work. No, not that one either. Zen types will tell you to accept the itch, to meditate on it, as you might do if you were in pain. These people have no idea what they are talking about. Watching someone scratch makes you itchy; worrying about something pruritogenic, like a tick crawling on you, makes you itchy; focusing on how itchy you are when you are itchy makes you itchier. The trick, if you are itchy, is to not think about it, using those ancient psychological tricks disfavored in today’s therapeutic environments: avoidance, deflection, compartmentalization, denial.

Cruelest of all are the people who tell you not to scratch. They have a point, I admit. Scratching spurs cells in your immune system to secrete the hormone histamine, which makes you itchy; in this way, scratching leads to itching just as itching leads to scratching. But if you itch like I itch, like a lot of people itch, there’s no not scratching. It would be like telling someone to stop sneezing or not to pee. “I never tell people not to scratch,” Gil Yosipovitch, a dermatologist at the University of Miami Miller School of Medicine known as “the godfather of itch,” told me, something I found enormously validating.

[Read: Another reason to hate ticks]

No, the techniques that work are the techniques that work. During the day, I pace. Overnight, when the itching intensifies, I balance frozen bags of corn on my legs or dunk myself in a cold bath. I apply menthol, whose cold-tingle overrides the hot-tingle for a while. I jerk my hair or pinch myself with the edges of my nails or dig a diabetic lancet into my stomach. And I scratch.

My body bears the evidence. Right now I am not itchy—well, I am mildly itchy, because writing about being itchy makes me itchy—yet my feet and legs are covered in patches of thick, lichenified skin. This spring, I dug a bloody hole into the inside of my cheek with my teeth. I’ve taken out patches of my scalp, shredded the edge of my belly button, and more than once, desperate to get to an itch inside of me, abraded the walls of my vagina.

During my first pregnancy, when the itching began, it was so unrelenting and extreme that I begged for a surgeon to amputate my limbs; during the second, my doctor induced labor early to stop it. Still, I ended up hallucinating because I was so sleep-deprived. Now I have long spells when I feel normal. Until something happens; I wish I knew what. I get brain-fogged, blowing deadlines, struggling to remember to-dos, failing to understand how anyone eats dinner at 8 p.m., sleeping only to wake up tired. And I get itchy. Maybe it will last forever, I think. It stops. And then it starts again.

One in five people will experience chronic itch in their lifetime, often caused by cancer, a skin condition, liver or kidney disease, or a medication such as an opiate. (Mine is caused by a rare disease called primary biliary cholangitis, or PBC.) The itching is the corporeal equivalent of a car alarm, a constant, obnoxious, and shrill reminder that you are in a body: I’m here, I’m here, I’m here. It is associated with elevated levels of stress, anxiety, and depression; causes sleep deprivation; and intensifies suicidal ideation. In one study, the average patient with chronic itch said they would give up 13 percent of their lifespan to stop it.

Yet itching is taken less seriously than its cousin in misery, pain. Physicians often dismiss it or ignore it entirely. Not that they could treat it effectively if they wanted to, in many cases. There are scores of FDA-approved medications for chronic pain, from ibuprofen to fentanyl. There are no medications approved for chronic itch. “Pain has so much more research, in terms of our understanding of the pathophysiology and drug development. There’s so much more compassion from doctors and family members,” Shawn Kwatra of the University of Maryland School of Medicine told me. Itch, he added, “is just not respected.”

Perhaps doctors do not respect it because, until recently, they did not really understand it. Only in the late aughts did scientists establish that itch is a sensation distinct from pain and begin figuring out the physiology of chronic itch. And only in the past decade did researchers find drugs that resolve it. “We’re having all these breakthroughs,” Kwatra said, ticking off a list of medications, pathways, proteins, and techniques. “We’re in a golden age.”

Once left to suffer through their commutes and to ice their shins with frozen vegetables, millions of Americans are finding relief in their medicine cabinet. For them, science is finally scratching that itch. Still, so far, none of those treatments works on me.

Itching is one of those tautological sensations, like hunger or thirst, characterized by the action that resolves it. The classic definition, the one still used in medical textbooks, comes from a 17th-century German physician: “an unpleasant sensation that provokes the desire to scratch.” Physicians today classify it in a few ways. Itching can be acute, or it can be chronic, lasting for more than six weeks. It can be exogenous, caused by a bug bite or a drug, or endogenous, generated from within. It can be a problem of the skin, the brain and nervous system, the liver, the kidneys.

Most itching is acute and exogenous. This kind of itch, scientists understand pretty well. In simplified terms, poison ivy or laundry detergent irritates the skin and spurs the body’s immune system to react; immune cells secrete histamine, which activates the nervous system; the brain hallucinates itch into being; the person starts to scratch. The episode ends when the offending irritant is gone and the body heals. Usually medicine can vanquish the itch by quieting a person’s immune response (as steroids do) or blocking histamine from arousing the nervous system (as antihistamines do).

Yet some people itch for no clear reason, for months or even years. And many itching spells do not respond to steroids or antihistamines. This kind of itch, until recently, posed some “fundamental, basic science questions,” Diana Bautista, a neuroscientist at UC Berkeley, told me. Scientists had little idea what was happening.

In the 1800s, physicians were studying the nervous system, trying to figure out how the body is capable of feeling such an astonishing panoply of sensations. Researchers found that tiny patches of skin respond to specific stimuli: You might feel a needle prick at one spot, but feel nothing a hair’s breadth away. This indicated that the body has different nerve circuits for different sensations: hot, warm, cold, cool, crushing, stabbing. (Migratory birds have receptors in their eyes that detect the world’s magnetic field.) The brain synthesizes signals from nerve endings and broadcasts what it senses with obscene specificity: the kiss of a raindrop, the crack of an electric shock.

In the 1920s, a German physiologist noted that when researchers poked a pain point on the skin, itch often followed ouch. This led scientists to believe that the sensations shared the same nervous-system circuits, with the brain interpreting weak messages of pain as itch. This became known as the “intensity theory”—itch is pain, below some threshold—and it became the “canonical view,” Brian Kim, a dermatologist at the Icahn School of Medicine at Mount Sinai, told me.

It never made much sense. If you catch your finger in a door, the stinging sensation does not dissipate into itch as the swelling goes down. That the body might have different circuitries for itch and pain seemed plausible for other reasons, too. “If you take 10 patients experiencing pain and give them morphine, probably all of them will feel better. If you take 10 patients with chronic itch and you give them morphine, none of them would,” Kim said. “That tells you right there.” Moreover, pain alleviates itch. It interrupts it. That is, in part, why you scratch: The pain creates the pleasure of relief. “The behavioral output is very different,” Bautista told me. “If you encounter poison ivy or get a bug bite, you don’t try to avoid the injury. You attack it. But with pain, you withdraw; you have these protective reflexes.”

Many scientists preferred an alternative theory: that itch had its own dedicated “labeled line” within the body. It took until 2007 for neuroscientists to uncover an itch-specific circuitry that many had long suspected was there. Mice genetically engineered to lack a specific receptor, scientists found, felt “thermal, mechanical, inflammatory, and neuropathic pain,” but not itch.

Since then, neuroscientists have refined and complicated their understanding of how things work—in particular, extending their understanding of what amplifies or overrides itch and the relationship between the pain and itch circuitries. And doctors have come to understand itch as a disease in and of itself.

And a curious disease, at that. In any given year, one epidemiological survey found, chronic itch afflicts 16 percent of the general adult population, making it half as common as chronic pain. Yet there are scores of American medical centers dedicated to treating pain and none for itch. On Facebook, I found hundreds of peer-support groups for people with chronic pain. For chronic itch, I found just one, dedicated to sufferers of the miserable dermatological condition prurigo nodularis.

Millions of us are scratching alone, a social reality with deep physiological roots. Itching is isolating. The touch of another person can be unbearable. When I get really itchy at night, I build a pillow wall between myself and my cuddle-enthusiast husband, so he does not accidentally wake me up, kickstart the itch-scratch cycle, and mechanically increase our chance of divorce. Studies also show that itch is both contagious and repellent. In the 1990s, scientists in Germany rigged up cameras in a lecture hall and filmed members of the public who came to watch a talk on pruritus. Inevitably, people in the crowd began scratching themselves. Yet people reflexively move away from others who are itching, and toward those in pain.

At best, scratching yourself is like chewing with your mouth open, embarrassing and undignified. At worst, it broadcasts uncleanliness, infestation, derangement, and disease, raising the specter of bedbugs, scabies, chicken pox, roseola, gonorrhea, insanity, and who knows what else. In ancient times, people believed that lice were a form of godly punishment: They generated spontaneously in a person’s flesh, tunneled their way out, and consumed their host, thus transfiguring them into bugs. Plato is one of many historical figures accused by his haters of being so lousy, literally, that it killed him. And maybe it did. An extreme lice infestation can cause a person to die from a blood infection or anemia.

[Read: The wellness industry is manifesting a quantum world]

At least the ancients grasped how miserable being itchy can be. In 1365, a scabies-ridden Petrarch complained to Boccaccio that his hands could not hold a pen, as “they serve only to scratch and scrape.” In Dante’s Inferno, itching is meted out as a punishment to alchemists in the eighth ring of hell. Murderers in the seventh ring, including Attila the Hun, get a mere eternal dunk in a boiling river of blood.

In my experience, people do not meet an itchy person and grasp that they might be beyond the boiling river. (The physician and journalist Atul Gawande wrote about a patient who scratched all the way through her skull into her brain.) The stigma and the dismissal compound the body horror. When I explain that I itch, and at some point might start itching and never stop, many people respond with a nervous giggle or incredulousness. One of my dumb lines on it involves being a distant relative of a participant—to be clear, an accuser—in the Salem witch trials. Who knew that curses work so well!

Itch is a curse, an eldritch one. At night, I sometimes feel crumbs or sand on my sheets, go to brush the grit off, and find the bed clean. One day, I was rummaging around in a basement and felt a spider drop onto my shoulder from the ceiling. I felt that same, vivid sensation a hundred times more over the next few days. The inside of my body itches, like I have bug bites on my intestines and my lungs. I swear that I can feel the floss-thin electric fibers under my skin, pinging their signals back and forth.

The worst is when I need the itch to stop and I cannot get it to stop, not by dunking myself in ice water or abrading myself with a fork or stabbing myself with a needle or taking so much Benadryl that I brown out. It generates the fight-or-flight response; it feels like being trapped. I don’t know; maybe it is akin to drowning.  

My chronic itch might be a disease unto itself, but it is also a symptom. At some point in my early 30s, my immune system erred and started to destroy the cells lining the small bile ducts in my liver. This inflamed them, obstructing the flow of sticky green bile into my digestive system. The ducts are now developing lattices of scar tissue, which will spread through my liver, perhaps resulting in cirrhosis, perhaps resulting in death.

Primary biliary cholangitis is degenerative and incurable, and was until recently considered fatal. The prognosis was radically improved by the discovery that a hundred-year-old drug used to dissolve gallstones slows its progression, reducing inflammation and making bile secretion easier. But a minority of people do not respond to the medication. I am one of them.

PBC is generally slow moving. Science keeps advancing; my doctors have me on an off-label drug that seems to be working. Still, I am sick, and I always will be. I feel fine much of the time. The dissonance is weird, as is the disease. What am I supposed to do with the knowledge of my illness? Am I at the end of the healthy part of my life, at the beginning of the dying part?

I am stuck with questions I cannot answer, trying to ignore them, all the while reminded of them over and over again, itchy.

Some answers, however, are coming. Having found nerve circuits dedicated to itch, scientists also began finding receptors triggered by substances other than histamine, thus unlocking the secrets of chronic itch. “We know more about the neural circuits that allow you to experience this sensation, regardless of cause,” Bautista told me. “We know more about inflammatory mediators and how they activate the circuits. We know more about triggers and priming the immune system and priming the nervous system.”

I asked a number of experts to help me understand chronic itch in the same way I understood acute itch—to show me an itch map. “It’s complicated,” Kwatra told me. “Complicated,” Kim agreed. “Complex,” said Xinzhong Dong of Johns Hopkins. The issue is that there’s not really a map for chronic itch. There are multiple itch maps, many body circuits going haywire in many ways.

Still, Dong gave me one example. The drug chloroquine “works really well to kill malaria,” he explained. But chloroquine can cause extreme itchiness in people with dark skin tones. “The phenomenon is not an allergic reaction,” Dong told me; and antihistamines do not ease it. In 2009, his lab figured it out: In highly simplified terms, melanin holds chloroquine in the skin, and chloroquine lights up an itch receptor.

Because there is no single map for chronic itch, there is no “big itch switch that you can turn off reliably with a drug,” Kim told me. “I’m not so convinced that it is even doable.” (Dong thought that it probably is. It just might cause debilitating side effects or even kill the itchy person in the process.) Still, there are lots of smaller itch switches, and researchers are figuring out how to flip them, one by one.

These include a pair of cytokines called interleukin 4 and interleukin 13. When a person encounters an allergen, the body secretes these chemical messengers to rev up the immune system. Yet the messengers also spur the body to produce itch-related cytokines and make the nervous system more sensitive to them. In 2017, the FDA approved a drug called Dupixent, which blocks the pair of cytokines, to treat atopic dermatitis, a form of eczema; the agency later approved it for asthma, eosinophilic esophagitis, and other inflammatory conditions (at a retail cost of $59,000 a year).

Michael McDaniel found a single open blister on his bicep when he was traveling in Europe in 2013. Within a few days, he told me, a crackling, bleeding rash had engulfed his upper extremities, oozing a honey-colored liquid. His knuckles were so swollen that his hands stiffened.

Back in the United States one miserable week after his trip, he saw a dermatologist, who diagnosed him with atopic dermatitis. Nothing McDaniel tried—steroids, bathing in diluted bleach, avoiding cigarette smoke and dryer sheets, praying to any god who would listen—ended his misery. He bled through socks and shirts. He hid his hands in photographs. “I was able to get my symptoms to a manageable baseline,” he told me. “It wasn’t really manageable, though. I just got used to it.”

McDaniel muddled through this circle of hell for seven years, until his dermatologist gave him an infusion of Dupixent. Twenty-four hours later, “my skin was the calmest it had been since my symptoms appeared,” McDaniel told me. The drug was a “miracle.”

Numerous drugs similar to Dupixent have been found over the past seven years to work on chronic itch, and physicians are refining techniques such as nerve blocks and ketamine infusions. But finding treatments for itching that is not related to an immune response has proved harder. Progress is throttled by the relatively small number of researchers working on itch, and the limited sums Big Pharma is willing to pump into drug development and trials. Plus, treatment options do not readily translate into treatment; a lot of folks are still being told to try Benadryl, even if all it does is make them groggy.

When I saw my hepatologist in August, that’s exactly what he suggested. The drug would help to quell the itching caused by my scratching, at a minimum, and help me sleep.

“I hate Benadryl,” I snapped. (Maybe I need a new T-shirt.) He suggested Zyrtec or Claritin.

As I continued to press for more options, he reviewed my bloodwork. My liver enzymes were still high. He suggested more tests, a biopsy. And he said we could start trialing drugs to manage my symptoms better. SSRIs, used to treat depression, sometimes ease itch in patients with PBC. Opioid antagonists, used to treat heroin overdoses, sometimes do the same. Cholestyramine, which soaks up bile acid (a known pruritogen), could work. Maybe UVB phototherapy. Maybe a cream charged with fatty acids that activate the endocannabinoid system. Maybe rifampin, an antibiotic.  

These ragtag off-label treatment options reflect the fact that physicians have not yet figured out PBC’s itch map. Some patients just itch and itch and itch and it never ends. I once asked my old hepatologist what she would do if that happened to me. “Transplant your liver,” she told me, not even looking up from her computer.

This was not a comforting answer. Organ transplantation is a lifesaving miracle, but a saved life is not an easy one. Recovery from a liver transplant takes at least a year. Grafts die, not infrequently. Many patients never heal fully. The five-year survival rate is 14 percentage points lower for PBC patients with liver transplants than it is for PBC patients who respond to the standard treatment and do not need one.

When I shared this prognosis with my mother, she responded, “You better start being nice to your siblings!” (I would rather die.) When I broke it to my husband, he paused a beat before saying he might go call his therapist.

Would I rather just live with the itch? How would I do it? I could not find a support group for the chronically itchy. But I did find two people with PBC who were willing to share their experiences with me. Carol Davis is a retired kindergarten teacher. More than a decade ago, she started itching “like crazy,” she told me. “It would wake me up in the night.” A doctor diagnosed her with PBC; like me, she itches on and off, and doctors have never found a set of drugs to quell her itch without causing miserable side effects.

[Read: A food-allergy fix hiding in plain sight]

I asked her how she has dealt with it, not in terms of doctors and drugs and lotions but in a more cosmic sense. “When you’re at the end of your lifespan, you just have the mindset: These things are going to happen,” Davis told me. “If I had been younger, like you, it might have been more scary.” Then she ticked off a list of things she looks forward to: games of Farkle, Bible study, going to the gym, seeing her friends from her sorority, spending time with her husband of 54 years. She got out of her head, she meant. And when she found herself back there, itching or afraid or in pain, she told me, “I don’t dwell on myself. I don’t ask the Lord to make me well. I dwell on Him!”

Gail Fisher is 84 “and a half,” she told me, and a harpist, gardener, and motor-home enthusiast. She lives alone in rural Effingham County, Illinois. Her PBC has developed into cirrhosis, and she also has arthritis and thyroid disease. The itching drives her nuts sometimes too, she told me. But she does not dwell on it either. “Gosh, don’t worry about it,” she said. “You don’t know what tomorrow is going to bring anyway!”  

When the itch is at its worst—not a bodily sensation but an existential blight, not a force begging for resignation but one driving a person to madness—that’s easier said than done. Still, I knew that following Davis’s and Fisher’s advice would do me more good than lotion or Benadryl ever has.

I’m here, my body tells me. I’m here. I’m alive. I’m dying. I’m here.

I know, I respond. Enough. I know.

Why Trump and Harris Are Turning to Podcasts

The Atlantic

www.theatlantic.com › newsletters › archive › 2024 › 10 › why-trump-and-harris-are-turning-to-podcasts › 680199


This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Kamala Harris is in the midst of a media blitz this week, including an interview on CBS’s 60 Minutes yesterday evening and an appearance on The Late Show With Stephen Colbert tonight. But she is also dipping into the world of mega-popular, not straightforwardly journalistic podcasts—notably appearing on the show Call Her Daddy last weekend. I spoke with my colleague Helen Lewis, who covers the podcast-sphere, about why Donald Trump and Harris are both spending time on these sorts of shows, what these interviews avoid, and how independent podcasters became major players in political media.

The New Mainstream

Lora Kelley: How does the value to the viewer of a traditional press interview—one focused on the specific issues and policies of the race—differ from that of a lifestyle podcast?

Helen Lewis: Roughly speaking, there are two types of sit-down conversations in politics: the accountability interview and the talk-show appearance. One focuses on pinning down candidates on their past statements and their future promises; the other, which most podcasts fall into, tries to understand the candidate as a person. The latter aren’t necessarily soft options—being charismatic and engaging while making small talk or fielding deeply personal questions is a skill in itself. (And I found Donald Trump’s appearance on Theo Von’s podcast, where he talked about his elder brother’s struggle with alcoholism, very revealing indeed.)

But only with the accountability interviews do you get candidates pressed repeatedly on questions that they’re trying to dodge. On Logan Paul’s podcast, Impaulsive, Trump was asked about the flow of fentanyl across the border, and he got away with rambling about how “unbelievable” the German shepherds Border Patrol officers use are. On Lex Fridman’s podcast, Trump asserted that he could easily sort out the crisis in Ukraine—and that was it. Who needs details? When Kamala Harris went on Call Her Daddy, the host, Alex Cooper, gave her a chance to lay out her message on reproductive rights but didn’t, for example, challenge her on whether she supports third-trimester abortions, which are deeply divisive.

Lora: From the perspective of a political campaign, are there any downsides to appearing on a podcast such as Call Her Daddy?

Helen: The obvious criticism of Harris appearing on Call Her Daddy, which has a young, female audience, is that she already has a big lead among young women aged 18–25. You can say the same about Trump appearing on podcasts that are popular with young men. But both groups contain many people who will be undecided about whether to vote at all.

Lora: Harris has done some traditional press interviews during this campaign cycle, including her 60 Minutes interview yesterday. But are we in a new era in which chats with friendly podcasters rival (or even overtake) traditional media interviews?

Helen: Well, quite. An article I think about a lot is John Herrman’s 2015 “Access Denied,” in which he asked why an A-lister—someone like Kim Kardashian—would give an interview to a celebrity magazine if she had something to sell, instead of simply putting a picture on Instagram. Why cooperate with the old guard of media when they are no longer the gatekeepers of attention? Herrman argued that the traditional media was suffering a “loss of power resulting in a loss of access resulting in further loss of power.”

That dynamic has now migrated to politics. The legacy brands no longer have a monopoly on people’s attention, and the online right, in particular, has been extremely successful in building an alternative, highly partisan media. Fox News is no longer the rightmost end of the spectrum—beyond that is Tucker Carlson’s podcast, or the Daily Wire network, or Newsmax, or Elon Musk’s X.

Now candidates tend to talk to the traditional media only when they want to reset the narrative about them, because other journalists still watch 60 Minutes or whatever it might be. There’s still a noisiness around a big legacy interview that you don’t get with, say, Call Her Daddy—even if more people end up consuming the latter.

Lora: Are these podcasts really doing anything new, or are they largely replicating traditional media interviews without the same standards and accountability?

Helen: The better ones strive for impartiality and don’t, for example, reveal their questions in advance—but many political podcasts are wrapped in an ecosystem where big-name guests mean more advertising revenue, and thus bigger profits for the hosts personally; plus, their only hope of getting a second interview is if the candidate feels the first one was sympathetic. Compare that with 60 Minutes, which interviewed Trump so robustly in 2020 that he has asked for an apology.

I’m as guilty as anyone, but we need to stop treating these podcasts as the “alternative” media when they are absolutely the mainstream these days. The top ones have audiences as big as, if not bigger than, most legacy outlets. If they don’t want to hire all the editorial infrastructure that traditional journalism has (such as fact-checkers, research assistants, etc.), or risk being unpopular by asking difficult questions, that’s on them. Joe Rogan renewed his Spotify contract for $250 million. Alex Cooper signed a deal with SiriusXM this year worth $125 million. We should stop treating the mega-podcasts like mom-and-pop outfits competing with chain stores. They’re behemoths.

Lora: You recently wrote about The Joe Rogan Experience, which is the top-listened-to podcast on Spotify and arguably the most influential behemoth of them all. Why haven’t the candidates gone on the show yet? Who from each ticket do you think would make the most sense as a guest?

Helen: As I understand it, Team Trump would love to get on The Joe Rogan Experience. The two politicians that Rogan adores are Tulsi Gabbard and Robert F. Kennedy Jr., who are now both working with the Republicans, and Team Trump would hope to encourage some of Rogan’s audience of crunchy, COVID-skeptic libertarians to follow them in moving from the independent/Democrat column to the GOP. But Rogan isn’t a full MAGA partisan like some of his friends, and Trump recently said that Rogan hasn’t asked him to appear.

In any case, I think Rogan would prefer to talk to J. D. Vance, who is very much part of the heterodox Silicon Valley–refugee tendency that he admires. For the Democrats, Harris might struggle to relax into the stoner-wonderment vibe of Rogan, given the tight-laced campaign she’s running. Rogan and Tim Walz could probably have a good chat about shooting deer and the best way to barbecue.

Related:

What going on Call Her Daddy did for Kamala Harris

How Joe Rogan remade Austin

Here are three new stories from The Atlantic:

Milton is the hurricane that scientists were dreading.

David Frum: Behind the curtain of Mexico’s progress

Donald Trump flirts with race science.

Today’s News

Florida Governor Ron DeSantis announced that roughly 8,000 National Guard members will be mobilized by the time Hurricane Milton, a Category 5 storm, makes landfall this week.

The Supreme Court appears likely to uphold the Biden administration’s regulation of “ghost gun” kits, which allow people to buy gun parts and build the weapons at home.

Prime Minister Benjamin Netanyahu claimed that the Israeli military has killed the successors of the Hezbollah leader Hassan Nasrallah, who was killed in an Israeli air strike last month.

Dispatches

Atlantic Intelligence: The list of Nobel laureates now contains two physicists whose 1980s research laid the foundations for modern artificial intelligence, Matteo Wong writes.

Explore all of our newsletters here.

Evening Read

Illustration by Ben Kothe / The Atlantic. Source: Getty.

They Were Made Without Eggs or Sperm. Are They Human?

By Kristen V. Brown

The little clump of cells looked almost like a human embryo. Created from stem cells, without eggs, sperm, or a womb, the embryo model had a yolk sac and a proto-placenta, resembling a state that real human embryos reach after approximately 14 days of development. It even secreted hormones that turned a drugstore pregnancy test positive.

To Jacob Hanna’s expert eye, the model wasn’t perfect—more like a rough sketch … But in 2022, when two students burst into his office and dragged him to a microscope to show him the cluster of cells, he knew his team had unlocked a door to understanding a crucial stage of human development. Hanna, a professor at the Weizmann Institute of Science in Israel, also knew that the model would raise some profound ethical questions.

Read the full article.

More From The Atlantic

Israel and Hamas are kidding themselves, Hussein Ibish argues.

The New York race that could tip the House

Culture Break

Warner Bros. / Everett Collection

Read. Lauren Elkin’s latest novel, Scaffolding, suggests that total honesty can take a marriage only so far, Lily Meyer writes.

Watch (or skip). Joker: Folie à Deux (out now in theaters) has nothing interesting to say about the challenges of fame, Spencer Kornhaber writes.

Play our daily crossword.

Stephanie Bai contributed to this newsletter.


They Were Made Without Eggs or Sperm. Are They Human?

The Atlantic

www.theatlantic.com › health › archive › 2024 › 10 › human-embryo-model-ethics › 680189

The little clump of cells looked almost like a human embryo. Created from stem cells, without eggs, sperm, or a womb, the embryo model had a yolk sac and a proto-placenta, resembling a state that real human embryos reach after approximately 14 days of development. It even secreted hormones that turned a drugstore pregnancy test positive.

To Jacob Hanna’s expert eye, the model wasn’t perfect—more like a rough sketch. It had no chance of developing into an actual baby. But in 2022, when two students burst into his office and dragged him to a microscope to show him the cluster of cells, he knew his team had unlocked a door to understanding a crucial stage of human development. Hanna, a professor at the Weizmann Institute of Science in Israel, also knew that the model would raise some profound ethical questions.

You might recall images of embryonic development from your high-school biology textbook: In a predictable progression, a fertilized egg morphs into a ball of cells, then a bean-shaped blob, and then, ultimately, something that looks like a baby. The truth is, though, that the earliest stages of human development are still very much a mystery. Early-stage embryos are simply too small to observe with ultrasound; at 14 days, they are just barely perceptible to the naked eye. Keeping them alive outside the body for that long is difficult. Whether anyone should is another matter—for decades, scientific policy and regulation have held 14 days as the limit for how long embryos can be cultured in a lab.

Embryo models—that is, embryos created using stem cells—could provide a real alternative for studying some of the hardest problems in human development, unlocking crucial details about, say, what causes miscarriages and developmental disorders. In recent years, Hanna and other scientists have made remarkable progress in cultivating pluripotent stem cells to mimic the structure and function of a real, growing embryo. But as researchers solve technical problems, they are still left with moral ones. When is a copy so good that it’s equivalent to the real thing? And more to the point, when should the lab experiment be treated—legally and ethically—as human?  

Around the 14th day of embryonic development, a key stage in human growth called gastrulation kicks off. Cells begin to organize into layers that form the early buds of organs. The primitive streak—a developmental precursor of the spine—shows up. It is also at that point that an embryo can no longer become a twin. “You become an individual,” Jeremy Sugarman, a professor of bioethics and medicine at the Johns Hopkins Berman Institute of Bioethics, told me.

[Read: A woman gave birth from an embryo frozen for 24 years]

The primitive streak is the main rationale behind what is often referred to as the “14-day rule.” Many countries limit the amount of time that a human embryo can be kept alive in a petri dish to 14 days. When a U.K. committee recommended the 14-day limit in the 1980s, IVF, which requires keeping embryos alive until they are either transferred or frozen around day five or six, was still brand-new. The committee reasoned that 14 days was the last point at which an embryo could definitively be considered no more than a collection of cells, without potential individual identity or individual rights; because the central nervous system forms only after that milestone, there was no chance the embryo could feel pain.

But the recent rise of advanced embryo models has led some groups to start questioning the sanctity of the two-week mark. In 2021, the International Society for Stem Cell Research relaxed its 14-day guideline, saying that research could continue past 14 days depending on ethical review and national regulations. (The organization declined to set a new limit.) In July, U.K. researchers put out a similar set of guidelines specifically for models. Australia’s Embryo Research Licensing Committee, however, recently decided to treat more realistic models like the real deal, prohibiting them from developing past 14 days. In the United States, federal funding of human-embryo research has been prohibited since 1996, but no federal laws govern experiments with either real or model embryos. “The preliminary question is, are they embryos at all?” Hank Greely, a law professor and the director of the Center for Law and the Biosciences at Stanford University, told me. Allow one to develop further, and “maybe it grows a second head. We don’t know.” (Having a second head is not necessarily a reason to disqualify someone from being human.) In the absence of an ethical consensus, Hanna is at work trying to cultivate his models to the equivalent of day 21, roughly the end of gastrulation. So far, he said, he’s managed to grow them to about day 18.

Researchers generally agree that today’s models show little risk of one day becoming walking, talking human beings. Combining sperm and eggs the old-fashioned way is already no guarantee of creating new life; even women in their 20s have only about a 25 percent chance of getting pregnant each month. Making embryos in a lab, sans the usual source material, is considerably harder. Right now, only about 1 percent of embryo models actually become anything that resembles an embryo, according to Hanna. And because scientists don’t have a great idea of what a nine-day-old embryo looks like inside the body, Greely said, they don’t actually know for certain whether the models are developing similarly.

[Read: The most mysterious cells in our bodies don’t belong to us]

And yet, in the past few years, scientists have already accomplished what seemed impossible not so long ago. Both Hanna and Magdalena Żernicka-Goetz, a developmental and stem-cell biologist at the California Institute of Technology and the University of Cambridge, have created models for mice with brains and beating hearts. Scientists and ethicists would be wise to consider what qualifies as human before human embryo models have beating hearts, too. The most important question, some ethicists argue, is not whether researchers can achieve a heartbeat in a petri dish, but whether they can achieve one with a model embryo implanted in a human womb. “It’s no longer so much about how embryos are made or where they come from, but more what they can possibly do,” Insoo Hyun, a bioethicist and the director of life sciences at Boston’s Museum of Science, told me. In an experiment published last year, seven-day-old model monkey embryos were successfully implanted in the uteruses of three female monkeys. Signs of pregnancy disappeared about a week afterward, but the paper still raised the specter—or perhaps the promise—of a human version of the experiment.

Building more realistic embryo models could have enormous benefits—starting with basic understanding of how embryos grow. A century ago, scientists collected thousands of embryo samples, which were then organized into 23 phases covering the first eight weeks of development. Those snapshots of development, known as the Carnegie stages, still form much of the basis for how early life is described in scientific texts. The problem is, “we don’t know what happens in between,” Hanna said. “To study development, you need the living material. You have to watch it grow.” Until recently, scientists had rarely sustained embryos in the lab past day seven or so, leaving manifold questions about development beyond the first week. Most developmental defects happen in the first trimester of pregnancy; for example, cleft palate, a potentially debilitating birth defect, occurs sometime before week nine for reasons that scientists don’t yet understand. It’s a mystery that more developmental research performed on embryo models could solve, Greely said.

Better understanding the earliest stages of life could yield insights far beyond developmental disorders. It could help reveal why some women frequently miscarry, or have trouble getting pregnant at all. Żernicka-Goetz has grown models to study the amniotic cavity—when it forms improperly, she suspects, pregnancies may fail. Embryo models could also help explain how and why prenatal development is affected by viruses and alcohol—and, crucially, medications. Pregnant people are generally excluded from drug trials because of potential risks to the fetus, which leaves them without access to treatments for new and chronic health conditions. Hanna has started a company that aims, among other things, to test drug safety on embryo models. Hanna told me he also envisions an even more sci-fi future: treating infertility by growing embryo models to day 60, harvesting their ovaries, and then using the eggs for IVF. Because stem cells can be grown from skin cells, such a system could solve the problem of infertility caused by older eggs without the more invasive aspects of IVF, which requires revving the ovaries up with hormones and surgery to retrieve the resulting eggs.

[Read: Christian parents have a blueprint for IVF]

Answering at least some of these questions may not require hyperrealistic models of an embryo. Aryeh Warmflash, a biosciences professor at Rice University, is studying gastrulation, but the cells that form the placenta aren’t relevant to his research questions, so his models leave them out, he told me. “In some sense, the better your model goes, the more you have to worry,” he said. Hyun told me he cautions scientists against making extremely complex models in order to avoid triggering debate, especially in a country already divided by ideas about when life begins. But given all the medical advances that could be achieved by studying realistic models—all the unknowns that are beginning to seem knowable—it’s hard to imagine that everyone will follow his advice.

For How Much Longer Can Life Continue on This Troubled Planet?

The Atlantic

www.theatlantic.com › science › archive › 2024 › 10 › how-long-will-earth-life-exist › 680123

Wikipedia’s “Timeline of the Far Future” is one of my favorite webpages from the internet’s pre-slop era. A Londoner named Nick Webb created it on the morning of December 22, 2010. “Certain events in the future of the universe can be predicted with a comfortable level of accuracy,” he wrote at the top of the page. He then proposed a chronological list of 33 such events, beginning with the joining of Asia and Australia 40 million years from now. He noted that around this same time, Mars’s moon Phobos would complete its slow death spiral into the red planet’s surface. A community of 1,533 editors has since expanded the timeline to 160 events, including the heat death of the universe. I like to imagine these people on laptops in living rooms and cafés across the world, compiling obscure bits of speculative science into a secular Book of Revelation.

Like the best sci-fi world building, the Timeline of the Far Future can give you a key bump of the sublime. It reminds you that even the sturdiest-seeming features of our world are ephemeral, that in 1,100 years, Earth’s axis will point to a new North Star. In 250,000 years, an undersea volcano will pop up in the Pacific, adding an extra island to Hawaii. In the 1 million years that the Great Pyramid will take to erode, the sun will travel only about 1/200th of its orbit around the Milky Way, but in doing so, it will move into a new field of stars. Our current constellations will go all wobbly in the sky and then vanish.

Some aspects of the timeline are more certain than others. We know that most animals will look different 10 million years from now. We know that the continents will slowly drift together to form a new Pangaea. Africa will slam into Eurasia, sealing off the Mediterranean basin and raising a new Himalaya-like range across France, Italy, and Spain. In 400 million years, Saturn will have lost its rings. Earth will have replenished its fossil fuels. Our planet will also likely have sustained at least one mass-extinction-triggering impact, unless its inhabitants have learned to divert asteroids.

The events farther down the page tend to be shakier. Recently, there has been some dispute over the approximate date that complex life will no longer be able to live on Earth. Astrophysicists have long understood that in roughly half a billion years, the natural swelling of our sun will accelerate. The extra radiation that it pours into Earth’s atmosphere will widen the planet’s daily swing between hot and cold. Continents will expand and contract more violently, making the land brittle, and setting into motion a process that is far less spectacular than an asteroid strike but much deadlier. Rainfall will bring carbon dioxide down to the surface, where it will bond with the silicates exposed by cracking earth. Rivers will carry the resulting carbonate compounds to the ocean, where they will sink. About 1 billion years from now, this process will have transferred so much carbon dioxide to the seafloor that very little will remain in the air. Photosynthesis will be impossible. Forests and grasslands will have vanished. A few plants will make a valiant last stand, but then they, too, will suffocate, wrecking the food chain. Animals on land will go first; deep-sea invertebrates will be last. Microbes may survive for another billion years, but the era of complex life on Earth will have ended.

Researchers from the University of Chicago and Israel’s Weizmann Institute of Science have now proposed an update to this crucial part of the timeline. In a new paper called “Substantial Extension of the Lifetime of the Terrestrial Biosphere,” available as a preprint and accepted for publication in The Planetary Science Journal, they argue that the effects of silicate weathering may be overstated. In a billion years, they say, enough carbon dioxide may yet remain for plants to perform photosynthesis. That doesn’t mean plants will last forever. Even if they can continue breathing, the sheer heat of the ballooning sun will eventually kill them and every other living thing on Earth. The question is when, and the researchers note that there is reason for optimism on this score. Some plant species have already evolved to withstand extreme heat. (One flowering shrub in Death Valley appears to thrive at 117 degrees Fahrenheit.) In the future, they could evolve to withstand higher temperatures still. With carbon-dioxide starvation out of the picture, these hardy plants could perhaps live for 800 million extra years.

[Read: Scientists found ripples in space and time. And you have to buy groceries.]

Claims like these are laughably hard to test, of course. But in this case, there could be a way. Astronomers plan to use the next generation of space telescopes to zoom into the atmospheres of the nearest hundred Earthlike planets, looking for precise chemical combinations that indicate the presence of life. With this census, they hope to tell us whether life is common in the universe. If it is, and if humans keep on building bigger and bigger telescopes, then the astronomers of the 22nd century may be able to survey lots of planets at once, including those that orbit suns that are more swollen than ours. If in the atmospheres of these planets—these future Earth analogues—we see the telltale exhalations of photosynthesis, that could suggest that plantlike lifeforms here are indeed more resilient than we’d once imagined.

Until then, we will just have to keep tabs on the Timeline of the Far Future. Yesterday morning, I visited it again and scrolled down a billion years to see if it had been updated. It had not. I kept scrolling anyway, to remind myself how it all turns out. (Doomscrolling in its purest form.) I went 3, 4, and 5 billion years into the future, by which time the Milky Way will have merged with the Andromeda galaxy. Together, the two will gobble up all the other galaxies in our local, gravitationally bound group. Because the universe is expanding, everything beyond this consolidated mega-galaxy will recede away, leaving it to float alone like an island in a void. The longest-lasting of its stars will shine reddish-orange for trillions of years. Eventually, they’ll twinkle out, and only a black hole will remain. It, too, will evaporate, but over a period of time so long that expressing it in years is comical. The number runs for hundreds of digits.

It is a strange thing that humans do, calculating these expiration dates, not just for life but for stars and black holes. Scientists have even tried to determine when every last fizzing bit of energy in the cosmos will come to rest. We have no obvious stake in these predictions, and at a moment when there are more pressing reasons to doomscroll, they might rightly be called a distraction. I have no straightforward counterargument, only a vague suspicion that there is something ennobling in trying to hold the immensities of space and time inside our small and fragile mammal brains.