
One of Tuberculosis’ Biggest, Scariest Numbers Is Probably Wrong

The Atlantic

www.theatlantic.com/science/archive/2023/12/tuberculosis-infection-latent-myth/676355

Growing up in India, which for decades has clocked millions of tuberculosis cases each year, Lalita Ramakrishnan was intimately familiar with how devastating the disease can be. The world’s greatest infectious killer, rivaled only by SARS-CoV-2, Mycobacterium tuberculosis spreads through the air and infiltrates the airways, in many cases destroying the lungs. It can trigger inflammation in other tissues too, wearing away bones and joints; Ramakrishnan watched her own mother’s body erode in this way. The sole available vaccine was lackluster; the microbe had rapidly evolved resistance to the drugs used to fight it. And the disease had a particularly insidious trait: After entering the body, the bacterium could stow away for years or decades, before erupting without warning into full-blown disease.

This state, referred to as latency, supposedly afflicted roughly 2 billion people—a quarter of the world’s population. Ramakrishnan, now a TB researcher at the University of Cambridge, heard that fact over and over, and passed it down to her own students, as every expert steeped in the dogma did at the time. That pool of 2 billion people was understood to account for a large majority of infections worldwide, and it represented one of the most intimidating obstacles to eradicating the disease. To end TB for good, the thinking went, the world would need to catch and cure every latent case.

In the years since, Ramakrishnan’s stance on latent TB has shifted quite a bit. Its extent, she argues, has been exaggerated for a good three decades, by at least an order of magnitude—to the point where it has scrambled priorities, led scientists on wild-goose chases, and unnecessarily saddled people with months of burdensome treatment. In her view, the term latency is so useless, so riddled with misinformation, that it should disappear. “I taught that nonsense forever,” she told me; now she’s spreading the word that TB’s largest, flashiest number may instead be its greatest, most persistent myth.

Ramakrishnan isn’t the only one who thinks so. Together with her colleagues Marcel Behr, of Quebec’s McGill University, and Paul Edelstein, of the University of Pennsylvania (“we call ourselves the three BERs,” Ramakrishnan told me), she’s been on a years-long crusade to set the record straight. Their push has attracted its fair share of followers—and objectors. “I don’t think they’re wrong,” Carl Nathan, a TB researcher at Cornell, told me. “But I’m not confident they’re right.”

Several researchers told me they’re largely fine with the basic premise of the BERs’ argument: Fewer than 2 billion isn’t that hard to get behind. But how many fewer matters. If current latency estimates overshoot by just a smidge, maybe no practical changes are necessary. The greater the overestimate, though, the more treatment recommendations might need to change; the more research and funding priorities might need to shift; the more plans to control, eliminate, and eventually eradicate disease might need to be wholly and permanently rethought.

[Read: A historical lesson in disease containment]

The muddled numbers on latency seem to be based largely on flawed assumptions about certain TB tests. One of the primary ways to screen people for the disease involves pricking harmless derivatives of the bacterium into skin, then waiting for an inflamed lump to appear—a sign that the immune system is familiar with the microbe (or a TB vaccine), but not direct proof that the bacterium itself is present. That means that positive results can guarantee only that the immune system encountered something resembling MTB at some point—perhaps even in the distant past, Rein Houben, an epidemiologist at the London School of Hygiene & Tropical Medicine, told me.

But for a long time, a prevailing assumption among researchers was that all TB infections had the potential to be lifelong, Behr told me. The thought wasn’t entirely far-fetched: Other microbial infections can last a lifetime, and there are historical accounts of lasting MTB infections, including a case in which a man developed tuberculosis more than 30 years after his father passed the bacterium to him. Following that logic—that anyone once infected had a good enough chance of being infected now—researchers added everyone still reacting to the bug to the pool of people actively battling it. By the end of the 1990s, Behr and Houben told me, prominent epidemiologists had used this premise to produce the big 2 billion number, estimating that roughly a third of the population had MTB lurking within.

That eye-catching figure, once rooted, rapidly spread. It was repeated in textbooks, academic papers and lectures, news articles, press releases, government websites, even official treatment guidelines. The World Health Organization parroted it too, repeatedly calling for research into vaccines and treatments that could shrink the world’s massive latent-TB cohort. “We were all taught this dogma when we were young researchers,” Soumya Swaminathan, the WHO’s former chief scientist, told me. “Each generation passed it on to the next.”

But, as the BERs argue, the notion that TB is a lifelong sentence makes very little sense. Decades of epidemiological data show that the overwhelming majority of disease arises within the first two years after infection, most commonly within months. Beyond that, progression to symptomatic, contagious illness becomes vanishingly rare.

The trio is convinced that a huge majority of people are clearing the bug from their body rather than letting it lie indefinitely in wait—a notion that recent modeling studies support. If the bacteria were lingering, researchers would expect to see a big spike in disease late in life among people with positive skin tests, as their immune system naturally weakens. They would also expect to see a high rate of progression to full-blown TB among people who start taking immunosuppressive drugs or catch HIV. And yet, neither of those trends pans out: At most, some 5 to 10 percent of people who have tested positive by skin test and later sustain a blow to their immune system develop TB disease within about three to five years—a hint that, for almost everyone else, there may not be any MTB left. “If there were a slam-dunk experiment, that’s it,” William Bishai, a TB researcher at Johns Hopkins, told me.

[Read: Tuberculosis got to South America through … seals?]

Nathan, of Cornell, was less sold. Immunosuppressive drugs and HIV flip very specific switches in the immune system; if MTB is being held in check by multiple branches of the immune system, losing some defenses may not be enough to set the bacteria loose. But most of the experts I spoke with are convinced that lasting cases are quite uncommon. “Some people will get into trouble in old age,” Bouke de Jong, a TB researcher at the Institute of Tropical Medicine, in Antwerp, told me. “But is that how MTB hangs out in everybody? I don’t think so.”

If anything, people with positive skin tests might be less likely to eventually develop disease, Ramakrishnan told me, whether because they harbor defenses against MTB or because they are genetically predisposed to clear the microbe from their airway. In either case, that could radically change the upshot of a positive test, especially in countries such as the U.S. and Canada, where MTB transmission rarely occurs and most TB cases can be traced to exposure abroad. Traditionally, people in these places with positive skin tests and no overt symptoms have been told, “‘This means you’ve got sleeping bacteria in you,’” Behr said. “‘Any day now, it may pop out and cause harm.’” Instead, he told me, health-care workers should be telling these patients that there could be up to a 95 percent chance they have already cleared the infection, especially if they’re far out from their last exposure, and that they might not need a drug regimen. TB drugs, although safe, are not completely benign: Standard regimens last for months, interact with other meds, and can have serious side effects.

At the same time, researchers disagree on just how much risk remains once people are a couple of years past an MTB exposure. “We’ve known for decades that we are overtreating people,” says Madhu Pai, a TB researcher at McGill who works with Behr but was not directly involved in his research. But treating a lot of people with positive skin tests has been the only way to ensure that the people who are carrying viable bacteria get the drugs they need, Robert Horsburgh, an epidemiologist at Boston University, told me. That strategy squares, too, with the goal of elimination in places where spread is rare. To purge as much of the bug as possible, “clinicians will err on the side of caution,” says JoAnne Flynn, a TB researcher at the University of Pittsburgh.

Elsewhere in the world, where MTB transmission is rampant and repeat infections are common, “to be honest, nobody cares if there’s latent TB,” Flynn told me. Many people with very symptomatic, very contagious cases still aren’t getting diagnosed or treated; in too many places, the availability of drugs and vaccines is spotty at best. Elimination remains a long-term goal, but active outbreaks demand attention first. Arguably, quibbling about latency now is like trying to snuff stray sparks next to an untended conflagration.

[Read: The dangers of ignoring tuberculosis]

One of the BERs’ main goals could help address TB’s larger issues. Despite decades of research, the best detection tools for the disease remain “fundamentally flawed,” says Keertan Dheda, a TB researcher at the London School of Hygiene & Tropical Medicine and the University of Cape Town. A test that could directly detect viable microbes in tissues, rather than an immune proxy, could definitively diagnose ongoing infections and help prioritize people across the disease spectrum for treatment. Such a diagnostic would also be the only way to finally end the fuss over latent TB’s prevalence. Without it, researchers are still sifting through only indirect evidence to get at the global TB burden—which is probably still “in the hundreds of millions” of cases, Houben told me, though the numbers will remain squishy until the data improve.

That 2 billion number is still around—though not everywhere, thanks in part to the BERs’ efforts. The WHO’s most recent annual TB reports now note that a quarter of the world’s population has been infected with MTB, rather than is infected with MTB; the organization has also officially discarded the term latent from its guidance on the disease, Dennis Falzon, of the WHO Global TB Programme, told me in an email. However subtle, these shifts signal that even the world’s biggest authorities on TB are dispensing with what was once conventional wisdom.

Losing that big number does technically shrink TB’s reach—which might seem to minimize the disease’s impact. Behr argues the opposite. With a huge denominator, TB’s mortality rate ends up minuscule—suggesting that most infections are benign. Deflating the 2 billion statistic, then, reinforces that “this is one of the world’s nastiest pathogens, not some symbiont that we live with in peace,” Behr told me. Fewer people may be at risk than was once thought. But for those who are harboring the microbe, the dangers are that much more real.

Science Is Becoming Less Human

The Atlantic

www.theatlantic.com/technology/archive/2023/12/ai-scientific-research/676304

This summer, a pill intended to treat a chronic, incurable lung disease entered mid-phase human trials. Previous studies have demonstrated that the drug is safe to swallow, although whether it will improve symptoms of the painful fibrosis that it targets remains unknown; this is what the current trial will determine, perhaps by next year. Such a tentative advance would hardly be newsworthy, except for a wrinkle in the medicine’s genesis: It is likely the first drug fully designed by artificial intelligence to come this far in the development pipeline.

The pill’s maker, the biotech company Insilico Medicine, used hundreds of AI models to discover both a new target in the body that could treat the fibrosis and which molecules might be synthesized for the drug itself. Those programs allowed Insilico to go from scratch to putting this drug through the first phase of human trials in two and a half years, rather than the typical five or so. Even if the pill proves useless, a real possibility, plenty of other drugs designed with the help of AI are in the wings. Scientists and companies alike hope that these will reach pharmacies far faster than traditionally designed medicine—bringing a drug to market typically takes well over a decade, and failure rates are high.

Medicine is just one aspect of a broader transformation in science. In only the past few months, AI has appeared to predict tropical storms with accuracy similar to that of conventional models, and at much greater speed; Meta has released a model that can analyze brain scans to reproduce what a person is looking at; Google recently used AI to propose millions of new materials that could enhance supercomputers, electric vehicles, and more. Just as the technology has blurred the line between human-created and computer-generated text and images—upending how people work, learn, and socialize—AI tools are accelerating and refashioning some of the basic elements of science. “We can really make discoveries that would not be possible without the use of AI,” Marinka Zitnik, a biomedical and AI researcher at Harvard, told me.

Science has never been faster than it is today. But the introduction of AI is also, in some ways, making science less human. For centuries, knowledge of the world has been rooted in observing and explaining it. Many of today’s AI models twist this endeavor, providing answers without justifications and leading scientists to study their own algorithms as much as they study nature. In doing so, AI may be challenging the very nature of discovery.

AI exists to derive impossibly intricate patterns from data sets that are too large for any person to fathom, a mystifying phenomenon that has grown more familiar since ChatGPT was released last year. The chatbot—a tool suddenly at everyone’s fingertips that appears to synthesize the entire internet—changed how we can access and apply knowledge, but it simultaneously tainted much of our thinking with doubt. We do not understand exactly how generative-AI chatbots determine their responses, only that they sound remarkably human, making it hard to parse what is real, logical, or trustworthy, and whether writing, even our own, is fully human or bears a silicon touch. When a response does make sense, it can seem to offer a shortcut rather than any true understanding of how or why the answer came to be.

AI may be doing something similar in a broad range of scientific disciplines. Among the most notable scientific advances achieved via AI may be those in molecular biology from DeepMind, a leading AI research lab now based at Google. After DeepMind’s programs conquered the game of Go in 2016—a game so much more complex than chess that many thought that computers could never master it—Demis Hassabis, DeepMind’s CEO, told me that he began considering how to build an AI program for the decades-old challenge of protein folding. All sorts of biological processes depend on proteins, and every protein is made of a sequence of amino acids. How those molecules fold into a three-dimensional shape determines a protein’s function, and mapping those structures could help scientists develop new vaccines, kill antibiotic-resistant bacteria, and explore new cancer treatments. Without a protein’s 3-D shape, scientists have little more than a bunch of Lego bricks without instructions for putting them together.

Figuring out a single protein structure from a sequence of amino acids used to take years. But in 2022, DeepMind’s flagship scientific model, AlphaFold, found the most likely structure of almost every protein known to science—some 200 million of them. Much like the company’s chess- and Go-playing programs, which search for the best possible move, AlphaFold searches the space of possible structures for an amino-acid sequence to find the most probable one. The program compresses what could have been an entire Ph.D.’s worth of work into seconds, and it has been widely lauded for its “revolutionary impact” on basic biology and the development of novel treatments alike. Still, independent researchers have noted that despite its inhuman speed, the model does not fully explain why a specific structure is likely. As a result, scientists are trying to demystify AlphaFold’s predictions, and Hassabis noted that those efforts are making good progress.

[Read: Welcome to the big blur]

AI allows researchers to study complex systems in “the world of bits” at a much faster pace than in the “world of atoms,” Hassabis said, and then physically test their hypotheses as a final step. The technology is pushing forward advances in numerous other disciplines—not just improving speed and scale, but changing what kind of research is thought possible. Neuroscientists at Meta and elsewhere, for instance, are turning artificial neural networks trained to “see” photographs or “read” text into hypotheses for how the brain processes both images and language. Biologists are using AI trained on genetic data to study rare diseases, improve immunotherapies, and better understand SARS-CoV-2 variants of concern. “Now we have viable hypotheses, where before we had mysteries,” Jim DiCarlo, a cognitive scientist at MIT who has pioneered the use of AI to study vision in the brain, told me.

Astronomers and physicists are using machine learning to process data sets from the universe that were too immense to touch before, Brice Ménard, an astrophysicist at Johns Hopkins, told me. Some experiments, such as those at the CERN particle collider, produce too much information to physically store. Researchers rely on AI to throw out familiar observations while keeping unknowns for analysis. “We don’t know what the needle looks like, because these are undetected physics events, but we know what the hay looks like,” Alexander Szalay, the director of the Institute for Data Intensive Science at Johns Hopkins, told me. “So computers are trained to recognize the hay and basically throw it away.”

[Read: Computers are learning to smell]

The long-term vision could even involve combining AI models and physical experiments in a sort of “self-driving lab,” Zitnik said, wherein computer programs and robots generate hypotheses, plan experiments to test them, and analyze the results. Such labs are a ways off, although prototypes do exist, such as the Scientific Autonomous Reasoning Agent, a robotic system that has already discovered new materials for renewable energy. SARA uses a laser to analyze and alter materials iteratively, with each loop lasting a few seconds, Carla Gomes, a computer scientist at Cornell, told me—reducing days of research to hours. This future, if it comes to pass, will elevate software and robots from tools to collaborators, even co-creators of knowledge.

Quantum observations too numerous for humans to store, experiments too rapid for humans to run, neuroscientific hypotheses too complex for humans to derive—even as AI enables scientific work never before thought possible, those same tools pose an epistemic dilemma. They will produce groundbreaking knowledge while breaking apart what it means to know in the first place.

“The holy grail of science is understanding,” Zitnik said. “To be able to understand a phenomenon, whether that’s the behavior of a cell or a planetary system, requires being able to identify causes and effects.” But AI models are famously opaque. They detect patterns based on gargantuan data sets via software architectures whose inner workings baffle human intuition and reasoning. Experts have taken to calling them “black boxes.”

This presents obvious problems for the scientific method. “We have to understand what is going on inside this black box so we can see where this discovery is coming from,” Szalay told me. To predict events without understanding why those predictions are accurate might gesture toward a different type of science, in which knowledge and the resulting actions are not always accompanied by an explanation. An AI model might predict a thunderstorm’s arrival but struggle to explain the underlying physics and atmospheric changes that triggered it, analyze an X-ray without showing how it arrived at its diagnosis, or propose abstract mathematical conjectures without proving them. Such shifts from observations and grounded reasoning to mathematical probability have happened in the sciences before: The equations of quantum mechanics, which emerged in the 20th century, accurately predict subatomic phenomena that physicists still don’t fully understand—leading Albert Einstein himself to doubt quantum theory.

[From the January 1951 issue: Faith in science]

Science itself may offer a solution to this conundrum. Physical experiments have uncovered a great deal in the past century about the quantum world, and similarly, AI tools might appear inscrutable partly because researchers haven’t spent enough time probing them. “You have to build the artifact first before you can pull it apart and scientifically analyze it,” Hassabis told me, and scientists have only recently begun to build AI models worthy of study. Even older numerical simulations, although far less complex than today’s AI models, are hard to interpret in an intuitive way, but they have nevertheless informed new discoveries for decades.

If researchers understand how artificial neurons respond to an image, they might be able to translate those predictions to biological neurons; if researchers understand which parts of an AI model link a mutation to a disease, scientists could gain new insights into the human genome. Such models are “fully observable systems. You can measure all the parts,” DiCarlo said. Whereas he cannot measure every neuron and synapse in a monkey brain during surgery, he can do that for an AI model. With the right access, AI programs might present scientists not with black boxes so much as with a new type of object requiring a new sort of inquiry—not “models” of the natural world so much as addendums to it. Some scientists even hope to build “digital twins” to simulate cells, organs, and the planet.

AI is not a silver bullet, though. AlphaFold may be revolutionary, and perhaps Insilico will indeed drastically reduce the time it takes to develop new medicine. But the technology has significant limitations. For instance, AI models need to train on large amounts of relevant data. AlphaFold is a “spectacular success,” Jennifer Listgarten, a computational biologist and computer scientist at UC Berkeley, told me—but it also “relied on very expensive, highly curated data that was generated over decades in the laboratory on a very crisply defined problem that could be evaluated extremely cleanly.” The lack of high-quality data in other disciplines can prevent or limit the use of AI.

Even with those data, the real world can be more complex and dynamic than a silicon simulation. Translating the static structure of a molecule into its interactions with various systems in the body, for instance, is a problem that researchers are still working on, Andreas Bender, who studies molecular informatics at the University of Cambridge, told me. AI can propose new medicines quickly, but “you still need to run the drug-discovery process, which is, of course, quite long,” John Jumper, a researcher at DeepMind who led the development of AlphaFold, told me.

Clinical trials take years, and many are unsuccessful; plenty of AI drug start-ups and initiatives have scaled back. Those failures are, in some sense, evidence of science working. Experimental results, along with known physical laws, allow scientists to prevent their models from hallucinating, Anima Anandkumar, a computer scientist at Caltech, told me. No analogous laws of linguistic accuracy exist for chatbots—consumers have to trust Big Tech.

In a lab, novel predictions can be physically and safely tested in an isolated setting. But when developing drugs or treating patients, the stakes are much higher. Existing maps of the human genome, for instance, skew toward white Europeans, but the expression of many conditions, such as diabetes, varies significantly by race and ethnicity. Just as biased data sets produce racist chatbots, skewed biological data might mean that “models are not applicable to people of non-caucasian origin,” Bender told me, or those of different ages, or with existing diseases and on co-medications. A cancer-diagnosis program or treatment designed by AI might be especially effective only on a small slice of the population.

AI models might transform not just how we understand the world but how we understand understanding itself. If so, we must build out new models of knowledge as well—what we can trust, why, and when. Otherwise, our reliance on a chatbot, drug-discovery tool, or AI hurricane forecast might depart from the realm of science. It might be more akin to faith.