
The Year We Embraced Our Destruction

The Atlantic

www.theatlantic.com/technology/archive/2023/12/panera-charged-lemonade-ai-existential-risk/676984

The sounds came out of my mouth with an unexpected urgency. The cadence was deliberate—more befitting of an incantation than an order: one large strawberry-lemon-mint Charged Lemonade. The words hung in the air for a moment, giving way to a stillness punctuated only by the soft whir of distant fluorescent lights and the gentle hum of a Muzak cover of Bruce Hornsby’s “Mandolin Rain.”

The time was 9:03 a.m.; the sun had been up for only one hour. I watched the kind woman behind the counter stifle an eye roll, a small mercy for which I will be eternally grateful. Her look indicated that she’d been through this before, enough times to see through my bravado. I was just another man standing in front of a Panera Bread employee, asking her to hand me 30 fluid ounces of allegedly deadly lemonade. (I would have procured it myself, but it was kept behind the counter, like a controlled substance.)

I came to Panera to touch the face of God or, at the very least, experience the low-grade anxiety and body sweats one can expect from consuming 237 milligrams of caffeine in 15 minutes. Really, the internet sent me. Since its release last year, Panera’s highly caffeinated Charged Lemonade has become a popular meme—most notably on TikTok, where people vlog from the front seat of their car about how hopped up they are after chugging the neon beverage. Last December, a tongue-in-cheek Slate headline asked, “Is Panera Bread Trying to Kill Us?”

In the following months, two wrongful-death lawsuits were indeed filed against the restaurant chain, arguing that Panera was responsible for not adequately advertising the caffeine content of the drink. The suits allege that Charged Lemonade contributed to the fatal cardiac arrests of a 21-year-old college student and a 46-year-old man. Panera did not respond to my request for comment but has argued that both lawsuits are without merit and that it “stands firmly by the safety of our products.” In October, Panera changed the labeling of its Charged Lemonade to warn people who may be “sensitive to caffeine.”

The allegations seem to have done the impossible: They’ve made a suburban chain best known for its bread bowls feel exciting, even dangerous. The memes have escalated. Search death lemonade on any platform, and you’ll see a cascade of grimly ironic posts about everything from lemonade-assisted suicide to being able to peer into alternate dimensions after sipping the juice. Much like its late-aughts boozy predecessor Four Loko, Charged Lemonade is riding a wave of popularity because of the implication that consuming it is possibly unsafe. One viral post from October put it best: “Panera has apparently discovered the fifth loko.”

Like many internet-poisoned men and women before me, I possess both a classic Freudian death drive and an embarrassing desire to experience memes in the physical world—an effort, perhaps, to situate my human form among the algorithms and timelines that dominate my life. But there is another reason I was in a strip mall on the shortest day of the year, allowing the recommended daily allowance of caffeine to Evel Knievel its way across my blood-brain barrier. I came to make sense of a year that was defined by existential threats—and by a strange, pervasive celebration of them.

In 2023, I spent a lot of time listening to smart people talk about the end of the world. This was the year that AI supposedly “ate the internet”: The arrival of ChatGPT in late 2022 shifted something in the public consciousness. After decades of promise, the contours of an AI-powered world felt to some as if they were taking shape. Will these tools come for our jobs, our culture, even our humanity? Are they truly revolutionary or just showy—like spicier versions of autocorrect?

Some of the biggest players in tech—along with a flood of start-ups—are racing to develop their own generative-AI products. The technology has developed swiftly, lending a frenzied, disorienting feeling to the past several months. “I don’t think we’re ready for what we’re creating,” one AI entrepreneur told me ominously and unbidden when we spoke earlier this year. Civilizational extinction has moved from pure science fiction to immediate concern. Geoffrey Hinton, a well-known AI researcher who quit Google this year to warn against the dangers of the technology, suggested that the chance of extinction in the next 30 years could be as high as 10 percent. “I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Sam Altman, OpenAI’s CEO, told my colleague Ross Andersen this past spring.

In May, hundreds of AI executives, researchers, and tech luminaries including Bill Gates signed a one-sentence statement written by the Center for AI Safety. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it read. Debates once confined to a small subculture of technologists and rationalists on niche online forums such as LessWrong became fodder for the press. Normal people trying to keep up with the news had to hack through a jungle of new terminology: x-risk, e/acc, alignment, p(doom). By mid-year, the AI-doomerism conversation was fully mainstreamed; existential calamity was in the air (and, we joked, in our fast-casual lemonades).

[Read: AI doomerism is a decoy]

Then, as if by cosmic coincidence, this strain of apocalyptic thought fused perfectly with pop culture in Christopher Nolan’s Oppenheimer. As the atomic-bomb creator’s biopic took over the box office, AI researchers toted around the Pulitzer Prize–winning book The Making of the Atomic Bomb, suggesting that they too were pushing humanity into an uncertain, possibly apocalyptic future. The parallels between Los Alamos and Silicon Valley, however facile, needled at a question that had been bothering me all year: What would compel a person to build something if they had any reasonable belief that it might end life on Earth?

Richard Rhodes, the author of The Making of the Atomic Bomb, offered me one explanation, using a concept from the Danish physicist Niels Bohr. At the core of quantum physics is the idea of complementarity, which describes how objects have conflicting properties that cannot be observed at the same time. Complementarity, he argued, was also the principle that governed innovation: A weapon of mass destruction could also be a tool to avert war.

[Read: Oppenheimer’s cry of despair in The Atlantic]

Rhodes, an 86-year-old who’s spent most of his adult life thinking about our most destructive innovations and speaking with the men who built the bomb, told me that he believes this duality to be at the core of human progress. Pursuing our greatest ambitions may give way to an unthinkable nightmare, or it may allow our dreams to come true. The answer to my question, he offered, was somewhere on that thin line between the excitement and terror of true discovery.

Roughly 10 minutes and 15 ounces into my strawberry-lemon-mint Charged Lemonade, I felt a gentle twinge of euphoria—a barely perceptible effervescence taking place at a cellular level. I was alone in the restaurant, ensconced in a booth and checking my Instagram messages. I’d shared a picture of the giant cup sweating modestly on my table, a cheap bid for some online engagement that had paid off. “I hope you live,” one friend had written in response. I glanced down at my smartwatch, where my heart rate measured a pleasant 20 beats per minute higher than usual. The inside of my mouth felt wrong. I ran my tongue over my teeth, noticing a fine dusting of sugar blanketing the enamel.

I did not feel the warm creep of death’s sweet embrace, only a sensation that the lights were very bright. This was accompanied by an edgy feeling that I would characterize as the antithesis of focus. I stood up to ask a Panera employee if they’d been getting a lot of Charged Lemonade tourism around these parts. “I think there’s been a lot, but honestly most of them order it through the drive-through or online order,” they said. “Not many come up here like you did.” I retreated to my booth to let my brain vibrate in my skull.

It is absurd to imagine that lemonade could kill you—let alone lemonade from a soda fountain within steps of a Jo-Ann Fabrics store. That absurdity is a large part of what makes Panera lemonade a good meme. But there’s something deeper too, a truth lodged in the banality of a strip-mall drink: Death is everywhere. Today, you might worry about getting shot at school or in a movie theater, or killed by police at a traffic stop; you also understand that you could contract a deadly virus at the grocery store or in the office. Meanwhile, most everyone carries on like everything’s fine. We tolerate what feels like it should be intolerable. This is the mood baked into the meme: Death by lemonade is ridiculous, but in 2023, it doesn’t seem so far-fetched, either.

The same goes for computers and large language models. Our lives already feel influenced beyond our control by the computations of algorithms we don’t understand and cannot see. Maybe it’s ludicrous to imagine a chatbot as the seed of a sentient intelligence that eradicates human life. Then again, it would have been hard in 2006 to imagine Facebook playing a role in the Rohingya genocide, in Myanmar.

For the next hour, I shifted uncomfortably in my seat beside my now-empty vessel, anticipating some kind of side effect like the recipient of a novel vaccination. Around the time I could sense myself peaking, I grew quite cold. But that was it. No interdimensional vision, no heart palpitations. The room never melted into a Dalí painting. From behind my laptop, I watched a group of three teenagers, all dressed exactly like Kurt Cobain, grab their neon caffeine receptacles from the online-pickup stand and walk away. Each wore an indelible look of boredom incompatible with the respect one ought to have for death lemonade. I began to feel sheepish about my juice expedition and packed up my belongings.

I’d be lying if I told you I didn’t feel slightly ripped off; it’s an odd sensation, wanting a glass of lemonade to walk you right up to the edge of oblivion. But a hint of impending danger has always been an excellent marketing tool—one that can obscure reality. A quick glance at the Starbucks website revealed that my go-to order—a barely defensible Venti Pike Place roast with an added espresso shot—contains approximately 560 milligrams of caffeine, which is more than double that of a large Charged Lemonade. But I wanted to believe that the food engineers at Panera had pushed the bounds of the possible.

Some of us are drawn to (allegedly) killer lemonade for the same reason others fixate on potential Skynet scenarios. The world feels like it is becoming more chaotic and unknowable, hostile and exciting. AI and a ridiculous fast-casual death beverage may not be the same thing, but they both tap into this energy. We will always find ways to create new, glorious, terrifying things—some that may ultimately kill us. We may not want to die, but in 2023, it was hard to forget that we will.

Where Will AI Take Us in 2024?

The Atlantic

www.theatlantic.com/newsletters/archive/2023/12/five-big-questions-about-ai-in-2024/676983

This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

What will next year hold for AI? In a new story, Atlantic staff writer Ross Andersen looks ahead, outlining five key questions that will define the technology’s trajectory from here. A big one: How will it affect the election? “Many blamed the spread of lies through social media for enabling [Donald] Trump’s victory in 2016, and for helping him gin up a conspiratorial insurrection following his 2020 defeat,” Andersen writes. “But the tools of misinformation that were used in those elections were crude compared with those that will be available next year.”

Thank you for reading Atlantic Intelligence. This is the final edition of our initial eight-week series. But keep an eye out for new entries later in 2024—we’re sure there will be much more to explore.

Damon Beres, senior editor

Illustration by Ben Kothe / The Atlantic. Source: Getty.

The Big Questions About AI in 2024

By Ross Andersen

Let us be thankful for the AI industry. Its leaders may be nudging humans closer to extinction, but this year, they provided us with a gloriously messy spectacle of progress. When I say “year,” I mean the long year that began late last November, when OpenAI released ChatGPT and, in doing so, launched generative AI into the cultural mainstream. In the months that followed, politicians, teachers, Hollywood screenwriters, and just about everyone else tried to understand what this means for their future. Cash fire-hosed into AI companies, and their executives, now glowed up into international celebrities, fell into Succession-style infighting. The year to come could be just as tumultuous, as the technology continues to evolve and its implications become clearer. Here are five of the most important questions about AI that might be answered in 2024.

Read the full article.

What to Read Next

You can’t truly be friends with an AI: Just because a relationship with a chatbot feels real, that doesn’t mean it is, Ethan Brooks writes.

The internet’s next great power suck: AI’s carbon emissions are about to be a problem, Matteo Wong writes.

P.S.

The Atlantic’s Science desk just published its annual list of things that blew our minds this year. Readers of this newsletter will not be surprised to find that AI pops up a few different times. For example, item 47: “AI models can analyze the brain scans of somebody listening to a story and then reproduce the gist of every sentence.”

— Damon

In a bid to break free from OpenAI, companies are building their own custom AI chatbots

Quartz

qz.com/in-a-bid-to-break-free-from-openai-companies-are-build-1851112994


OpenAI dominates the generative AI market, and its GPT-4 is the industry’s best-performing model to date. But businesses are increasingly opting to build their own, smaller AI models that are more tailored to their business needs.


The Nine Breakthroughs of the Year

The Atlantic

www.theatlantic.com/ideas/archive/2023/12/scientific-breakthroughs-2023-list/676952


This is Work in Progress, a newsletter about work, technology, and how to solve some of America’s biggest problems. Sign up here.

The theme of my second-annual Breakthroughs of the Year is the long road of progress. My top breakthrough is Casgevy, a gene-editing treatment for sickle-cell anemia. In the 1980s and early 1990s, scientists in Spain and Japan found strange, repeating patterns in the DNA of certain bacteria. Researchers eventually linked these sequences to an immune defense system that they named “clustered regularly interspaced short palindromic repeats”—or CRISPR. In the following decades, scientists found clever ways to build on CRISPR to edit genes in plants, animals, and even humans. CRISPR is this year’s top breakthrough not only because of heroic work done in the past 12 months, but also because of a long thread of heroes whose work spans decades.

Sometimes, what looks like a big deal amounts to nothing at all. For several weeks this summer, the internet lost its mind over claims that researchers in South Korea had built a room-temperature superconductor. One viral thread called it “the biggest physics discovery of my lifetime.” The technology could have paved the way to magnificently efficient energy grids and levitating cars. But, alas, it wasn’t real. So, perhaps, this is 2023’s biggest lesson about progress: Time is the ultimate test. The breakthrough of the year took more than three decades to go from discovery to FDA approval, while the “biggest” physics discovery of the year was disproved in about 30 days.

1. CRISPR’s Triumph: A Possible Cure for Sickle-Cell Disease

In December, the FDA approved the world’s first medicine based on CRISPR technology. Developed by Vertex Pharmaceuticals, in Boston, and CRISPR Therapeutics, based in Switzerland, Casgevy is a new treatment for sickle-cell disease, a chronic blood disorder that affects about 100,000 people in the U.S., most of whom are Black.

Sickle-cell disease is caused by a genetic mutation that affects the production of hemoglobin, a protein that carries oxygen in red blood cells. Abnormal hemoglobin makes red blood cells stiff and sickle-shaped. When these misshapen cells clump together, they block blood flow throughout the body, causing intense pain and, in some cases, deadly anemia.

The Casgevy treatment involves a complex, multipart procedure. Stem cells are collected from a patient’s bone marrow and sent to a lab. Scientists use CRISPR to knock out a gene that represses the production of “fetal hemoglobin,” which most people stop making after birth. (In 1948, scientists discovered that fetal hemoglobin doesn’t “sickle.”) The edited cells are returned to the body via infusion. After weeks or months, the body starts producing fetal hemoglobin, which reduces cell clumping and improves oxygen supply to tissues and organs.

Ideally, CRISPR will offer a one-and-done treatment. In one trial, 28 of 29 patients, who were followed for at least 18 months, were free of severe pain for at least a year. But we don’t have decades’ worth of data yet.

Casgevy is a triumph for CRISPR. But a miracle drug that’s too expensive for its intended population—or too complex to be administered where it is most needed—performs few miracles. More than 70 percent of the world’s sickle-cell patients live in sub-Saharan Africa. The sticker price for Casgevy is about $2 million, which is roughly 2,000 times larger than the GDP per capita of, say, Burkina Faso. The medical infrastructure necessary to go through with the full treatment doesn’t exist in most places. Casgevy is a wondrous invention, but as always, progress is implementation.  

2. GLP-1s: A Diabetes and Weight-Loss Revolution

In the 1990s, a small team of scientists got to know the Gila monster, a thick lizard that can survive on less than one meal a month. When they studied its saliva, they found that it contained a hormone that, in experiments, lowered blood sugar and regulated appetite. A decade later, a synthetic version of this weird lizard spit became the first medicine of its kind approved to treat type 2 diabetes. The medicine was called a “glucagon-like peptide-1 receptor agonist.” Because that’s a mouthful, scientists mostly call these drugs “GLP-1s.”

Today the world is swimming in GLP-1 breakthroughs. These drugs go by many names. Semaglutide is sold by the Danish company Novo Nordisk, under the names Ozempic (approved for type 2 diabetes) or Wegovy (for weight loss). Tirzepatide is sold by Eli Lilly under the names Mounjaro (type 2 diabetes) or Zepbound (weight loss). These medications all mostly work the same way. They mimic gut hormones that stimulate insulin production and signal to the brain that the patient is full. In clinical trials, patients on these medications lose about 15 percent or more of their weight.

The GLP-1 revolution is reshaping medicine and culture “in ways both electrifying and discomfiting,” Science magazine said in an article naming these drugs its Breakthrough of the Year. Half a billion people around the world live with diabetes, and in the U.S. alone, about 40 percent of adults are obese. A relatively safe drug that stimulates insulin production and reduces caloric intake could make an enormous difference in lifestyle and culture.

Some people on GLP-1s report nausea, and some fall out of love with their favorite foods. In rarer cases, the drugs might cause stomach paralysis. But for now, the miraculous effects of these drugs go far beyond diabetes and weight loss. In one trial supported by Novo Nordisk, the drug reduced the incidence of heart attack and stroke by 20 percent. Morgan Stanley survey data found that people on GLP-1s eat less candy, drink less alcohol, and eat 40 percent more vegetables. The medication seems to reduce smoking for smoking addicts, gambling for gambling addicts, and even compulsive nail biting for some. GLP-1s are an exceptional medicine, but they may also prove to be an exceptional tool that helps scientists see more clearly the ways our gut, mind, and willpower work together.

3. GPT and Protein Transformers: What Can’t Large Language Models Do?

In March, OpenAI released GPT-4, the latest and most sophisticated version of the language-model technology that powers ChatGPT. Imagine trying to parse that sentence two years ago—a useful reminder that some things, like large language models, advance at the pace of slowly, slowly, then all at once.

Surveys suggest that most software developers already use AI to accelerate code writing, and there is evidence that these tools are raising the productivity of some workers. These tools also appear to be nibbling away at freelance white-collar work. Famously, OpenAI has claimed that the technology can pass medical-licensing exams and score above the 85th percentile on the LSAT, parts of the SAT, and the Uniform Bar Exam. Still, I am in the camp of believing that this technology is both a sublime accomplishment and basically a toy for most of its users.

One can think of transformers—that’s what the T stands for in GPT—as tools for building a kind of enormous recipe book of language, which AI can consult to cook up meaningful, novel answers to any prompt. If AI can build a cosmic cookbook of linguistic meaning, can it do the same for another corpus of information? For example, could it learn the “language” of how our cells talk to one another?
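For readers who want a picture more concrete than the cookbook metaphor, the sketch below (written in Python with NumPy, and not drawn from any company’s actual code) shows the attention step at the core of a transformer: each token scores its “query” against every other token’s “key,” and those scores decide how much of each token’s “value” gets blended into its output.

```python
import numpy as np

def attention(queries, keys, values):
    # Score every query against every key, scaled by the vector dimension.
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    # Softmax turns each row of scores into mixing weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each token's output is a weighted blend of all the value vectors.
    return weights @ values

# Toy example: four "tokens," each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output = attention(tokens, tokens, tokens)
print(output.shape)  # (4, 8): one blended vector per token
```

Real models learn the query, key, and value projections from data and stack many such layers across billions of parameters, but the core operation is this simple weighted lookup.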

This spring, a team of researchers announced in Science that they had found a way to use transformer technology to predict the structure of proteins at the level of individual atoms. This accomplishment builds on AlphaFold, an AI system developed within Alphabet. As several scientists explained to me, the latest breakthrough suggests that we can use language models to spin up the shapes of millions of proteins faster than ever. I’m most impressed by the larger promise: If transformer technology can map both languages and protein structures, it seems like an extraordinary tool for advancing knowledge.

4. Fusion: The Dream Gets a Little Closer

Inside the sun, atoms crash and merge in a process that produces heat and light, making life on this planet possible. Scientists have tried to harness this magic, known as fusion, to produce our own infinite, renewable, and clean energy. The problem: For the longest time, nobody could make it work.

The past 13 months, however, have seen not one but two historic fusion achievements. Last December, 192 lasers at the Lawrence Livermore National Laboratory, in California, blasted a diamond encasing a small amount of frozen hydrogen and created—for less than 100 trillionths of a second—a reaction that produced about three megajoules of energy, or 1.5 times the energy from the lasers. In that moment, scientists said, they achieved the first lab-made fusion reaction to ever create more energy than it took to produce it. Seven months later, they did it again. In July, researchers at the same ignition facility nearly doubled the net amount of energy ever generated by a fusion reaction. Start-ups are racing to keep up with the science labs. The new fusion companies Commonwealth Fusion Systems and Helion are trying to scale this technology.

Will fusion heat your home next year? Fat chance. Next decade? Cross your fingers. Within the lifetime of people reading this article? Conceivably. The naysayers have good reason for skepticism, but these breakthroughs prove that star power on this planet is possible.

5. Malaria and RSV Vaccines: Great News for Kids

Malaria, one of the world’s leading causes of childhood mortality, killed more than 600,000 people in 2022. But with each passing year, we seem to be edging closer to ridding the world of this terrible disease.

Fifteen months ago, a malaria vaccine developed by University of Oxford scientists was found to have up to 80 percent efficacy at preventing infection. The first malaria vaccine has already been administered to millions of children, but demand still outstrips supply. That’s why it’s so important that in 2023, the Oxford vaccine, called R21, was recommended by the World Health Organization; it appears to be cheaper and easier to manufacture than the first one, and just as effective. The WHO says it expects the addition of R21 to result in sufficient vaccine supply for “all children living in areas where malaria is a public health risk.”

What’s more, in the past year, the FDA approved vaccines against RSV, or respiratory syncytial virus. The American Lung Association estimates that RSV is so common that 97 percent of children catch it before they turn 2, and in a typical year, up to 80,000 children age 5 and younger are hospitalized with RSV along with up to 160,000 older adults. In May, both Pfizer and GSK were granted FDA approval for an RSV vaccine for older adults, and in July, the FDA approved a vaccine to protect infants and toddlers.

6. Killer AI: Artificial Intelligence at War

In the nightmares of AI doomers, our greatest achievements in software will one day rise against us and cause mass death. Maybe they’re wrong. But by any reasonable analysis, the 2020s have already been a breakout decade for AI that kills. Unlike other breakthroughs on this list, this one presents obvious and immediate moral problems.

In the world’s most high-profile conflict, Israel has reportedly accelerated its bombing campaign against Gaza with the use of an AI target-creation platform called Habsora, or “the Gospel.” According to reporting in The Guardian and +972, an Israeli magazine, the Israel Defense Forces use Habsora to produce dozens of targeting recommendations every day based on amassed intelligence that can identify the private homes of individuals suspected of working with Hamas or Islamic Jihad. (The IDF has also independently acknowledged its use of AI to generate bombing targets.)

Israel’s retaliation against Hamas for the October 7 attack has involved one of the heaviest air-bombing campaigns in history. Military analysts told the Financial Times that the seven-week destruction of northern Gaza has approached the damage caused by the Allies’ years-long bombing of German cities in World War II. Clearly, Israel’s AI-assisted bombing campaign shows us another side of the idea that AI is an accelerant.

Meanwhile, the war in Ukraine is perhaps the first major conflict in world history to become a war of drone engineering. (One could also make the case that this designation should go to Azerbaijan's drone-heavy military campaign in the Armenian territory of Nagorno-Karabakh.) Initially, Ukraine depended on a drone called the Bayraktar TB2, made in Turkey, to attack Russian tanks and trucks. Aerial footage of the drone attacks produced viral video-game-like images of exploded convoys. As Wired UK reported, a pop song was written to honor the Bayraktar, and a lemur in the Kyiv Zoo was named after it. But Russia has responded by using jamming technology that is taking out 10,000 drones a month. Ukraine is now struggling to manufacture and buy enough drones to make up the difference, while Russia is using kamikaze drones to destroy Ukrainian infrastructure.

7. Fervo and Hydrogen: Making Use of a Hot Planet

If the energy industry is, in many respects, the search for more heat, one tantalizing solution is to take advantage of our hot subterranean planet. Traditional geothermal plants drill into underground springs and hot-water reservoirs, whose heat powers turbines. But in much of the world, these reservoirs are too deep to access. When we drill, we hit hard rock.

Last year’s version of this list mentioned Quaise, an experimental start-up that tries to vaporize granite with a highly concentrated beam of radio-frequency power. This year, we’re celebrating Fervo, which is part of a crop of so-called enhanced geothermal systems. Fervo uses fracking techniques developed by the oil-and-gas industry to break into hot underground rock. Then Fervo injects cold water into the rock fissures, creating a kind of artificial hot spring. In November, Fervo announced that its Nevada enhanced-geothermal project is operational and sending carbon-free electricity to Google data centers.

That’s not the end of this year’s advancement in underground heat. Eleven years ago, engineers in Mali happened upon a deposit of hydrogen gas. When it was hooked up to a generator, it produced electricity for the local town and only water as exhaust. In 2023, enough governments and start-ups accelerated their search for natural hydrogen-gas deposits that Science magazine named hydrogen-gas exploration one of its breakthroughs of the year. (This is different from the “natural gas” you’ve already heard of, which is a fossil fuel.) One U.S.-government study estimated that the Earth could hold 1 trillion tons of hydrogen, enough to provide thousands of years of fuel and fertilizer.

8. Engineered Skin Bacteria: What If Face Paint Cured Cancer?

In last year’s breakthroughs essay, I told you about a liquid solution that revived the organs of dead pigs. This year, in the category of Wait, what?, we bring you the news that face paint cures cancer. Well, sort of face paint. And more like “fight” cancer than cure. Also, just in mice. But still!

Let’s back up. Some common skin bacteria can trigger our immune system to produce T cells, which seek and destroy diseases in the body. This spring, scientists announced that they had engineered an ordinary skin bacterium to carry bits of tumor material. When they rubbed this concoction on the heads of lab mice, the animals produced T cells that sought out distant tumor cells and attacked them. So yeah, basically, face paint that fights cancer.

Many vaccines already use modified viruses, such as adenovirus, as delivery trucks to drive disease-fighting therapies into the body. The ability to deliver cancer therapies (or even vaccines) through the skin represents an amazing possibility, especially in a world where people are afraid of needles. It’s thrilling to think that the future of medicine, whether vaccines or cancer treatments, could be as low-fuss as a set of skin creams.

9. Loyal Drugs: Life-Extension Meds for Dogs

Longevity science is having a moment. Bloomberg Businessweek recently devoted an issue to the “tech titans, venture capitalists, crypto enthusiasts and AI researchers [who] have turned longevity research into something between the hottest science and a tragic comedy.” There must be a trillion (I’m rounding up) podcast episodes about how metformin, statins, and other drugs can extend our life. But where is the hard evidence that we are getting any closer to figuring out how to help our loved ones live longer?

Look to the dogs. Large breeds, such as Great Danes and rottweilers, generally die younger than small dogs. A new drug made by the biotech company Loyal tries to extend their life span by targeting a hormone called “insulin-like growth factor-1,” or IGF-1. Some scientists believe that high levels of the chemical speed up aging in big dogs. By reducing IGF-1, Loyal hopes to curb aging-related increases in insulin. In November, the company announced that it had met a specific FDA requirement for future fast-tracked authorization of drugs that could extend the life span of big dogs. “The data you provided are sufficient to show that there is a reasonable expectation of effectiveness,” an official at the FDA wrote the company in a letter provided to The New York Times.

Loyal’s drug is not available to pet owners yet—and might not be for several years. But the FDA’s support nonetheless marks a historic acknowledgment of the promise of life-span-extension medicine.

Building AI Safely Is Getting Harder and Harder

The Atlantic

www.theatlantic.com/newsletters/archive/2023/12/building-ai-safety-is-getting-harder-and-harder/676960

This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

The bedrock of the AI revolution is the internet, or more specifically, the ever-expanding bounty of data that the web makes available to train algorithms. ChatGPT, Midjourney, and other generative-AI models “learn” by detecting patterns in massive amounts of text, images, and videos scraped from the internet. The process entails hoovering up huge quantities of books, art, memes, and, inevitably, the troves of racist, sexist, and illicit material distributed across the web.

Earlier this week, Stanford researchers found a particularly alarming example of that toxicity: The largest publicly available image data set used to train AIs, LAION-5B, reportedly contains more than 1,000 images depicting the sexual abuse of children, out of more than 5 billion in total. A spokesperson for the data set’s creator, the nonprofit Large-scale Artificial Intelligence Open Network, told me in a written statement that it has a “zero tolerance policy for illegal content” and has temporarily halted the distribution of LAION-5B while it evaluates the report’s findings, although this and earlier versions of the data set have already trained prominent AI models.

Because they are free to download, the LAION data sets have been a key resource for start-ups and academics developing AI. It’s notable that researchers have the ability to peer into these data sets to find such awful material at all: There’s no way to know what content is harbored in similar but proprietary data sets from OpenAI, Google, Meta, and other tech companies. One of those researchers is Abeba Birhane, who has been scrutinizing the LAION data sets since the first version’s release, in 2021. Within six weeks, Birhane, a senior fellow at Mozilla who was then studying at University College Dublin, published a paper detailing her findings of sexist, pornographic, and explicit rape imagery in the data. “I’m really not surprised that they found child-sexual-abuse material” in the newest data set, Birhane, who studies algorithmic justice, told me yesterday.

Birhane and I discussed where the problematic content in giant data sets comes from, the dangers it presents, and why the work of detecting this material grows more challenging by the day. Read our conversation, edited for length and clarity, below.

Matteo Wong, assistant editor

More Challenging By the Day

Matteo Wong: In 2021, you studied the LAION data set, which contained 400 million captioned images, and found evidence of sexual violence and other harmful material. What motivated that work?

Abeba Birhane: Because data sets are getting bigger and bigger, 400 million image-and-text pairs is no longer large. But two years ago, it was advertised as the biggest open-source multimodal data set. When I saw it being announced, I was very curious, and I took a peek. The more I looked into the data set, the more I saw really disturbing stuff.

We found there was a lot of misogyny. For example, any benign word that is remotely related to womanhood, like mama, auntie, beautiful—when you queried the data set with those types of terms, it returned a huge proportion of pornography. We also found images of rape, which was really emotionally heavy and intense work, because we were looking at images that are really disturbing. Alongside that audit, we also put forward a lot of questions about what the data-curation community and larger machine-learning community should do about it. We also later found that, as the size of the LAION data sets increased, so did hateful content. By implication, so does any problematic content.

Wong: This week, the biggest LAION data set was removed because of the finding that it contains child-sexual-abuse material. In the context of your earlier research, how do you view this finding?

Birhane: It did not surprise us. These are the issues that we have been highlighting since the first release of the data set. We need a lot more work on data-set auditing, so when I saw the Stanford report, it’s a welcome addition to a body of work that has been investigating these issues.

Wong: Research by yourself and others has continuously found some really abhorrent and often illegal material in these data sets. This may seem obvious, but why is that dangerous?

Birhane: Data sets are the backbone of any machine-learning system. AI didn’t come into vogue over the past 20 years only because of new theories or new methods. AI became ubiquitous mainly because of the internet, because that allowed for mass harvesting of large-scale data sets. If your data contains illegal stuff or problematic representation, then your model will necessarily inherit those issues, and your model output will reflect these problematic representations.

But if we take another step back, to some extent it’s also disappointing to see data sets like the LAION data set being removed. For example, the LAION data set came into existence because the creators wanted to replicate data sets inside big corporations—for example, what data sets used in OpenAI might look like.

Wong: Does this research suggest that tech companies, if they’re using similar methods to collect their data sets, might harbor similar problems?

Birhane: It’s very, very likely, given the findings of previous research. Scale comes at the cost of quality.

Wong: You’ve written about research you couldn’t do on these giant data sets because of the resources necessary. Does scale also come at the cost of auditability? That is, does it become less possible to understand what’s inside these data sets as they become larger?

Birhane: There is a huge asymmetry in terms of resource allocation, where it’s much easier to build stuff but a lot more taxing in terms of intellectual labor, emotional labor, and computational resources when it comes to cleaning up what’s already been assembled. If you look at the history of data-set creation and curation, say 15 to 20 years ago, the data sets were much smaller scale, but there was a lot of human attention that went into detoxifying them. But now, all that human attention to data sets has really disappeared, because these days a lot of that data sourcing has been automated. That makes it cost-effective if you want to build a data set, but the reverse side is that, because data sets are much larger now, they require a lot of resources, including computational resources, and it’s much more difficult to detoxify them and investigate them.

Wong: Data sets are getting bigger and harder to audit, but more and more people are using AI built on that data. What kind of support would you want to see for your work going forward?

Birhane: I would like to see a push for open-sourcing data sets—not just model architectures, but data itself. As horrible as open-source data sets are, if we don’t know how horrible they are, we can’t make them better.

Related:

America already has an AI underclass

AI’s present matters more than its imagined future

P.S.

Struggling to find your travel-information and gift-receipt emails during the holidays? You’re not alone. Designing an algorithm to search your inbox is paradoxically much harder than making one to search the entire internet. My colleague Caroline Mimbs Nyce explored why in a recent article.

— Matteo

The Big Questions About AI in 2024

The Atlantic

www.theatlantic.com/technology/archive/2023/12/ai-chatbot-llm-questions-2024/676942

Let us be thankful for the AI industry. Its leaders may be nudging humans closer to extinction, but this year, they provided us with a gloriously messy spectacle of progress. When I say “year,” I mean the long year that began late last November, when OpenAI released ChatGPT and, in doing so, launched generative AI into the cultural mainstream. In the months that followed, politicians, teachers, Hollywood screenwriters, and just about everyone else tried to understand what this means for their future. Cash fire-hosed into AI companies, and their executives, now glowed up into international celebrities, fell into Succession-style infighting. The year to come could be just as tumultuous, as the technology continues to evolve and its implications become clearer. Here are five of the most important questions about AI that might be answered in 2024.

Is the corporate drama over?

OpenAI’s Greg Brockman is the president of the world’s most celebrated AI company and the golden-retriever boyfriend of tech executives. Since last month, when Sam Altman was fired from his position as CEO and then reinstated shortly thereafter, Brockman has appeared to play a dual role—part cheerleader, part glue guy—for the company. As of this writing, he has posted no fewer than five group selfies from the OpenAI office to show how happy and nonmutinous the staffers are. (I leave it to you to judge whether and to what degree these smiles are forced.) He described this year’s holiday party as the company’s best ever. He keeps saying how focused, how energized, how united everyone is. Reading his posts is like going to dinner with a couple after an infidelity has been revealed: No, seriously, we’re closer than ever. Maybe it’s true. The rank and file at OpenAI are an ambitious and mission-oriented lot. They were almost unanimous in calling for Altman’s return (although some have since reportedly said that they felt pressured to do so). And they may have trauma-bonded during the whole ordeal. But will it last? And what does all of this drama mean for the company’s approach to safety in the year ahead?

An independent review of the circumstances of Altman’s ouster is ongoing, and some relationships within the company are clearly strained. Brockman has posted a picture of himself with Ilya Sutskever, OpenAI’s safety-obsessed chief scientist, adorned with a heart emoji, but Altman’s feelings toward the latter have been harder to read. In his post-return statement, Altman noted that the company was discussing how Sutskever, who had played a central role in Altman’s ouster, “can continue his work at OpenAI.” (The implication: Maybe he can’t.) If Sutskever is forced out of the company or otherwise stripped of his authority, that may change how OpenAI weighs danger against speed of progress.

Is OpenAI sitting on another breakthrough?

During a panel discussion just days before Altman lost his job as CEO, he told a tantalizing story about the current state of the company’s AI research. A couple of weeks earlier, he had been in the room when members of his technical staff had pushed “the frontier of discovery forward,” he said. Altman declined to offer more details, unless you count additional metaphors, but he did mention that only four times since the company’s founding had he witnessed an advance of such magnitude.

During the feverish weekend of speculation that followed Altman’s firing, it was natural to wonder whether this discovery had spooked OpenAI’s safety-minded board members. We do know that in the weeks preceding Altman’s firing, company researchers raised concerns about a new “Q*” algorithm. Had the AI spontaneously figured out quantum gravity? Not exactly. According to reports, it had only solved simple mathematical problems, but it may have accomplished this by reasoning from first principles. OpenAI hasn’t yet released any official information about this discovery, if it is even right to think of it as a discovery. “As you can imagine, I can’t really talk about that,” Altman told me recently when I asked him about Q*. Perhaps the company will have more to say, or show, in the new year.

Does Google have an ace in the hole?

When OpenAI released its large-language-model chatbot in November 2022, Google was caught flat-footed. The company had invented the transformer architecture that makes LLMs possible, but its engineers had clearly fallen behind. Bard, Google’s answer to ChatGPT, was second-rate.

Many expected OpenAI’s leapfrog to be temporary. Google has a war chest that is surpassed only by Apple’s and Microsoft’s, world-class computing infrastructure, and storehouses of potential training data. It also has DeepMind, a London-based AI lab that the company acquired in 2014. The lab developed the AIs that bested world champions at chess and Go and intuited protein-folding secrets that nature had previously concealed from scientists. Its researchers recently claimed that another AI they developed is suggesting novel solutions to long-standing problems of mathematical theory. Google had at first allowed DeepMind to operate relatively independently, but earlier this year, it merged the lab with Google Brain, its homegrown AI group. People expected big things.

Then months and months went by without Google so much as announcing a release date for its next-generation LLM, Gemini. The delays could be taken as a sign that the company’s culture of innovation has stagnated. Or maybe Google’s slowness is a sign of its ambition? The latter possibility seems less likely now that Gemini has finally been released and does not appear to be revolutionary. Barring a surprise breakthrough in 2024, doubts about the company—and the LLM paradigm—will continue.

Are large language models already topping out?

Some of the novelty has worn off LLM-powered software in the mold of ChatGPT. That’s partly because of our own psychology. “We adapt quite quickly,” OpenAI’s Sutskever once told me. He asked me to think about how rapidly the field has changed. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,” he said. Maybe he’s right. A decade ago, many of us dreaded our every interaction with Siri, with its halting, interruptive style. Now we have bots that converse fluidly about almost any subject, and we struggle to remain impressed.

AI researchers have told us that these tools will only get smarter; they’ve evangelized about the raw power of scale. They’ve said that as we pump more data into LLMs, fresh wonders will emerge from them, unbidden. We were told to prepare to worship a new sand god, so named because its cognition would run on silicon, which is made of melted-down sand.

ChatGPT has certainly improved since it was first released. It can talk now, and analyze images. Its answers are sharper, and its user interface feels more organic. But it’s not improving at a rate that suggests that it will morph into a deity. Altman has said that OpenAI has begun developing its GPT-5 model. That may not come out in 2024, but if it does, we should have a better sense of how much more intelligent language models can become.

How will AI affect the 2024 election?

Our political culture hasn’t yet fully sorted AI issues into neatly polarized categories. A majority of adults profess to worry about AI’s impact on their daily life, but those worries aren’t coded red or blue. That’s not to say the generative-AI moment has been entirely innocent of American politics. Earlier this year, executives from companies that make chatbots and image generators testified before Congress and participated in tedious White House roundtables. Many AI products are also now subject to an expansive executive order.

But we haven’t had a big national election since these technologies went mainstream, much less one involving Donald Trump. Many blamed the spread of lies through social media for enabling Trump’s victory in 2016, and for helping him gin up a conspiratorial insurrection following his 2020 defeat. But the tools of misinformation that were used in those elections were crude compared with those that will be available next year.

A shady campaign operative could, for instance, quickly and easily conjure a convincing picture of a rival candidate sharing a laugh with Jeffrey Epstein. If that doesn’t do the trick, they could whip up images of poll workers stuffing ballot boxes on Election Night, perhaps from an angle that obscures their glitchy, six-fingered hands. There are reasons to believe that these technologies won’t have a material effect on the election. Earlier this year, my colleague Charlie Warzel argued that people may be fooled by low-stakes AI images—the pope in a puffer coat, for example—but they tend to be more skeptical of highly sensitive political images. Let’s hope he’s right.

Soundfakes, too, could be in the mix. A politician’s voice can now be cloned by AI and used to generate offensive clips. President Joe Biden and former President Trump have been public figures for so long—and voters’ perceptions of them are so fixed—that they may be resistant to such an attack. But a lesser-known candidate could be vulnerable to a fake audio recording. Imagine if during Barack Obama’s first run for the presidency, cloned audio of him criticizing white people in colorful language had emerged just days before the vote. Until bad actors experiment with these image and audio generators in the heat of a hotly contested election, we won’t know exactly how they’ll be misused, and whether their misuses will be effective. A year from now, we’ll have our answer.

What Happens When AI Takes Over Science?

The Atlantic

www.theatlantic.com/newsletters/archive/2023/12/science-is-becoming-less-human/676363

This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

Artificial intelligence is changing the way some scientists conduct research, leading to new discoveries on accelerated timetables. As The Atlantic’s Matteo Wong explores in a recent article, AI is assisting in drug discovery and more: “Neuroscientists at Meta and elsewhere, for instance, are turning artificial neural networks trained to ‘see’ photographs or ‘read’ text into hypotheses for how the brain processes both images and language. Biologists are using AI trained on genetic data to study rare diseases, improve immunotherapies, and better understand SARS-CoV-2 variants of concern.”

But these advances have a drawback. AI, through its inhuman ability to process and find connections between huge quantities of data, is also obfuscating how these breakthroughs happen, by producing results without explanation. Unlike human researchers, the technology tends not to show its work—a curious development for the scientific method that calls into question the meaning of knowledge itself.

Damon Beres, senior editor

Illustration by Joanne Imperio / The Atlantic

Science Is Becoming Less Human

By Matteo Wong

This summer, a pill intended to treat a chronic, incurable lung disease entered mid-phase human trials. Previous studies have demonstrated that the drug is safe to swallow, although whether it will improve symptoms of the painful fibrosis that it targets remains unknown; this is what the current trial will determine, perhaps by next year. Such a tentative advance would hardly be newsworthy, except for a wrinkle in the medicine’s genesis: It is likely the first drug fully designed by artificial intelligence to come this far in the development pipeline …

Medicine is just one aspect of a broader transformation in science. In only the past few months, AI has appeared to predict tropical storms with similar accuracy and much more speed than conventional models; Meta has released a model that can analyze brain scans to reproduce what a person is looking at; Google recently used AI to propose millions of new materials that could enhance supercomputers, electric vehicles, and more. Just as the technology has blurred the line between human-created and computer-generated text and images—upending how people work, learn, and socialize—AI tools are accelerating and refashioning some of the basic elements of science.

Read the full article.

What to Read Next

Earlier this week, OpenAI and Axel Springer—the media conglomerate behind publications such as Business Insider and Politico—announced a landmark deal that will bring news stories into ChatGPT. I wrote about the partnership and what it suggests about the changing internet: “ChatGPT is becoming more capable at the same time that its underlying technology is destroying much of the web as we’ve known it.”

Here are some other recent stories that are worth your time:

AI astrology is getting a little too personal: “Cryptic life guidance is one thing. Telling me to ditch my therapist is another,” Katherine Hu writes.

AI’s spicy-mayo problem: A chatbot that can’t say anything controversial isn’t worth much. Bring on the uncensored models, Mark Gimein writes.

P.S.

My son will not be receiving any AI-infused toys for Christmas this year (I saw M3GAN), but the market for such things exists. The musician Grimes is working with OpenAI and the start-up Curio to launch a new plush that will use AI to converse with children. The ultimate goal is to develop toys that exhibit “a degree of some kind of pseudo consciousness,” according to a statement given to The Washington Post by Sam Eaton, Curio’s president. And to think my Furby used to freak me out.

— Damon