The Year We Embraced Our Destruction

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 12 › panera-charged-lemonade-ai-existential-risk › 676984

The sounds came out of my mouth with an unexpected urgency. The cadence was deliberate—more befitting of an incantation than an order: one large strawberry-lemon-mint Charged Lemonade. The words hung in the air for a moment, giving way to a stillness punctuated only by the soft whir of distant fluorescent lights and the gentle hum of a Muzak cover of Bruce Hornsby’s “Mandolin Rain.”

The time was 9:03 a.m.; the sun had been up for only one hour. I watched the kind woman behind the counter stifle an eye roll, a small mercy for which I will be eternally grateful. Her look indicated that she’d been through this before, enough times to see through my bravado. I was just another man standing in front of a Panera Bread employee, asking her to hand me 30 fluid ounces of allegedly deadly lemonade. (I would have procured it myself, but it was kept behind the counter, like a controlled substance.)

I came to Panera to touch the face of God or, at the very least, experience the low-grade anxiety and body sweats one can expect from consuming 237 milligrams of caffeine in 15 minutes. Really, the internet sent me. Since its release last year, Panera’s highly caffeinated Charged Lemonade has become a popular meme—most notably on TikTok, where people vlog from the front seat of their car about how hopped up they are after chugging the neon beverage. Last December, a tongue-in-cheek Slate headline asked, “Is Panera Bread Trying to Kill Us?”

In the following months, two wrongful-death lawsuits were indeed filed against the restaurant chain, arguing that Panera was responsible for not adequately advertising the caffeine content of the drink. The suits allege that Charged Lemonade contributed to the fatal cardiac arrests of a 21-year-old college student and a 46-year-old man. Panera did not respond to my request for comment but has argued that both lawsuits are without merit and that it “stands firmly by the safety of our products.” In October, Panera changed the labeling of its Charged Lemonade to warn people who may be “sensitive to caffeine.”

The allegations seem to have done the impossible: They’ve made a suburban chain best known for its bread bowls feel exciting, even dangerous. The memes have escalated. Search death lemonade on any platform, and you’ll see a cascade of grimly ironic posts about everything from lemonade-assisted suicide to being able to peer into alternate dimensions after sipping the juice. Much like its late-aughts boozy predecessor Four Loko, Charged Lemonade is riding a wave of popularity because of the implication that consuming it is possibly unsafe. One viral post from October put it best: “Panera has apparently discovered the fifth loko.”

Like many internet-poisoned men and women before me, I possess both a classic Freudian death drive and an embarrassing desire to experience memes in the physical world—an effort, perhaps, to situate my human form among the algorithms and timelines that dominate my life. But there is another reason I was in a strip mall on the shortest day of the year, allowing the recommended daily allowance of caffeine to Evel Knievel its way across my blood-brain barrier. I came to make sense of a year that was defined by existential threats—and by a strange, pervasive celebration of them.

In 2023, I spent a lot of time listening to smart people talk about the end of the world. This was the year that AI supposedly “ate the internet”: The arrival of ChatGPT in late 2022 shifted something in the public consciousness. After decades of promise, the contours of an AI-powered world felt to some as if they were taking shape. Will these tools come for our jobs, our culture, even our humanity? Are they truly revolutionary or just showy—like spicier versions of autocorrect?

Some of the biggest players in tech—along with a flood of start-ups—are racing to develop their own generative-AI products. The technology has developed swiftly, lending a frenzied, disorienting feeling to the past several months. “I don’t think we’re ready for what we’re creating,” one AI entrepreneur told me, ominously and unbidden, when we spoke earlier this year. Civilizational extinction has moved from pure science fiction to immediate concern. Geoffrey Hinton, a well-known AI researcher who quit Google this year to warn against the dangers of the technology, suggested that there was as much as a 10 percent chance of extinction in the next 30 years. “I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Sam Altman, OpenAI’s CEO, told my colleague Ross Andersen this past spring.

In May, hundreds of AI executives, researchers, and tech luminaries including Bill Gates signed a one-sentence statement written by the Center for AI Safety. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it read. Debates once confined to a small subculture of technologists and rationalists on niche online forums such as LessWrong became fodder for the press. Normal people trying to keep up with the news had to hack through a jungle of new terminology: x-risk, e/acc, alignment, p(doom). By mid-year, the AI-doomerism conversation was fully mainstreamed; existential calamity was in the air (and, we joked, in our fast-casual lemonades).

[Read: AI doomerism is a decoy]

Then, as if by cosmic coincidence, this strain of apocalyptic thought fused perfectly with pop culture in Christopher Nolan’s Oppenheimer. As the atomic-bomb creator’s biopic took over the box office, AI researchers toted around the Pulitzer Prize–winning book The Making of the Atomic Bomb, suggesting that they too were pushing humanity into an uncertain, possibly apocalyptic future. The parallels between Los Alamos and Silicon Valley, however facile, needled at a question that had been bothering me all year: What would compel a person to build something if they had any reasonable belief that it might end life on Earth?

Richard Rhodes, the author of The Making of the Atomic Bomb, offered me one explanation, using a concept from the Danish physicist Niels Bohr. At the core of quantum physics is the idea of complementarity, which describes how objects have conflicting properties that cannot be observed at the same time. Complementarity, Rhodes argued, also governs innovation: A weapon of mass destruction could also be a tool to avert war.

[Read: Oppenheimer’s cry of despair in The Atlantic]

Rhodes, an 86-year-old who’s spent most of his adult life thinking about our most destructive innovations and speaking with the men who built the bomb, told me that he believes this duality to be at the core of human progress. Pursuing our greatest ambitions may give way to an unthinkable nightmare, or it may allow our dreams to come true. The answer to my question, he offered, was somewhere on that thin line between the excitement and terror of true discovery.

Roughly 10 minutes and 15 ounces into my strawberry-lemon-mint Charged Lemonade, I felt a gentle twinge of euphoria—a barely perceptible effervescence taking place at a cellular level. I was alone in the restaurant, ensconced in a booth and checking my Instagram messages. I’d shared a picture of the giant cup sweating modestly on my table, a cheap bid for some online engagement that had paid off. “I hope you live,” one friend had written in response. I glanced down at my smartwatch, where my heart rate measured a pleasant 20 beats per minute higher than usual. The inside of my mouth felt wrong. I ran my tongue over my teeth, noticing a fine dusting of sugar blanketing the enamel.

I did not feel the warm creep of death’s sweet embrace, only a sensation that the lights were very bright. This was accompanied by an edgy feeling that I would characterize as the antithesis of focus. I stood up to ask a Panera employee if they’d been getting a lot of Charged Lemonade tourism around these parts. “I think there’s been a lot, but honestly most of them order it through the drive-through or online order,” they said. “Not many come up here like you did.” I retreated to my booth to let my brain vibrate in my skull.

It is absurd to imagine that lemonade could kill you—much less lemonade from a soda fountain within steps of a Jo-Ann Fabrics store. That absurdity is a large part of what makes Panera lemonade a good meme. But there’s something deeper too, a truth lodged in the banality of a strip-mall drink: Death is everywhere. Today, you might worry about getting shot at school or in a movie theater, or killed by police at a traffic stop; you also understand that you could contract a deadly virus at the grocery store or in the office. Meanwhile, most everyone carries on like everything’s fine. We tolerate what feels like it should be intolerable. This is the mood baked into the meme: Death by lemonade is ridiculous, but in 2023, it doesn’t seem so far-fetched, either.

The same goes for computers and large language models. Our lives already feel influenced beyond our control by the computations of algorithms we don’t understand and cannot see. Maybe it’s ludicrous to imagine a chatbot as the seed of a sentient intelligence that eradicates human life. Then again, it would have been hard in 2006 to imagine Facebook playing a role in the Rohingya genocide, in Myanmar.

For the next hour, I shifted uncomfortably in my seat beside my now-empty vessel, anticipating some kind of side effect like the recipient of a novel vaccination. Around the time I could sense myself peaking, I grew quite cold. But that was it. No interdimensional vision, no heart palpitations. The room never melted into a Dalí painting. From behind my laptop, I watched a group of three teenagers, all dressed exactly like Kurt Cobain, grab their neon caffeine receptacles from the online-pickup stand and walk away. Each wore an indelible look of boredom incompatible with the respect one ought to have for death lemonade. I began to feel sheepish about my juice expedition and packed up my belongings.

I’d be lying if I told you I didn’t feel slightly ripped off; it’s an odd sensation, wanting a glass of lemonade to walk you right up to the edge of oblivion. But a hint of impending danger has always been an excellent marketing tool—one that can obscure reality. A quick glance at the Starbucks website revealed that my go-to order—a barely defensible Venti Pike Place roast with an added espresso shot—contains approximately 560 milligrams of caffeine, which is more than double that of a large Charged Lemonade. But I wanted to believe that the food engineers at Panera had pushed the bounds of the possible.

Some of us are drawn to (allegedly) killer lemonade for the same reason others fixate on potential Skynet scenarios. The world feels like it is becoming more chaotic and unknowable, hostile and exciting. AI and a ridiculous fast-casual death beverage may not be the same thing, but they both tap into this energy. We will always find ways to create new, glorious, terrifying things—some that may ultimately kill us. We may not want to die, but in 2023, it was hard to forget that we will.

Where Will AI Take Us in 2024?

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 12 › five-big-questions-about-ai-in-2024 › 676983

This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

What will next year hold for AI? In a new story, Atlantic staff writer Ross Andersen looks ahead, outlining five key questions that will define the technology’s trajectory from here. A big one: How will it affect the election? “Many blamed the spread of lies through social media for enabling [Donald] Trump’s victory in 2016, and for helping him gin up a conspiratorial insurrection following his 2020 defeat,” Andersen writes. “But the tools of misinformation that were used in those elections were crude compared with those that will be available next year.”

Thank you for reading Atlantic Intelligence. This is the final edition of our initial eight-week series. But keep an eye out for new entries later in 2024—we’re sure there will be much more to explore.

Damon Beres, senior editor


The Big Questions About AI in 2024

By Ross Andersen

Let us be thankful for the AI industry. Its leaders may be nudging humans closer to extinction, but this year, they provided us with a gloriously messy spectacle of progress. When I say “year,” I mean the long year that began late last November, when OpenAI released ChatGPT and, in doing so, launched generative AI into the cultural mainstream. In the months that followed, politicians, teachers, Hollywood screenwriters, and just about everyone else tried to understand what this means for their future. Cash fire-hosed into AI companies, and their executives, now glowed up into international celebrities, fell into Succession-style infighting. The year to come could be just as tumultuous, as the technology continues to evolve and its implications become clearer. Here are five of the most important questions about AI that might be answered in 2024.

Read the full article.

What to Read Next

You can’t truly be friends with an AI: Just because a relationship with a chatbot feels real, that doesn’t mean it is, Ethan Brooks writes.

The internet’s next great power suck: AI’s carbon emissions are about to be a problem, Matteo Wong writes.

P.S.

The Atlantic’s Science desk just published its annual list of things that blew our minds this year. Readers of this newsletter will not be surprised to find that AI pops up a few different times. For example, item 47: “AI models can analyze the brain scans of somebody listening to a story and then reproduce the gist of every sentence.”

— Damon

The Most Important Technology of 2023 Wasn’t AI

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 12 › tesla-chatgpt-most-important-technology › 676980

One day in late November, I cradled a red Samsung flip phone in my hands as if it were a ruby gemstone. To me, it was just as precious. Deep inside an overstuffed dresser in my childhood bedroom, I had spotted the glint of my first-ever cellphone, a Samsung SGH-A707 purchased in the waning days of the George W. Bush presidency. The device, no bigger than a credit card, had long ago succumbed to a spider web of cracks on its screen. For a moment, I was brought back to life before the smartphone, clicking the phone’s plastic keys for the first time in more than a decade.

This device, and every other phone like it, of course, was made obsolete by the touchscreen slabs now in all of our pockets. Perhaps you have heard that we are now on the cusp of another iPhone moment—the rise of a new technology that changes the world. No, not that one. Despite the post-ChatGPT frenzy, artificial intelligence has so far been defined more by speculative hype than actual substance. Does anyone really want “AI-powered” smoothies, sports commentary, or roller skates? Assuming the bots don’t wipe out humanity, maybe AI will take the jobs of high-school teachers, coders, lawyers, fast-food workers, customer-service agents, writers, and graphic designers—but right now, ChatGPT is telling me that Cybertruck has 11 letters. There’s a long way to go.

Meanwhile, electric cars are already upending America. In 2023, our battery-powered future became so much more real—a boom in sales and new models is finally starting to push us into the post-gas age. Americans are on track to buy a record 1.44 million of them in 2023, according to a forecast by BloombergNEF, about the same number sold from 2016 to 2021 total. “This was the year that EVs went from experiments, or technological demonstrations, and became mature vehicles,” Gil Tal, the director of the Electric Vehicle Research Center at UC Davis, told me. They are beginning to transform not just the automotive industry, but also the very meaning of a car itself.

If the story of American EVs has long hinged on one company—Tesla—then this was the year that these cars became untethered from Elon Musk’s brand. “We’re at a point where EVs aren’t necessarily exclusively for the upper, upper, upper class,” Robby DeGraff, an analyst at the market-research firm AutoPacific, told me. If you wanted an electric car five years ago, you could choose from among various Tesla models, the Chevy Bolt, the Nissan Leaf—and that was really it. Now EVs come in more makes and models than Baskin-Robbins ice-cream flavors. There are luxury sedans to vie with Tesla, but also cheaper five-seaters, SUVs, Hummers, pickup trucks, and … however you might categorize the Cybertruck. Nearly 40 new EVs have debuted since the start of 2022, and they are far more advanced than their ancestors. For $40,000, the Hyundai Ioniq 6, released this year, can get you 360 miles on a single charge; in 2018, for only a slightly lower cost, a Nissan Leaf couldn’t go half that distance.

[Read: Admit it, the Cybertruck is awesome]

All of these EVs are genuinely great for the planet, spewing zero carbon from their tailpipes, but that’s only a small part of what makes them different. In the EV age, cars are no longer just cars. They are computers. Stripping out a gas engine, transmission, and 100-plus moving parts turns a vehicle into something more digital than analog—sort of like how typing on an iPhone keyboard is different than on my clackety old Samsung flip phone. “It’s the software that is really the heart of an EV,” DeGraff said—it runs the motors, calculates how many miles are left on a charge, optimizes the brakes, and much more.
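To make that concrete, here is a deliberately tiny sketch of the kind of calculation an EV’s software redoes constantly: estimating remaining range from battery state and recent efficiency. The function and its numbers are hypothetical illustrations, not any automaker’s actual algorithm, which would also fold in temperature, terrain, and climate-control load:

```python
def estimated_range_miles(battery_kwh_remaining: float,
                          recent_miles_per_kwh: float) -> float:
    """Toy range estimate: remaining energy times recent efficiency.

    Real EV firmware continuously refines both inputs from sensor data;
    this sketch shows only the basic arithmetic behind the dashboard number.
    """
    return battery_kwh_remaining * recent_miles_per_kwh

# A hypothetical car with 60 kWh left, recently averaging 3.5 miles/kWh:
print(estimated_range_miles(60.0, 3.5))  # -> 210.0 miles
```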

Just like with other gadgets that bug you about software updates, all of this firmware can be updated over Wi-Fi while a car charges overnight. Rivian has updated its software to add a “Sand Mode” that can enhance its cars’ driving ability on dusty terrain. Many new cars are getting stuffed with technology—a new gas-powered Mercedes-Benz E-Class comes with TikTok integration and a selfie stick—but EVs are capable of more significant updates. A gas car is never going to meaningfully get more miles per gallon, but one such update from Tesla in 2020 increased the range on its Model X car from 328 to 351 miles after the company found ways to wring more efficiency out of its internal parts. And because EVs all drive basically the same, tech is a bigger part of the sell. Instead of idly passing the time while an EV recharges, you can now use a car’s infotainment system to Zoom into a meeting, play Grand Theft Auto, and stream Amazon Prime.

The million-plus new EVs on the road are ushering in a fundamental, maybe existential, change in how to even think about cars—no longer as machines, but as gadgets that plug in and charge like all the others in our life. The wonderful things about computers are coming to cars, and so are the terrible ones: apps that crash. Subscription hell. Cyberattacks. There are new problems to contend with too: In Tesla’s case, its “Autopilot” software has been implicated in fatal crashes. (It was the subject of a massive recall earlier this month that required an over-the-air update.) You now might scroll on your phone in bed, commute in your EV, and log into your work laptop, all of which are powered by processors that are constantly bugging you to update them.

[Read: The end of manual transmission]

If cars are gadgets now, then carmakers are also now tech companies. An industry that has spent a century perfecting the internal combustion engine must now manufacture lithium-ion batteries and write the code to govern them. Imagine if a dentist had to pivot from filling cavities to performing open-heart surgery, and that’s roughly what’s going on here. “The transition to EVs is completely changing everything,” Loren McDonald, an EV consultant, told me. “It’s changing the people that automotive companies have to hire and their skills. It’s changing their suppliers, their factories, how they assemble and build them. And lots of automakers are struggling with that.”

Take the batteries. To manufacture battery cells powerful enough for a car is so phenomenally expensive and arduous that Toyota is pumping nearly $14 billion into a single battery plant in North Carolina. To create software-enabled cars, you need software engineers, and car companies cannot get enough of them. (Perhaps no industry has benefited more from Silicon Valley’s year of layoffs.) At the very low end, estimates Sam Abuelsamid, a transportation analyst at Guidehouse Insights, upwards of 10,000 “software engineers, interface designers, networking engineers, data center experts and silicon engineers have been hired by automakers and suppliers in recent years.” The tech wars can sometimes verge on farce: One former Apple executive runs Ford’s customer-software team, while another runs GM’s.

At every level, the auto industry is facing the type of headache-inducing questions about job losses and employment that still feel many years away with AI. “There’s a new skill set we’re going to need, and I don’t think I can teach everyone—it will take too much time,” Ford’s CEO, Jim Farley, said in May. “So there is going to be disruption in this transition.” Job cuts are already happening, and more may come—even after the massive autoworker strike this year that largely hinged on electrification. Such a big financial investment is needed to electrify the car industry that from July to September, Ford lost $60,000 for every EV it sold. Or peel back one more onion layer to car dealerships: Tesla, Rivian, and other EV companies are selling directly to consumers, cutting dealers out. EVs also require little service compared with gas vehicles, a reality that has upset many dealers, who could lose their biggest source of profit. None of this is the future. It is happening right now.

But if EVs are having an “iPhone moment,” we are still in the days when a few early adopters had the clunky, OG version. Most cars you see are a decade old; for all these EV sales, just 1 percent of cars on the road are all-electric. Even if we hit President Joe Biden’s EV target of 50 percent of sales by 2030, the sheer life span of cars will mean that gas vehicles will still greatly outnumber electric ones by then. Gas stations are not closing. Parking garages are not buckling under the weight of EVs and their hefty batteries. Electric cars remain too expensive, and they are limited by janky public chargers that are too slow, assuming they work at all. If you don’t have a house where you can install your own plug, EVs are still mostly just unrealistic. Most alarming might be the politics that surround them: Donald Trump and lots of other Republicans are vowing to stymie their growth. Carmakers are not even hiding that next year’s election might lead them to reconsider their EV plans.

Even so, the transition is not slowing down. Next year, America should hit 1.9 million EV sales, Corey Cantor, an EV analyst at BloombergNEF, told me. Another burst of models is coming: A retro-futuristic Volkswagen van! A Cadillac Escalade with a 55-inch touchscreen! A tiny Fiat 500e for just $30,000! And yes, they are succumbing a bit to hype themselves. In June, Mercedes’s infotainment screen got an optional update. Now you can talk to it through a chatbot.

This story is part of the Atlantic Planet series supported by HHMI’s Science and Educational Media Group.

The Nine Breakthroughs of the Year

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 12 › scientific-breakthroughs-2023-list › 676952

This is Work in Progress, a newsletter about work, technology, and how to solve some of America’s biggest problems. Sign up here.

The theme of my second-annual Breakthroughs of the Year is the long road of progress. My top breakthrough is Casgevy, a gene-editing treatment for sickle-cell anemia. In the 1980s and early 1990s, scientists in Spain and Japan found strange, repeating patterns in the DNA of certain bacteria. Researchers eventually linked these sequences to an immune defense system that they named “clustered regularly interspaced short palindromic repeats”—or CRISPR. In the following decades, scientists found clever ways to build on CRISPR to edit genes in plants, animals, and even humans. CRISPR is this year’s top breakthrough not only because of heroic work done in the past 12 months, but also because of a long thread of heroes whose work spans decades.

Sometimes, what looks like a big deal amounts to nothing at all. For several weeks this summer, the internet lost its mind over claims that researchers in South Korea had built a room-temperature superconductor. One viral thread called it “the biggest physics discovery of my lifetime.” The technology could have paved the way to magnificently efficient energy grids and levitating cars. But, alas, it wasn’t real. So, perhaps, this is 2023’s biggest lesson about progress: Time is the ultimate test. The breakthrough of the year took more than three decades to go from discovery to FDA approval, while the “biggest” physics discovery of the year was disproved in about 30 days.

1. CRISPR’s Triumph: A Possible Cure for Sickle-Cell Disease

In December, the FDA approved the world’s first medicine based on CRISPR technology. Developed by Vertex Pharmaceuticals, in Boston, and CRISPR Therapeutics, based in Switzerland, Casgevy is a new treatment for sickle-cell disease, a chronic blood disorder that affects about 100,000 people in the U.S., most of whom are Black.

Sickle-cell disease is caused by a genetic mutation that affects the production of hemoglobin, a protein that carries oxygen in red blood cells. Abnormal hemoglobin makes blood cells hard and shaped like a sickle. When these misshapen cells clump together, they block blood flow throughout the body, causing intense pain and, in some cases, deadly anemia.

The Casgevy treatment involves a complex, multipart procedure. Stem cells are collected from a patient’s bone marrow and sent to a lab. Scientists use CRISPR to knock out a gene that represses the production of “fetal hemoglobin,” which most people stop making after birth. (In 1948, scientists discovered that fetal hemoglobin doesn’t “sickle.”) The edited cells are returned to the body via infusion. After weeks or months, the body starts producing fetal hemoglobin, which reduces cell clumping and improves oxygen supply to tissues and organs.

Ideally, CRISPR will offer a one-and-done treatment. In one trial, 28 of 29 patients, who were followed for at least 18 months, were free of severe pain for at least a year. But we don’t have decades’ worth of data yet.

Casgevy is a triumph for CRISPR. But a miracle drug that’s too expensive for its intended population—or too complex to be administered where it is most needed—performs few miracles. More than 70 percent of the world’s sickle-cell patients live in sub-Saharan Africa. The sticker price for Casgevy is about $2 million, which is roughly 2,000 times larger than the GDP per capita of, say, Burkina Faso. The medical infrastructure necessary to go through with the full treatment doesn’t exist in most places. Casgevy is a wondrous invention, but as always, progress is implementation.  

2. GLP-1s: A Diabetes and Weight-Loss Revolution

In the 1990s, a small team of scientists got to know the Gila monster, a thick lizard that can survive on less than one meal a month. When they studied its saliva, they found that it contained a hormone that, in experiments, lowered blood sugar and regulated appetite. A decade later, a synthetic version of this weird lizard spit became the first medicine of its kind approved to treat type 2 diabetes. The medicine was called a “glucagon-like peptide-1 receptor agonist.” Because that’s a mouthful, scientists mostly call these drugs “GLP-1s.”

Today the world is swimming in GLP-1 breakthroughs. These drugs go by many names. Semaglutide is sold by the Danish company Novo Nordisk, under the names Ozempic (approved for type 2 diabetes) or Wegovy (for weight loss). Tirzepatide is sold by Eli Lilly under the names Mounjaro (type 2 diabetes) or Zepbound (weight loss). These medications all mostly work the same way. They mimic gut hormones that stimulate insulin production and signal to the brain that the patient is full. In clinical trials, patients on these medications lose about 15 percent or more of their weight.

The GLP-1 revolution is reshaping medicine and culture “in ways both electrifying and discomfiting,” Science magazine said in an article naming these drugs its Breakthrough of the Year. Half a billion people around the world live with diabetes, and 40 percent of American adults are obese. A relatively safe drug that stimulates insulin production and reduces caloric intake could make an enormous difference in lifestyle and culture.

Some people on GLP-1s report nausea, and some fall out of love with their favorite foods. In rarer cases, the drugs might cause stomach paralysis. But for now, the miraculous effects of these drugs go far beyond diabetes and weight loss. In one trial supported by Novo Nordisk, the drug reduced the incidence of heart attack and stroke by 20 percent. Morgan Stanley survey data found that people on GLP-1s eat less candy, drink less alcohol, and eat 40 percent more vegetables. The medication seems to reduce smoking for smoking addicts, gambling for gambling addicts, and even compulsive nail biting for some. GLP-1s are an exceptional medicine, but they may also prove to be an exceptional tool that helps scientists see more clearly the ways our gut, mind, and willpower work together.

3. GPT and Protein Transformers: What Can’t Large Language Models Do?

In March, OpenAI released GPT-4, the latest and most sophisticated version of the language-model technology that powers ChatGPT. Imagine trying to parse that sentence two years ago—a useful reminder that some things, like large language models, advance at the pace of slowly, slowly, then all at once.

There is evidence that these tools are raising the productivity of some workers, and surveys suggest that most software developers already use AI to accelerate code writing. These tools also appear to be nibbling away at freelance white-collar work. Famously, OpenAI has claimed that the technology can pass medical-licensing exams and score above the 85th percentile on the LSAT, parts of the SAT, and the Uniform Bar Exam. Still, I am in the camp of believing that this technology is both a sublime accomplishment and basically a toy for most of its users.

One can think of transformers—that’s what the T stands for in GPT—as tools for building a kind of enormous recipe book of language, which AI can consult to cook up meaningful, novel answers to any prompt. If AI can build a cosmic cookbook of linguistic meaning, can it do the same for another corpus of information? For example, could it learn the “language” of how our cells talk to one another?
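For readers who want to see the “recipe lookup” in miniature, the sketch below implements scaled dot-product attention, the single operation that transformers stack and train at enormous scale. It is a toy under stated assumptions: the token vectors are random stand-ins for learned embeddings, and nothing here comes from OpenAI’s or Meta’s actual systems. Note, though, that the math is indifferent to whether the tokens represent words or amino acids.

```python
import numpy as np

def attention(queries, keys, values):
    """Scaled dot-product attention: each token's query is scored against
    every token's key, and the softmaxed scores blend the value vectors
    into a context-aware representation."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # pairwise relevance
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ values                         # weighted mix of values

# Three toy "tokens" -- they could stand for words or amino acids -- each a
# 4-dimensional vector drawn at random in place of learned embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
print(attention(tokens, tokens, tokens).shape)      # (3, 4)
```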

This spring, a team of researchers announced in Science that they had found a way to use transformer technology to predict protein structures at the level of individual atoms. This accomplishment builds on AlphaFold, an AI system developed within Alphabet. As several scientists explained to me, the latest breakthrough suggests that we can use language models to quickly spin up the shapes of millions of proteins faster than ever. I’m most impressed by the larger promise: If transformer technology can map both languages and protein structures, it seems like an extraordinary tool for advancing knowledge.

4. Fusion: The Dream Gets a Little Closer

Inside the sun, atoms crash and merge in a process that produces heat and light, making life on this planet possible. Scientists have tried to harness this magic, known as fusion, to produce our own infinite, renewable, and clean energy. The problem: For the longest time, nobody could make it work.

The past 13 months, however, have seen not one but two historic fusion achievements. Last December, 192 lasers at the Lawrence Livermore National Laboratory, in California, blasted a diamond encasing a small amount of frozen hydrogen and created—for less than 100 trillionths of a second—a reaction that produced about three megajoules of energy, or 1.5 times the energy from the lasers. In that moment, scientists said, they achieved the first lab-made fusion reaction to ever create more energy than it took to produce it. Seven months later, they did it again. In July, researchers at the same ignition facility nearly doubled the net amount of energy ever generated by a fusion reaction. Start-ups are racing to keep up with the science labs. The new fusion companies Commonwealth Fusion Systems and Helion are trying to scale this technology.
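To keep the arithmetic in one place: the gain being celebrated is the ratio of fusion energy released to laser energy delivered to the target. Using only the figures above (about three megajoules out, at 1.5 times the laser input, implying roughly two megajoules in):

$$Q = \frac{E_{\text{fusion}}}{E_{\text{laser}}} \approx \frac{3\ \text{MJ}}{2\ \text{MJ}} \approx 1.5$$

One caveat, and a reason for the naysayers’ skepticism below: this ratio counts only the energy the lasers put on the target, not the far larger amount of electricity the facility drew from the grid to fire them.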

Will fusion heat your home next year? Fat chance. Next decade? Cross your fingers. Within the lifetime of people reading this article? Conceivably. The naysayers have good reason for skepticism, but these breakthroughs prove that star power on this planet is possible.

5. Malaria and RSV Vaccines: Great News for Kids

Malaria, one of the world’s leading causes of childhood mortality, killed more than 600,000 people in 2022. But with each passing year, we seem to be edging closer to ridding the world of this terrible disease.

The first malaria vaccine, developed by GSK and recommended by the World Health Organization in 2021, has already been administered to millions of children. But demand still outstrips supply. That’s why it’s so important that in 2023 the WHO recommended a second malaria vaccine, called R21. Developed by University of Oxford scientists and found in trials to have up to 80 percent efficacy at preventing infection, R21 appears to be cheaper and easier to manufacture than the first vaccine. The WHO says it expects the addition of R21 to result in sufficient vaccine supply for “all children living in areas where malaria is a public health risk.”

What’s more, in the past year, the FDA approved vaccines against RSV, or respiratory syncytial virus. The American Lung Association estimates that RSV is so common that 97 percent of children catch it before they turn 2, and in a typical year, up to 80,000 children age 5 and younger are hospitalized with RSV, along with up to 160,000 older adults. In May, both Pfizer and GSK were granted FDA approval for RSV vaccines for older adults, and in July, the FDA approved a protective antibody shot for infants and toddlers.

6. Killer AI: Artificial Intelligence at War

In the nightmares of AI doomers, our greatest achievements in software will one day rise against us and cause mass death. Maybe they’re wrong. But by any reasonable analysis, the 2020s have already been a breakout decade for AI that kills. Unlike other breakthroughs on this list, this one presents obvious and immediate moral problems.

In the world’s most high-profile conflict, Israel has reportedly accelerated its bombing campaign against Gaza with the use of an AI target-creation platform called Habsora, or “the Gospel.” According to reporting in The Guardian and +972, an Israeli magazine, the Israel Defense Forces use Habsora to produce dozens of targeting recommendations every day based on amassed intelligence that can identify the private homes of individuals suspected of working with Hamas or Islamic Jihad. (The IDF has also independently acknowledged its use of AI to generate bombing targets.)

Israel’s retaliation against Hamas for the October 7 attack has involved one of the heaviest air-bombing campaigns in history. Military analysts told the Financial Times that the seven-week destruction of northern Gaza has approached the damage caused by the Allies’ years-long bombing of German cities in World War II. Clearly, Israel’s AI-assisted bombing campaign shows us another side of the idea that AI is an accelerant.

Meanwhile, the war in Ukraine is perhaps the first major conflict in world history to become a war of drone engineering. (One could also make the case that this designation should go to Azerbaijan's drone-heavy military campaign in the Armenian territory of Nagorno-Karabakh.) Initially, Ukraine depended on a drone called the Bayraktar TB2, made in Turkey, to attack Russian tanks and trucks. Aerial footage of the drone attacks produced viral video-game-like images of exploded convoys. As Wired UK reported, a pop song was written to honor the Bayraktar, and a lemur in the Kyiv Zoo was named after it. But Russia has responded by using jamming technology that is taking out 10,000 drones a month. Ukraine is now struggling to manufacture and buy enough drones to make up the difference, while Russia is using kamikaze drones to destroy Ukrainian infrastructure.

7. Fervo and Hydrogen: Making Use of a Hot Planet

If the energy industry is, in many respects, the search for more heat, one tantalizing solution is to take advantage of our hot subterranean planet. Traditional geothermal plants drill into underground springs and hot-water reservoirs, whose heat powers turbines. But in much of the world, these reservoirs are too deep to access. When we drill, we hit hard rock.

Last year’s version of this list mentioned Quaise, an experimental start-up that tries to vaporize granite with a highly concentrated beam of radio-frequency power. This year, we’re celebrating Fervo, which is part of a crop of so-called enhanced geothermal systems. Fervo uses fracking techniques developed by the oil-and-gas industry to break into hot underground rock. Then Fervo injects cold water into the rock fissures, creating a kind of artificial hot spring. In November, Fervo announced that its Nevada enhanced-geothermal project is operational and sending carbon-free electricity to Google data centers.

That’s not the end of this year’s advancement in underground heat. Eleven years ago, engineers in Mali happened upon a deposit of hydrogen gas. When it was hooked up to a generator, it produced electricity for the local town and only water as exhaust. In 2023, enough governments and start-ups accelerated their search for natural hydrogen-gas deposits that Science magazine named hydrogen-gas exploration one of its breakthroughs of the year. (This is different from the “natural gas” you’ve already heard of, which is a fossil fuel.) One U.S.-government study estimated that the Earth could hold 1 trillion tons of hydrogen, enough to provide thousands of years of fuel and fertilizer.

8. Engineered Skin Bacteria: What If Face Paint Cured Cancer?

In last year’s breakthroughs essay, I told you about a liquid solution that revived the organs of dead pigs. This year, in the category of Wait, what?, we bring you the news that face paint cures cancer. Well, sort of face paint. And more like “fight” cancer than cure. Also, just in mice. But still!

Let’s back up. Some common skin bacteria can trigger our immune system to produce T cells, which seek and destroy diseases in the body. This spring, scientists announced that they had engineered an ordinary skin bacterium to carry bits of tumor material. When they rubbed this concoction on the heads of mice in a lab, the animals produced T cells that sought out distant tumor cells and attacked them. So yeah, basically, face paint that fights cancer.

Many vaccines already use modified viruses, such as adenovirus, as delivery trucks to drive disease-fighting therapies into the body. The ability to deliver cancer therapies (or even vaccines) through the skin represents an amazing possibility, especially in a world where people are afraid of needles. It’s thrilling to think that the future of medicine, whether vaccines or cancer treatments, could be as low-fuss as a set of skin creams.

9. Loyal Drugs: Life-Extension Meds for Dogs

Longevity science is having a moment. Bloomberg Businessweek recently devoted an issue to the “tech titans, venture capitalists, crypto enthusiasts and AI researchers [who] have turned longevity research into something between the hottest science and a tragic comedy.” There must be a trillion (I’m rounding up) podcast episodes about how metformin, statins, and other drugs can extend our life. But where is the hard evidence that we are getting any closer to figuring out how to help our loved ones live longer?

Look to the dogs. Large breeds, such as Great Danes and rottweilers, generally die younger than small dogs. A new drug made by the biotech company Loyal tries to extend their life span by targeting a hormone called “insulin-like growth factor-1,” or IGF-1. Some scientists believe that high levels of the chemical speed up aging in big dogs. By reducing IGF-1, Loyal hopes to curb aging-related increases in insulin. In November, the company announced that it had met a specific FDA requirement for future fast-tracked authorization of drugs that could extend the life span of big dogs. “The data you provided are sufficient to show that there is a reasonable expectation of effectiveness,” an official at the FDA wrote the company in a letter provided to The New York Times.

Loyal’s drug is not available to pet owners yet—and might not be for several years. But the FDA’s support nonetheless marks a historic acknowledgment of the promise of life-span-extension medicine.

The Big Questions About AI in 2024

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 12 › ai-chatbot-llm-questions-2024 › 676942

Let us be thankful for the AI industry. Its leaders may be nudging humans closer to extinction, but this year, they provided us with a gloriously messy spectacle of progress. When I say “year,” I mean the long year that began late last November, when OpenAI released ChatGPT and, in doing so, launched generative AI into the cultural mainstream. In the months that followed, politicians, teachers, Hollywood screenwriters, and just about everyone else tried to understand what this means for their future. Cash fire-hosed into AI companies, and their executives, now glowed up into international celebrities, fell into Succession-style infighting. The year to come could be just as tumultuous, as the technology continues to evolve and its implications become clearer. Here are five of the most important questions about AI that might be answered in 2024.

Is the corporate drama over?

OpenAI’s Greg Brockman is the president of the world’s most celebrated AI company and the golden-retriever boyfriend of tech executives. Since last month, when Sam Altman was fired from his position as CEO and then reinstated shortly thereafter, Brockman has appeared to play a dual role—part cheerleader, part glue guy—for the company. As of this writing, he has posted no fewer than five group selfies from the OpenAI office to show how happy and nonmutinous the staffers are. (I leave it to you to judge whether and to what degree these smiles are forced.) He described this year’s holiday party as the company’s best ever. He keeps saying how focused, how energized, how united everyone is. Reading his posts is like going to dinner with a couple after an infidelity has been revealed: No, seriously, we’re closer than ever. Maybe it’s true. The rank and file at OpenAI are an ambitious and mission-oriented lot. They were almost unanimous in calling for Altman’s return (although some have since reportedly said that they felt pressured to do so). And they may have trauma-bonded during the whole ordeal. But will it last? And what does all of this drama mean for the company’s approach to safety in the year ahead?

An independent review of the circumstances of Altman’s ouster is ongoing, and some relationships within the company are clearly strained. Brockman has posted a picture of himself with Ilya Sutskever, OpenAI’s safety-obsessed chief scientist, adorned with a heart emoji, but Altman’s feelings toward the latter have been harder to read. In his post-return statement, Altman noted that the company was discussing how Sutskever, who had played a central role in Altman’s ouster, “can continue his work at OpenAI.” (The implication: Maybe he can’t.) If Sutskever is forced out of the company or otherwise stripped of his authority, that may change how OpenAI weighs danger against speed of progress.

Is OpenAI sitting on another breakthrough?

During a panel discussion just days before he lost his job as CEO, Altman told a tantalizing story about the current state of the company’s AI research. A couple of weeks earlier, he had been in the room when members of his technical staff had pushed “the frontier of discovery forward,” he said. Altman declined to offer more details, unless you count additional metaphors, but he did mention that only four times since the company’s founding had he witnessed an advance of such magnitude.

During the feverish weekend of speculation that followed Altman’s firing, it was natural to wonder whether this discovery had spooked OpenAI’s safety-minded board members. We do know that in the weeks preceding Altman’s firing, company researchers raised concerns about a new “Q*” algorithm. Had the AI spontaneously figured out quantum gravity? Not exactly. According to reports, it had only solved simple mathematical problems, but it may have accomplished this by reasoning from first principles. OpenAI hasn’t yet released any official information about this discovery, if it is even right to think of it as a discovery. “As you can imagine, I can’t really talk about that,” Altman told me recently when I asked him about Q*. Perhaps the company will have more to say, or show, in the new year.

Does Google have an ace in the hole?

When OpenAI released its large-language-model chatbot in November 2022, Google was caught flat-footed. The company had invented the transformer architecture that makes LLMs possible, but its engineers had clearly fallen behind. Bard, Google’s answer to ChatGPT, was second-rate.

Many expected OpenAI’s leapfrog to be temporary. Google has a war chest that is surpassed only by Apple’s and Microsoft’s, world-class computing infrastructure, and storehouses of potential training data. It also has DeepMind, a London-based AI lab that the company acquired in 2014. The lab developed the AIs that bested world champions at chess and Go and intuited protein-folding secrets that nature had previously concealed from scientists. Its researchers recently claimed that another AI they developed is suggesting novel solutions to long-standing problems of mathematical theory. Google had at first allowed DeepMind to operate relatively independently, but earlier this year, it merged the lab with Google Brain, its homegrown AI group. People expected big things.

Then months and months went by without Google so much as announcing a release date for its next-generation LLM, Gemini. The delays could be taken as a sign that the company’s culture of innovation has stagnated. Or maybe Google’s slowness is a sign of its ambition? The latter possibility seems less likely now that Gemini has finally been released and does not appear to be revolutionary. Barring a surprise breakthrough in 2024, doubts about the company—and the LLM paradigm—will continue.

Are large language models already topping out?

Some of the novelty has worn off LLM-powered software in the mold of ChatGPT. That’s partly because of our own psychology. “We adapt quite quickly,” OpenAI’s Sutskever once told me. He asked me to think about how rapidly the field has changed. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,” he said. Maybe he’s right. A decade ago, many of us dreaded our every interaction with Siri, with its halting, interruptive style. Now we have bots that converse fluidly about almost any subject, and we struggle to remain impressed.

AI researchers have told us that these tools will only get smarter; they’ve evangelized about the raw power of scale. They’ve said that as we pump more data into LLMs, fresh wonders will emerge from them, unbidden. We were told to prepare to worship a new sand god, so named because its cognition would run on silicon, which is made of melted-down sand.

ChatGPT has certainly improved since it was first released. It can talk now, and analyze images. Its answers are sharper, and its user interface feels more organic. But it’s not improving at a rate that suggests that it will morph into a deity. Altman has said that OpenAI has begun developing its GPT-5 model. That may not come out in 2024, but if it does, we should have a better sense of how much more intelligent language models can become.

How will AI affect the 2024 election?

Our political culture hasn’t yet fully sorted AI issues into neatly polarized categories. A majority of adults profess to worry about AI’s impact on their daily life, but those worries aren’t coded red or blue. That’s not to say the generative-AI moment has been entirely innocent of American politics. Earlier this year, executives from companies that make chatbots and image generators testified before Congress and participated in tedious White House roundtables. Many AI products are also now subject to an expansive executive order.

But we haven’t had a big national election since these technologies went mainstream, much less one involving Donald Trump. Many blamed the spread of lies through social media for enabling Trump’s victory in 2016, and for helping him gin up a conspiratorial insurrection following his 2020 defeat. But the tools of misinformation that were used in those elections were crude compared with those that will be available next year.

A shady campaign operative could, for instance, quickly and easily conjure a convincing picture of a rival candidate sharing a laugh with Jeffrey Epstein. If that doesn’t do the trick, they could whip up images of poll workers stuffing ballot boxes on Election Night, perhaps from an angle that obscures their glitchy, six-fingered hands. There are reasons to believe that these technologies won’t have a material effect on the election. Earlier this year, my colleague Charlie Warzel argued that people may be fooled by low-stakes AI images—the pope in a puffer coat, for example—but they tend to be more skeptical of highly sensitive political images. Let’s hope he’s right.

Soundfakes, too, could be in the mix. A politician’s voice can now be cloned by AI and used to generate offensive clips. President Joe Biden and former President Trump have been public figures for so long—and voters’ perceptions of them are so fixed—that they may be resistant to such an attack. But a lesser-known candidate could be vulnerable to a fake audio recording. Imagine if during Barack Obama’s first run for the presidency, cloned audio of him criticizing white people in colorful language had emerged just days before the vote. Until bad actors experiment with these image and audio generators in the heat of a hotly contested election, we won’t know exactly how they’ll be misused, and whether their misuses will be effective. A year from now, we’ll have our answer.

Twitter’s Demise Is About So Much More Than Elon Musk

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 12 › twitter-tiktok-short-form-video › 676923

It’s really, really hard to kill a large, beloved social network. But Elon Musk has seemingly been giving it his absolute best shot: Over the past year, Twitter has gotten a new name (X), laid off much of its staff, struggled with outages, brought back banned accounts belonging to Alex Jones and Donald Trump, and lost billions in advertising revenue.

Opportunistic competitors have launched their own Twitter clones, such as Bluesky, Mastodon, and Threads. The hope is to capture fleeing users who want “microblogging”—places where people can shoot off little text posts about what they ate for lunch, their random thoughts about politics or pop culture, or perhaps a few words or sentences of harassment. Threads, Meta’s entry, which launched in July, seems the most promising, at least in terms of pure scale. Over the summer, it broke the record for the fastest app to reach 100 million monthly active users—beating a milestone set by ChatGPT just months earlier—in part because Instagram users were pushed toward it. (Turns out, it’s pretty helpful to launch a new social network on the back of the defining social-media empire of our time.)

But the decline of Twitter, and the race to replace it, is in a sense a sideshow. Analytics experts shared data with me suggesting that the practice of microblogging, while never quite dominant, is only becoming more niche. In the era of TikTok, the act of posting your two cents in two sentences for strangers to consume is starting to feel more and more unnatural. The lasting social-media imprint of 2023 may not be the self-immolation of Twitter but rather that short-form videos—on TikTok, Instagram, and other platforms—have tightened their choke hold on the internet. Text posts as we’ve always known them just can’t keep up.

Social-media companies tend to share data about their platforms only sporadically, and of all the main microblogging sites—X, Threads, Bluesky, and Mastodon—just Bluesky provided a comment for this story. “We’ve grown to 2.6 million users on an invite-only basis in 2023,” Bluesky’s CEO, Jay Graber, wrote in an email, “and are excited about growth while we open up the network more broadly next year.” So I reached out to outside companies that track social analytics. They told me that these new X competitors haven’t meaningfully chipped away at the site’s dominance. For all of the drama of the past year, X is by far still the predominant network for brief text posts. It is still home to more than four times as many monthly active users as Threads, Bluesky, and Mastodon combined, according to numbers shared with me by data.ai, a company that tracks app-store activity. (Data.ai looks only at mobile analytics, so it can’t account for desktop users.)

Mastodon and Bluesky amounted to just “rounding errors, in terms of the number of people engaging,” says Paul Quigley, the CEO of NewsWhip, a social-media-monitoring platform. Threads has not fared much better. Sensor Tower, another analytics firm, estimates that fewer than 1 percent of Threads users opened the app daily last month, compared with 18 percent of Twitter users. And even those who open the app are spending an average of just three minutes a day on it.

That doesn’t mean X is thriving. According to data.ai’s 2024-predictions report, the platform’s daily active users peaked in July 2022, at 316 million, and then dropped under Musk. Based on its data-science algorithms, data.ai predicts that X usership will decline to 250 million in 2024. And data.ai expects microblogging overall to decline alongside X next year, even though these new platforms seem positioned for growth: Threads, after all, just recently launched in Europe and became available as a desktop app, and to join Bluesky, you still need an invite code.

Of course, these are just predictions. Plenty of people do still want platforms for sending off quick thoughts, and perhaps X or another alternative will gain more users. But the decline of microblogging is part of a larger change in how we consume media. On TikTok and other platforms, short clips are served up by an at-times-magical-seeming algorithm that makes note of our every interest. Text posts don’t have the same appeal. “While platforms like X are likely to maintain a core niche of users, the overall trends show consumers are swapping out text-based social networking apps for photo and video-first platforms,” data.ai noted in its predictions report.

Short-form videos have become an attention vortex. Users are spending an average of 95 minutes a day on TikTok and 61 minutes on Instagram as of this quarter, according to estimates from Sensor Tower. By comparison, they’re estimated to average just 30 minutes on Twitter and three minutes on Threads. People also want companies to shift to video along with them, in what is perhaps the real pivot to video: In a recent survey by Sprout Social, a social-media-analytics tool, 41 percent of consumers said that they want brands to publish 15- to 30-second videos more than any other style of social-media post. Just 10 percent wanted more text-only content.

Maybe this really is the end for the short text post, at least en masse. Or maybe our conception of “microblogging” is due for an update. TikTok videos are perhaps “just a video version of what the original microblogs were doing when they first started coming out in the mid-2000s,” André Brock, a media professor at Georgia Tech who has studied Twitter, told me; they can feel as intimate and authentic as a tweet about having tacos for lunch. Trends such as “men are constantly thinking about the Roman empire” (and the ensuing pushback) could have easily been a viral Twitter or Facebook conversation in a different year. For a while, all of the good Twitter jokes were screenshotted and re-uploaded to Instagram. Now it can feel like all of the good TikToks are downloaded and reposted on Instagram. If the Dress (white-and-gold or black-and-blue?) were to go viral today, it would probably happen in a 30-second video with a narrator and a soundtrack.

But something is left behind when microblogging becomes video. Twitter became an invaluable resource during news moments—part of why journalists flocked to the platform, for better or for worse—allowing people to refresh their feeds for real-time updates on election results, a sports game, or a natural disaster. Movements such as Occupy Wall Street and Black Lives Matter turned to Twitter to organize protests and spread their respective messages.

Some of the news and political content may just as easily move to TikTok: Russia’s war in Ukraine has been widely labeled the “first TikTok War,” because so many people first experienced it through the app. Roughly a third of adults under 30 now regularly get their news from TikTok, according to the Pew Research Center. But we don’t yet fully know what it means to have short-form videos, delivered via an algorithmic feed, be the centerpiece of social media. You might log on to TikTok and be shown a video that was posted two weeks ago.

Perhaps the biggest stress test for our short-form-video world has yet to come: the 2024 U.S. presidential election. Elections are where Twitter, and microblogging, have thrived. Meanwhile, in 2020, TikTok was much smaller than it is now. Starting next year, its true reign might finally begin.

What Happens When AI Takes Over Science?

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 12 › science-is-becoming-less-human › 676363

This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

Artificial intelligence is changing the way some scientists conduct research, leading to new discoveries on accelerated timetables. As The Atlantic’s Matteo Wong explores in a recent article, AI is assisting in drug discovery and more: “Neuroscientists at Meta and elsewhere, for instance, are turning artificial neural networks trained to ‘see’ photographs or ‘read’ text into hypotheses for how the brain processes both images and language. Biologists are using AI trained on genetic data to study rare diseases, improve immunotherapies, and better understand SARS-CoV-2 variants of concern.”

But these advances have a drawback. AI, through its inhuman ability to process and find connections between huge quantities of data, is also obfuscating how these breakthroughs happen, by producing results without explanation. Unlike human researchers, the technology tends not to show its work—a curious development for the scientific method that calls into question the meaning of knowledge itself.

Damon Beres, senior editor

Illustration by Joanne Imperio / The Atlantic

Science Is Becoming Less Human

By Matteo Wong

This summer, a pill intended to treat a chronic, incurable lung disease entered mid-phase human trials. Previous studies have demonstrated that the drug is safe to swallow, although whether it will improve symptoms of the painful fibrosis that it targets remains unknown; this is what the current trial will determine, perhaps by next year. Such a tentative advance would hardly be newsworthy, except for a wrinkle in the medicine’s genesis: It is likely the first drug fully designed by artificial intelligence to come this far in the development pipeline …

Medicine is just one aspect of a broader transformation in science. In only the past few months, AI has appeared to predict tropical storms with similar accuracy and much more speed than conventional models; Meta has released a model that can analyze brain scans to reproduce what a person is looking at; Google recently used AI to propose millions of new materials that could enhance supercomputers, electric vehicles, and more. Just as the technology has blurred the line between human-created and computer-generated text and images—upending how people work, learn, and socialize—AI tools are accelerating and refashioning some of the basic elements of science.

Read the full article.

What to Read Next

Earlier this week, OpenAI and Axel Springer—the media conglomerate behind publications such as Business Insider and Politico—announced a landmark deal that will bring news stories into ChatGPT. I wrote about the partnership and what it suggests about the changing internet: “ChatGPT is becoming more capable at the same time that its underlying technology is destroying much of the web as we’ve known it.”

Here are some other recent stories that are worth your time:

AI astrology is getting a little too personal: “Cryptic life guidance is one thing. Telling me to ditch my therapist is another,” Katherine Hu writes.

AI’s spicy-mayo problem: A chatbot that can’t say anything controversial isn’t worth much. Bring on the uncensored models, Mark Gimein writes.

P.S.

My son will not be receiving any AI-infused toys for Christmas this year (I saw M3GAN), but the market for such things exists. The musician Grimes is working with OpenAI and the start-up Curio to launch a new plush that will use AI to converse with children. The ultimate goal is to develop toys that exhibit “a degree of some kind of pseudo consciousness,” according to a statement given to The Washington Post by Sam Eaton, Curio’s president. And to think my Furby used to freak me out.

— Damon

ChatGPT Is Turning the Internet Into Plumbing

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 12 › openai-axel-springer-partnership-content › 676340

There is a tension at the heart of ChatGPT that may soon snap. Does the technology expand our world or constrain it? Which is to say, do AI-powered chatbots open new doors to learning and discovery, or do they instead risk siloing off information and leaving us stuck with unreliable access to truth?

Earlier today, OpenAI, the maker of ChatGPT, announced a partnership with the media conglomerate Axel Springer that seems to get us closer to an answer. Under the arrangement, ChatGPT will gain the capacity to present its users with “summaries of selected global news content” published by the news organizations in Axel Springer’s portfolio, which includes Politico and Business Insider. The details are not altogether clear, but the announcement indicates that when you query ChatGPT, the bot will be able to spin up responses based on Axel Springer stories, accompanied by links to the stories themselves. Likewise, material from Axel Springer publications will be used as training data for OpenAI, advancing the company’s products—which may have already consumed something like the entire internet.

[Read: What happens when AI has read everything?]

It’s arguably a strange move for the publisher, which in the old days might have seen some competitive advantage in maintaining a distinctive voice—that is, one that isn’t easily replicated by a chatbot. But Axel Springer will be paid for lending its work. That’s certainly better than getting ripped off for free, which is effectively what generative AI is presumed to have done to publishers across the industry. Julia Sommerfield, a spokesperson for Axel Springer, declined to give any specific details about the deal but told me over email, “Our reporters at Politico and Business Insider will continue to deliver high-quality journalism. The partnership introduces an additional channel for distribution and revenue, and enriches users’ experience on ChatGPT.” A spokesperson for OpenAI declined to comment on the deal.

It makes sense that the press release uses the phrase “global news content”—content is an ugly but useful word for understanding exactly what’s happening here. At its core, generative AI cannot distinguish original journalism from any other bit of writing; to the machine, it’s all slop pushed through the pipes and splattered out the other end. For this reason, the deal is notable not just to media wonks: It says something about the future of the internet—in particular, the vision that OpenAI has for it.

OpenAI’s most powerful model does not currently provide information about any event more recent than April 2023. That will change: Although OpenAI has an agreement to use archival material from the Associated Press, Axel Springer is reportedly the first publisher to provide ongoing news stories in this way. The benefit of this arrangement, in theory, is that current, accurate information is instantly available and can be tailored to exactly what a ChatGPT user wants to know. But the generative-AI era has introduced a distancing effect. ChatGPT, Bing, or Google’s Gemini may present readers with information and links from publications, but they hardly seem to incentivize engagement with those publications. If ChatGPT reproduces news updates, what reason is there to click through to the original? How many times have you Googled for more information after reading a headline on an elevator or taxicab screen?

[Read: Bing is a trap]

The shift to news via chatbot feels ironic when you consider what generative AI has wrought elsewhere: Old-school search engines such as Google have been flooded with supercharged, optimized spam and are struggling to handle it, while sites such as CNET and Gizmodo have published deeply flawed synthetic writing in a desperate bid to stay competitive. ChatGPT is becoming more capable at the same time that its underlying technology is destroying much of the web as we’ve known it.

Until bad AI content can be quickly identified and dealt with, its sheer volume will continue to crowd out legitimate sources. This will be fine to the extent that people can still find good information; major publishers are likely to endure, and surely more of them will sign deals with OpenAI. But it is a sour development for the overall diversity of the web. For a long time, the internet was about discovery, about jumping from site to site to find different perspectives and styles; it was, in some sense, an equalizer or a democratizer. That became less true as we began to experience the internet through social-media platforms that serve as gatekeepers; it is becoming less true still as new generative-AI infrastructure is built on these ruins.

That seems to be the direction all of this is headed in. All of the websites, all of the writing: It’s plumbing for a digital faucet. With fresh content surging, ChatGPT will become more viable as a one-stop shop through which to experience the internet. It will offer something that better resembles the full range of human knowledge as it exists at any given moment—albeit dotted with “hallucinations,” lending just a bit of doubt to every interaction.

However flawed it is now, perhaps this transformation will ultimately be understood as an expansion of human potential. There are persuasive arguments that generative AI inspires creativity and facilitates good work, which is more than could be said of most websites. Whatever the case, it is certainly the end of one era and the beginning of another—one that will not be defined by the panoply of our digital creations but instead by a chatbot’s text box and its cursor, blinking and blinking, awaiting your command.

Google made sure to emphasize live demos of its new Gemini Pro developer tools

Quartz

qz.com › google-made-sure-to-emphasize-live-demos-of-its-new-gem-1851093215

When Google dropped its new generative-AI model, Gemini, in a prerecorded video demo last week, it highlighted the elements that differentiate it from rival ChatGPT, such as its ability to converse out loud. But there seemed to be a gap between what was shown and the large language model’s true capabilities.

Read more...