The Government’s Computing Experts Say They Are Terrified

The Atlantic

www.theatlantic.com/technology/archive/2025/02/elon-musk-doge-security/681600

Elon Musk’s unceasing attempts to access the data and information systems of the federal government range so widely, and are so unprecedented and unpredictable, that government computing experts believe the effort has spun out of control. This week, we spoke with four federal-government IT professionals—all experienced contractors and civil servants who have built, modified, or maintained the kind of technological infrastructure that Musk’s inexperienced employees at his newly created Department of Government Efficiency are attempting to access. In our conversations, each expert was unequivocal: They are terrified and struggling to articulate the scale of the crisis.

Even if the president of the United States, the head of the executive branch, supports (and, importantly, understands) these efforts by DOGE, these experts told us, they would still consider Musk’s campaign to be a reckless and dangerous breach of the complex systems that keep America running. Federal IT systems facilitate operations as varied as sending payments from the Treasury Department and making sure that airplanes stay in the air, the sources told us.

Based on what has been reported, DOGE representatives have obtained or requested access to certain systems at the U.S. Treasury, the Department of Health and Human Services, the Office of Personnel Management, and the National Oceanic and Atmospheric Administration, with eyes toward others, including the Federal Aviation Administration. “This is the largest data breach and the largest IT security breach in our country’s history—at least that’s publicly known,” one contractor who has worked on classified information-security systems at numerous government agencies told us this week. “You can’t un-ring this bell. Once these DOGE guys have access to these data systems, they can ostensibly do with it what they want.”

What exactly they want is unclear. And much remains unknown about what, exactly, is happening here. The contractor emphasized that nobody yet knows which information DOGE has access to, or what it plans to do with it. Spokespeople for the White House, and Musk himself, did not respond to emailed requests for comment. Some reports have revealed the scope of DOGE’s incursions at individual agencies; still, it has been difficult to see the broader context of DOGE’s ambition.

The four experts laid out the implications of giving untrained individuals access to the technological infrastructure that controls the country. Their message is unambiguous: These are not systems you tamper with lightly. Musk and his crew could act deliberately to extract sensitive data, alter fundamental aspects of how these systems operate, or provide further access to unvetted actors. Or they may act with carelessness or incompetence, breaking the systems altogether. Given the scope of what these systems do, key government services might stop working properly, citizens could be harmed, and the damage might be difficult or impossible to undo. As one administrator for a federal agency with deep knowledge about the government’s IT operations told us, “I don’t think the public quite understands the level of danger.”

Each of our four sources, three of whom requested anonymity out of fear of reprisal, made three points very clear: These systems are immense, they are complex, and they are critical. A single program run by the FAA to help air-traffic controllers, En Route Automation Modernization, contains nearly 2 million lines of code; an average iPhone app, for comparison, has about 50,000. The Treasury Department disburses trillions of dollars in payments per year.

Many systems and databases in a given agency feed into others, but access to them is restricted. Employees, contractors, civil-service government workers, and political appointees have strict controls on what they can access and limited visibility into the system as a whole. This is by design, as even the most mundane government databases can contain highly sensitive personal information. A security-clearance database such as those used by the Department of Justice or the Bureau of Alcohol, Tobacco, Firearms and Explosives, one contractor told us, could include information about a person’s mental-health or sexual history, as well as disclosures about any information that a foreign government could use to blackmail them.

Even if DOGE has not tapped into these particular databases, The Washington Post reported on Wednesday that the group has accessed sensitive personnel data at OPM. Mother Jones also reported on Wednesday that an effort may be under way to effectively give Musk control over IT for the entire federal government, broadening his access to these agencies. Trump has said that Musk is acting only with his permission. “Elon can’t do and won’t do anything without our approval,” he said to reporters recently. “And we will give him the approval where appropriate. Where it’s not appropriate, we won’t.” The specter of what DOGE might do with that approval is still keeping the government employees we spoke with up at night. With relatively basic “read only” access, Musk’s people could easily find individuals in databases or clone entire servers and transfer that secure information somewhere else. Even if Musk eventually loses access to these systems—owing to a temporary court order such as the one approved yesterday, say—whatever data he siphons now could be his forever.

With a higher level of access—“write access”—a motivated person may be able to put their own code into the system, potentially without any oversight. The possibilities here are staggering. One could alter the data these systems process, or they could change the way the software operates—without any of the testing that would normally accompany changes to a critical system. Still another level of access, administrator privileges, could grant the broad ability to control a system, including hiding evidence of other alterations. “They could change or manipulate treasury data directly in the database with no way for people to audit or capture it,” one contractor told us. “We’d have very little way to know it even happened.”

The specific levels of access that Musk and his team have remain unclear and likely vary between agencies. On Tuesday, the Treasury said that DOGE had been given “read only” access to the department’s federal payment system, though Wired then reported that one member of DOGE was able to write code on the system. Any focus on access tiers, for that matter, may actually oversimplify the problem at hand. These systems aren’t just complex at the code level—they are multifaceted in their architecture. Systems can have subsystems; each of these can have its own permission structure. It’s hard to talk about any agency’s tech infrastructure as monolithic. It’s less a database than a Russian nesting doll of databases, the experts said.

Musk’s efforts represent a dramatic shift in the way the government’s business has traditionally been conducted. Previously, security protocols were so strict that a contractor plugging a non-government-issued computer into an ethernet port in a government agency office was considered a major security violation. Contrast that with DOGE’s incursion. CNN reported yesterday that a 23-year-old former SpaceX intern without a background check was given a basic, low tier of access to Department of Energy IT systems, despite objections from department lawyers and information experts. “That these guys, who may not even have clearances, are just pulling up and plugging in their own servers is madness,” one source told us, referring to an allegation that DOGE had connected its own server at OPM. “It’s really hard to find good analogies for how big of a deal this is.” The simple fact that Musk loyalists are in the building with their own computers is the heart of the problem—and helps explain why activities ostensibly authorized by the president are widely viewed as a catastrophic data breach.

The four systems professionals we spoke with do not know what damage might already have been done. “The longer this goes on, the greater the risk of potential fatal compromise increases,” Scott Cory, a former CIO for an agency within HHS, told us. At the Treasury, this could mean stopping payments to government organizations or outside contractors it doesn’t want to pay. It could also mean diverting funds to other recipients. Or it could mean gumming up the works in the attempt to do those, or other, things.

At the FAA, even a small systems disruption could cause the mass grounding of flights, a halt in global shipping, or, worse, downed planes. For instance, the agency oversees the Traffic Flow Management System, which calculates the overall demand for airspace in U.S. airports and on which airlines depend. “Going into these systems without an in-depth understanding of how they work both individually and interconnectedly is a recipe for disaster that will result in death and economic harm to our nation,” one FAA employee who has nearly a decade of experience with its system architecture told us. “‘Upgrading’ a system of which you know nothing about is a good way to break it, and breaking air travel is a worst-case scenario with consequences that will ripple out into all aspects of civilian life. It could easily get to a place where you can’t guarantee the safety of flights taking off and landing.” Nevertheless, on Wednesday Musk posted that “the DOGE team will aim to make rapid safety upgrades to the air traffic control system.”

Even if DOGE members are looking to modernize these systems, they may find themselves flummoxed. The government is big and old and complicated. One former official with experience in government IT systems, including at the Treasury, told us that old could mean that the systems were installed in 1962, 1992, or 2012. They might use a combination of software written in different programming languages: a little COBOL in the 1970s, a bit of Java in the 1990s. Knowledge about one system doesn’t give anyone—including Musk’s DOGE workers, some of whom were not even alive for Y2K—the ability to make intricate changes to another.

The internet economy, characterized by youth and disruption, favors inventing new systems and disposing of old ones. And the nation’s computer systems, like its roads and bridges, could certainly benefit from upgrades. But old computers don’t necessarily make for bad infrastructure, and government infrastructure isn’t always old anyway. The former Treasury official told us that mainframes—and COBOL, the ancient programming language they often run—are really good for what they do, such as batch processing for financial transactions.

Like the FAA employee, the payment-systems expert also fears that the most likely result of DOGE activity on federal systems will be breaking them, especially because of incompetence and lack of proper care. DOGE, he observed, may be prepared to view or hoover up data, but it doesn’t appear to be prepared to carry out savvy and effective alterations to how the system operates. This should perhaps be reassuring. “If you were going to organize a heist of the U.S. Treasury,” he said, “why in the world would you bring a handful of college students?” They would be useless. Your crew would need, at a minimum, a couple of guys with a decade or two of experience with COBOL, he said.

Unless, of course, you had the confidence that you could figure anything out, including a lumbering government system you don’t respect in the first place. That interpretation of DOGE’s self-conception seems both likely and even scarier, at the Treasury, the FAA, and beyond. Would they even know what to do after logging in to such a machine? we asked. “No, they’d have no idea,” the payment expert said. “The sanguine thing to think about is that the code in these systems and the process and functions they manage are unbelievably complicated,” Scott Cory said. “You’d have to be extremely knowledgeable if you were going into these systems and wanting to make changes with an impact on functionality.”

But DOGE workers could try anyway. Mainframe computers have a keyboard and display, unlike the cloud-computing servers in data centers. According to the former Treasury IT expert, someone who could get into the room and had credentials for the system could access it and, via the same machine or a networked one, probably also deploy software changes to it. It’s far more likely that they would break, rather than improve, a Treasury disbursement system in so doing, one source told us. “The volume of information they deal with [at the Treasury] is absolutely enormous, well beyond what anyone would deal with at SpaceX,” the source said. Even a small alteration to a part of the system that has to do with the distribution of funds could wreak havoc, preventing those funds from being distributed or distributing them wrongly, for example. “It’s like walking into a nuclear reactor and deciding to handle some plutonium.”

DOGE is many things—a dismantling of the federal government, a political project to flex power and punish perceived enemies—but it is also the logical end point of a strain of thought that’s become popular in Silicon Valley during the boom times of Big Tech and easy money: that building software and writing code aren’t just dominant skills for the 21st century, but proof of competence in any realm. In a post on X this week, John Shedletsky, a developer and an early employee at the popular gaming platform Roblox, summed up the philosophy nicely: “Silicon Valley built the modern world. Why shouldn’t we run it?”

This attitude disgusted one of the officials we spoke with. “There’s this bizarre belief that being able to do things with computers means you have to be super smart about everything else.” Silicon Valley may have built the computational part of the modern world, but the rest of that world—the money, the airplanes, the roads, and the waterways—still exists. Knowing something, even a lot, about computers guarantees no knowledge about the world beyond them.

“I’d like to think that this is all so massive and complex that they won’t succeed in whatever it is they’re trying to do,” one of the experts told us. “But I wouldn’t want to wager that outcome against their egos.”

Stop Listening to Music on a Single Speaker

www.theatlantic.com/technology/archive/2025/02/bluetooth-speakers-ruining-music/681571

When I was in my early 20s, commuting to work over the freeways of Los Angeles, I listened to Brian Wilson’s 2004 album, Smile, several hundred times. I like the Beach Boys just fine, but I’m not a superfan, and the decades-long backstory of Smile never really hooked me. But the album itself was sonic mesmerism: each hyper-produced number slicking into the next, with Wilson’s baroque, sometimes cartoonish tinkering laid over a thousand stars of sunshine. If I tried to listen again and my weathered Mazda mutely regurgitated the disc, as it often did, I could still hear the whole thing in my head.

Around this time, a friend invited me to see Wilson perform at the Hollywood Bowl, which is a 17,000-seat outdoor amphitheater tucked into the hills between L.A. and the San Fernando Valley. Elsewhere, this could only be a scene of sensory overload, but its eye-of-the-storm geography made the Bowl a kind of redoubt, cool and dark and almost hushed under the purple sky. My friend and I opened our wine bottle, and Wilson and his band took the stage.

From the first note of the a cappella opening, they … well, they wobbled. The instruments, Wilson’s voice, all of it stretched and wavered through each beat of the album (which constituted their set list) as if they were playing not in a bandshell but far down a desert highway on a hot day, right against the horizon. Wilson’s voice, in particular, verged on frail—so far from the immaculate silk of the recording as to seem like a reinvention. Polished and rhythmic, the album had been all machine. But the performance was human—humans, by the thousand, making and hearing the music—and for me it was like watching consciousness flicker on for the first time in the head of a beloved robot.

Music is different now. Finicky CD players are a rarity, for one thing. We hold the divine power instead to summon any song we can think of almost anywhere. In some respects, our investment in how we listen has kept pace: People wear $500 headphones on the subway; they fork out the GDP of East Timor to see Taylor Swift across an arena. But the engine of this musical era is access. Forever, music was tethered to the human scale, performers and audience in a space small enough to carry an organic or mechanical sound. People alive today knew people who might have heard the first transmitted concert, a fragile experiment over telephone lines at the Paris Opera in 1881. Now a library of music too big for a person to hear in seven lifetimes has surfed the smartphone to most corners of the Earth.

In another important way, though, how we listen has shrunk. Not in every instance, but often enough to be worthy of attention. The culprit is the single speaker—as opposed to a pair of them, like your ears—and once you start looking for it, you might see it everywhere, an invasive species of flower fringing the highway. Every recorded sound we encounter is made up of layers of artifice, of distance from the originating disturbance of air. So this isn’t an argument about some standard of acoustic integrity; rather, it’s about the space we make with music, and what (and who) will fit inside.

From the early years of recorded music, the people selling it have relied on a dubious language of fidelity—challenging the listener to tell a recording apart from the so-called real thing. This is silly, even before you hear some of those tinny old records. We do listen to sound waves, of course, but we also absorb them with the rest of our body, and beyond the sound of the concert are all the physical details of its production—staging, lighting, amplification, decor. We hear some of that happening, too, and we see it, just as we see and sense the rising and falling of the people in the seats around us, as we feel the air whipping off their applauding hands or settling into the subtly different stillnesses of enrapturement or boredom. People will keep trying to reproduce all of that artificially, no doubt, because the asymptote of fidelity is a moneymaker. But each time you get one new piece of the experience right, you’ve climbed just high enough to crave the next rung on the ladder. Go back down, instead, to the floor of the most mundane auditorium, and you’ll feel before you can name all the varieties of sensation that make it real.

For a long time, the fidelity sell was a success. When American men got home from World War II, as the cultural historian Tony Grajeda has noted, they constituted a new consumer class. Marketing phrases such as “concert-hall realism” got them buying audio equipment. And the advent of stereo sound, with separated left and right channels—which became practical for home use in the late ’50s—was an economic engine for makers of both recordings and equipment. All of that needed to be replaced in order to enjoy the new technology. The New York Times dedicated whole sections to the stereo transition: “Record dealers, including a considerable number who do not think that stereo is as yet an improvement over monophonic disks, are hopeful that, with sufficient advertising and other forms of publicity, the consumer will be converted,” a 1958 article observed.

Acoustic musicians were integral to the development of recorded sound, and these pioneers understood that the mixing panel was now as important as any instrument. When Bell Laboratories demonstrated its new stereophonic technology in a spectacle at Carnegie Hall, in 1940, the conductor Leopold Stokowski ran the audio levels himself, essentially remixing live the sounds he’d recorded with his Philadelphia Orchestra. Stokowski had worked, for years, with his pal Walt Disney to create a prototype of surround sound for Fantasia. The result was a system too elaborate to replicate widely, which had to be abandoned (and its parts donated to the war effort) before the movie went to national distribution.

Innovators like Stokowski recognized a different emerging power in multichannel sound, more persuasive and maybe more self-justifying than the mere simulation of a live experience: to make, and then remake in living rooms and dens across the country, an aural stage without a physical correlate—an acoustic space custom-built in the recording studio, with a soundtrack pieced together from each isolated instrument and voice. The musical space had always been monolithic, with players and listeners sharing it for the fleeting moment of performance. The recording process divided that space into three: one for recording the original sound, one for listening, and an abstract, theoretical “sound stage” created by the mixing process in between. That notional space could have a size and shape of its own, its own warmth and coolness and reverberance, and it could reposition each element of the performance in three dimensions, at the inclination of the engineer—who might also be the performer.

Glenn Gould won permanent fame with his recordings of Bach’s keyboard works in the 1950s. Although he was as formidable and flawless a live performer as you’ll get, his first recording innovation—and that it was, at the time—was to splice together many different takes of his performances to yield an exaggerated, daring perfection in each phrase of every piece, as if LeBron James only ever showed up on TV in highlight reels. (“Listen, we’ve got lots of endings,” Gould tells his producer in one recording session, a scene recalled in Paul Elie’s terrific Reinventing Bach.) By the ’70s, the editors of the anthology Living Stereo note, Gould had hacked the conventional use of multi-mic recording, “but instead of using it to render the conventional image of the concert hall ‘stage,’ he used the various microphone positions to create the effect of a highly mobile acoustic space—what he sometimes referred to as an ‘acoustic orchestration’ or ‘choreography.’” It was akin to shooting a studio film with a handheld camera, reworking the whole relationship of perceiver to perceived.

Pop music was surprisingly slow to match the classical musicians’ creativity; many of the commercial successes of the ’60s were mastered in mono, which became an object of nostalgic fascination after the record companies later reengineered them—in “simulated stereo”—to goose sales. (Had it been released by the Beach Boys back then, Smile would have been a single-channel record, and, in fact, Brian Wilson himself is deaf in one ear.) It wasn’t really until the late ’60s, when Pink Floyd championed experiments in quadraphonic sound—four speakers—that pop music became a more reliable scene of fresh approaches in both recording and production.

Nowadays, even the most rudimentary pop song is a product of engineering you couldn’t begin to grasp without a few master’s degrees. But the technologization of music production, distribution, and consumption is full of paradoxes. For the first 100 years, from that Paris Opera telephone experiment to the release of the compact disc in the early 1980s, recording was an uneven but inexorable march toward higher quality—as both a selling point and an artistic aim. Then came file sharing, in the late ’90s, and the iPod and its descendant, the iPhone, all of which compromised the quality of the music in favor of smaller files that could flourish on a low-bandwidth internet—convenience and scale at the expense of quality. Bluetooth, another powerful warrior in the forces of convenience, made similar trade-offs in order to spare us a cord. Alexa and Siri gave us new reasons to put a multifunctional speaker in our kitchens and bathrooms and garages. And the ubiquity of streaming services brought the whole chain together, one suboptimal link after another, landing us in a pre-Stokowski era of audio quality grafted onto a barely fathomable utopia of access: all music, everywhere, in mediocre form.

People still listen to music in their car or on headphones, of course, and many others have multichannel audio setups of one kind or another. Solitary speakers tend to be additive, showing up in places you wouldn’t think to rig for the best sound: in the dining room, on the deck, at the beach. They’re digital successors to the boombox and the radio, more about the presence of sound than its shape.

Yet what many of these places have in common is that they’re where people actually congregate. The landmark concerts and the music we listen to by ourselves keep getting richer, their real and figurative stages more complex. (I don’t think I’ve ever felt a greater sense of space than at Beyoncé’s show in the Superdome two Septembers ago.) But our everyday communal experience of music has suffered. A speaker designed to get you to order more toilet paper, piping out its lonely strain from the corner of your kitchen—it’s the first time since the arrival of hi-fi almost a century ago that we’ve so widely acceded to making the music in our lives smaller.

For Christmas, I ordered a pair of $60 Bluetooth speakers. (This kind of thing has been a running joke with my boyfriend since a more ambitious Sonos setup showed up in his empty new house a few days after closing, the only thing I needed to make the place livable. “I got you some more speakers, babe!”) We followed the instructions to pair them in stereo, then took them out to the fire pit where we’d been scraping by with a single unit. I hung them from opposite trees, opened up Spotify, and let the algorithmic playlist roll. In the flickering darkness, you could hear the silence of the stage open up, like the moments when the conductor mounts the podium in Fantasia. As the music began, it seemed to come not from a single point on the ground, like we were used to, but from somewhere out in the woods or up in the sky—or maybe from a time before all this, when the musician would have been one of us, seated in the glow and wrapping us in another layer of warmth. This wasn’t high-fidelity sound. There wasn’t a stereo “sweet spot,” and the bass left something to be desired. But the sound made a space, and we were in it together.

Nicholas Carr: Is the Internet Making Us Stupid?

www.theatlantic.com/newsletters/archive/2025/01/nicholas-carr-is-the-internet-making-us-stupid/681517

This is an edition of Time-Travel Thursdays, a journey through The Atlantic’s archives to contextualize the present and surface delightful treasures.

“Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain,” Nicholas Carr wrote in 2008, “remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think.”

Carr’s cover story for The Atlantic, “Is Google Making Us Stupid?,” helped crystallize a sense of unease that had just started to dampen widespread enthusiasm for online life and its possibilities. New means of communication and knowledge transmission—the printing press, radio, television, now the internet—have always been met with fears about what may be lost with their adoption. Although these concerns can be overblown, they are not unfounded. Because communication technologies mediate our understanding of other humans and the outside world, changes in those technologies really do affect the way we think—sometimes profoundly.

Carr’s cover story was the first in a long line of explorations in The Atlantic about the unintended consequences of online life on our minds and behaviors. (Our February cover story, “The Anti-Social Century,” by Derek Thompson, is one of the latest installments.) Recently, I spoke with Carr about his essay, and about how the digital world continues to change the way we read, think, and remember.

This conversation has been edited for concision and clarity.

The Honeymoon Is Over

Don Peck: In 2008, before iPhones were widely used, before social media was ubiquitous, you made the argument that the internet was changing our brains, chipping away at our ability to think deeply. The tech environment then was in many ways very different from the one we live in today. How has that argument aged?

Nicholas Carr: When I wrote the article, I saw it as a personal essay built on my own sense that I was losing my ability to concentrate because I was spending so much time online. And I knew I was being speculative.

Unfortunately, I think my speculations have been proved correct. Look at how technology has changed since 2008: As you said, the iPhone had just come out. Social media was mainly used by kids. The kind of distractions and interruptions that I described—which back in 2008 kind of only happened when you were sitting in front of your laptop or desktop—now happen all the time. So I think that, if anything, disruptions to our train of thought and our ability to put information into context and to interpret things deeply—it’s now much worse than it was 17 years ago.

Peck: What have you done in your own life, since then, to resist the problems of scatter and superficiality? And has any of it worked?

Carr: I wish I could say I’ve solved the problem. When I wrote the article, we were still in a honeymoon phase with the internet, and most people assumed that by getting greater access to information, you’d make people smarter. But I think we all struggle today, because society has reshaped itself around the assumption that everybody is online all the time. It’s very hard to break free of that.

Social media is particularly good at distracting us, so I try to keep my presence there to a minimum. I try not to keep my phone on my person all the time: If I’m going out for a walk or going out to dinner, I’ll try to leave it behind. If your phone’s always with you, it grabs a permanent hold on your attention—even if you’re not looking at it, you’re thinking of looking at it because you know something new is always there.

But I don’t want to present myself as some model of a person who’s solved this problem. And I have to say, I think the struggle is getting harder rather than easier, even though we kind of see the problem more clearly now.

Peck: You have a new book out, Superbloom: How Technologies of Connection Tear Us Apart. It follows, to some extent, from some of the inquiries you began all those years ago. What’s the main message of the book?

Carr: So, ever since the Enlightenment, if not earlier, we’ve taken an idealistic view of communication. We believe that if communication among people is generally good, then more communication is going to be better. It’s going to bring more understanding and ultimately more social harmony.

In the book, I argue that that assumption is catastrophically wrong. When you speed up the exchange of messages and information beyond a certain point, you actually overwhelm the mind’s ability to make sense of it all in a deep way. To keep up with the flow, people have to sacrifice emotional and intellectual depth. We become reactive and impulsive, and that ends up triggering misunderstanding and animosity and, in general, misanthropy.

The book looks at how the internet affects our social lives—the way we converse, the way we develop relationships, the way we socialize in general—from a perspective that is kind of similar to the way that my 2008 cover story looked at our intellectual lives. In both, what I’m arguing is that there’s a fundamental conflict between how the technology works and how our minds work. And it’s a conflict that I’m not sure can be remedied.

Peck: Some of the changes involve not just the way we read or receive information, but also the way we write and post. Can you talk about how that affects our thinking as well?

Carr: In the 1980s and early 1990s, as email was becoming popular, I think most people initially saw it as a substitute for the postal system. And people wrote long, careful emails, in a very similar form to what they would have written in a personal letter. But as the intensity of email picked up, they became shorter, sloppier, and more superficial. And yet they displaced letters—very few people write personal letters anymore.

The flow of messages through social media and texting intensified all that, and telegraphic exchanges have become the default language we use today. In one sense, you can understand that. We’ve adopted this new way of speaking to one another because it’s the only way to stay afloat in the flood of messages we have to deal with. But self-creation comes through language, through expressing yourself. By constantly compressing the way we speak, we’ve lost a lot of nuance, and I think we’ve also compressed ourselves in a way. And we’ve let this all happen with very little resistance.

The January 6er Who Left Trumpism

The Atlantic

www.theatlantic.com › politics › archive › 2025 › 01 › january-6-riot-pardons › 681459

“I was okay with being a convict,” Jason Riddle told me this week, not long after learning that he was among the roughly 1,500 recipients of sweeping presidential pardons. Some Americans, including President Donald Trump, believe that Riddle and others who rioted at the Capitol on January 6, 2021, were unjustly persecuted and thus deserving of clemency—if not celebration. Riddle, a 36-year-old New Hampshire resident, rejects this framing. “I’m not a patriot or a hero just because the guy who started the riot says it’s okay,” he told me.

On Thursday, after consulting with his public defender, Riddle sent a pithy email to the Department of Justice:

To whom it may concern,

I’d like to reject my pardon please.

Sincerely,
Jason Riddle

Sent from my iPhone

Declining the pardon falls within Riddle’s legal rights. Many other January 6ers are holding out their hands for the president’s gift. “I can’t look myself in the mirror and do that,” Riddle said. Rather than whitewash his unsavory past, he feels called to own his behavior, even his most shameful moments—a tenet of Alcoholics Anonymous, which he says has saved him.

Some insurrectionists stormed the Capitol as true ideological warriors. Enrique Tarrio, a former leader of the Proud Boys, and Stewart Rhodes, founder of the Oath Keepers, for example, were convicted of seditious conspiracy against the United States (and both men are now free). But many others who participated in the violence and destruction that day were similar to Riddle—people with ordinary lives and ordinary problems who found community and catharsis in the MAGA movement.

None of the above is an excuse for taking part in one of the ugliest moments in American history. But actively planning to carry out violence is arguably different from getting swept up in a mob. Today, Riddle doesn’t shirk his complicity. But the path that led him to the Capitol sheds light on how someone without much direction suddenly found it in a day of rage and mayhem. His story also raises an intriguing possibility: A person who stumbled into the darker corners of Trumpism can also stumble out.

For Riddle, the road to January 6 began after he graduated from high school, years before Trump’s first campaign. He served in the Navy and, according to his sentencing memo, “was honorably released from active duty to the naval reserves in light of reocurring [sic] struggles with alcohol use.” In college, at Southern Connecticut State University, as an older student, he decided to major in political science. On campus, he recalls feeling surrounded by younger Bernie Sanders supporters, while he took a liking to Trump. He described himself and another early Trump-supporting buddy as “obnoxious,” noting that they’d frequently drink in class. During Trump’s first presidential campaign, Riddle drove to rallies all over the country. At first he told himself that, as a poli-sci major, he was making anthropological field trips. In truth, he was becoming swept up in MAGA world.

He liked the excitement and controversy that surrounded Trump. “There was this aggression. I think I really enjoyed it,” he said. He’d pregame before the rallies, then join the crowds listening to the future president rant. “You go, you know, bond with these strangers,” he said. At that time in his life, Riddle remembers having barely any other interests or hobbies. He didn’t watch sports or exercise. He’d sit at home, drinking and trolling. “I spent all my time in those comments [sections] on social media, arguing with strangers,” Riddle said. “It was all about proving someone wrong. That would make me feel good about myself.”

After college, he struggled to hold down a job. Eventually, he found work as a mail carrier for the Postal Service. On his route, he’d ruminate. He’d carry on long conversations with a drinking buddy. “I would just be on the phone with my Bluetooth in, talking to another maniac who thinks like me, while just slowly going crazy,” Riddle said.

Radicalization can be a gradual process. He described himself as more of a libertarian than a MAGA Republican. In Trumpism, though, Riddle found an always-there outlet for his pent-up dissatisfaction with how his life was unfolding. But Trump’s time in office was running out. As he plotted to cling to power by desperate means, the president and his allies were spreading conspiracy theories about alleged voter fraud, including lies about mail-in ballots. “So I’m, like, literally working at the mail, which is what I believed to be part of the problem with the election,” Riddle said. In the weeks before the insurrection, he told me, he was drinking more heavily than ever. Sometimes, he’d stash additional booze in the mailbag he carried for the day’s rounds.

One day, drunk on the job, he abruptly quit, leaving piles of mail in his truck. Soon, he and two friends were driving from New Hampshire to Washington, D.C. One was a Trump supporter; the other, Riddle now thinks, was just along for the ride. Riddle’s own commitment to the “Stop the Steal” narrative involved some doublethink. “I know I’m wrong,” Riddle recalls telling himself. “Fuck it; I’m going down anyways.”

He recalls very clearly when he stepped over a barrier and marched into the Capitol. His friends stopped following him. “I remember actually seeing politicians from where I was standing,” he told me. “I could tell they were scared. I do remember enjoying that.”

Images of some of the other Capitol invaders soon spread on social media: the Viking-helmeted QAnon Shaman, the man with his feet up on Nancy Pelosi’s desk, the guy carrying the speaker’s lectern. Riddle, too, achieved a kind of immortality: He was the insurrectionist hoisting a bottle of wine. In the immediate aftermath of the event, Riddle felt no remorse, or shame, or need to hide. He bragged about his exploits on a local newscast, and briefly enjoyed his newfound virality. He soon received a visit from the FBI.

In addition to pilfering booze from the Senate parliamentarian’s office, Riddle had stolen a leather-bound book labeled Senate Procedure, and quickly hawked it to a fellow rioter for $40. On April 4, 2022, at federal court in Washington, he was sentenced to 90 days in prison. “Three months for trying to stop the steal, one sip of wine at a time?” Riddle bragged to a New Hampshire newspaper. “Totally worth it.”

Even in prison, he still had his fame—or infamy. He remembers a correctional officer muttering “Let’s go, Brandon” to him on his first day, he told me, and that his fellow inmates nicknamed him “Trump.” But unlike some January 6ers, Riddle wasn’t further radicalized in prison, where he spent the summer of 2022. Neither, though, did his conviction immediately lead him to repudiate the cause that had taken him to the Capitol. Riddle talked about running for Congress, leveraging what remained of his fleeting celebrity. He once filed paperwork, but never got any campaign off the ground.

Riddle thought he’d be able to manage his drinking after his release. But he struggled, and soon began attending daily Alcoholics Anonymous meetings. He has relapsed a few times, but thanks largely to what he calls the “forced intervention” of his encounter with the criminal-justice system, he’s been living his “new life” for a little more than two years. Although sobriety remains a daily project, he feels he has finally gained insight into the reckless and self-destructive behavior that led him to the January 6 insurrection.

These days, he’s working at a restaurant in Concord, New Hampshire. He told me he feels comfortable in chaotic environments, and he’s thinking about looking for a job at a hospital or in mental-health services. Sobriety has changed his political perspective, too. Whereas he once viewed Trump as a bold truth teller, raw and unvarnished, he now sees the president as self-serving. When Trump called for public protests around the time of his indictments, Riddle felt especially played. “And I remember thinking, like, why would he do that? People died at the Capitol riot,” Riddle said. “That was the ‘duh’ moment I had with myself: Well, obviously because he doesn’t care about anybody other than himself, and you’re an idiot for thinking otherwise.”

Last fall, he donated to the Kamala Harris campaign, and voted for her in the election. An irony for him, after Trump’s reelection, is that he could be reliving his 2021 viral popularity—if he were still willing to exchange his version of reality for Trump’s. “One common thing I always hear is, like, ‘Good for you for going down there and expressing your views,’” he told me. “People who say that obviously don’t understand what they’re saying.”

The frustration in his voice was audible. “If I accept this pardon, if I agree to this pardon,” Riddle told me, “that means I disagree with that forced intervention.” Truth has finally collided with the president’s lies. Riddle may be enjoying one last hit of attention over his refusal of a pardon, but after the experience this week of seeing the insurrection’s ringleaders walk free, unrepentant, he is choosing a different path.

The Tech Oligarchy Arrives

The Atlantic

www.theatlantic.com › politics › archive › 2025 › 01 › tech-zuckerberg-trump-inauguration-oligarchy › 681381

On the day of Donald Trump’s 2017 inauguration, a group of his top billionaire donors, including the casino magnate Miriam Adelson and the future Republican National Committee finance chair Todd Ricketts, hosted a small private party, away from the publicly advertised inaugural balls.

It was the sort of event that carried no interest at the time for the Facebook founder Mark Zuckerberg. He greeted Trump’s first presidency by publicly identifying his wife’s parents and his own ancestors with the immigrants targeted by Trump’s early executive orders. “These issues are personal for me,” Zuckerberg wrote in a public letter of concern a week after Trump took office.

But this month, as the same donors made plans for Trump’s second inauguration, Zuckerberg successfully maneuvered to become a co-host of their black-tie event, scheduled for tonight. The party quickly became one of the most sought-after gatherings of the weekend, overwhelming organizers with RSVPs from people who had not received invitations.

Even more striking: Zuckerberg sat in front of Trump’s incoming Cabinet in the Capitol Rotunda at his inauguration—at the personal invitation of Trump himself, according to two people briefed on the plans who, like some other sources interviewed for this story, requested anonymity to describe private conversations. (A spokesperson for Meta declined to comment.)

[Charlie Warzel: We’re all trying to find the guy who did this]

Zuckerberg was not alone. Trump’s inauguration events featured a Silicon Valley smorgasbord, with leaders from Apple, Google, and TikTok in attendance, as well as Amazon’s Jeff Bezos and Tesla’s Elon Musk. Several of the tech moguls also joined a small prayer service this morning at St. John’s Episcopal Church. Later, they blended in with the Trump clan directly behind the incoming president as he officially assumed power just after noon, like honorary family members.

The scene announced a remarkable new dynamic in Washington: Far more so than in his first term, the ultra-wealthy—and tech billionaires in particular—are embracing Trump. And the new president is happy to entertain their courtship, setting up the possibility that Trump’s second turn in the White House could be shaped by person-to-person transactions with business and tech executives—a new kind of American oligarchy.

Eight years ago, Trump landed in Washington in a fit of defiance, denouncing in his inaugural address “the American carnage” wrought by “a small group in our nation’s capital.” Four years later, he left as an outcast, judged responsible for the U.S. Capitol riot and a haphazard attempt to undo the constitutional order. He returns this week with a clean sweep of swing states and the national popular vote, the loyal support of Republicans in Congress, and the financial backing of corporate donors who are expected to help the inaugural committee raise twice what it did in 2017. Organizers of the Women’s March, which stomped on Trump’s 2017 inauguration by sending hundreds of thousands of protesters to the streets, settled for a series of unremarkable Saturday gatherings. The Democratic opposition, which treated Trump’s first term as an existential threat, now lacks an evident strategy or leader.

Like nearly every entity that has tried and failed to bend Trump to its will—his party, his former rivals, his partners in Congress, and his former aides among them—the tech elites largely seem to have decided that they’re better off seeking Trump’s favor.

[Read: ‘If there’s one person who keeps their word, it’s Donald Trump’]   

Just months ago, Trump released a coffee-table photo book that included a pointed rant about Zuckerberg’s $420 million donation in 2020 to fund local election offices during the coronavirus pandemic, an undertaking that Trump called “a true PLOT AGAINST THE PRESIDENT.” “We are watching him closely,” Trump wrote of Zuckerberg, “and if he does anything illegal this time he will spend the rest of his life in prison.”

But since Trump’s victory, Zuckerberg has worked to get himself in the new president’s good graces. The Meta CEO traveled to Mar-a-Lago; added a Trump pal to his corporate board; extolled the importance of “masculine energy” on Joe Rogan’s podcast; abandoned the Meta fact-checking program, which MAGA world had viewed as biased; and personally worked with Trump to try to resolve a 2021 civil lawsuit over Facebook’s decision to ban him from the platform, a case that legal experts once considered frivolous.

Bezos, meanwhile, worried aloud in 2016 that Trump’s behavior “erodes our democracy around the edges” and spent his first term taking fire from the president for the aggressive reporting of The Washington Post, the newspaper that Bezos owns (and where, until recently, we both were reporters). Now Amazon, like Meta, has given $1 million to the 2025 inaugural committee, and the company recently announced it would release a documentary about, and produced by, the first lady, Melania Trump. Even Musk, who spent more than $250 million last year to elect Trump and now is one of his top advisers, called for the aging Trump to “sail into the sunset” as recently as 2022.

“In the first term, everybody was fighting me,” Trump marveled at a mid-December news conference. “In this term, everybody wants to be my friend.”

The sheer quantity of money flowing to, and surrounding, Trump has increased. In his first term, he assembled the wealthiest Cabinet in history; this time, his would-be Cabinet includes more than a dozen billionaires. Sixteen of his appointees come not just from the top one percent, but from the top one-ten-thousandth percent, according to Public Citizen, a nonprofit consumer-advocacy organization. Democrats, too, have long kept their wealthiest donors close, inviting them in on policy discussions and providing special access, but never before have the nation’s wealthiest played such a central role in the formation of a new administration.

As recently as last week, before the inauguration proceedings were moved indoors because of cold weather, a donor adviser got a last-minute offer of $500,000 for four tickets, according to the person who fielded the call and had to gently decline the request. Trump’s 2017 committee raised $107 million, more than twice the 2013 record set by Barack Obama, and spent $104 million. So far this year, the 2025 inaugural committee is expected to raise at least $225 million and spend less than $75 million on the inaugural festivities, according to a person familiar with the plans. At least some of the unspent tens of millions could go to Trump’s presidential library, several people involved with fundraising told us.  

Trump’s first inauguration had all the markings of a hastily arranged bachelor party put on someone else’s credit card. Trump’s company and the 2017 inaugural committee ultimately paid $750,000 to the District of Columbia to settle claims of illegal payments, including allegations of inflated charges to a Washington hotel then owned by Trump. (Neither entity admitted wrongdoing.) This time, the inauguration organizers have been more disciplined, and donors have been eager to reward Trump’s victory.  

“People were prepared, so when he did win, Trump was looking for checks,” a person involved in all of the Trump campaigns and both inaugural events told us. “Once Elon got in there, that was kind of the holy water that allowed all the other tech guys to follow. They all followed each other like cattle.”

What wealthy donors could get in return for their support of Trump remains an open question. Zuckerberg’s, Bezos’s, and Musk’s federal business interests include rocket-ship and cloud-computing contracts, a federal investigation of Tesla’s driver-assistance technology, a pending Federal Trade Commission lawsuit against Meta, and a separate antitrust case against Amazon. Just last week, the Securities and Exchange Commission sued Musk for allegedly failing to disclose his early stake in Twitter, the social-media giant he later took over and renamed X. (A lawyer for Musk has said he did “nothing wrong.”) When Trump promised in his inaugural address to “plant the Stars and Stripes on the planet Mars,” the cameras panned to Musk, whose SpaceX is racing Bezos’s Blue Origin; Musk raised both thumbs and mouthed “Yeah!” as he broke into an ebullient grin.

[Read: He’s no Elon Musk]

Existing federal ethics rules were not designed to address the possibility of the world’s wealthiest people padding the pockets of the first family through television rights or legal settlements. The Trump family’s recently announced cryptocurrency, $TRUMP, creates yet another way for the wealthy to invest directly in an asset to benefit the commander in chief. “There is no enforcement mechanism against the president under these laws,” Trevor Potter, a former general counsel for the late Arizona Senator John McCain’s campaign, told us.

Even as Silicon Valley elites try to ingratiate themselves with the incoming president, some of Trump’s populist supporters are murmuring that the emerging tech oligarchy is diluting the purity of the MAGA base. Steve Bannon, a former adviser to Trump who has clashed in recent weeks with Musk over immigration policy, has fashioned himself as the field general for a fight against the tech bros and their outsize influence on a president eager to cut deals.

“He’s got them on display as ‘I kicked their ass.’ I’m stunned that these nerds don’t get anything to be up there,” Bannon told us last week, referring to the tech leaders appearing in prime camera position at Trump’s inauguration. “It’s like walking into Teddy Roosevelt’s lodge and seeing the mounted heads of all the big game he shot.”

For now, the ragtag populist figures like Bannon who defined Trump’s early years in politics are still celebrating. Roger Stone, the convicted and subsequently pardoned Trump kibitzer, attended inauguration events in his anachronistic morning suit—with plans for evening white tie. The British MP Nigel Farage hosted a party Friday at the Hay-Adams hotel, while former British Prime Minister Boris Johnson managed to get a ticket for the U.S. Capitol Rotunda.

On Thursday, Bannon threw his own party, titled “Novus Ordo Seclorum,” or “A New Order of the Ages,” at Butterworth’s club on Capitol Hill. Drinks included, perhaps predictably, the Covfefe Martini (vodka, Fernet, espresso) and the Im-Peach This (gin, peach, Cocchi Americano). Bannon arrived fashionably late and was followed from the moment he ducked through the door by a mob of iPhone documenters, and even a man with a flashbulb. He received an impromptu line of frenzied well-wishers that one British journalist quipped was “as if for the Queen.”

[Read: The MAGA honeymoon is over]

As seared foie gras and freshly shucked oysters moved through the room, Bannon urged his supporters to “set new lows tonight,” reminding them that once Trump took the oath of office on Monday, “then the real fun happens.”

“You have two to three days to get sober,” he exhorted. “Go for it!”

The tech barons also fanned out through the city in celebration. The next night, across town, Bezos and his fiancée, Lauren Sánchez, dined at Georgetown’s new hot spot, Osteria Mozza, sitting at a window table with leaders of the Post. On Saturday, the Palantir and PayPal co-founder Peter Thiel hosted a party at his Woodley Park mansion; a bow-tied and mop-topped Zuckerberg arrived before the sun had fully set. And yesterday, Trump called Musk up onstage during his pre-inauguration rally inside the Capital One Arena—“C’mere, Elon!” he growled—briefly ceding the spotlight to the Tesla executive and his young son X.

During the 2024 election, many liberals and some conservatives feared that Trump’s second term would usher in a new kind of American autocracy, à la Hungary. But on its first day, at least, Trump’s new administration seems, more than anything else, like an oligarchy—albeit one where the transactions mainly flow one way, at least so far.

“They’re lining up to obey in advance, because they think they’re buying themselves peace of mind,” Ruth Ben-Ghiat, an expert on authoritarianism who has been critical of Trump, told us. But, added Ben-Ghiat, who noted the overlap between autocracy and oligarchy: “They can give that million and everything can be fine—but the minute they displease Trump, he could come after them.”

America Is No Longer the Home of the Free Internet

The Atlantic

www.theatlantic.com › ideas › archive › 2025 › 01 › internet-censorship-tiktok-ban › 681361

Twenty years ago, my day job was researching internet censorship, and my side hustle was advising activist organizations on internet security. I tried to help journalists in China access the unfiltered internet, and helped demonstrators in the Middle East avoid having their online content taken down.

Back then, unfiltered internet meant “the internet as accessed from the United States,” and most censorship-circumvention strategies focused on giving someone in a censored country access to a U.S. internet connection. The easiest way to keep sensitive content online—footage of a protest, for instance—was to upload it to a U.S.-based service such as YouTube. In early 2008, I gave a lecture for digital activists called “The Cute Cat Theory.” The theory was that U.S. platforms used for hosting pictures and videos of cat memes were the best tools for activists because if censorious governments blocked activist content, they would alienate their citizens by banning lots of innocuous content as well.

That was a simpler time. Elon Musk was a mere millionaire, only a few years removed from reportedly overstaying his U.S. student visa (he has denied working here illegally). Mark Zuckerberg was being mocked for wearing anonymous sweatshirts, not a $900,000 wristwatch. And the U.S. was seen as the home of the free, uncensored internet.

That era is now over. When Donald Trump is inaugurated on January 20, videos of his oath of office will flood YouTube and Instagram. But those clips likely won’t circulate on TikTok, at least not any clips posted by U.S. users. In April 2024, President Joe Biden signed a bipartisan bill, the Protecting Americans From Foreign Adversary Controlled Applications Act, designed to force TikTok to sell the Chinese-owned app to a U.S. company or shut down operations in the U.S. by January 19, 2025. Yesterday, the Supreme Court unanimously upheld the law. News outlets have reported that Trump is considering issuing an executive order to delay the ban, leading to speculation that Chinese officials might sell the platform to “first buddy” Musk. (ByteDance, the owner of TikTok, has dismissed such speculation.)

[Read: ‘I won’t touch Instagram’]

Whether or not that happens, this is a depressing moment for anyone who cherishes American protections for speech and access to information. In 1965, while the Cold War shaped the U.S. national-security environment, the Supreme Court, in Lamont v. Postmaster General, determined that the post office had to send people publications that the government claimed were “communist political propaganda,” rather than force recipients to first declare in writing that they wanted to receive this mail. The decision was unanimous, and established the idea that Americans had the right to discover whatever they wanted within “a marketplace of ideas.” As lawyers at the Knight First Amendment Institute argued in an amicus brief supporting TikTok, the level of speech suppression that the U.S. government is demanding now is far more serious, because it would prevent American citizens from accessing information entirely, not just require them to get permission to access that information.

According to the Biden administration and its bipartisan supporters, TikTok is simply too dangerous for impressionable Americans to access. Solicitor General Elizabeth Prelogar’s national-security argument in defense of the ban was that “ByteDance’s ownership and control of TikTok pose an unacceptable threat to national security because that relationship could permit a foreign adversary government to collect intelligence on and manipulate the content received by TikTok’s American users,” though she admitted that “those harms had not yet materialized.” The Supreme Court’s decision explicitly affirms these fears: “Congress has determined that divestiture is necessary to address its well-supported national security concerns regarding TikTok’s data collection practices and relationship with a foreign adversary.”

We don’t yet know how TikTok users in the United States will respond to the ban of a platform used by 170 million Americans, but what happened in India might provide some insights.

My lab at the University of Massachusetts at Amherst studies content on TikTok and YouTube, and a few months ago, we stumbled on some interesting data. In 2016, videos in Hindi represented less than 1 percent of all videos uploaded that year to YouTube. By 2022, more than 10 percent of new YouTube videos were in Hindi. We believe that this huge increase was due not just to broadband improvement and mobile-phone adoption in India, but to the Indian government’s ban of TikTok in June 2020. As we examined Hindi videos uploaded in 2020, we saw clear evidence of an influx of TikTok refugees onto YouTube. Many of the newly posted videos were exactly 15 seconds long, the limit that TikTok put on video recordings until 2017. Others featured TikTok branding at the beginning or end of the video.
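The detection logic described above can be sketched as a simple filter. This is a hypothetical illustration, not the lab’s actual code: the field names (`duration_s`, `has_tiktok_branding`) and the sample records are invented, and the real study presumably used richer signals than the two mentioned here (the exact-15-second length and visible TikTok branding).

```python
def looks_like_tiktok_repost(video: dict) -> bool:
    """Flag a video record that matches the two TikTok-refugee signals:
    an exactly-15-second runtime (TikTok's early recording cap) or
    TikTok branding at the start or end of the clip."""
    return video.get("duration_s") == 15 or video.get("has_tiktok_branding", False)


# Invented sample records standing in for YouTube upload metadata.
uploads = [
    {"id": "a", "duration_s": 15, "has_tiktok_branding": False},
    {"id": "b", "duration_s": 312, "has_tiktok_branding": False},
    {"id": "c", "duration_s": 47, "has_tiktok_branding": True},
]

flagged = [v["id"] for v in uploads if looks_like_tiktok_repost(v)]
print(flagged)  # ['a', 'c']
```

In practice a heuristic like this only dates the migration: videos flagged this way that cluster after June 2020 are the signature of the post-ban influx the researchers describe.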

Like the U.S., India had cited national-security reasons for the ban, and it had a more defensible justification: India and China were then clashing militarily along their shared border. But TikTok was much more important to India than it is to the United States. We estimate that, when India banned TikTok in mid-2020, more than 5 billion videos had been uploaded to the service by Indian users. (Examining some of these videos, we see evidence that TikTok in South Asia might be used more as a videochat service to stay in touch with family and friends than as a platform for wannabe influencers.) Even now, more than four years after the ban, the only countries with more videos uploaded to TikTok than India are Pakistan, Indonesia, and the United States; we estimate that more than a quarter of TikTok-video uploads are from South Asia, while just over 7 percent are from the United States.

When those Indian TikTok creators were forced off the platform, new Indian short-video apps such as Moj and Chingari hoped to capture the wave of users. They were largely unsuccessful—none of these small start-ups has achieved visibility in India to compete with YouTube and Instagram, both well-financed, U.S.-based businesses. In effect, Indian Prime Minister Narendra Modi’s TikTok ban was a subsidy to the U.S. companies Google and Meta. It was also correctly seen as evidence of the Modi government’s retreat from global democratic values and toward a less open society.

Until recently, I’d expected the TikTok ban to have the same result in the U.S.: effectively creating a nationalist subsidy protecting domestic tech providers (who, oddly enough, have been lining up to donate to inaugural parties for the incoming administration). But American TikTok users are a creative bunch, and in the past week, enough of them have migrated to the Chinese social network Xiaohongshu—often translated as “Red Book” or “Red Note” in English—that the app now tops social-media-download charts on Android and iPhone operating systems. Xiaohongshu, initially created as a video travel guide to Hong Kong for mainland-Chinese tourists, has an interface that’s familiar to TikTok users, and Chinese users are welcoming American newcomers with a charming stream of invitations to teach conversational Mandarin or Chinese cooking, and tips on how to avoid censorship on the network.

[Hana Kiros: The internet is TikTok now]

Chinese and American users aren’t likely to share space on Xiaohongshu for long. The Chinese government has generally required service providers whose tools become popular outside China to bifurcate their product offerings for Chinese and other users. Weixin, the popular messaging and microblogging app in China, is a separate platform—WeChat—in the rest of the world. TikTok itself branched off from the domestic-Chinese network Douyin. And even if Beijing, sensing a great PR opportunity, allows TikTok refugees to remain on Xiaohongshu, the same logic that allowed Congress to ban TikTok would presumably apply to any other Chinese-owned company with potential to “collect intelligence on and manipulate” American users’ content.

Although I don’t think this specific rebellion can last, I’m encouraged that American TikTok users realize that banning the popular platform directly contradicts America’s values. If only America’s leaders were so wise.

When I advised internet activists on how to avoid censorship in 2008, I included a section in my presentation called “The China Corollary.” Although most nations could not easily censor social-media platforms without antagonizing their citizens, China was big enough to create its own parallel social-media system that met the needs of most users for entertainment while blocking activists. What I could not have anticipated was that Americans would find themselves fleeing their own censorious government for a Chinese video platform with tight content controls.

Trump might decide to get around the TikTok ban with an executive order stating that the platform is no longer a national-security threat. Or the Trump administration could elect not to enforce the law. Musk, Zuckerberg, or another Trump friend might purchase the platform. But for millions of Americans, the damage is done: The idea of America as a champion of free speech is forever shattered by this shameful ban.

The Scientist vs. the Machine

The Atlantic

www.theatlantic.com › podcasts › archive › 2025 › 01 › ai-scientific-productivity › 681298

Subscribe here: Apple Podcasts | Spotify | YouTube | Overcast | Pocket Casts

People have long worried about robots automating the jobs of truck drivers and restaurant servers. After all, from the invention of the cotton gin to the washing machine, we’re used to an economy where technology transforms low-wage, physically arduous work.

But the past few years have shown that highly educated white-collar workers should be the ones bracing for artificial intelligence to fundamentally transform their—I should probably say our—professions. The angst this has spurred from all corners of white-collar America has been intense, and not without merit. AI has the potential to take over much of our creative life, and the risks to humanity are well documented.

The discourse around AI has focused so squarely on the terrifying risks and potential job losses that I’ve noticed there’s been very little discussion around why so many people are working so hard to create this doom monster in the first place.

On today’s episode of Good on Paper, I’m joined by someone researching what happens when AI enters a workplace. Aidan Toner-Rodgers is a Ph.D. student in economics at MIT and has a working paper out on what happened to scientific discovery (and the jobs of scientists) when an R&D lab at a U.S. firm introduced artificial intelligence to aid in the discovery of new materials.

Materials science is an area of research where we can see the direct applications of scientific innovation. Materials scientists were the ones who developed graphene, thus transforming “numerous products ranging from batteries to desalination filters” and photovoltaic structures that “have enhanced solar panel efficiency, driving the steep decline in renewable energy costs,” Toner-Rodgers writes. There are also countless more applications in fields such as medicine and industrial manufacturing.

New discoveries in this field have the potential to transform human life, making us happier, healthier, and richer. And when scientists at this company were required to integrate an AI assistant in generating new ideas, they became more productive, discovering 44 percent more materials.

“I think a big takeaway from economic-growth models is that in the long run, really, productivity is the key driver of improvements in living standards and in health,” Toner-Rodgers argued when we spoke. “So I think all the big improvements in living standards we’ve seen over the last 250 years or so really are driven fundamentally by improvements in productivity. And those come, really, from advances in science and innovation driving new technologies.”

The following is a transcript of the episode:

[Music]

Jerusalem Demsas: What is the point of artificial intelligence? Why, when there is so much concern about the potential consequences, are we hurtling towards a technology that could be a mass job killer? Why, when we face so many competing energy and land-use needs, are we devoting ever more resources to data centers for AI?

There are good reasons to worry about its negative consequences, and the media has a bias toward negativity. As a result, we don’t tend to explore these questions.

My name’s Jerusalem Demsas. I’m a staff writer at The Atlantic, and this is Good on Paper, a policy show that questions what we really know about popular narratives.

Today’s episode is about one of the best applications of AI: helping push the boundaries of science forward to make life better for billions of people. This isn’t a Pollyannaish conversation that skates past concerns with AI, but I do want to spend some time investigating the ways that this technology could improve our lives before we get into the business of complicating it.

In some ways, this conversation isn’t just about AI. It’s about technological progress and the trade-offs that come with it. Are the productivity benefits of AI worth all the downstream consequences? How can we know?

My guest today is Aidan Toner-Rodgers. He’s a Ph.D. student in economics at MIT with a fascinating new working paper that shows what happens when scientists are required to begin using AI in their work.

Aidan, welcome to the show!

Aidan Toner-Rodgers: Thanks so much for having me.

Demsas: You have a really great paper that I’m interested in talking to you about, but first I want us to sort of set the stage here a bit about productivity. So productivity is something that economists talk about a lot, and I think it can be unclear to people why it’s so important.

So why do economists care about productivity?

Toner-Rodgers: Yeah, so I think a big takeaway from economic-growth models is that in the long run, really, productivity is the key driver of improvements in living standards and in health. So I think all the big improvements in living standards we’ve seen over the last, like, 250 years or so really are driven fundamentally by improvements in productivity.

And those come, really, from advances in science and innovation driving new technologies. So when economists think about what are the most important drivers of living standards, it really is kind of coming back to productivity.

Demsas: Yeah, and I think that sometimes it’s useful to think about ways in which society gets better, right?

Like, most increases in inputs—so if you increase labor, it means you have less leisure time. And if you increase investments in capital, that means you’re lowering your current consumption. So you’re moving away from buying things that you may want in order to invest in the future, and if you’re increasing material inputs, that reduces natural resources.

So the idea is: How can we get more efficient? And one stat that I like to point to is that “productivity increases have enabled the U.S. business sector to produce nine times more goods and services since 1947 with a [pretty] small increase in hours worked.” So we’re just getting a lot more stuff without having to kill ourselves working to get it. And that can be, you know, just clothes and things like that, but that can also be services. Like now, because it’s really easy to produce a T-shirt, you need fewer people making T-shirts, and they can teach yoga or do other things. And so I think that’s really important to set the stage here.

But I want to ask you, because your paper is about AI, about this bet that I wonder which side you take on. There’s this bet—I don’t know if you’ve heard about it. It’s between Robert Gordon and Erik Brynjolfsson. Have you heard about this bet?

Toner-Rodgers: I don’t think so, actually.

Demsas: Okay, yeah. It’s basically a $400 bet to GiveWell, so I don’t know if it really has the impact of me making people put their money where their mouth is.

But Robert Gordon is an economist. He’s kind of a longtime skeptic of digital technology’s ability to match the impact of things like electricity or the internal combustion engine. And his argument, basically, is just that he doesn’t expect AI to have a significant impact on productivity. And he argues that because, you know—he points at things like how the U.S. stock of robots has doubled in the past decade, but you haven’t seen this massive revolution in production, productivity growth, and manufacturing. And he also says that AI is really nothing new. You know, we’ve had human customer-service representatives replaced by digital systems without much to show for it. And then he also says things like a lot of economic activity that is relevant to people’s lives, like home construction, isn’t really going to be impacted by AI.

So it’s one side of the debate. It’s kind of more pessimistic on AI. And the other is kind of represented by Erik Brynjolfsson—he’s more of a techno-optimist—and he argues that recent breakthroughs in machine learning will boost productivity in places like biotech, medicine, energy, finance, but it’ll take a few years to show up in the official statistics, because organizations need time to adjust.

Again, they’re only betting $400, so I don’t know if they’re putting their money where their mouth is, but whose side do you kind of take in this debate?

Toner-Rodgers: I mean, I think I’m probably more on Erik’s side. So Robert Gordon’s research, I think, has done a great job showing that over the past 40 years or so there’s been this big stagnation, kind of, in innovation in the physical world.

But I think something I’m really excited about in AI is that all these advances in digital technologies, computing power, and algorithms maybe can now, finally, have this impact kind of back to physical infrastructure and physical things in the world. So I think, actually, materials science is a great example of this, where we have these kinds of new AI algorithms that can maybe come up with new important materials that can then be used in physical things.

Because I think a lot of the advances in information technology so far haven’t had big productivity improvements, because they were kind of confined just to the digital world, but now maybe we can use these breakthroughs to actually create new things in the world. And I do think the point—that there’s a lot of constraints to building things, and a lot of the barriers to productivity growth are not, like, we don’t know how to do things, but there’s just big either regulatory or other barriers to building things in the world—is very important.

And I think that’s why the people who are super optimistic about AI’s impact—I think I’m a bit more pessimistic than them because of these kind of bottlenecks in the world. But I’m very excited about things—like biomedicine, drug discovery, or materials science—where we can maybe create new actual things with AI.

Demsas: So materials science, I think, is the place where your research really is focused. So can you just set the stage for us? What type of company were you looking at, and what kind of work are the employees doing?

Toner-Rodgers: Yeah, so the setting of my paper is the R&D lab of a large U.S. firm which focuses on materials discovery. So this involves coming up with new materials that are then incorporated into products. And so this lab focuses on applications in areas like healthcare, optics, or industrial manufacturing.

And so many of the scientists in this lab hold Ph.D.s or other advanced degrees in areas like chemical engineering or materials science or physics. And what they’re doing is trying to come up with materials that have useful properties and then incorporate these into products that are then going to be sold to consumers or other firms.

Demsas: And help us set—what do you mean by materials? Like, what are we trying to find here?

Toner-Rodgers: So in some sense, everything in every product uses materials in important ways. Like, one estimate I have in the paper: Someone was kind of looking at all new technologies and products—How important were new materials to these?—and he found that two-thirds of new technologies really relied on some advance in discovering or manufacturing at scale some new material. So this could be anything from the glass in your iPhone, to the metals in semiconductors, to different kinds of methods for drug delivery. So a lot of the technologies in the world really are relying on new materials.

Demsas: Yeah. I mean, you note in your paper that materials science is kind of the unsung hero of technological progress. And when you start to think about it, it really just adds up. Like, basically every single thing that you could care about, it ends up boiling down to specific materials that you want to find—so whether it’s computing or it’s biomedical innovation, like you said, but also just stuff that we’ve been surprised by recently, like the lowering costs of solar panels. Like, new photovoltaic structures being found is helping drive down the cost of those renewables.

So all these different things—and I think it’s funny, because, I mean, we are an increasingly service-sector-based economy. So I think that we’re kind of abstracted away from some of the materials’ impact on our lives, because we just don’t really see it in our day-to-day. But it’s just as important. I think the pandemic really showed this when we were missing semiconductor chips.

Toner-Rodgers: Yeah, maybe an economics way to put this is that materials science is very central in the innovation network. So there’s been some papers looking at which other fields rely on research from materials science. And it’s really one that’s very central in this network, where things like biomedicine to manufacturing are really relying on new discoveries in materials science. And so kind of focusing on this is a key driver of growth in a lot of areas.

Demsas: And so the scientists in this firm—can you just walk us through what they’re actually doing? Like, what is the process of their work? And then we can get into how AI changed it.

Toner-Rodgers: Sure. So a lot of what they’re doing is basically coming up with ideas, designs for new materials. And then because materials discovery is very hard, many, many of these materials don’t end up having the properties that they hope they do or don’t yield a viable, stable compound. So a lot of what they’re doing is running tests, either in silico (that is, simulations) or actually making these materials and testing their properties, to see which ones are actually going to be helpful and can later be incorporated into products.

So their time is split. Maybe, like, 40 percent or so is on this initial idea-generation phase, and then the rest is testing these things and seeing which materials are actually viable.

Demsas: When I was reading your paper, I analogized it to coming up with recipes in a kitchen. And you can have a test kitchen or something like that, where basically, if your goal is to come up with a bunch of new recipes for food or for baking or whatever, you may come up with some on paper, and then you’re like, Okay, well, I have to pick which one is potentially going to be a really good recipe, and then you would, you know, test it. And probably you don’t do a simulation. You probably just go make the donut or whatever it is. Is that kind of a good analogy for this?

Toner-Rodgers: Yeah, I think it is, and also just in the sense that we know a lot about the ingredients or sets of elements and their bonds, and we know a lot about that at a small scale, but it becomes very hard to predict what a material’s property will be as these materials become bigger and more complicated. And so even though we know a lot in some small sense, actually prediction gets pretty hard.

Demsas: So AI gets introduced at this company because they want to figure out if that can help their scientists be more productive at coming up with new materials. At what point in the process is AI coming in? What is it actually doing? How does it change the scientists’ jobs?

Toner-Rodgers: Yeah, so AI’s role is really in this initial idea-generation phase. And so how it works is that scientists are going to input to the tool some set of desired properties that they want a material to possess. So in this setting, this is really driven by commercial application because this is a corporate R&D lab. So they want to come up with something that’s going to be used in a product. And then they’re going to input these desired properties to the AI tool, which is then going to generate a large set of suggested compounds that are predicted by the AI to possess these properties.

And so before, scientists would have been coming up with these material designs themselves. And now this part is automated by the tool.

Demsas: So it’s like, Now I’m having an AI tool give me a bunch of potential donut recipes instead of me coming up with them myself.

Toner-Rodgers: Exactly. And I think it’s important to note that this whole prediction process is very hard. And so even though I’m going to find pretty large improvements from the AI tool on average, many, many of its suggestions are just not that good and either aren’t going to yield a stable compound or aren’t going to actually have the other properties that you wanted to begin with.

Demsas: Yeah. And so before we get into your results, which are really shocking to me actually, it’s kind of cool—the company set up a natural experiment, basically, for you. Can you walk us through what they did and how they randomized researchers?

Toner-Rodgers: Yeah. So I think the lab had just a lot of uncertainty going in about whether this tool was going to be actually helpful. Like, you could have thought, Maybe it’s going to generate a lot of stuff, and it’s all bad, or it’s going to kind of slow people down as they have to sort through all these AI suggestions.

So I think they just had a lot of questions about: Is this tool going to work, and are we going to get actually helpful compounds? So what they did, instead of just rolling it out all at once, was to do three waves of adoption where they randomly assigned teams of scientists to waves. And so this allows me, as a researcher, to look at treated and not-yet-treated scientists and identify the effects of the tool.

Demsas: And did they control for different things? Like, did they control for, you know, what types of research they were working on or how many years of experience they had?

Toner-Rodgers: Yeah, so there’s a lot of balance between waves because of the randomization on what exactly these scientists are working on, which types of technologies and materials, as well as just the team composition in terms of their areas of expertise and tenure in the lab and so on.

Demsas: So now I want to turn to the results. What did you find?

Toner-Rodgers: So my first result is just looking, on average, at how this tool impacted both the discovery of new materials as well as downstream innovation in terms of patent filings and product prototypes. So I find that researchers with access to the AI tool discover 44 percent more materials, and then this results in a 39 percent increase in patent filings and then a 17 percent rise in downstream product innovation, which I measure using the creation of new product prototypes that incorporate those materials.

Demsas: These are, like, massive numbers.

Toner-Rodgers: Yeah, I think they’re pretty big. And also, I think it’s helpful to kind of step back and look at the underlying rate of productivity growth in terms of the output of these researchers. So I look back at the last five years before the tool was introduced, and output per researcher had actually declined over this period. So these are huge numbers relative to the baseline rate of improvement.

Demsas: So it’s interesting—well, I guess first: How? Like, why are people becoming more productive here?

Toner-Rodgers: I think there’s two things. So one is just that the tool is pretty good at coming up with new compounds. So being able to train a model on a huge set of existing compounds is able to give a lot of good suggestions.

And then second: Not having to do that compound design part of the process themselves frees scientists to spend more time on those second two categories, kind of deciding which materials to test and then actually going and testing their properties.

Demsas: It’s interesting when I was looking at your results because you’re able to kind of look at, you know, one month after, four months after the adoption of this new AI tool, how it changes things. Things look kind of grim in the short run, right? Like, four months after AI adoption, the number of new materials actually drops. And it’s not until eight months after that you see a significant increase in new materials. And that’s around when you see the patent filings increase. And it’s not until 20 months after that you actually see it show up in product prototypes.

And, you know, part of the problem of trying to figure out if new technology like AI is having a big impact is that it might take a while to show up in statistics. Is that why you think maybe we’re not seeing a massive jump in productivity right now in the U.S., despite the rollout of a ton of new machine-learning tools?

Toner-Rodgers: Yeah, I think that’s partly true. Like, you definitely need some forms of organizational adaptation or people learning to actually utilize these tools well. So part of why there’s this lag in the results is just that materials discovery takes a while. So it takes a little bit to actually go and kind of synthesize these compounds and then go and find their properties.

But another thing I find is that in the first couple months after the tool’s introduction, scientists are very bad, across the board, at determining which of the AI suggestions are good and which are bad. And this is part of the reason we don’t see effects right away.

Demsas: So it’s like your job has changed significantly, and you just need time to adjust to that.

Toner-Rodgers: Yeah, totally.

Demsas: So I want to ask you about material quality, though, because what you’re measuring, largely, is the number of materials made. But has the quality of the materials improved or declined, and how would we know?

Toner-Rodgers: So I think that’s a key concern when you’re doing these things: We don’t only care about how many new discoveries we’re getting but also what they are. So a very nice thing about my setting and materials science, in general, is that there are direct measures of quality in terms of the properties of these compounds. And in particular, at the beginning of the discovery phase, scientists define a set of target properties that they want materials to possess.

And so I can compare those target properties to the measured properties of materials that are actually created. And so when I do this, I find that, in fact, quality increases in the treatment group, which is showing that we’re not actually having this compromised quality as a result of faster discovery.

Demsas: So there’s this joke that I was looking up, and apparently Wikipedia tells me it’s attributed to this character from Muslim folklore called Nasreddin, but I could not independently verify this. Most people have probably heard some version of this. It goes: A policeman sees a drunk man searching for his keys under a streetlight, and he tries to help him find them. They look for them for a bit of time, and then he’s like, Are you sure you dropped them here? And the drunk guy is like, No, I lost them in a park somewhere else. The policeman is kind of incredulous; he’s like, Why are you looking for them here? And the drunk guy goes, This is where the light is.

And this has been, you know, referred to by a lot of researchers as the streetlight effect, right? So it’s a phenomenon where people tend to work where the light is, on the easiest problems, even if those aren’t the ones that are actually likely to bear the most fruit. Do you think that AI helps us avoid the streetlight effect, or does it exacerbate the problem?

Toner-Rodgers: So I think talking to people before this project, I would have guessed that it would exacerbate the problem. And the reason is that the tool is trained on a huge set of existing compounds. So you might expect that the things it suggests are going to be just very similar to what we already know. So you might think that because of that, the streetlight effect is going to get worse. We’re not going to come up with the best things but rather just things that look very similar to what we already know.

And I think, surprisingly to me, I find that, in my setting, this is not the case. And so to test that, I measure novelty at each stage of R&D. So first I look at the novelty of the new materials themselves. And to do that, I look at their chemical structures—so the sets of atoms in a material, as well as how they’re arranged geometrically. And I can compare this to existing compounds and see, like, Are we creating things that look very similar to existing materials, or are they very novel?

So on this measure, AI decreases average material similarity by 0.4 standard deviations. So these things are becoming more novel. And it also increases the share of materials that are highly distinct—which I define as being in the bottom quartile of the similarity distribution—by four percentage points. So it seems like, both on average and in terms of coming up with highly distinct things, we’re getting more.

Demsas: This is kind of surprising to me, right? There’s a paper by some researchers at NYU and Tel Aviv University called “The Impact of Large Language Models on Open-Source Innovation,” and they sort of raised this question about whether AI has an asymmetric impact on outside-the-box thinking and inside-the-box thinking. And you know, the thing is that most AI systems are evaluated on tasks with well-defined solutions, rather than open-ended exploration. And, you know, models are predicting the most likely next response. Like, what’s happening with ChatGPT is it’s just predicting what the next word is going to be. Or that’s what most of these systems are trying to do. And they’re trained on this corpus of existing stuff, and it’s not like they’re independent minds.

And so they kind of theorize that, you know, AI might be good at finding answers to questions that have right answers or ones where there’s clearly defined evaluation metrics. But can it really push the bounds of human understanding, and does our reliance on it really reduce innovation in the long term? So I mean, this seems to be a really big problem in the field of AI, and I wonder: How confident are you that your findings are really pushing against this? Or is it kind of like, maybe in the short term, there’s some low-hanging fruit that looks really novel, and in the long term, you’re not really going to have that?

Toner-Rodgers: Yeah, so I think one drawback of the measurements I have is that I can see that, on average, novelty increases, but what I can’t see is whether the likelihood of coming up with really truly revolutionary discoveries has changed. And so if you think of science as being driven, really, by these far-right-tail breakthroughs, you’re just not going to see much of these in your data. This has been an issue highlighted by Michael Nielsen in some essays that I like a lot.

And so one kind of thing you might be worried about is, Well, we got, on average, more novel things, but maybe these very revolutionary discoveries have a lower probability of being discovered by the AI, and that in the long term this is not a good trade-off. And because you’re just never going to see very many of these right-tail discoveries in your data, you just can’t say much about this using these types of methods.

Demsas: I mean, how confident, then, are you that we can even test whether this is happening?

Toner-Rodgers: Yeah, I think one answer is that we’ll just need some time to see, like, do these new materials open up new avenues for research? Like, are there other materials that are going to be built on these new ideas that the AI generated? But one thing I’d say is just that I think a lot of people would have said beforehand that, even on average, I expect novelty to go down. And the fact that it went up, I think, does push back somewhat against the view that these things are going to be bad for novelty.

Demsas: And then I guess, kind of on this question of generalizability to other fields, like, materials science is a place, of course, where you can measure productivity pretty cleanly. Like, you can see what the compounds are. You can see what people are trying to look for. A lot of fields, even in science, are not like this. It’s not super easy to measure exactly what you’re trying to find in them, and innovation can have spurts and stops for long periods of time, even if a lot of work is happening. So I guess, do you expect AI to be as helpful in fields that look a lot less like materials science?

Toner-Rodgers: So I think in the short run, I would say probably not, right? I think there’s areas where it does look a lot like this, like things like drug discovery, but then there’s a lot of areas where it doesn’t look like this at all. I would say, I think kind of fundamentally, this comes down to how much of science is about prediction versus maybe coming up with new theories or something like that. And I think maybe I’ve been surprised over the last several years how many parts of science, at least in part, can have big impacts from AI, right?

So we see in things like math, where maybe it really feels like it’s not a prediction problem at all, like doing a proof, but we see things like large language models and other more specialized tools really being able to make progress in these areas. And I think they’re not at the frontier of research by any means, but I think we’ve seen huge improvements.

So this is absolutely an open question how much these tools can generalize to other fields and come up with new discoveries more broadly. But I would say that betting against deep learning has not had a great track record in recent years.

Demsas: Yeah, fair.

[Music]

After the break: AI doesn’t benefit everyone equally, even when we’re talking about brilliant scientists.

[Break]

Demsas: I want to ask you about the distributional impacts. I think this is probably the most pessimistic, concerning part of your paper. You find that the bottom third of researchers see minimal gains to productivity, while the top 10 percent have their productivity increase by 81 percent. Can you talk through how you’re measuring the sort of productivity of these researchers and this finding, in particular?

Toner-Rodgers: Yeah. So first I kind of just look at scientists’ discoveries in the two years before the tool was introduced. And there’s a fair amount of heterogeneity across scientists and their rate of discovery. And I do some tests showing that these are kind of correlated over time, so it’s not like some scientists are just particularly lucky. And, instead, there do seem to be these kinds of persistent productivity differences across scientists. And then I just look at each decile of initial productivity: How much does those scientists’ output change once the tool is introduced? And we see these just massive gains at the high end. And at the low end, on average, they do see some improvement, maybe 10 percent or so, but nowhere near as much as the kind of initially high-productivity scientists.

Demsas: Why? Like, at what stage are the low-productivity scientists getting caught up? Because, you know, if this tool is just giving them a bunch of potential recipes for new materials, are they just worse at selecting which ones to test, or what’s happening?

Toner-Rodgers: Yeah, so I think the key mechanism that I identify in the paper is that it’s really this ability to discern between the AI suggestions that are going to be actually yielding a compound that’s helpful versus not. So I think just the vast majority of AI suggestions are bad. They’re not going to yield a stable compound, or it’s not going to have desirable properties. And so because actually synthesizing and testing these things is very costly, being able to determine the good from the bad is very important in this setting. And I find that it’s exactly these initially high-performing scientists that are good at doing this. And so the lower-performing scientists spend a lot of time testing false positives, while these high-ability ones are able to kind of pick out the good suggestions and see their productivity improve a lot.

Demsas: But lower-performing scientists aren’t getting worse at their jobs, right? They’re just not really helped by the tool.

Toner-Rodgers: Yeah, that’s true. But I think it’s worth saying that it’s not like they’re not using the tool. So it really is that their research process changed a lot, but because their discernment is not great, it ended up being kind of a similar productivity level to before.

Demsas: And were you able to observe this inequality over time? Was it stagnant? Did it widen? Did it decrease? Was there learning that you were able to see happen with less-productive researchers?

Toner-Rodgers: Yeah. So I think something very interesting is, like, if I look in the first five months after the tool was introduced, across the productivity distribution, scientists are pretty bad at this discernment. So all of them are kind of doing something that looks like testing at random. They’re not really able to pick out the best AI suggestions. But as we look further on, scientists in the top quartile of initial productivity do seem to start being able to prioritize the best ones, while scientists in the bottom quartile show basically no improvement at all. And so I think this is pretty striking. And there’s just something about these scientists that’s allowing some to learn and some to see no improvement.

Demsas: And how long were you able to observe this for? Like, is it possible that maybe they just needed more time?

Toner-Rodgers: Yeah, so I think I see, like, two years of post-treatment observations. So in that time, I don’t see improvement. I think it’s possible either they need more time, or maybe they need some sort of training to be able to learn to do this better. So I think one question: Is this something fundamental about these scientists that’s not allowing them to do this? Or is there some form of either training or different kind of hiring characteristics the firm could look at to identify scientists that are good at this task?

Demsas: So were you surprised by this finding? After reading your paper, our CEO here at The Atlantic, Nicholas Thompson—he pointed out that in studies of call centers, the opposite is often true. For instance, the guy we mentioned earlier, Erik Brynjolfsson, who’s kind of a techno-optimist, and two of his co-authors recently put out a working paper that looks at over 5,000 customer-service agents and found that AI increased worker productivity, measured as issues resolved per hour, by 14 percent, with less-experienced and lower-skilled workers improving the speed and quality of their output, while the most experienced and the highest skilled saw only small gains. So I guess, looking at the field, in general, is it strange that you’re seeing the biggest impact happening with the most-skilled people? Should we expect the opposite?

Toner-Rodgers: Yeah, so I think a lot of the early results on AI have found that result that you just mentioned, where the productivity kind of compresses, and it’s these lower-performing people that benefit the most. And I think in that call-center paper, for example, one thing that’s going on is just that the top performers are already maybe nearly as good as you’re going to get at being a call-center person. Like, there’s kind of just a cap on how well you can do in this job.

Demsas: You can’t resolve an issue every second. You actually have to have a conversation.

Toner-Rodgers: Right. You kind of have to do it. And they’re maybe close to the productivity frontier in that setting. So that’s one thing.

And I think in materials science, this is just not the case at all. Like, this is just super hard, and even very expert scientists struggle to come up with things. That’s one thing. And then I think the second thing is that in the call-center setting, AI is going to give you some suggestions of what to say to your customer. And it’s probably not that hard to evaluate whether that suggestion is good or bad. Like, you kind of read the text and, like, All right, I’m gonna say this.

And in materials science, that’s not the case: you’re getting some new compound, and it’s very hard to tell if this thing is good or bad. Many, many of them are bad. And so this kind of judgment step, where you’re deciding whether to trust the suggestion or not, is very important. And I think in a lot of the settings where we’ve seen productivity compression, this step is just not there at all, and you can kind of use the AI suggestion out of the box.

Demsas: So do you think a good heuristic is if AI is being applied to a job where there’s a right way to do things that we kind of basically know how to do, or there’s very little sort of experimentation or imagination or creativity necessary to do that job, that you will see the lower-skilled, the less-experienced people gain the most? And then when it’s the opposite, when a lot of creativity is needed, high-skilled people are going to get the most out of AI?

Toner-Rodgers: Yeah, I think that sounds true to me. And I think maybe one way I’d put it is that it’s something about the variation in the quality of the AI’s output that’s very important. So even in materials science, it’s possible that in, say, three years or something, the AI could just be incredibly good, like, 90 percent of its suggestions are awesome, and then you’re not going to see this effect where this judgment step is very important.

So I think it really depends on the quality of the AI output relative to your goal. And if there’s a lot of variation, and it’s hard to tell the good suggestions from the bad, that seems to be the type of setting where we’re seeing the top performers benefit the most.

Demsas: And I assume that with this tool at this company, like, when they come up with successful materials, they’re feeding that information back into the model. Did you observe that the tool was getting better at providing more high-quality suggestions over time?

Toner-Rodgers: Yeah, so they’re definitely doing that. There’s definitely some reinforcement learning with the actual tests. Like, I think over this period, I don’t see huge results like that. I think, relative to the amount of data it was trained on initially and the previous test results that went into the first version of the model, it’s just not that much data. But I think as these things are adopted at scale, we could absolutely see something like that.

Demsas: If that sort of reinforcement learning happens, do you think that that increases the likelihood that AI kind of pushes us down the same sorts of paths? Like, so you get kind of path dependent because you’re basically telling the model, Oh, good job. You did really good on these things, and then it becomes trained to sort of do those sorts of things over and over, and it gets less creative over time?

Toner-Rodgers: Yeah, I think that is definitely a concern. And I think something that people are thinking about is whether there are ways to reward novel output, per se. Because I think in these settings, one thing that’s helpful about novel output, even if it’s not actually a good compound, is that you learn about new areas of the design space. And even getting a result that’s very novel and not good is pretty helpful information. So I think rewarding the model for novelty, per se, is maybe one kind of avenue for fixing that problem.
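
One way to picture a novelty reward of the kind he mentions is a bonus proportional to a candidate’s distance from compounds already tested. The reward function and its weighting here are hypothetical, not anything the firm actually uses:

```python
import numpy as np

# Sketch of a novelty-augmented reward (an assumption, not the firm's actual
# objective): reward a candidate both for measured quality and for distance
# from compounds already tested, so the model keeps exploring the design space.
def reward(candidate, tested, quality, novelty_weight=0.5):
    if len(tested) == 0:
        novelty = 1.0  # nothing tested yet: everything is novel
    else:
        dists = np.linalg.norm(np.asarray(tested) - candidate, axis=1)
        novelty = dists.min()          # distance to nearest tested compound
    return quality + novelty_weight * novelty

# Two candidates of equal quality in a 2-D design space: one near prior
# tests, one far away. The distant one earns a bigger exploration bonus.
tested = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
near = reward(np.array([0.1, 0.0]), tested, quality=0.3)
far = reward(np.array([5.0, 5.0]), tested, quality=0.3)
print(near < far)  # True
```

The design choice is the trade-off in `novelty_weight`: set it too high and the model chases strangeness over quality; set it to zero and you get exactly the path dependence described above.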

Demsas: So this paper and this field, in general, kind of reminds me of some of the findings in the remote-work space. We had Natalia Emanuel from the New York Fed on the show, actually on our very first, inaugural episode. And you know, we talked about her research on remote work, and one finding that she has is that more-senior people have higher gains in productivity when they’re able to go remote, because they stop having to mentor young people, and that is a drain on their productivity in person. You’re having someone younger than you kind of ask you questions, interrupt your day and, like—I’m not saying they hate the job—but that takes away from your ability to just work and not have to focus on other things.

And I wonder about AI becoming the sort of “bouncing off” buddy of scientists. Rather than, like, turning to your less-productive lab partner and just kind of tossing out ideas or talking, you’re sort of engaging with this AI tool, and that’s what you’re using to sort of figure out new methods and materials. Does that change science to become less collaborative with human peers, and does that have those knock-on harms, where maybe these most-productive scientists are getting better, but the less-productive scientists aren’t able to actually get the learning necessary to improve their own productivity?

Toner-Rodgers: Yeah, I think that’s super interesting. And I think a general question about these results is, like: What does this look like in the longer term?

I think something that might absolutely be true is: These people who are very good at judgment might have gotten good at judgment by designing the materials themselves in the past, and that’s kind of where they got that expertise. But going forward, if the AI is just used, maybe new scientists that enter the firm never get that experience and never have the ability to develop that judgment. And so that’s one reason you could see different effects in the long run.

In terms of the specific question of collaboration, I think that’s something super interesting. I don’t have, really, evidence on that in the paper, because I don’t see good data on how much scientists are communicating with each other. But something I’m very interested in is: We have some scientists that are good at judgment. Like, could they teach whatever that skill is to the people who are worse? And I think one way to get at this, which I haven’t done yet, is: If you have a teammate who’s very good at this task, do you somehow learn, over time, from them? And I think that would be very interesting to look at.

Demsas: And you mentioned, like, how someone becomes a high-productivity scientist, and that potentially requires you doing this on your own. And I wonder whether companies will have the incentive at all to invest in this long-term training when there are these sorts of short- and even medium-run, huge benefits they could get. I mean, you’re talking about massive increases in patents and new technologies they’re able to operationalize and even commercialize. And if that’s the case, even if everyone knows that there’s this long-term cost to science and to scientists, who is actually incentivized to make sure this training happens until we’re already kind of in a bad place where a lot of technology has stagnated?

Toner-Rodgers: Yeah, I think that makes a lot of sense. Like, there’s kind of a collective-action problem where you don’t want to be the one that’s doing all the training in the short run while all your competitors are, like, coming out with all these amazing materials and products.

Demsas: And then poaching all your people.

Toner-Rodgers: Exactly. I think that’s definitely a concern. But also more generally, I do kind of have some confidence that organizations are going to be able to adapt to these tools and find new ways to either train scientists for these things, kind of as they’re using them, or be able to, in the selection process for new employees, find predictors of being good at this new task. Because, in some sense, what we’re saying is that these new technologies are changing the skills required to make scientific discoveries, and I think we’ve seen a long history of technological progress that’s done exactly that—like, changed the returns to different skills—and firms have adjusted to that.

Demsas: What I want to ask you about next is the survey you did on the scientists’ job satisfaction. Can you tell us about that survey?

Toner-Rodgers: Yeah. So the goal of the survey was just to see both how scientists use the tool and then whether they liked it—how did this impact their job satisfaction?

And so after the whole experiment was completed, I just conducted a survey of all the lab scientists. About half answered. And one thing I found is that, basically across the board, scientists were fairly unhappy with the changes in the content of their work brought on by AI. So what they say is that they found a lot of enjoyment from this process of coming up with ideas for compounds themselves, and when this was automated, their job became a lot less enjoyable. So they say, like, My job became less creative, and some of the key skills that I’d built over time, I’m no longer getting to use.

And I think one thing that’s very striking is this is true both for the scientists that saw huge productivity improvements from AI as well as the lower performers. And so we really see that it’s not that dependent on productivity. I also ask, kind of, Well, you’re also getting more productive. Does this somewhat offset your dissatisfaction with the tasks you’re doing at work? And it does somewhat. But overall, I find that 82 percent of scientists report a kind of net reduction in job satisfaction.

Demsas: I mean, that’s kind of depressing, right? Obviously, if you’re told, like, Oh, your work is having a big impact on the world and maybe making life better for people who are sick or who need renewable energy, or whatever it is, that can feel good. But if your day-to-day just sucks, you can imagine there’s gonna be some attrition, right?

Toner-Rodgers: Yeah, absolutely. Because yeah—one thing sometimes people say when they hear this result is, like, Well, scientific discovery is very important. Maybe these new materials are gonna be used by millions of people. Why do we really care about these scientists and how much they’re enjoying their job? But I really think it could have important implications for who chooses to go into these fields and the overall kind of direction of scientific progress. So I think it’s very important to think about these questions of well-being at the subjective, individual level for that reason.

Demsas: I feel like it’s really difficult for me to kind of weigh out what actually happens in the long term here, because I could imagine that the types of scientists who went into these fields were selected for people who really, really enjoyed the creativity aspect of figuring out new materials. Whether or not they’re productive at doing that, like, that’s just the kind of thing you’re selecting for.

And I would analogize it to someone who’s really excited about coming up with new recipes. And I’m someone who—I don’t like coming up with new recipes, but some of my favorite recipes are ones where I saw a New York Times Cooking recipe and then changed some things about it. And as I’ve cooked it a bunch of times, I’ve tweaked some things, and I’ve come up with something that’s sort of my own, built on something already existing. And I can imagine there are a lot of people like that and that the skill of discernment does not necessarily correlate with the skill of loving to be creative.

So you could see shifts happening in the field, right, where the types of people who go into materials science change, and these scientists go do something else where they’re able to be more creative. And you mentioned that a lot of them are thinking about taking on new skills. How do you think that all kind of shakes out?

Toner-Rodgers: This really maybe comes back to the question of training. So I think a lot of these people’s complaints were like, Look—I built up all this expertise for one thing, and now I don’t get to do that thing anymore. And you could think that if we now start training people for this slightly different task, which also requires a lot of expertise and judgment, that that also is fulfilling. And whether that’s true in the long run, I think I’m not sure.

So one analogy that someone said to me is, like, Well, you’re a Ph.D. student. Imagine if, instead of writing papers, you just did referee reports all the time.

Demsas: Yeah. And sorry—can you explain what a referee report is?

Toner-Rodgers: It’s like you’re looking at someone else’s research and saying, like, It’s good, or, It has these problems.

And that doesn’t sound awesome. Like, it definitely takes a lot of expertise to do a referee report, but it’s not why you got into this—like, you do want to come up with ideas. And so I think I’m very uncertain how this is going to all shake out. I do think that part of it really was, like, I got trained to do a thing, and now I don’t get to do it anymore. And I think that part will go away somewhat, but whether this is just fundamentally a worse job, I think it definitely could be.

Demsas: It’s interesting, the way in which we kind of have always thought of automation as disrupting the jobs of people with less-well-compensated skills—so, like, manufacturing jobs, or, you know, now your job is shifting a lot if you’re someone who works at a restaurant. Now robots are doing some of that work. And you know, there’s just been this kind of pejorative, like, Learn to code! sort of response to some of those people.

And it’s interesting to see that, like, a lot of generative AI is actually really impacting the fields of higher-income individuals, like people who are working in writing-heavy fields or legal fields and now, also, science fields. And it does, really, I think, raise this question of just: Will society be as tolerant of disruptions in those spaces as it has been of disruptions in spaces where workers have had less kind of political and social power?

Toner-Rodgers: Yeah, I totally agree. And I think there really is something different about these technologies where they’re creating novel output based on patterns in their training data, whereas before, like, from industrial robots to computers, it really was about automating routine tasks. And now for the first time, we’re automating the creative tasks. And I think how people feel about this and how we react might look very different.

Demsas: Yeah. I came across this quote from the chief AI officer at Western University, Mark Daley. It’s a blog post. He’s commenting on your paper. He writes, “Because AI isn’t just augmenting human creativity—it’s replacing it. The study found that artificial intelligence now handles 57 percent of ‘idea generation’ tasks, traditionally the most intellectually rewarding part of scientific work. Instead of dreaming up new possibilities, scientists may find themselves relegated to testing AI’s ideas in the lab, reduced to what one might grimly call highly educated lab technicians.”

I don’t know if there’s a survey of scientists or whatever, but I wonder here if you see that there’s a kind of a growing pessimism as a result of findings like this and just, like, the experiences many people are having with AI where they do feel like, Hey, the good part of life—I don’t want AI or robots or technology to be taking away the fun, creative stuff like writing or art or whatever. I want them to take away the drudgery the way that, like, laundry machines took away drudgery or dishwashers took away drudgery. I don’t know how you think about that as a shift in how the discourse is happening on this issue.

Toner-Rodgers: Yeah. I think that’s interesting. And I also think, when I talk to scientists, for example, materials scientists that work on actually building the computational tools, like, they’re super excited about this stuff because they’re coming up with ideas for the tool itself and, like, going and testing it and all these things.

Something in this setting is that this was a tool that was kind of imposed on these people, not something they created themselves. And I think that’s maybe something we’ll see, where the people that are actually having input and creating the new technologies themselves might find, like, they’re very happy with the output, even though these tasks are being automated. Whereas people in this setting, where the tool kind of just came in and changed their job a lot, maybe see kind of big decreases in enjoyment.

Demsas: Well, Aidan, always our last and final question: What is an idea that you thought was good at the time but ended up only being good on paper?

Toner-Rodgers: So I went to undergrad in Minnesota. And for background, I’m from California. So the first winter I was there, me and a couple of friends decided it’d be a great idea to go ice fishing.

Demsas: Okay.

Toner-Rodgers: And so we drive up to this lake. And literally three steps out on the ice, I step on a crack and fall through into this frozen lake. So ice fishing for Californians is good on paper.

Demsas: This is like the scene in Little Women where, like, Amy falls into the lake or whatever. What happened? Was it actually dangerous, or did you just immediately pull yourself out?

Toner-Rodgers: Luckily, we weren’t far from civilization. Like, we were near the car, so we ran back to the car.

Demsas: Oh my God.

Toner-Rodgers: And that was the end of my ice-fishing career.

Demsas: I’m glad you learned this early in your Minnesota life and did not get too adventurous. Well, Aidan, thank you so much for coming on the show.

Toner-Rodgers: Yeah, it was great. Thanks so much.

[Music]

Demsas: Good on Paper is produced by Rosie Hughes. It was edited by Dave Shaw, fact-checked by Ena Alvarado, and engineered by Erica Huang. Our theme music is composed by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic audio. Andrea Valdez is our managing editor.

And hey, if you like what you’re hearing, please leave us a rating and review on Apple Podcasts.

I’m Jerusalem Demsas, and we’ll see you next week.