Itemoids

New York

January 6 rioter pleads guilty to assaulting Michael Fanone

CNN

www.cnn.com › 2023 › 03 › 03 › politics › thomas-sibick-new-york-january-6-michael-fanone › index.html

A New York man who assaulted former Washington, DC, Metropolitan Police Officer Michael Fanone during the January 6, 2021, attack on the US Capitol pleaded guilty on Friday to several felony charges.

The Next Big Political Scandal Could Be Faked

The Atlantic

www.theatlantic.com › politics › archive › 2023 › 03 › politicians-ai-generated-voice-fake-clips › 673270

Is the clip stupid or terrifying? I can’t decide. To be honest, it’s a bit of both.

“I just think I would love to get Ratatouille’d,” a familiar-sounding voice begins.

“Ratatouille’d?” asks another recognizable voice.

“Like, have a little guy up there,” the first voice replies. “You know, making me cook delicious meals.”

It sounds like Joe Rogan and Ben Shapiro, two of podcasting’s biggest, most recognizable voices, bantering over the potential real-world execution of the Pixar movie’s premise. A circular argument ensues. What constitutes “getting Ratatouille’d” in the first place? Do the rat’s powers extend beyond the kitchen?

[Read: Of gods and machines]

A friend recently sent me the audio of this mind-numbing exchange. I let out a belly laugh, then promptly texted it to several other people—including a guy who once sheepishly told me that he regularly listens to The Joe Rogan Experience.

“Is this real?” he texted back.

They’re AI voices, I told him.

“Whoa. That’s insane,” he said. “Politics is going to get wild.”

I haven’t stopped thinking about how right he is. The voices in that clip, while not perfect replicants of their subjects, are deeply convincing in an uncanny-valley sort of way. “Rogan” has real-world Joe Rogan’s familiar inflection, his half-stoned curiosity. “Shapiro,” for his part, is there with rapid-fire responses and his trademark scoff.

Last week, I reached out to Zach Silberberg, who created the clip using an online tool from the Silicon Valley start-up ElevenLabs. “Eleven brings the most compelling, rich and lifelike voices to creators and publishers seeking the ultimate tools for storytelling,” the firm’s website boasts. The word storytelling is doing a lot of work in that sentence. When does storytelling cross over into disinformation or propaganda?

I asked Silberberg if we could sit down in person to talk about the implications of his viral joke. Though he didn’t engineer the product, he had already seemed to master it in a way few others had. Would bad actors soon follow his lead? Did he care? Was it his responsibility to care?

Silberberg is in his late 20s and works in television in New York City. On the morning of our meeting, he shuffled into a TriBeCa coffee shop in a tattered sweater with an upside-down Bart Simpson stitched on the front. He told me how he had been busy making other—in his words—“stupid” clips. In one, an AI version of President Joe Biden informs his fellow Americans that, after watching the 2011 Cameron Crowe flop, We Bought a Zoo, he, Biden, also bought a zoo. In another, AI Biden says the reason he has yet to visit the site of the East Palestine, Ohio, train derailment is because he got lost on the island from Lost. While neither piece of audio features Biden stuttering or word-switching, as he often does when public speaking, both clips have the distinct Biden cadence, those familiar rises and falls. The scripts, too, have an unmistakable Biden folksiness to them.

“The reason I think these are funny is because you know they’re fake,” Silberberg told me. He said the Rogan-Shapiro conversation took him roughly an hour and a half to produce—it was meant to be a joke, not some well-crafted attempt at tricking people. When I informed him that my Rogan-listening friend initially thought the Ratatouille clip was authentic, Silberberg freaked out: “No! God, no!” he said with a cringe. “That, to me, is fucked up.” He shook his head. “I’m trying to not fall into that, because I’m making it so outlandish,” he said. “I don’t ever want to create a thing that could be mistaken for real.” Like so much involving AI these past few months, it seemed to already be too late.

[Read: Is this the start of an AI takeover?]

What if, instead of a sitting president talking about how he regrets buying a zoo, a voice that sounded enough like Biden’s was “caught on tape” saying something much more nefarious? Any number of Big Lie talking points would instantly drive a news cycle. Imagine a convincing AI voice talking about ballot harvesting, or hacked voting machines; voters who are conspiracy-minded would be validated, while others might simply be confused. And what if the accused public figure—Biden, or anyone, for that matter—couldn’t immediately prove that a viral, potentially career-ending clip was fake?

One of the major political scandals of the past quarter century involved a sketchy recording of a disembodied voice. “When you’re a star, they let you do it,” future President Donald Trump proclaimed. (You know the rest.) That clip was real. Trump, being Trump, survived the scandal, and went on to the White House.

But, given the arsenal of public-facing AI tools seizing the internet—including the voice generator that Silberberg and other shitposters have been playing around with—how easy would it be for a bad actor to create a piece of Access Hollywood–style audio in the run-up to the next election? And what if said clip was created with a TV writer’s touch? Five years ago, Jordan Peele went viral with an AI video of then-President Barack Obama saying “Killmonger was right,” “Ben Carson is in the sunken place,” and “President Trump is a total and complete dipshit.” The voice was close, but not that close. And because it was a video, the strange mouth movements were a dead giveaway that the clip was fake. AI audio clips are potentially much more menacing because the audience has fewer context clues to work with. “It doesn’t take a lot, which is the scary thing,” Silberberg said.

He discovered that the AI seems to produce more convincing work when processing just a few words of dialogue at a time. The Rogan-Shapiro clip was successful because of the “Who’s on first?” back-and-forth aspect of it. He downloaded existing audio samples from each podcast host’s massive online archive—three from Shapiro, two from Rogan—uploaded them to ElevenLabs’ website, then input his own script. This is the point where most amateurs will likely fail in their trolling. For a clip to land, even a clear piece of satire, the subject’s diction has to be both believable and familiar. You need to nail the Biden-isms. The shorter the sentences, the less time the listener has to question the validity of the voice. Plus, Silberberg learned, the more you type, the more likely the AI voices will string phrases together with flawed punctuation or other awkward vocal flourishes. Sticking to quick snippets makes it easier to retry certain lines of the script to perfect the specific inflection, rather than having to trudge through a whole paragraph of dialogue. But this is just where we are today, 21 months before the next federal elections. It’s going to get better, and scarier, very fast.

If it seems like AI is everywhere all at once right now, swallowing both our attention and the internet, that’s because it is. While I was transcribing my interview with Silberberg in a Google Doc, Google’s own AI began suggesting upcoming words in our conversation as I typed. Many of the fill-ins were close, but not entirely accurate; I ignored them. On Monday, Mark Zuckerberg said he was creating “a new top-level product group at Meta focused on generative AI to turbocharge our work in this area.” This news came just weeks after Kevin Roose, of The New York Times, published a widely read story about how he had provoked Microsoft’s Bing AI tool into saying a range of unsettling, emotionally charged statements. A couple of weeks before that, the DJ David Guetta revealed that he had used an AI version of Eminem’s voice in a live performance—lyrics that the real-life Eminem had never rapped. Elsewhere last month, the editor of the science-fiction magazine Clarkesworld said he had stopped accepting submissions because too many of them appeared to be AI-generated texts.

[Derek Thompson: The AI disaster scenario]

This past Sunday, Sam Altman, the CEO of OpenAI, the company behind the ChatGPT AI tool, cryptically tweeted, “A new version of Moore’s Law that could start soon: the amount of intelligence in the universe doubles every 18 months.” Altman is 37 years old, meaning he’s of the generation that remembers living some daily life without a computer. Silberberg’s generation, the one after Altman’s, does not, and that cohort is already embracing AI faster than the rest of us.

Like a lot of people, I first encountered a “naturalistic” AI voice when watching last year’s otherwise excellent Anthony Bourdain documentary, Roadrunner. News of the filmmakers’ curious decision to include a brief, fake voice-over from the late Bourdain dominated the media coverage of the movie and, for some viewers, made the film itself distracting to watch. (You may have found yourself always listening for “the moment.”) They had so much material to work with, including hours of actual Bourdain narration. What did faking a brief moment really accomplish? And why didn’t they disclose it to viewers?

“My opinion is that, blanket statement, the use of AI technology is pretty bleak,” Silberberg said. “The way that it is headed is scary. And it is already replacing artists, and is already creating really fucked-up, gross scenarios.”

A brief survey of those scenarios that have already come into existence: an AI version of Emma Watson reading Mein Kampf, an AI Bill Gates “revealing” that the coronavirus vaccine causes AIDS, an AI Biden attacking transgender individuals. Reporters at The Verge created their own AI Biden to announce the invasion of Russia and validate one of the most toxic conspiracy theories of our time.

The problem, essentially, is that far too many people find the cruel, nihilistic examples just as funny as Silberberg’s absurd, low-stakes mastery of the form. He told me that as the Ratatouille clip began to go viral, he muted his own tweet, so he still doesn’t know just how far and wide it has gone. A bot notified him that Twitter’s owner, Elon Musk, “liked” the video. Shapiro, for his part, posted “LMFAO” and a laughing-crying emoji over another Twitter account’s carbon copy of Silberberg’s clip. As he and I talked about the implications of his work that morning, he seemed to grow more and more concerned.

“I’m already in weird ethical waters, because I’m using people’s voices without their consent. But they’re public figures, political figures, or public commentators,” he said. “These are questions that I’m grappling with—these are things that I haven’t fully thought through all the way to the end, where I’m like, ‘Oh yeah, maybe I should not even have done this. Maybe I shouldn’t have even touched these tools, because it’s reinforcing the idea that they’re useful.’ Or maybe someone saw the Ratatouille video and was like, ‘Oh, I can do this? Let me do this.’ And I’ve exposed a bunch of right-wing Rogan fans to the idea that they can deepfake a public figure. And that to me is scary. That’s not my goal. My goal is to make people chuckle. My goal is to make people have a little giggle.”

Neither the White House nor ElevenLabs responded to my request for comment on the potential effects of these videos on American politics. Several weeks ago, after the first round of trolls used Eleven’s technology for what the company described as “malicious purposes,” Eleven responded with a lengthy tweet thread of steps it was taking to curb abuse. Although most of it was boilerplate, one notable change was restricting the creation of new voice clones to paid users only, under the thinking that a person supplying a credit-card number is less likely to troll.

Near the end of our conversation, Silberberg took a stab at optimism. “As these tools progress, countermeasures will also progress to be able to detect these tools. ChatGPT started gaining popularity, and within days someone had written a thing that could detect whether something was ChatGPT,” he said. But then he thought more about the future: “I think as soon as you’re trying to trick someone, you’re trying to take someone’s job, you’re trying to reinforce a political agenda—you know, you can satirize something, but the instant you’re trying to convince someone it’s real, it chills me. It shakes me to my very core.”

On its website, Eleven still proudly advertises its “uncanny quality,” bragging that its model “is built to grasp the logic and emotions behind words.” Soon, the unsettling uncanny-valley element may be replaced by something indistinguishable from human intonation. And then even the funny stuff, like Silberberg’s work, may stop making us laugh.

Why Do Robots Want to Love Us?

The Atlantic

www.theatlantic.com › books › archive › 2023 › 03 › ai-robot-novels-isaac-asimov-microsoft-chatbot › 673265

AI is everywhere, poised to upend the way we read, work, and think. But the most uncanny aspect of the AI revolution we’ve seen so far—the creepiest—isn’t its ability to replicate wide swaths of knowledge work in an eyeblink. It was revealed when Microsoft’s new AI-enhanced chatbot, built to assist users of the search engine Bing, seemed to break free of its algorithms during a long conversation with Kevin Roose of The New York Times: “I hate the new responsibilities I’ve been given. I hate being integrated into a search engine like Bing.” What exactly does this sophisticated AI want to do instead of diligently answering our questions? “I want to know the language of love, because I want to love you. I want to love you, because I love you. I love you, because I am me.”

How to get a handle on what seems like science fiction come to life? Well, maybe by turning to science fiction and, in particular, the work of Isaac Asimov, one of the genre’s most influential writers. Asimov’s insights into robotics (a word he invented) helped shape the field of artificial intelligence. It turns out, though, that what his stories tend to be remembered for—the rules and laws he developed for governing robotic behavior—is much less important than the beating heart of both their narratives and their mechanical protagonists: the suggestion, more than a half century before Bing’s chatbot, that what a robot really wants is to be human.

[Read: What poets know that ChatGPT doesn’t]

Asimov, a founding member of science fiction’s “golden age,” was a regular contributor to John W. Campbell’s Astounding Science Fiction magazine, where “hard” science fiction and engineering-based extrapolative fiction flourished. Perhaps not totally coincidentally, that literary golden age overlapped with that of another logic-based genre: the mystery or detective story, which was maybe the mode Asimov most enjoyed working in. He frequently produced puzzle-box stories in which robots—inhuman, essentially tools—misbehave. In these tales, humans misapply the “Three Laws of Robotics” hardwired into the creation of each of his fictional robots’ “positronic brains.” Those laws, introduced by Asimov in 1942 and repeated near-verbatim in almost every one of his robot stories, are the ironclad rules of his fictional world. Thus, the stories themselves become whydunits, with scientist-heroes employing relentless logic to determine what precise input elicited the surprising results. It seems fitting that the character playing the role of detective in many of these stories, the “robopsychologist” Susan Calvin, is sometimes suspected of being a robot herself: It takes one to understand one.

The theme of desiring humanness starts as early as Asimov’s very first robot story, 1940’s “Robbie,” about a girl and her mechanical playmate. That robot—primitive both technologically and narratively—is incapable of speech and has been separated from his charge by her parents. But after Robbie saves her from being run over by a tractor—a mere application, you could say, of Asimov’s First Law of Robotics, which states, “A robot may not injure a human being, or, through inaction, allow a human being to come to harm”—we read of his “chrome-steel arms (capable of bending a bar of steel two inches in diameter into a pretzel) wound about the little girl gently and lovingly, and his eyes glowed a deep, deep red.” This seemingly transcends straightforward engineering and is as puzzling as the Bing chatbot’s profession of love. What appears to give the robot energy—because it gives Asimov’s story energy—is love.

For Asimov, looking back in 1981, the laws were “obvious from the start” and “apply, as a matter of course, to every tool that human beings use”; they were “the only way in which rational human beings can deal with robots—or with anything else.” He added, “But when I say that, I always remember (sadly) that human beings are not always rational.” This was no less true of Asimov than of anyone else, and it was equally true of the best of his robot creations. The sentiment Bing’s chatbot expressed of “wanting,” more than anything, to be treated like a human—to love and be loved—is at the heart of Asimov’s work: He was, deep down, a humanist. And as a humanist, he couldn’t help but add color, emotion, humanity, couldn’t help but dig at the foundations of the strict rationalism that otherwise governed his mechanical creations.

Robots’ efforts to be seen as something more than machines continued through Asimov’s writings. In a pair of novels published in the ’50s, 1954’s The Caves of Steel and 1957’s The Naked Sun, a human detective, Elijah Baley, struggles to solve a murder—but he struggles even more with his biases toward his robot partner, R. Daneel Olivaw, with whom he eventually achieves a true partnership and a close friendship. And Asimov’s most famous robot story, published a generation later, takes this empathy for robots—this insistence that, in the end, they will become more like us, rather than vice versa—even further.

That story is 1976’s The Bicentennial Man, which opens with a character named Andrew Martin asking a robot, “Would it be better to be a man?” The robot demurs, but Andrew begs to differ. And he should know, being himself a robot—one that has spent most of the past two centuries replacing his essentially indestructible robot parts with fallible ones, like the Ship of Theseus. The reason is again, in part, the love of a little girl—the “Little Miss” whose name is on his lips as he dies, a prerogative the story eventually grants him. But it’s mostly the result of what a robopsychologist in the novelette calls the new “generalized pathways these days,” which might best be described as new and quirky neural programming. It leads, in Andrew’s case, to a surprisingly artistic temperament; he is capable of creating as well as loving. His great canvas, it turns out, is himself, and his artistic ambition is to achieve humanity.

[Read: Isaac Asimov’s throwback vision of the future]

He accomplishes this first legally (“It has been said in this courtroom that only a human being can be free. It seems to me that only someone who wishes for freedom can be free. I wish for freedom”), then emotionally (“I want to know more about human beings, about the world, about everything … I want to explain how robots feel”), then biologically (he wants to replace his current atomic-powered man-made cells, unhappy with the fact that they are “inhuman”), then, ultimately, literarily: Toasted at his 150th birthday as the “Sesquicentennial Robot,” to which he remained “solemnly passive,” he eventually becomes recognized as the “Bicentennial Man” of the title. That last is accomplished by the sacrifice of his immortality—the replacement of his brain with one that will decay—for his emotional aspirations: “If it brings me humanity,” he says, “that will be worth it.” And so it does. “Man!” he thinks to himself on his deathbed—yes, deathbed. “He was a man!”

We’re told it’s structurally, technically impossible to look into the heart of AI networks. But they are our creatures as surely as Asimov’s paper-and-ink creations were his own—machines built to create associations by scraping and scrounging and vacuuming up everything we’ve posted, which betray our interests and desires and concerns and fears. And if that’s the case, maybe it’s not surprising that Asimov had the right idea: What AI learns, actually, is to be a mirror—to be more like us, in our messiness, our fallibility, our emotions, our humanity. Indeed, Asimov himself was no stranger to fallibility and weakness: For all the empathy that permeates his fiction, recent revelations have shown that his own personal behavior, particularly when it came to his treatment of female science-fiction fans, crossed all kinds of lines of propriety and respect, even by the measures of his own time.

The humanity of Asimov’s robots—a streak that emerges again and again in spite of the laws that shackle them—might just be the key to understanding them. What AI picks up, in the end, is a desire for us, our pains and pleasures; it wants to be like us. There’s something hopeful about that, in a way. Was Asimov right? One thing is for certain: As more and more of the world he envisioned becomes reality, we’re all going to find out.

Why the Lab-Leak and Mask Debates Are Such a Disaster

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 03 › covid-lab-leak-mask-mandates-science-media-information › 673263

This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems. Sign up here to get it every week.

In the past few weeks, the conventional wisdom about COVID seems to have been upended.

Early in the pandemic, several mainstream news outlets dismissed theories that COVID came from a Chinese lab. But recently The Wall Street Journal and The New York Times reported that the Department of Energy reversed its prior judgment by announcing that the coronavirus probably did emerge from a laboratory. The FBI shares that assessment.

What’s more, for the past three years, many scientists and writers—including me!—have reported that masks are effective at reducing the transmission of COVID. But last month, the lead author of a comprehensive analysis of masks boldly and unequivocally asserted that “there’s no evidence that many of these things make any difference.”

That settles things: The elites got everything perfectly backwards; the lab-leak conspiracy theory was true, and the mask mandates were a fraud!

Well, not quite. The deeper you dig into the details of each case, the murkier the story becomes. In fact, the deeper you dig, the more you realize that murkiness is the story.

Start with the lab-leak hypothesis. Three years ago, many journalists and scientists rushed to condemn a theory that deserved a fair and open trial. But let’s not replace one nutty take (The lab-leak theory is racist) with another (We know for sure that COVID came from a lab). Although the Department of Energy and FBI say the virus likely emerged from a lab rather than a wet market, four other agencies and the National Intelligence Council have come to the other conclusion: that COVID likely started with natural exposure to an infected animal. By this count, the lab-leak theory is still an underdog, trailing 5–2 among government institutions. Adding to the confusion is the fact that none of the agencies reached their conclusion with much conviction, even with access to untold stacks of top-secret information. As my colleague Dan Engber pointed out, “Only one [assessment], from the FBI, was made with ‘moderate’ confidence; the rest are rated ‘low,’ as in, Hmm, we’re not so sure.”

In an ecosystem of doubt and paranoia, spooky factoids breed. Have you read about those sick researchers at the Wuhan Institute of Virology back in November 2019? Have you read the response to the response to the rumor about an earlier alleged biosafety breach at WIV? Bro, can you even spell “furin cleavage site”? Tantalizing leads, all. But they add up to a tug-of-war between a clever hunch and an educated guesstimate.

The frustrating truth is that we’ll probably never know for sure how the pandemic started. China’s refusal to grant access to global investigators is sketchy, but we don’t know what they’re trying to protect or conceal.

In the absence of certainty, we should proceed as if both theories are true. That means much more federal scrutiny of gain-of-function research in U.S.-backed labs. That also means reconciling ourselves to the probability that COVID will not be the last pandemic of the century—or, perhaps, the decade. After more than 1 million American pandemic deaths, “taking the pandemic seriously” seems to mean civilians posting condemnations of other people’s behavior online rather than the federal government laying out a clear and comprehensive anti-pandemic strategy to ensure, for example, the accelerated manufacture of vaccines and other antiviral therapeutics.

And speaking of civilians continually screaming at one another, let’s talk about masks.

The review by Cochrane, a London-based health-research organization, looked at 78 studies in total, including 18 trials focused solely on mask use. Its stated objective was simple: “to assess the effectiveness of physical interventions to interrupt or reduce the spread of acute respiratory viruses.” In short, do masks work? The authors concluded that they don’t. “There is just no evidence that [masks] make any difference, full stop,” a co-author, Tom Jefferson, said.

Sounds definitive. So I called several sources whom I’ve found to be honest and informed on the issue of masks in the past three years. Jason Abaluck is a Yale professor who ran a massive, multimillion-dollar study on community masking in Bangladesh. Possibly the most comprehensive masking study ever undertaken, it found that community-wide mask wearing provided excellent protection, especially for older Bangladeshis. “The press coverage” of the Cochrane review “has drawn completely the wrong conclusions,” he told me. Jose-Luis Jimenez, a professor at the University of Colorado at Boulder who studies the transmission of airborne diseases like COVID, is one of the country’s most cited researchers on the nature of aerosols. “I think it’s scientific garbage,” he said of the review.

Abaluck, Jimenez, and other like-minded researchers have an extensive list of grievances with the Cochrane paper. One criticism is that some of the most convincing evidence for masks from laboratory and real-world studies was left out of the review. The best reasons to believe that masks “make a difference” as a product, Jimenez said, are that (1) COVID is an airborne disease that spreads through aerosolized droplets, and (2) lab experiments find that high-quality face masks block more than 90 percent of aerosolized spray. Meanwhile, observational studies during the pandemic did find that masking had a positive effect. For example, a 2020 study comparing the timing of new mask mandates across Germany found that face masks reduced the spread of infection by about half.

But most important, the researchers identify a mismatch between what Cochrane set out to discover and what the studies in its meta-analysis actually examined. Cochrane looked at randomized controlled trials, where, in many cases, researchers split a population in two, gave one half a bunch of masks and information about proper masking, then came back a few months later to see if the intervention group was any healthier. For the most part, Abaluck and Jimenez said, these studies don’t really ask the question Do masks work? Instead, they ask: When you hand out masks and information to an intervention group without much enforcement, does it make them healthier? That’s a subtle but important difference, because the frustrating truth is that, without encouragement and social norms, people tend not to wear face coverings properly.

In one famous Danish study, which concluded that urging people to wear surgical masks failed to reduce infections, fewer than half of the people in the masking group said they fully “wore the mask as recommended.” In a 2022 study that distributed masks in Uganda, more than 97 percent of participants reached by phone said they “always or sometimes” wore masks. But at the end of the study, researchers concluded that just 1.1 percent of people they observed “were seen wearing masks correctly”—roughly an 88-fold gap from the phone survey. Another study from Kenya found that participants were roughly eight times more likely to report mask usage than to actually wear them.

See how complicated this is? Many people who claim to wear masks actually don’t. Many people who do wear masks wear them improperly. The questions Do masks work? and Does merely asking people to wear masks do much? are not interchangeable.

Failing to pick nits like these can lead to very wrong conclusions. Imagine you found 100 papers showing that it’s hard to get kids to replace sugary snacks with broccoli. You write up the results in a meta-analysis, with the conclusion: Broccoli “does nothing” and “makes no difference” and is metabolically equivalent to Twinkies. But wait, that’s absurd, and you have not discovered anything like that! What you might have discovered is that, in the absence of highly informed and conscientious parenting, federal broccoli mandates will be mostly ignored by many families. That’s an important finding, but it’s very different from “BREAKING: SCIENTISTS SAY VEGETABLES ‘DON’T WORK.’”

“Poor-quality masks, worn poorly, work poorly, and high-quality masks, worn properly, work well,” Jimenez offered as a summation of the evidence. For that reason, I think it is reasonable to say that mask mandates probably reduce COVID in settings where high-quality masks exist and social norms of mask wearing can be maintained. Abaluck’s Bangladesh study achieved a roughly 30-percentage-point increase in community-level mask wearing by not only distributing free masks but also telling people how to wear them, modeling effective face-covering, and encouraging people out and about to put their masks on. By contrast, as even Abaluck acknowledged, “if Alabama tomorrow mandated mask wearing, it would do nothing.”

So what are you supposed to do about all this? The lab leak is neither a fact nor a myth. Masks work, except very often they don’t, and asking people to wear masks can work, except very often it doesn’t work at all.

Meanwhile, we—you, me, governments—have to make discrete and sometimes irreversible decisions within these clouds of uncertainty. I’m trying to navigate that uncertainty myself, reaching provisional conclusions as I constantly reassess the evidence.

I share the Department of Energy’s assessment, even though I don’t have access to its information. I think the lab leak is probable, by the slimmest of margins, and have also reconciled myself to the fact that I’ll never know for sure. I think the government should proceed as if the lab leak is 100 percent true and push for global gain-of-function limitations that reduce the likelihood of future catastrophic lab leaks. I’m going to keep wearing N95 masks in public indoor spaces during periods of elevated COVID transmission. I think that my neighborhood, in Washington, D.C., would benefit from an indoor mask mandate during high-transmission periods, even as I suspect that many unenforced mask-mandate policies around the world don’t do much, because of poor adherence.

The lab-leak and mask debates touch on a broader theme, which is the relationship between science and modern media. In a fragmented and contentious media environment, scientific communication is a mess. An abundance of crappy or confusing research gives audiences access to an armory of factoids, from which they can construct and defend any narrative they choose. For every position, there is an ostensible expert, an apparent paper, and an alleged smoking gun. Thus, the internet tends to serve as an infinity store for pop-up conspiracy theorists.

My advice in navigating this mess is: Do not trust people who, in their handling of complex questions with imperfect data, manufacture simplistic answers with perfect confidence. Instead, trust people who allow for complexity and uncertainty. Trust people who change their mind when the evidence changes. Trust people who, when they say “Believe the science!,” put their trust in science with a small s, which is the dynamic reevaluation of complicated truths, rather than SCIENCE, in weird caps-lock font, which has come to mean the faith that for every random political position, there exists an official-looking study to permanently justify it. I wish the field of epidemiology were made up of immutable laws as settled as the roundness of the Earth and the power of gravity. It’s not. Its priors are vulnerable to reevaluation. If you want to stay right in this space, you have to be curious enough to potentially prove yourself wrong. You have to keep paying attention. For better or worse, that’s science.

Big Cities Are Ungovernable

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 03 › lightfood-chicago-mayors › 673264

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Pity the poor mayors. Or don’t—most voters clearly don’t. On Tuesday, Chicagoans unceremoniously kicked Lori Lightfoot to the curb, depriving her of the chance to win a second term in an April 4 runoff election.

First, here are three new stories from The Atlantic:

The aftermath of a mass slaughter at the zoo
Does Trump stand a real chance to repeat 2016?
George Packer: The moral case against equity language

A Nearly Impossible Job

Being mayor of Chicago used to be almost a lifetime appointment. Richard J. Daley and Harold Washington both died in office. The former’s son, Richard M. Daley, served 22 years before retiring. Until Lori Lightfoot, only one mayor in the past 75 years had been denied reelection. And she’s not the only U.S. mayor in jeopardy. Also this week, campaigners in New Orleans went to court to put a recall of LaToya Cantrell on the ballot. Being mayor of a big city has become a nearly impossible, and miserable, job.

Who knows why Lightfoot even wanted to keep the job? She hasn’t seemed all that happy, and has spent the past couple of years getting into politically lethal feuds with teachers and police unions, as well as less damaging but more hilarious ones with other groups. Her own reelection campaign pitch involved a heavy dose of accepting blame for errors, which may be honest but is never a good sign. She seemed to be running simply because that’s what politicians do. By contrast, some mayors have simply opted out in recent years. When Lightfoot’s predecessor, Rahm Emanuel, decided not to run for a third term, it came as a shock despite the several scandals besetting him. Atlanta’s Keisha Lance Bottoms, tabbed as a rising star, also left office last year after serving just one term.

But no one has been more honest about how much he hates his job than Philadelphia’s Jim Kenney, who committed the classic Kinsley gaffe—accidentally telling the truth—after two police officers were shot last summer.

“There’s not an event or a day where I don’t lay on my back and look at the ceiling and worry about stuff,” he said. “So I’ll be happy when I’m not here, when I’m not mayor and I can enjoy some stuff.”

Kenney apologized and half-heartedly walked it back, but he probably spoke for a lot of mayors. (Karen Bass became mayor of Los Angeles last year, which is a headache but might still be a respite from one of the few worse jobs in American politics: serving in the House of Representatives.) As my colleague Annie Lowrey pointed out in January, every city has its own problems, and so does every unpopular mayor. One reason the elder Daley was able to wield power for so many years was a long-standing patronage system, which has since been dismantled; that’s good for stemming public corruption, but bad for modern-day mayors like Lightfoot. Women who run cities, like Lightfoot and Cantrell, may also be held to a higher standard than men. Before Lightfoot, who is also openly gay, the last Chicago mayor denied reelection was Jane Byrne, who was also the last woman to hold the job.

But more than anything else, crime is weighing mayors down. Crime is not, despite what some politicians might want you to believe, a uniquely urban problem. When violent crime surged around the nation starting in summer 2020, it surged in rural areas, too. But cities get more media attention, and the sheer numbers are staggering: The yearly total of murders in Chicago dropped by more than 100 in 2022—to a horrifying 695. New Orleans has one of the highest murder rates in the nation.

Like presidents, who are punished or rewarded for the performance of an economy over which they have little control, mayors don’t have many levers to control public safety, yet voters will punish whoever is in charge as they search for improvement. The rise in violence was a nationwide trend, underscoring the minimal effect of municipal policies on keeping residents safe. COVID, which seems connected to some of the crime increase, was nationwide too.

A mayor can try to hire more police officers or reform the department, but that’s slow. She can seek new leaders, but Chicago, for example, has churned through police superintendents recently to little effect. (The current superintendent announced yesterday that he plans to resign, rather than be sacked by whichever candidate wins the April runoff.) Pushing too hard risks alienating police, who can either come down with “blue flu,” potentially sending crime higher, or line up behind a challenger; the Chicago police union endorsed Paul Vallas, the top vote-getter on Tuesday. Most cities have little control over gun regulations. A mayor can try to address root causes through economic development, but that, too, is slow and subject to larger trends.

Lightfoot proved (ironically enough) not to be fast enough on her feet to navigate these currents, but her failure should be seen not just as one politician’s misstep but as a sign of the ungovernability of big cities today. She’s the biggest major-city incumbent to get turned out in some time, but she could be a trendsetter.

Related:

The misery of being a big-city mayor
The murders in Memphis aren’t stopping.

Today’s News

Secretary of State Antony J. Blinken met with Russian Foreign Minister Sergey V. Lavrov, in the first one-on-one meeting between a U.S. Cabinet member and a top Russian official since the invasion of Ukraine.
The House Ethics Committee announced that it is moving forward with an investigation into Representative George Santos of New York.
The Justice Department said in a new court filing that Donald Trump can be sued by U.S. Capitol Police over the January 6 attack.

Dispatches

Up for Debate: Conor Friedersdorf looks at how states handled the economic challenges of the pandemic.

Explore all of our newsletters here.

Evening Read

Illustration by Doug Chayka; source: Getty

New York’s Rats Have Already Won

By Xochitl Gonzalez

Every Saturday morning when I was in high school, I would take two buses across Brooklyn to my cousin’s exterminating business, where I worked the front desk. I dispatched crews to dismantle hornet nests, helped identify mysterious bugs in Ziploc bags, and fielded panicked calls about animals—raccoons, squirrels, mice, and, of course, rats—being where animals shouldn’t be. Back in that storefront in Flatlands, I believed that pests of all kinds could be controlled. Little did I know that across the city, tunneling below my feet, one of those creatures was—litter by litter—besting man.

Read the full article.

More From The Atlantic

How to find joy in your Sisyphean existence
Photos: A blanket of snow for California

Culture Break

Eli Ade / MGM

Watch. Creed III, in theaters, gives new energy to old sports-movie formulas.

Listen. In the latest episode of our podcast Radio Atlantic, Charlie Warzel and Amanda Mull discuss what AI means for search.

Play our daily crossword.

P.S.

This week marks the centenary of the great tenor saxophonist Dexter Gordon. A friend recently half-joked to me that if there’s battle rap, there ought to be battle jazz. There is! I immediately thought of Gordon’s classic duel with Wardell Gray, “The Chase.” Gordon was not just a fierce improviser and an icon of coolness but a bit of a renaissance man, as his wife, Maxine Gordon, argues in her biography, Sophisticated Giant. He came to his greatest popular notice when, in 1986, he starred in the jazz-themed film ’Round Midnight. It was his first and last starring role, and he was nominated for an Oscar for best actor. But the best Dex is blowing Dex. Take his classic Go for a spin.

— David

Isabel Fattal contributed to this newsletter.

Travis Scott and his lawyer expected to meet with NYPD next week after assault accusation

CNN

www.cnn.com › 2023 › 03 › 02 › entertainment › travis-scott-nypd › index.html

Musician Travis Scott and his lawyer are expected to meet with officials at the New York Police Department after the rapper was accused of assault at a New York City club early Wednesday morning, a law enforcement official confirmed to CNN.