Autocracies Are Winning the Information War

The Atlantic

www.theatlantic.com/newsletters/archive/2024/05/the-plot-to-discredit-democracy/678315

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

In The Atlantic’s newest cover story, Anne Applebaum details the onslaught of antidemocratic propaganda flooding the United States. If only Americans weren’t so ready to believe so much of it.

First, here are three new stories from The Atlantic:

The real meaning of divestment
Is Donald Trump trying to get thrown in jail?
“Say plainly what the protesters want,” Jill Filipovic argues.

Propaganda for American Tastes

Back in 2017, I was asked by the State Department to give a series of lectures on disinformation to audiences in various cities in the Czech Republic. (I wrote about it here.) I was stunned, even then, at how the European information environment was poisoned by a deluge of Russian propaganda—including the obvious cross-pollination between Russians and malevolent actors in the United States. This global problem, Anne Applebaum writes in our new cover story, has since gotten much worse.

As Anne points out, the Chinese, the Russians, and others are on a propaganda offensive around the world, even in places that most Americans don’t pay much attention to. She describes how a European diplomat was “mystified” to find students in Africa parroting Russian talking points about the war in Ukraine. “He grasped for explanations,” she writes: “Maybe the legacy of colonialism explained the spread of these conspiracy theories, or Western neglect of the global South, or the long shadow of the Cold War.”

The simpler but more ominous truth, Anne explains, involved “China’s systematic efforts to buy or influence both popular and elite audiences around the world; carefully curated Russian propaganda campaigns, some open, some clandestine, some amplified by the American and European far right; and other autocracies using their own networks to promote the same language.”

These efforts differ from Cold War–era propaganda campaigns. In those days, the Soviets and others tried to paint a happy picture of the successes of their autocratic regimes as a way of legitimizing their rule and as a kind of enticement to other nations to join Team Red. Many of these efforts “backfired,” Anne writes, “because people could compare what they saw on posters and in movies with a far more impoverished reality.”

Those were the days. Now, Anne points out, the goal of most autocracies is not to replace truth with regime-friendly lies but to destroy truth itself, and to obliterate the human ability—or desire—to even distinguish between truths and lies. “The new authoritarians,” she writes, “have a different attitude toward reality.”

When Soviet leaders lied, they tried to make their falsehoods seem real. They became angry when anyone accused them of lying. But in [Vladimir] Putin’s Russia, Bashar al-Assad’s Syria, and Nicolás Maduro’s Venezuela, politicians and television personalities play a different game. They lie constantly, blatantly, obviously. But they don’t bother to offer counterarguments when their lies are exposed … This tactic—the so-called fire hose of falsehoods—ultimately produces not outrage but nihilism. Given so many explanations, how can you know what actually happened? What if you just can’t know?

The point of such efforts is not really to mobilize support for bad regimes but to numb the brains and neutralize the agency of citizens everywhere. As Anne writes, “If you don’t know what happened, you’re not likely to join a great movement for democracy, or to listen when anyone speaks about positive political change. Instead, you are not going to participate in any politics at all.”

I recommend that you read Anne’s article in its entirety to see the full spectrum of these autocratic efforts around the world, but I want to focus here on what’s happening in the United States. Americans are being targeted by foreign propagandists who are using the internet and social media to pump their toxic slurry directly into American veins. “A part of the American political spectrum is not merely a passive recipient of the combined authoritarian narratives that come from Russia, China, and their ilk,” Anne writes, “but an active participant in creating and spreading them. Like the leaders of those countries, the American MAGA right also wants Americans to believe that their democracy is degenerate, their elections illegitimate, their civilization dying.”

As is always the case, this propaganda has found willing customers in a bored and listless society that alleviates its ennui by gorging on entertaining conspiracy theories. Americans don’t have to seek out foreign propaganda when plenty of their fellow citizens are eager to sell them lies that have been altered to suit American tastes. But why does American society have so many takers for such soul-destroying nonsense? Anne points out that after the ISIS terrorist attack on a concert hall in Moscow in March, the former PayPal entrepreneur (and close pal of Elon Musk’s) David Sacks posted on X that “if the Ukrainian government was behind the terrorist attack, as looks increasingly likely, the U.S. must renounce it.” This inane and baseless charge has been viewed 2.5 million times.

More than David Sacks himself, however, the problem is a culture that even thinks to take people such as David Sacks seriously. Democracies have always had conspiracy theorists and other cranks wandering about the public square, sneezing and coughing various forms of weirdness on their fellow citizens. But even in the recent past, most people with a basic level of education and a healthy dollop of common sense had no trouble resisting the contagion of idiocy.

Today, the immune system of once-healthy democratic societies is compromised. Be it the idea that the moon landings were faked or the attacks on the legitimacy of elections, wild theories have become surprisingly easy for Americans to believe, a sign of a national gullibility that makes the United States an obvious target for outlandish propaganda.

Governments alone cannot solve this problem. Individual citizens have to take the initiative—as exhausting as it might be—to confront one another over bad information. They need to ask questions: Where did you hear that? Why do you trust that source? Do you think that I, as a friend or a family member, am lying to you if I tell you it’s not true? People who have already been captured by propaganda will not believe official disclaimers from authoritative sources, and will see such disclaimers only as further proof of the conspiracy. But when conspiracists and deeply misinformed people encounter people close to them, those whom they care about, who gently but firmly refuse to join them in the maze of misinformation, such discussions can sometimes have a positive effect, at least in the short term.

What I am suggesting is not fun, and should be limited to friends and family. (It’s probably not a strategy to pursue at a bar with strangers after a few drinks.) And it may not change very much. But right now, it’s all any of us can do.

Related:

The new propaganda war
The bad guys are winning. (From 2021)

Today’s News

Hamas laid out a proposal for a cease-fire in Gaza that the group’s political leader said was based on a plan from Egypt and Qatar. Israel’s leadership said that the terms were “far from Israel’s essential demands” but that it would be sending a delegation to Cairo to continue the negotiations.
The judge in Donald Trump’s hush-money criminal trial ruled that the former president was in contempt of court after he once again broke a gag order preventing him from attacking jurors and others involved in the trial.
The Israeli cabinet voted to ban Al Jazeera yesterday and immediately moved to shut down the news channel’s offices in the country and to seize some of the company’s communication equipment.

Dispatches

The Wonder Reader: The new question du jour is “What is milk?” Isabel Fattal examines the factors complicating milk’s identity.

Explore all of our newsletters here.

Evening Read

Harold M. Lambert / Getty

Is It Wrong to Tell Kids to Apologize?

By Stephanie H. Murray

Say you’re sorry. For generations, parents have leaned on the phrase during sibling tiffs and playground scuffles. But it has lately become controversial, particularly among a certain subset of Millennial parents—those for whom the hallmark of good parenting is the reverence they show for their kids’ feelings. Under this model, gone are the days of scolding a child for melting down, sending them to a time-out, or ignoring them until they settle. (Joining them for “time-ins” to help them process their emotions? That’s okay.) The guiding principle seems to be to take children’s current or future feelings into consideration at every parental decision point—even when they are the ones who have hurt the feelings of someone else.

Read the full article.

More From The Atlantic

This is helicopter protesting.
What “intifada revolution” looks like
A fundamental stage of human reproduction is shifting.
ElevenLabs is building an army of voice clones.

Culture Break

Brian Lackey / Gallery Stock

Adventure. These six books reflect on what drives our species to explore what’s uncharted and unknown.

Read. “No Subject,” a poem by Andrew Motion:

“Hope exhausted years ago / but I still try.”

Play our daily crossword.

Stephanie Bai contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.

The Thingification of AI

The Atlantic

www.theatlantic.com/newsletters/archive/2024/05/the-thingification-of-ai/678289

This is Atlantic Intelligence, a limited-run series in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

Recent weeks have seen the introduction of new consumer gadgets whose entire selling point revolves around artificial intelligence. Humane, a company started by ex-Apple employees, released an “AI Pin” that a user wears like a boutonniere; it answers spoken questions, can recognize and comment on objects through its camera, and projects a limited screen for displaying text. At $600 with a $24 monthly fee, the device was positioned as a kind of smartphone replacement, though reviews have not been kind, calling the Pin slow, challenging to use, and error-prone.

Last week, my colleague Caroline Mimbs Nyce reported on the Rabbit R1, a less ambitious and more affordable handheld gadget that similarly presents an AI assistant as its entire selling point. Yet, like the AI Pin, it has severe issues: “It managed to speak a summary of a handwritten page when I asked, though only with about 65 percent accuracy,” Caroline writes. “I was able to use the gadget to order an acai bowl on DoorDash, although it couldn’t handle any customizations. (I wanted peanut butter.) And I never got Uber to work. (Though at one point, the device told me the request had failed when it in fact hadn’t, leaving me on the hook for a $9 ride I didn’t even take.)”

AI has its place in consumer hardware, of course. But for now, that place seems to be the device you’re reading this newsletter on, where services such as ChatGPT, Google Gemini, and Claude are a dime a dozen.

— Damon Beres, senior editor

Illustration by The Atlantic. Sources: Rabbit; Getty.

I Witnessed the Future of AI, and It’s a Broken Toy

By Caroline Mimbs Nyce

This story was supposed to have a different beginning. You were supposed to hear about how, earlier this week, I attended a splashy launch party for a new AI gadget—the Rabbit R1—in New York City, and then, standing on a windy curb outside the venue, pressed a button on the device to summon an Uber home. Instead, after maybe an hour of getting it set up and fidgeting with it, the connection failed.

The R1 is a bright-orange chunk of a device, with a camera, a mic, and a small screen. Press and hold its single button, ask it a question or give it a command using your voice, and the cute bouncing rabbit on screen will perk up its ears, then talk back to you. It’s theoretically like communicating with ChatGPT through a walkie-talkie. You could ask it to identify a given flower through its camera or play a song based on half-remembered lyrics; you could ask it for an Uber, but it might get hung up on the last step and leave you stranded in Queens.

Read the full article.

What to Read Next

Things get strange when AI starts training itself: “Programs that teach and learn from one another could warp our experience of the world and unsettle our basic understandings of intelligence,” Matteo Wong writes.

P.S.

I recently revisited my colleague Kaitlyn Tiffany’s 2021 article about the “dead internet theory,” a conspiracy theory that has proven to be uncomfortably prescient about the generative-AI era. “Much of the ‘supposedly human-produced content’ you see online was actually created using AI, [a conspiracy theorist who uses the online handle] IlluminatiPirate claims, and was propagated by bots,” Kaitlyn wrote. Many of the theory’s specifics are well beyond the bounds of plausibility and good taste. Yet the web is indeed being stuffed with synthetic content these days—to the detriment of all.

— Damon

Trump’s VP Search Is Different This Time

The Atlantic

www.theatlantic.com/newsletters/archive/2024/05/trumps-vp-search-is-different-this-time/678296

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

By killing her dog, South Dakota Governor Kristi Noem may have also killed her chances of becoming Donald Trump’s vice president. So who else is on the list? We’ll get into Trump’s options after four new stories from The Atlantic:

The blindness of elites
What’s left to restrain Donald Trump?
David Frum: What Joe Biden needs to say about anti-Semitism
Mark Leibovich: “House Republicans showed up at a campus protest. Of course.”

Trump’s Big Decision

As a reporter, it is my duty to remind you that Trump’s team loves messing with the media almost as much as it loves jockeying for influence with the big man himself. Trump’s advisers might dish, for example, that after careful consideration, so-and-so is off the vice-president list, and you know who is back on. They might explain that, actually, some of the usual considerations of geography and gender aren’t playing a role in this VP decision.

But the truth is, none of these supposed insiders really knows much. No one has any idea what Trump is thinking, except for Trump himself. And the former president is quite famously unpredictable, with a well-established tendency to make decisions based on his most recent conversation. Predicting his Veep pick, then, is a bit futile. It’s also really early: Candidates don’t typically choose a running mate until around the party convention, in late summer. And Trump will likely try to milk as much media coverage as he can out of making people wait.

Still, without prognosticating too much, we can anticipate what Trump is probably looking for in a vice president. He’ll want someone who looks good on television but not someone who might outshine him. Someone who isn’t polarizing to the MAGA base but who demonstrates range. He’ll choose a candidate with experience, or at least with some record of being a winner. He is probably not looking for a politician to “balance” out his ticket like Mike Pence did in 2016, when Trump desperately needed to win over evangelicals.

Above all, of course, Trump will want someone unfailingly loyal to him. This time around, it’s not about logic or persuasion—it’s about personality. The Republican strategists Doug Heye and Mike Murphy, neither of whom is involved with the Trump campaign, walked me through some of Trump’s VP options.

South Carolina Senator Tim Scott

Why does this name keep floating around? Well, the senator, who’s been in office for more than a decade, has always been popular. He’s a former insurance salesman who knows how to schmooze, and, Heye told me, he’s also a “prodigious fundraiser.” Scott never fully cozied up to Trump while the latter was president, but he didn’t criticize him much either. “He played it smart,” Murphy told me, by not getting too close or too far. The dynamic changed when Scott launched his own presidential campaign last year. “He was the puppy on his back, supplicant,” even while he was running against Trump, Murphy said, and that loyalty “will appeal to Trump.”

Scott could also—the thinking goes—help Trump appeal to Black voters, who have already started peeling off from Democrats, albeit in a small way. Trump and his campaign have seemed obsessed with this task as they try to avoid a repeat of 2020, and Scott could help them do it.

Arkansas Governor Sarah Huckabee Sanders

Trump’s former press secretary was on even the earliest iterations of his 2024 VP shortlist. She is in her first term as a state governor and has enacted plenty of MAGA-style legislation. She’s smart and spent two years working for Trump, which means that she’s familiar with handling the D.C. media and that Trump is probably pretty comfortable with her. Having a woman like Sanders on the ticket could help Trump pick up women voters, another demographic he’s struggled with. “She’s never going to have any agenda or not be the completely loyal type,” Murphy said. “And [she’s] less of a star, so no worry of [Trump] being diminished at all.”

North Dakota Governor Doug Burgum

Burgum has been governor for eight years and seems well liked. He’s personally wealthy, like Trump, but not famous. He’s ambitious, but not in a way that intimidates Trump. He ran for president this cycle too, remember? If you don’t, that’s probably a plus for Trump.

When you pick a vice president, you should “pick a slightly less impressive version of yourself,” Murphy told me—like when Bill Clinton picked Al Gore, another moderate, Protestant white man. “When you’re John McCain, [if] you pick a Sarah Palin, it’s just trouble,” he said. Could Burgum be that slightly less impressive version of Trump?

New York Representative Elise Stefanik

This 39-year-old House Republican has been openly auditioning for the VP slot for years now. She’s a gifted fundraiser and easily the most powerful Republican in New York. She has establishment bona fides—Harvard, the George W. Bush White House, aide to Paul Ryan—but has devoted herself entirely to Trump’s defense and the MAGA cause. She’s a competent woman who could help Trump appeal to other educated women. The problem, of course, is that he may not find her particularly authentic. “She’d poison her mother to get two points on Election Day,” Murphy said. “And I think he would smell that.”

Ohio Senator J. D. Vance

The Hillbilly Elegy author and former venture capitalist seems to share Trump’s populist sensibilities. Vance was once a Trump critic but changed his tune when he ran for the Senate. He’s ambitious in a way that Trump might read as disingenuous—probably because it is. “If I were Trump, I’d be troubled by the fact that J. D. Vance was calling [Republican strategists] to ask about running as an anti-Trump Republican when he first looked at running statewide in Ohio,” Murphy said. Then again, he said, “Vance is a clever-enough chameleon to be able to suck up to Trump with skill.”

Former Department of Housing and Urban Development Secretary Ben Carson

Carson, a former neurosurgeon, ran for president against Trump back in 2016. He worked in the administration for a while, heading up HUD. We haven’t heard much from him since then, but he does seem to hang out in Trump’s circles, and has been spotted at Mar-a-Lago on more than one occasion.

Carson could, in theory, help Trump appeal to Black voters. But he doesn’t have quite the political credentials that Scott does. “I was meeting a friend for drinks back in February, and he said he knows for a fact that it’s going to be Ben Carson,” Heye told me. “I’m like, ‘Okay, well, one, it’s February. Two, why Ben Carson?’”

Florida Senator Marco Rubio

Rubio is young and telegenic, with two terms in the Senate (plus a failed presidential campaign) under his belt. The son of Cuban immigrants, he could theoretically help Trump appeal to Latino voters. The problem is, Rubio would have to resign from the Senate. He’d also have to change his residence, because the Constitution bars electors from voting for a president and a vice president from the same state. Trump picking Rubio is “completely far-fetched—with the caveat that when you’re dealing with Donald Trump, far-fetched things happen,” Heye said.

Kari Lake

The Arizona TV anchor turned Stop the Steal devotee would clearly love to serve as Trump’s vice president. (See her here, vacuuming a red carpet for the former president.) But Lake has never actually won a race, and Trump, as we all know, prefers a winner.

South Dakota Governor Kristi Noem

She’s still on the list, because in Trumpworld anything is possible. But shooting a dog in a gravel pit? It’s about the worst thing you can do for your political career.

Related:

Did Kristi Noem just doom her career?
Elise Stefanik’s Trump audition

Today’s News

The Justice Department announced that Texas Representative Henry Cuellar and his wife, Imelda, have been indicted on bribery and money-laundering charges. In a statement, Cuellar said that he and his wife are innocent of the charges.
The former White House official Hope Hicks, who once was one of Donald Trump’s closest advisers, testified at Trump’s hush-money criminal trial.
Canadian police arrested three people tied to last year’s killing of a prominent Sikh separatist in British Columbia, and are continuing to investigate allegations that the individuals were hired by the Indian government.

Dispatches

The Books Briefing: Poetry is an act of hope, Maya Chung writes. It can help us come closest to capturing events that exist beyond our capacity to describe them.
Atlantic Intelligence: New consumer gadgets are coming out, and their entire selling point revolves around artificial intelligence, Damon Beres writes. The broken-gadget era is upon us.

Explore all of our newsletters here.

Evening Read

Illustration by Matteo Giuseppe Pani. Source: Getty.

Racehorses Have No Idea What’s Going On

By Haley Weiss

This weekend, more than 150,000 pastel-wrapped spectators and bettors will descend upon Louisville’s Churchill Downs complex to watch one of America’s greatest competitive spectacles. The 150th running of the Kentucky Derby, headlined by animals whose names (Resilience, Stronghold, Catching Freedom) sound more like Taylor Swift bonus tracks than living creatures, is expected to bring more revenue to the city and venue than ever, with resale tickets reportedly at record highs. If you count TV spectators, nearly 16 million people are expected to tune in to an event that awards major titles to athletes who may not know they’ve won and cannot be interviewed.

Read the full article.

More From The Atlantic

Medieval pets had one of humanity’s most cursed diseases.
When writers silence writers
What is Wagner doing in Africa?
Marijuana’s health effects are about to get a whole lot clearer.

Culture Break

Michael Buckner / Deadline via Contour RA by Getty

Watch. I Saw the TV Glow (out now in theaters), the unsettling new film directed by Jane Schoenbrun. They’ve got some ideas about how to make a genuinely weird mainstream movie.

Read. “Noon,” a poem by Li-Young Lee:

“The tall curtains billow / with presences coming and going, impossible / to confirm.”

Play our daily crossword.

P.S.

As a 30-year-old city dweller with a dog and no kids, I’ve been spending a lot of time thinking about the role of friendship in my life. Making friends feels harder when you’re an adult—your days are suddenly so full of commitments, and interesting new people aren’t standing right in front of you at recess. Worse, at least in a place like D.C., where I live, friends tend to come and go with the seasons: They get new jobs, leave for grad school, have babies. I’m curious to hear from readers who’ve figured it out: What’s your best advice for making new friends as an adult? And what are your tips for keeping in touch with the old ones, as you all move along in life?

— Elaine

Stephanie Bai contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.

ElevenLabs Is Building an Army of Voice Clones

The Atlantic

www.theatlantic.com/technology/archive/2024/05/elevenlabs-ai-voice-cloning-deepfakes/678288

Updated at 3:05 p.m. ET on May 4, 2024

My voice was ready. I’d been waiting, compulsively checking my inbox. I opened the email and scrolled until I saw a button that said, plainly, “Use voice.” I considered saying something aloud to mark the occasion, but that felt wrong. The computer would now speak for me.

I had thought it’d be fun, and uncanny, to clone my voice. I’d sought out the AI start-up ElevenLabs, paid $22 for a “creator” account, and uploaded some recordings of myself. A few hours later, I typed some words into a text box, hit “Enter,” and there I was: all the nasal lilts, hesitations, pauses, and mid-Atlantic-by-way-of-Ohio vowels that make my voice mine.

It was me, only more pompous. My voice clone speaks with the cadence of a pundit, no matter the subject. I type I like to eat pickles, and the voice spits it out as if I’m on Meet the Press. That’s not my voice’s fault; it is trained on just a few hours of me speaking into a microphone for various podcast appearances. The model likes to insert ums and ahs: In the recordings I gave it, I’m thinking through answers in real time and choosing my words carefully. It’s uncanny, yes, but also quite convincing—a part of my essence that’s been stripped, decoded, and reassembled by a little algorithmic model so as to no longer need my pesky brain and body.

Listen to the author's AI voice:

Using ElevenLabs, you can clone your voice like I did, or type in some words and hear them spoken by “Freya,” “Giovanni,” “Domi,” or hundreds of other fake voices, each with a different accent or intonation. Or you can dub a clip into any one of 29 languages while preserving the speaker’s voice. In each case, the technology is unnervingly good. The voice bots don’t just sound far more human than voice assistants such as Siri; they also sound better than any other widely available AI audio software right now. What’s different about the best ElevenLabs voices, trained on far more audio than what I fed into the machine, isn’t so much the quality of the voice but the way the software uses context clues to modulate delivery. If you feed it a news report, it speaks in a serious, declarative tone. Paste in a few paragraphs of Hamlet, and an ElevenLabs voice reads it with a dramatic storybook flair.

Listen to ElevenLabs read Hamlet:

ElevenLabs launched an early version of its product a little over a year ago, but you might have listened to one of its voices without even knowing it. Nike used the software to create a clone of the NBA star Luka Dončić’s voice for a recent shoe campaign. New York City Mayor Eric Adams’s office cloned the politician’s voice so that it could deliver robocall messages in Spanish, Yiddish, Mandarin, Cantonese, and Haitian Creole. The technology has been used to re-create the voices of children killed in the Parkland school shooting, to lobby for gun reform. An ElevenLabs voice might be reading this article to you: The Atlantic uses the software to auto-generate audio versions of some stories, as does The Washington Post.

It’s easy, when you play around with the ElevenLabs software, to envision a world in which you can listen to all the text on the internet in voices as rich as those in any audiobook. But it’s just as easy to imagine the potential carnage: scammers targeting parents by using their children’s voice to ask for money, a nefarious October surprise from a dirty political trickster. I tested the tool to see how convincingly it could replicate my voice saying outrageous things. Soon, I had high-quality audio of my voice clone urging people not to vote, blaming “the globalists” for COVID, and confessing to all kinds of journalistic malpractice. It was enough to make me check with my bank to make sure any potential voice-authentication features were disabled.

I went to visit the ElevenLabs office and meet the people responsible for bringing this technology into the world. I wanted to better understand the AI revolution as it’s currently unfolding. But the more time I spent—with the company and the product—the less I found myself in the present. Perhaps more than any other AI company, ElevenLabs offers a window into the near future of this disruptive technology. The threat of deepfakes is real, but what ElevenLabs heralds may be far weirder. And nobody, not even its creators, seems ready for it.

In mid-November, I buzzed into a brick building on a side street and walked up to the second floor. The London office of ElevenLabs—a $1 billion company—is a single room with a few tables. No ping-pong or beanbag chairs—just a sad mini fridge and the din of dutiful typing from seven employees packed shoulder to shoulder. (Much of the company’s staff is remote, scattered around the world.) Mati Staniszewski, ElevenLabs’ 29-year-old CEO, got up from his seat in the corner to greet me. He beckoned for me to follow him back down the stairs to a windowless conference room ElevenLabs shares with a company that, I presume, is not worth $1 billion.

Staniszewski is tall, with a well-coiffed head of blond hair, and he speaks quickly in a Polish accent. Talking with him sometimes feels like trying to engage in conversation with an earnest chatbot trained on press releases. I started our conversation with a few broad questions: What is it like to work on AI during this moment of breathless hype, investor interest, and genuine technological progress? What’s it like to come in each day and try to manipulate such nascent technology? He said that it’s exciting.

We moved on to Staniszewski’s background. He and the company’s co-founder, Piotr Dabkowski, grew up together in Poland watching foreign movies that were all clumsily dubbed into a flat Polish voice. Man, woman, child—whoever was speaking, all of the dialogue was voiced in the same droning, affectless tone by male actors known as lektors.

They both left Poland for university in the U.K. and then settled into tech jobs (Staniszewski at Palantir and Dabkowski at Google). Then, in 2021, Dabkowski was watching a film with his girlfriend and realized that Polish films were still dubbed in the same monotone lektor style. He and Staniszewski did some research and discovered that markets outside Poland were also relying on lektor-esque dubbing.

Mati Staniszewski’s story as CEO of ElevenLabs begins in Poland, where he grew up watching foreign films clumsily dubbed into a flat voice. (Daniel Stier for The Atlantic)

The next year, they founded ElevenLabs. AI voices were everywhere—think Alexa, or a car’s GPS—but actually good AI voices, they thought, would finally put an end to lektors. The tech giants have hundreds or thousands of employees working on AI, yet ElevenLabs, with a research team of just seven people, built a voice tool that’s arguably better than anything its competitors have released. The company poached researchers from top AI companies, yes, but it also hired a college dropout who’d won coding competitions, and another “who worked in call centers while exploring audio research as a side gig,” Staniszewski told me. “The audio space is still in its breakthrough stage,” Alex Holt, the company’s vice president of engineering, told me. “Having more people doesn’t necessarily help. You need those few people that are incredible.”

ElevenLabs knew its model was special when it started spitting out audio that accurately represented the relationships between words, Staniszewski told me—pronunciation that changed based on the context (minute, the unit of time, instead of minute, the description of size) and emotion (an exclamatory phrase spoken with excitement or anger).

Much of what the model produces is unexpected—sometimes delightfully so. Early on, ElevenLabs’ model began randomly inserting applause breaks after pauses in its speech: It had been training on audio clips from people giving presentations in front of live audiences. Quickly, the model began to improve, becoming capable of ums and ahs. “We started seeing some of those human elements being replicated,” Staniszewski said. The big leap was when the model began to laugh like a person. (My voice clone, I should note, struggles to laugh, offering a machine-gun burst of “haha”s that sound jarringly inhuman.)

Compared with OpenAI and other major companies, which are trying to wrap their large language models around the entire world and ultimately build an artificial human intelligence, ElevenLabs has ambitions that are easier to grasp: a future in which ALS patients can still communicate in their voice after they lose their speech. Audiobooks that are ginned up in seconds by self-published authors, video games in which every character is capable of carrying on a dynamic conversation, movies and videos instantly dubbed into any language. A sort of Spotify of voices, where anyone can license clones of their voice for others to use—to the dismay of professional voice actors. The gig-ification of our vocal cords.

What Staniszewski also described when talking about ElevenLabs is a company that wants to eliminate language barriers entirely. The dubbing tool, he argued, is its first step toward that goal. A user can upload a video, and the model will translate the speaker’s voice into a different language. When we spoke, Staniszewski twice referred to the Babel fish from the science-fiction book The Hitchhiker’s Guide to the Galaxy—he described making a tool that immediately translates every sound around a person into a language they can understand.

Every ElevenLabs employee I spoke with perked up at the mention of this moonshot idea. Although ElevenLabs’ current product might be exciting, the people building it view current dubbing and voice cloning as a prelude to something much bigger. I struggled to separate the scope of Staniszewski’s ambition from the modesty of our surroundings: a shared conference room one floor beneath the company’s sparse office space. ElevenLabs may not achieve its lofty goals, but I was still left unmoored by the reality that such a small collection of people could build something so genuinely powerful and release it into the world, where the rest of us have to make sense of it.

ElevenLabs’ voice bots launched in beta in late January 2023. It took very little time for people to start abusing them. Trolls on 4chan used the tool to make deepfakes of celebrities saying awful things. They had Emma Watson reading Mein Kampf and the right-wing podcaster Ben Shapiro making racist comments about Representative Alexandria Ocasio-Cortez. In the tool’s first days, there appeared to be virtually no guardrails. “Crazy weekend,” the company tweeted, promising to crack down on misuse.

ElevenLabs added a verification process for cloning; when I uploaded recordings of my voice, I had to complete multiple voice CAPTCHAs, speaking phrases into my computer in a short window of time to confirm that the voice I was duplicating was my own. The company also decided to limit its voice cloning strictly to paid accounts and announced a tool that lets people upload audio to see if it is AI generated. But the safeguards from ElevenLabs were “half-assed,” Hany Farid, a deepfake expert at UC Berkeley, told me—an attempt to retroactively focus on safety only after the harm was done. And they left glaring holes. Over the past year, the deepfakes have not been rampant, but they also haven’t stopped.

I first started reporting on deepfakes in 2017, after a researcher came to me with a warning of a terrifying future where AI-generated audio and video would bring about an “infocalypse” of impersonation, spam, nonconsensual sexual imagery, and political chaos, where we would all fall into what he called “reality apathy.” Voice cloning already existed, but it was crude: I used an AI voice tool to try to fool my mom, and it worked only because I had the halting, robotic voice pretend I was losing cell service. Since then, fears of an infocalypse have lagged behind the technology’s ability to distort reality. But ElevenLabs has closed the gap.

The best deepfake I’ve seen was from the filmmaker Kenneth Lurt, who used ElevenLabs to clone Jill Biden’s voice for a fake advertisement where she’s made to look as if she’s criticizing her husband over his handling of the Israel-Gaza conflict. The footage, which deftly stitches video of the first lady giving a speech with an ElevenLabs voice-over, is incredibly convincing and has been viewed hundreds of thousands of times. The ElevenLabs technology on its own isn’t perfect. “It’s the creative filmmaking that actually makes it feel believable,” Lurt said in an interview in October, noting that it took him a week to make the clip.

“It will totally change how everyone interacts with the internet, and what is possible,” Nathan Lambert, a researcher at the Allen Institute for AI, told me in January. “It’s super easy to see how this will be used for nefarious purposes.” When I asked him if he was worried about the 2024 elections, he offered a warning: “People aren’t ready for how good this stuff is and what it could mean.” When I pressed him for hypothetical scenarios, he demurred, not wanting to give anyone ideas.

Daniel Stier for The Atlantic

A few days after Lambert and I spoke, his intuitions became reality. The Sunday before the New Hampshire presidential primary, a deepfaked, AI-generated robocall went out to registered Democrats in the state. “What a bunch of malarkey,” the robocall began. The voice was grainy, its cadence stilted, but it was still immediately recognizable as Joe Biden’s drawl. “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” it said, telling voters to stay home. In terms of political sabotage, this particular deepfake was relatively low stakes, with limited potential to disrupt electoral outcomes (Biden still won in a landslide). But it was a trial run for an election season that could be flooded with reality-blurring synthetic information.

Researchers and government officials scrambled to locate the origin of the call. Weeks later, a New Orleans–based magician confessed that he’d been paid by a Democratic operative to create the robocall. Using ElevenLabs, he claimed, it took him less than 20 minutes and cost $1.

Afterward, ElevenLabs introduced a “no go”–voices policy, preventing users from uploading or cloning the voice of certain celebrities and politicians. But this safeguard, too, had holes. In March, a reporter for 404 Media managed to bypass the system and clone both Donald Trump’s and Joe Biden’s voices simply by adding a minute of silence to the beginning of the upload file. Last month, I tried to clone Biden’s voice, with varying results. ElevenLabs didn’t catch my first attempt, for which I uploaded low-quality sound files from YouTube videos of the president speaking. But the cloned voice sounded nothing like the president’s—more like a hoarse teenager’s. On my second attempt, ElevenLabs blocked the upload, suggesting that I was about to violate the company’s terms of service.

For Farid, the UC Berkeley researcher, ElevenLabs’ inability to control how people might abuse its technology is proof that voice cloning causes more harm than good. “They were reckless in the way they deployed the technology,” Farid said, “and I think they could have done it much safer, but I think it would have been less effective for them.”

The core problem of ElevenLabs—and the generative-AI revolution writ large—is that there is no way for this technology to exist and not be misused. Meta and OpenAI have built synthetic voice tools, too, but have so far declined to make them broadly available. Their rationale: They aren’t yet sure how to unleash their products responsibly. As a start-up, though, ElevenLabs doesn’t have the luxury of time. “The time that we have to get ahead of the big players is short,” Staniszewski said, referring to the company’s research efforts. “If we don’t do it in the next two to three years, it’s going to be very hard to compete.” Despite the new safeguards, ElevenLabs’ name is probably going to show up in the news again as the election season wears on. There are simply too many motivated people constantly searching for ways to use these tools in strange, unexpected, even dangerous ways.

In the basement of a Sri Lankan restaurant on a soggy afternoon in London, I pressed Staniszewski about what I’d been obliquely referring to as “the bad stuff.” He didn’t avert his gaze as I rattled off the ways ElevenLabs’ technology could be and has been abused. When it was his time to speak, he did so thoughtfully, not dismissively; he appears to understand the risks of his products and other open-source AI tools. “It’s going to be a cat-and-mouse game,” he said. “We need to be quick.”

Later, over email, he cited the “no go”–voices initiative and told me that ElevenLabs is “testing new ways to counteract the creation of political content,” adding more human moderation and upgrading its detection software. The most important thing ElevenLabs is working on, Staniszewski said—what he called “the true solution”—is digitally watermarking synthetic voices at the point of creation so civilians can identify them. That will require cooperation across dozens of companies: ElevenLabs recently signed an accord with other AI companies, including Anthropic and OpenAI, to combat deepfakes in the upcoming elections, but so far, the partnership is mostly theoretical.

The uncomfortable reality is that there aren’t a lot of options to ensure bad actors don’t hijack these tools. “We need to brace the general public that the technology for this exists,” Staniszewski said. He’s right, yet my stomach sinks when I hear him say it. Mentioning media literacy, at a time when trolls on Telegram channels can flood social media with deepfakes, is a bit like showing up to an armed conflict in 2024 with only a musket.

The conversation went on like this for a half hour, followed by another session a few weeks later over the phone. A hard question, a genuine answer, my own palpable feeling of dissatisfaction. I can’t look at ElevenLabs and see beyond the risk: How can you build toward this future? Staniszewski seems unable to see beyond the opportunities: How can’t you build toward this future? I left our conversations with a distinct sense that the people behind ElevenLabs don’t want to watch the world burn. The question is whether, in an industry where everyone is racing to build AI tools with similar potential for harm, intentions matter at all.

To focus only on deepfakes elides how ElevenLabs and synthetic audio might reshape the internet in unpredictable ways. A few weeks before my visit, ElevenLabs held a hackathon, where programmers fused the company’s tech with hardware and other generative-AI tools. Staniszewski said that one team took an image-recognition AI model and connected it to both an Android device with a camera and ElevenLabs’ text-to-speech model. The result was a camera that could narrate what it was looking at. “If you’re a tourist, if you’re a blind person and want to see the world, you just find a camera,” Staniszewski said. “They deployed that in a weekend.”

Repeatedly during my visit, ElevenLabs employees described these types of hybrid projects—enough that I began to see them as a helpful way to imagine the next few years of technology. Products that all hook into one another herald a future that’s a lot less recognizable. More machines talking to machines; an internet that writes itself; an exhausting, boundless commingling of human art and human speech with AI art and AI speech until, perhaps, the provenance ceases to matter.

I came to London to try to wrap my mind around the AI revolution. By staring at one piece of it, I thought, I would get at least a sliver of certainty about what we’re barreling toward. Turns out, you can travel across the world, meet the people building the future, find them to be kind and introspective, ask them all of your questions, and still experience a profound sense of disorientation about this new technological frontier. Disorientation. That’s the main sense of this era—that something is looming just over the horizon, but you can’t see it. You can only feel the pit in your stomach. People build because they can. The rest of us are forced to adapt.

This article previously misquoted Staniszewski as calling his background an "investor story."

What Will Biden’s Stance on Israel Mean for His Campaign?

The Atlantic

www.theatlantic.com/national/archive/2024/05/biden-campaign-gaza-washington-week/678298

Editor’s Note: Washington Week With The Atlantic is a partnership between NewsHour Productions, WETA, and The Atlantic airing every Friday on PBS stations nationwide. Check your local listings or watch full episodes here.  

This week, President Joe Biden contended with the overlapping domestic and global challenges of the war in Gaza. At home, the president addressed the pro-Palestinian protests that have spread across college campuses. And abroad, the Biden administration continues to work toward a deal with Saudi Arabia that would allow for a bilateral defense agreement with the United States. Such plans, however, are contingent on how the conflict in Israel continues to unfold.

Meanwhile, both Biden and former President Donald Trump are grappling with how their approach to the war in Gaza will play out in their campaigns for the presidency. As Biden balances his stance on Israel with appeals to younger voters, Trump aims to keep his focus on student unrest in an attempt to fracture the Democratic coalition to his advantage, especially in swing states.

Joining the editor in chief of The Atlantic and moderator, Jeffrey Goldberg, to discuss this and more: Eric Cortellessa, a staff writer for Time; Franklin Foer, a staff writer for The Atlantic; Asma Khalid, a White House correspondent for NPR; and Nancy Youssef, a national security correspondent for The Wall Street Journal.

Watch the full episode here.

A Terse and Gripping Weekend Read

The Atlantic

www.theatlantic.com/newsletters/archive/2024/05/a-terse-and-gripping-weekend-read/678295

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Welcome back to The Daily’s Sunday culture edition, in which one Atlantic writer or editor reveals what’s keeping them entertained. Today’s special guest is Kevin Townsend, a senior producer on our podcast team. He currently works on the Radio Atlantic podcast and has helped produce Holy Week—about the week after Martin Luther King Jr.’s assassination—and the Peabody-winning Floodlines, which explores the devastation of Hurricane Katrina.

Kevin enjoys reading Philip Levine’s poems and visiting the National Gallery of Art, in Washington, D.C., where he can sit with Mark Rothko’s large-scale works. He’s also a Canadian-punk-music fan—Metz is one of his favorite bands—and a self-proclaimed Star Trek nerd who’s excited to binge the final season of Star Trek: Discovery.

First, here are three Sunday reads from The Atlantic:

Amanda Knox: “What if Jens Söring actually did it?”
How Daniel Radcliffe outran Harry Potter
The blindness of elites

The Culture Survey: Kevin Townsend

A quiet song that I love, and a loud song that I love: In college, I developed a steady rotation of quiet songs that didn’t distract me while I was studying. Artists such as Tycho and Washed Out were some of my favorites.

Recently, I’ve been into Floating Points, the moniker for Samuel Shepherd, a British electronic-music producer. I could recommend his Late Night Tales album or Elaenia, but the one that stands out most to me is his collaborative album, Promises, featuring the saxophonist Pharoah Sanders and the London Symphony Orchestra. It’s a gorgeous, layered work that’s best listened to all the way through—but if you’re pressed for time, “Movement 6” is an exceptional track.

As for a loud song, one of my favorite bands is the Canadian punk trio Metz. I’ve had “A Boat to Drown In” on heavy rotation for the past year. It doesn’t have the thrumming precision of their earlier singles such as “Headache” and “Wet Blanket,” but the song is a knockout every time. Metz just released a new record, Up on Gravity Hill, that I’m excited to get lost in.

The last museum or gallery show that I loved: “Mark Rothko: Paintings on Paper,” an exhibition at the National Gallery of Art, showcased some of the abstract painter’s lesser-known works. The show closed recently, but the museum’s permanent collection features a good number of his works, including some of his famous color-field paintings. The National Gallery is also home to many pieces from the collection of the now-closed Corcoran Gallery of Art, and they’re worth a visit—especially the Hudson River School paintings, which must be seen in person in all of their maximalist glory.

Best novel I’ve recently read, and the best work of nonfiction: A few months ago, on my honeymoon, I reread No Country for Old Men. It’s far from a romantic beach read, but few writers are as tersely gripping as Cormac McCarthy. The Coen brothers’ film adaptation is fantastic, but the novel—published in 2005, two years into the Iraq War—encompasses a wider story about generations of men at war. It’s worth reading even if you’ve seen the movie.

I also brought with me a book I’d long meant to read: Lulu Miller’s Why Fish Don’t Exist. Part science history, part memoir, the book is mostly a biography of David Starr Jordan, Stanford University’s first president and a taxonomist who catalogued thousands of species of fish. It’s a unique and remarkable read that I can’t recommend highly enough. Fundamentally, it’s about our need for order—in our personal world, and in the natural world around us.

Miller’s book reminds me of a recent Radio Atlantic episode that I produced, in which Atlantic staff writer Zoë Schlanger discusses her new book, The Light Eaters, about the underappreciated biological creativity of plants. Miller and Schlanger both examine and challenge the hierarchies we apply to the natural world—and why humanity can be better off questioning those ideas.

A poem, or line of poetry, that I return to: My favorite poet is Philip Levine. His work is spare and direct, alive with love for the unsung corners of America and the people who inhabit them. Levine lived in Detroit during the Depression and spent more than three decades teaching in Fresno. Having grown up in Pittsburgh and moved to California as a teenager, I connected easily with the world he saw.

“What Work Is” and “The Simple Truth” are two of his poems that I often return to, especially for the final lines, which feel like gut punches. [Related: An interview with Philip Levine (From 1999)]

Speaking of final-line gut punches, the poem (and line) that I think of most frequently is by another favorite poet of mine: the recently departed Louise Glück. “Nostos,” from her 1996 book, Meadowlands, touches on how essential yet fragile our memories are, and there’s a haunting sweetness to its last line: “We look at the world once, in childhood. / The rest is memory.”

The television show I’m most enjoying right now: It’s May, so, honestly: the NHL playoffs. (And it’s been a great year for hockey.) But when it comes to actual television, I’m excited to binge the fifth and final season of Star Trek: Discovery.

It’s bittersweet that the series is ending. Sonequa Martin-Green gives an Emmy-worthy lead performance, but for all of the show’s greatness, it can lean a bit too much into space opera, with the galaxy at stake every season and a character on the verge of tears every episode. Trek is usually at its best when it’s trying to be TV, not cinema. (And that’s including the films—Star Trek II: The Wrath of Khan succeeded by essentially serving up a movie-length episode.) [Related: A critic’s case against cinema]

Being a friend of DeSoto, I want to give another Trek-related recommendation: The Greatest Generation and Greatest Trek podcasts, which go episode by episode through the wider Trek Industrial Complex. The humor, analysis, and clever audio production elevate the shows above the quality of your typical rewatch podcast. I came to The Greatest Generation as an audio-production and comedy nerd, and it turned me into a Trek nerd as well. So be warned.

Something I recently rewatched, reread, or otherwise revisited: The Hunt for Red October. Somehow, it gets better with every watch. “Give me a ping, Vasili. One ping only, please.”

The Week Ahead

Kingdom of the Planet of the Apes, an action sci-fi movie about a young ape who must face a tyrannical new ape leader (in theaters Friday)
Dark Matter, a mystery series, based on the best-selling novel, about a man who is pulled into an alternate reality and must save his family from himself (premieres Wednesday on Apple TV+)
First Love, a collection of essays by Lilly Dancyger that portray women’s friendships as their great loves (out Tuesday)

Essay

Illustration by Ben Kothe / The Atlantic. Source: Courtesy of Elena Dudum.

I Am Building an Archive to Prove That Palestine Exists

By Elena Dudum

My father collects 100-year-old magazines about Palestine—Life, National Geographic, even The Illustrated London News, the world’s first graphic weekly news magazine. For years, he would talk about these mysterious documents but rarely show them to anyone. “I have proof,” he would say, “that Palestine exists.”

His father, my paternal grandfather, whom I called Siddi, had a similar compulsion to prove his heritage, though it manifested differently. Siddi used to randomly recite his family tree to my father when he was a child. As if answering a question that had not been asked, he would recount those who came before him …

Although my American-born father didn’t inherit Siddi’s habit of reciting his family tree, he did recite facts; he lectured me about Palestine ad nauseam in my youth, although he had not yet visited. Similar to his father’s, these speeches were unprompted. “Your Siddi only had one business partner his entire life,” he would say for the hundredth time. “And that business partner was a rabbi. Palestinians are getting pitted against the Jews because it’s convenient, but it’s not the truth.”

Read the full article.

More in Culture

How do you make a genuinely weird mainstream movie?
The godfather of American comedy
The sci-fi writer who invented conspiracy theory
Hacks goes for the jugular.
“What I wish someone had told me 30 years ago”
Will Americans ever get sick of cheap junk?
The complicated ethics of rare-book collecting
The diminishing returns of having good taste
When poetry could define a life

Catch Up on The Atlantic

What’s left to restrain Donald Trump?
Democrats defang the House’s far right.
America’s colleges are reaping what they sowed, Tyler Austin Harper argues.

Photo Album

Shed hunters unpack their haul on the opening day of the Wyoming shed-hunt season. (Natalie Behring / Getty)

Take a look at these images of devastating floods across Kenya, a pagan fire festival in Scotland, antler gathering in Wyoming, and more.

Explore all of our newsletters.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.

The New Propaganda War

The Atlantic

www.theatlantic.com/magazine/archive/2024/06/china-russia-republican-party-relations/678271

Illustrations by Tyler Comrie

This article was featured in the One Story to Read Today newsletter. Sign up for it here.

On June 4, 1989, the Polish Communist Party held partially free elections, setting in motion a series of events that ultimately removed the Communists from power. Not long afterward, street protests calling for free speech, due process, accountability, and democracy brought about the end of the Communist regimes in East Germany, Czechoslovakia, and Romania. Within a few years, the Soviet Union itself would no longer exist.

Also on June 4, 1989, the Chinese Communist Party ordered the military to remove thousands of students from Tiananmen Square. The students were calling for free speech, due process, accountability, and democracy. Soldiers arrested and killed demonstrators in Beijing and around the country. Later, they systematically tracked down the leaders of the protest movement and forced them to confess and recant. Some spent years in jail. Others managed to elude their pursuers and flee the country forever.

In the aftermath of these events, the Chinese concluded that the physical elimination of dissenters was insufficient. To prevent the democratic wave then sweeping across Central Europe from reaching East Asia, the Chinese Communist Party eventually set out to eliminate not just the people but the ideas that had motivated the protests. In the years to come, this would require policing what the Chinese people could see online.

Nobody believed that this would work. In 2000, President Bill Clinton told an audience at the Johns Hopkins School of Advanced International Studies that it was impossible. “In the knowledge economy,” he said, “economic innovation and political empowerment, whether anyone likes it or not, will inevitably go hand in hand.” The transcript records the audience reactions:

“Now, there’s no question China has been trying to crack down on the internet.” (Chuckles.) “Good luck!” (Laughter.) “That’s sort of like trying to nail Jell-O to the wall.” (Laughter.)

While we were still rhapsodizing about the many ways in which the internet could spread democracy, the Chinese were designing what’s become known as the Great Firewall of China. That method of internet management—which is in effect conversation management—contains many different elements, beginning with an elaborate system of blocks and filters that prevent internet users from seeing particular words and phrases. Among them, famously, are Tiananmen, 1989, and June 4, but there are many more. In 2000, a directive called “Measures for Managing Internet Information Services” prohibited an extraordinarily wide range of content, including anything that “endangers national security, divulges state secrets, subverts the government, undermines national unification,” and “is detrimental to the honor and interests of the state”—anything, in other words, that the authorities didn’t like.

[From the May 2022 issue: There is no liberal world order]

The Chinese regime also combined online tracking methods with other tools of repression, including security cameras, police inspections, and arrests. In Xinjiang province, where China’s Uyghur Muslim population is concentrated, the state has forced people to install “nanny apps” that can scan phones for forbidden phrases and pick up unusual behavior: Anyone who downloads a virtual private network, anyone who stays offline altogether, and anyone whose home uses too much electricity (which could be evidence of a secret houseguest) can arouse suspicion. Voice-recognition technology and even DNA swabs are used to monitor where Uyghurs walk, drive, and shop. With every new breakthrough, with every AI advance, China has gotten closer to its holy grail: a system that can eliminate not just the words democracy and Tiananmen from the internet, but the thinking that leads people to become democracy activists or attend public protests in real life.

But along the way, the Chinese regime discovered a deeper problem: Surveillance, regardless of sophistication, provides no guarantees. During the coronavirus pandemic, the Chinese government imposed controls more severe than most of its citizens had ever experienced. Millions of people were locked into their homes. Untold numbers entered government quarantine camps. Yet the lockdown also produced the angriest and most energetic Chinese protests in many years. Young people who had never attended a demonstration and had no memory of Tiananmen gathered in the streets of Beijing and Shanghai in the autumn of 2022 to talk about freedom. In Xinjiang, where lockdowns were the longest and harshest, and where repression is most complete, people came out in public and sang the Chinese national anthem, emphasizing one line: “Rise up, those who refuse to be slaves!” Clips of their performance circulated widely, presumably because the spyware and filters didn’t identify the national anthem as dissent.

Even in a state where surveillance is almost total, the experience of tyranny and injustice can radicalize people. Anger at arbitrary power will always lead someone to start thinking about another system, a better way to run society. The strength of these demonstrations, and the broader anger they reflected, was enough to spook the Chinese Communist Party into lifting the quarantine and allowing the virus to spread. The deaths that resulted were preferable to public anger and protest.

Like the demonstrations against President Vladimir Putin in Russia that began in 2011, the 2014 street protests in Venezuela, and the 2019 Hong Kong protests, the 2022 protests in China help explain something else: why autocratic regimes have slowly turned their repressive mechanisms outward, into the democratic world. If people are naturally drawn to the image of human rights, to the language of democracy, to the dream of freedom, then those concepts have to be poisoned. That requires more than surveillance, more than close observation of the population, more than a political system that defends against liberal ideas. It also requires an offensive plan: a narrative that damages both the idea of democracy everywhere in the world and the tools to deliver it.

On February 24, 2022, as Russia launched its invasion of Ukraine, fantastical tales of biological warfare began surging across the internet. Russian officials solemnly declared that secret U.S.-funded biolabs in Ukraine had been conducting experiments with bat viruses and claimed that U.S. officials had confessed to manipulating “dangerous pathogens.” The story was unfounded, not to say ridiculous, and was repeatedly debunked.

Nevertheless, an American Twitter account with links to the QAnon conspiracy network—@WarClandestine—began tweeting about the nonexistent biolabs, racking up thousands of retweets and views. The hashtag #biolab started trending on Twitter and reached more than 9 million views. Even after the account—later revealed to belong to a veteran of the Army National Guard—was suspended, people continued to post screenshots. A version of the story appeared on the Infowars website created by Alex Jones, best known for promoting conspiracy theories about the shooting at Sandy Hook Elementary School and harassing families of the victims. Tucker Carlson, then still hosting a show on Fox News, played clips of a Russian general and a Chinese spokesperson repeating the biolab fantasy and demanded that the Biden administration “stop lying and [tell] us what’s going on here.”

Chinese state media also leaned hard into the story. A foreign-ministry spokesperson declared that the U.S. controlled 26 biolabs in Ukraine: “Russia has found during its military operations that the U.S. uses these facilities to conduct bio-military plans.” Xinhua, a Chinese state news agency, ran multiple headlines: “U.S.-Led Biolabs Pose Potential Threats to People of Ukraine and Beyond,” “Russia Urges U.S. to Explain Purpose of Biological Labs in Ukraine,” and so on. U.S. diplomats publicly refuted these fabrications. Nevertheless, the Chinese continued to spread them. So did the scores of Asian, African, and Latin American media outlets that have content-sharing agreements with Chinese state media. So did Telesur, the Venezuelan network; Press TV, the Iranian network; and Russia Today, in Spanish and Arabic, as well as on many Russia Today–linked websites around the world.

This joint propaganda effort worked. Globally, it helped undermine the U.S.-led effort to create solidarity with Ukraine and enforce sanctions against Russia. Inside the U.S., it helped undermine the Biden administration’s effort to consolidate American public opinion in support of providing aid to Ukraine. According to one poll, a quarter of Americans believed the biolabs conspiracy theory to be true. After the invasion, Russia and China—with, again, help from Venezuela, Iran, and far-right Europeans and Americans—successfully created an international echo chamber. Anyone inside this echo chamber heard the biolab conspiracy theory many times, from different sources, each one repeating and building on the others to create the impression of veracity. They also heard false descriptions of Ukrainians as Nazis, along with claims that Ukraine is a puppet state run by the CIA, and that NATO started the war.

Outside this echo chamber, few even know it exists. At a dinner in Munich in February 2023, I found myself seated across from a European diplomat who had just returned from Africa. He had met with some students there and had been shocked to discover how little they knew about the war in Ukraine, and how much of what they did know was wrong. They had repeated the Russian claims that the Ukrainians are Nazis, blamed NATO for the invasion, and generally used the same kind of language that can be heard every night on the Russian evening news. The diplomat was mystified. He grasped for explanations: Maybe the legacy of colonialism explained the spread of these conspiracy theories, or Western neglect of the global South, or the long shadow of the Cold War.


But the story of how Africans—as well as Latin Americans, Asians, and indeed many Europeans and Americans—have come to spout Russian propaganda about Ukraine is not primarily a story of European colonial history, Western policy, or the Cold War. Rather, it involves China’s systematic efforts to buy or influence both popular and elite audiences around the world; carefully curated Russian propaganda campaigns, some open, some clandestine, some amplified by the American and European far right; and other autocracies using their own networks to promote the same language.

To be fair to the European diplomat, the convergence of what had been disparate authoritarian influence projects is still new. Russian information-laundering and Chinese propaganda have long had different goals. Chinese propagandists mostly stayed out of the democratic world’s politics, except to promote Chinese achievements, Chinese economic success, and Chinese narratives about Tibet or Hong Kong. Their efforts in Africa and Latin America tended to feature dull, unwatchable announcements of investments and state visits. Russian efforts were more aggressive—sometimes in conjunction with the far right or the far left in the democratic world—and aimed to distort debates and elections in the United States, the United Kingdom, Germany, France, and elsewhere. Still, they often seemed unfocused, as if computer hackers were throwing spaghetti at the wall, just to see which crazy story might stick. Venezuela and Iran were fringe players, not real sources of influence.

Slowly, though, these autocracies have come together, not around particular stories, but around a set of ideas, or rather in opposition to a set of ideas. Transparency, for example. And rule of law. And democracy. They have heard language about those ideas—which originate in the democratic world—coming from their own dissidents, and have concluded that they are dangerous to their regimes. Their own rhetoric makes this clear. In 2013, as Chinese President Xi Jinping was beginning his rise to power, an internal Chinese memo, known enigmatically as Document No. 9—or, more formally, as the Communiqué on the Current State of the Ideological Sphere—listed “seven perils” faced by the Chinese Communist Party. “Western constitutional democracy” led the list, followed by “universal human rights,” “media independence,” “judicial independence,” and “civic participation.” The document concluded that “Western forces hostile to China,” together with dissidents inside the country, “are still constantly infiltrating the ideological sphere,” and instructed party leaders to push back against these ideas wherever they found them, especially online, inside China and around the world.

[From the December 2021 issue: The bad guys are winning]

Since at least 2004, the Russians have been focused on the same convergence of internal and external ideological threats. That was the year Ukrainians staged a popular revolt, known as the Orange Revolution—the name came from the orange T-shirts and flags of the protesters—against a clumsy attempt to steal a presidential election. The angry intervention of the Ukrainian public into what was meant to have been a carefully orchestrated victory for Viktor Yanukovych, a pro-Russian candidate directly supported by Putin himself, profoundly unnerved the Russians. This was especially the case because a similarly unruly protest movement in Georgia had brought a pro-European politician, Mikheil Saakashvili, to power the year before.

Shaken by those two events, Putin put the bogeyman of “color revolution” at the center of Russian propaganda. Civic protest movements are now always described as color revolutions in Russia, and as the work of outsiders. Popular opposition leaders are always said to be puppets of foreign governments. Anti-corruption and prodemocracy slogans are linked to chaos and instability wherever they are used, whether in Tunisia, Syria, or the United States. In 2011, a year of mass protest against a manipulated election in Russia itself, Putin bitterly described the Orange Revolution as a “well-tested scheme for destabilizing society,” and he accused the Russian opposition of “transferring this practice to Russian soil,” where he feared a similar popular uprising intended to remove him from power.

Putin was wrong—no “scheme” had been “transferred.” Public discontent in Russia simply had no way to express itself except through street protest, and Putin’s opponents had no legal means to remove him from power. Like so many other people around the world, they talked about democracy and human rights because they recognized that these concepts represented their best hope for achieving justice, and freedom from autocratic power. The protests that led to democratic transitions in the Philippines, Taiwan, South Africa, South Korea, and Mexico; the “people’s revolutions” that washed across Central and Eastern Europe in 1989; the Arab Spring in 2011; and, yes, the color revolutions in Ukraine and Georgia—all were begun by those who had suffered injustice at the hands of the state, and who seized on the language of freedom and democracy to propose an alternative.

This is the core problem for autocracies: The Russians, the Chinese, the Iranians, and others all know that the language of transparency, accountability, justice, and democracy appeals to some of their citizens, as it does to many people who live in dictatorships. Even the most sophisticated surveillance can’t wholly suppress it. The very ideas of democracy and freedom must be discredited—especially in the places where they have historically flourished.

In the 20th century, Communist Party propaganda was overwhelming and inspiring, or at least it was meant to be. The future it portrayed was shiny and idealized, a vision of clean factories, abundant produce, and healthy tractor drivers with large muscles and square jaws. The architecture was designed to overpower, the music to intimidate, the public spectacles to awe. In theory, citizens were meant to feel enthusiasm, inspiration, and hope. In practice, this kind of propaganda backfired, because people could compare what they saw on posters and in movies with a far more impoverished reality.

A few autocracies still portray themselves to their citizens as model states. The North Koreans continue to hold colossal military parades with elaborate gymnastics displays and huge portraits of their leader, very much in the Stalinist style. But most modern authoritarians have learned from the mistakes of the previous century. Freedom House, a nonprofit that advocates for democracy around the world, lists 56 countries as “not free.” Most don’t offer their fellow citizens a vision of utopia, and don’t inspire them to build a better world. Instead, they teach people to be cynical and passive, apathetic and afraid, because there is no better world to build. Their goal is to persuade their own people to stay out of politics, and above all to convince them that there is no democratic alternative: Our state may be corrupt, but everyone else is corrupt too. You may not like our leader, but the others are worse. You may not like our society, but at least we are strong. The democratic world is weak, degenerate, divided, dying.

Instead of portraying China as the perfect society, modern Chinese propaganda seeks to inculcate nationalist pride, based on China’s real experience of economic development, and to promote a Beijing model of progress through dictatorship and “order” that’s superior to the chaos and violence of democracy. Chinese media mocked the laxity of the American response to the pandemic with an animated film that ended with the Statue of Liberty on an intravenous drip. China’s Global Times wrote that Chinese people were mocking the January 6 insurrection as “karma” and “retribution”: “Seeing such scenarios,” the publication’s then-editor wrote in an op-ed, “many Chinese will naturally recall that Nancy Pelosi once praised the violence of Hong Kong protesters as ‘a beautiful sight to behold.’ ” (Pelosi, of course, had praised peaceful demonstrators, not violence.) The Chinese are told that these forces of chaos are out to disrupt their own lives, and they are encouraged to fight against them in a “people’s war” against foreign influence.

[Read: I watched Russian TV so you don’t have to]

Russians, although they hear very little about what happens in their own towns and cities, receive similar messages about the decline of places they don’t know and have mostly never visited: America, France, Britain, Sweden, Poland—countries apparently filled with degeneracy, hypocrisy, and Russophobia. A study of Russian television from 2014 to 2017 found that negative news about Europe appeared on the three main Russian channels, all state-controlled, an average of 18 times a day. Some of the stories were obviously invented (European governments are stealing children from straight families and giving them to gay couples!), but even the true ones were cherry-picked to support the idea that daily life in Europe is frightening and chaotic, that Europeans are weak and immoral, and that the European Union is aggressive and interventionist. If anything, the portrayal of America has been more dramatic. Putin himself has displayed a surprisingly intimate acquaintance with American culture wars about transgender rights, and mockingly sympathized with people who he says have been “canceled.”

The goal is clear: to prevent Russians from identifying with Europe the way they once did, and to build alliances between Putin’s domestic audience and his supporters in Europe and North America, where some naive conservatives (or perhaps cynical, well-paid conservatives) seek to convince their followers that Russia is a “white Christian state.” In reality, Russia has very low church attendance, legal abortion, and a multiethnic population containing millions of Muslim citizens and migrants. The autonomous region of Chechnya, which is part of the Russian Federation, is governed, in practice, by elements of Sharia law. The Russian state harasses and represses many forms of religion outside the state-sanctioned Russian Orthodox Church, including evangelical Protestantism. Nevertheless, among the slogans shouted by white nationalists marching in the infamous Charlottesville, Virginia, demonstration in 2017 was “Russia is our friend.” Putin sends periodic messages to this constituency: “I uphold the traditional approach that a woman is a woman, a man is a man, a mother is a mother, and a father is a father,” he told a press conference in December 2021, almost as if this “traditional approach” would be justification for invading Ukraine.

[Michael Carpenter: Russia is co-opting angry young men]

This manipulation of the strong emotions around gay rights and feminism has been widely copied throughout the autocratic world, often as a means of defending against criticism of the regime. Yoweri Museveni, who has been the president of Uganda for more than three decades, signed an “anti-homosexuality” bill into law in 2014, instituting a life sentence for gay people who have sex or marry and criminalizing the “promotion” of a homosexual lifestyle. By picking a fight over gay rights, he was able to consolidate his supporters at home while neutralizing foreign criticisms of his regime, describing them as “social imperialism”: “Outsiders cannot dictate to us; this is our country,” he declared. Viktor Orbán, the prime minister of Hungary, also ducks discussion of Hungarian corruption by hiding behind a culture war. He pretends that ongoing tension between his government and the U.S. ambassador to Hungary concerns religion and gender: During Tucker Carlson’s recent visit to Hungary, Carlson declared that the Biden administration “hates” Hungary because “it’s a Christian country,” when in fact it is Orbán’s deep financial and political ties to Russia and China that have badly damaged American-Hungarian relations.

The new authoritarians also have a different attitude toward reality. When Soviet leaders lied, they tried to make their falsehoods seem real. They became angry when anyone accused them of lying. But in Putin’s Russia, Bashar al-Assad’s Syria, and Nicolás Maduro’s Venezuela, politicians and television personalities play a different game. They lie constantly, blatantly, obviously. But they don’t bother to offer counterarguments when their lies are exposed. After Russian-controlled forces shot down Malaysia Airlines Flight MH17 over Ukraine in 2014, the Russian government reacted not only with a denial, but with multiple stories, plausible and implausible: It blamed the Ukrainian army, and the CIA, and a nefarious plot in which dead people were placed on a plane in order to fake a crash and discredit Russia. This tactic—the so-called fire hose of falsehoods—ultimately produces not outrage but nihilism. Given so many explanations, how can you know what actually happened? What if you just can’t know? If you don’t know what happened, you’re not likely to join a great movement for democracy, or to listen when anyone speaks about positive political change. Instead, you are not going to participate in any politics at all.

[Anne Applebaum: The American face of authoritarian propaganda]

Fear, cynicism, nihilism, and apathy, coupled with disgust and disdain for democracy: This is the formula that modern autocrats, with some variations, sell to their citizens and to foreigners, all with the aim of destroying what they call “American hegemony.” In service of this idea, Russia, a colonial power, paints itself as a leader of the non-Western civilizations in what the analyst Ivan Klyszcz calls their struggle for “messianic multipolarity,” a battle against “the West’s imposition of ‘decadent,’ ‘globalist’ values.” In September 2022, when Putin held a ceremony to mark his illegal annexation of southern and eastern Ukraine, he claimed that he was protecting Russia from the “satanic” West and “perversions that lead to degradation and extinction.” He did not speak of the people he had tortured or the Ukrainian children he had kidnapped. A year later, Putin told a gathering in Sochi: “We are now fighting not just for Russia’s freedom but for the freedom of the whole world. We can frankly say that the dictatorship of one hegemon is becoming decrepit. We see it, and everyone sees it now. It is getting out of control and is simply dangerous for others.” The language of “hegemony” and “multipolarity” is now part of Chinese, Iranian, and Venezuelan narratives too.

In truth, Russia is a genuine danger to its neighbors, which is why most of them are re-arming and preparing to fight against a new colonial occupation. The irony is even greater in African countries like Mali, where Russian mercenaries from the Wagner Group have helped keep a military dictatorship in power, reportedly by conducting summary executions, committing atrocities against civilians, and looting property. In Mali, as in Ukraine, the battle against Western decadence means that white Russian thugs brutally terrorize people with impunity.

And yet Mali Actu, a pro-Russian website in Mali, solemnly explains to its readers that “in a world that is more and more multipolar, Africa will play a more and more important role.” Mali Actu is not alone; it’s just a small part of a propaganda network, created by the autocracies, that is now visible all over the world.

The infrastructure of antidemocratic propaganda takes many forms, some overt and some covert, some aimed at the public and some aimed at elites. The United Front, the fulcrum of the Chinese Communist Party’s most important influence strategy, seeks to shape perceptions of China around the world by creating educational and exchange programs, controlling Chinese exile communities, building Chinese chambers of commerce, and courting anyone willing to be a de facto spokesperson for China. The Confucius Institutes are probably the best-known elite Chinese influence project. Originally perceived as benign cultural bodies not unlike the Goethe-Institut, run by the German government, and the Alliance Française, they were welcomed by many universities because they provided cheap or even free Chinese-language classes and professors. Over time, the institutes aroused suspicion, policing Chinese students at American universities by restricting open discussions of Tibet and Taiwan, and in some cases altering the teaching of Chinese history and politics to suit Chinese narratives. They have now been mostly disbanded in the United States. But they are flourishing in many other places, including Africa, where there are several dozen.

These subtler operations are augmented by China’s enormous investment in international media. The Xinhua wire service, the China Global Television Network, China Radio International, and China Daily all receive significant state financing, have social-media accounts in multiple languages and regions, and sell, share, or otherwise promote their content. These Chinese outlets cover the entire world, and provide feeds of slickly produced news and video segments to their partners at low prices, sometimes for free, which makes them more than competitive with reputable Western newswires, such as Reuters and the Associated Press. Scores of news organizations in Europe and Asia use Chinese content, as do many in Africa, from Kenya and Nigeria to Egypt and Zambia. Chinese media maintain a regional hub in Nairobi, where they hire prominent local journalists and produce content in African languages. Building this media empire has been estimated to cost billions of dollars a year.


For the moment, viewership of many of these Chinese-owned channels remains low; their output can be predictable, even boring. But more popular forms of Chinese television are gradually becoming available. StarTimes, a satellite-television company that is tightly linked to the Chinese government, launched in Africa in 2008 and now has 13 million television subscribers in more than 30 African countries. StarTimes is cheap for consumers, costing just a few dollars a month. It prioritizes Chinese content—not just news but kung-fu movies, soap operas, and Chinese Super League football, with the dialogue and commentary all translated into Hausa, Swahili, and other African languages. In this way, even entertainment can carry China-positive messages.

This subtler shift is the real goal: to have the Chinese point of view appear in the local press, with local bylines. Chinese propagandists call this strategy “borrowing boats to reach the sea,” and it can be achieved in many ways. Unlike Western governments, China doesn’t think of propaganda, censorship, diplomacy, and media as separate activities. Legal pressure on news organizations, online trolling operations aimed at journalists, cyberattacks—all of these can be deployed as part of a single operation designed to promulgate or undermine a given narrative. China also offers training courses or stipends for local journalists across Asia, Africa, and Latin America, sometimes providing phones and laptops in exchange for what the regime hopes will be favorable coverage.

The Chinese also cooperate, both openly and discreetly, with the media outlets of other autocracies. Telesur, a Hugo Chávez project launched in 2005, is headquartered in Caracas and led by Venezuela in partnership with Cuba and Nicaragua. Selectively culled bits of foreign news make it onto Telesur from its partners, including headlines that presumably have limited appeal in Latin America: “US-Armenia Joint Military Drills Undermine Regional Stability,” for example, and “Russia Has No Expansionist Plans in Europe.” Both of these stories, from 2023, were lifted directly from the Xinhua wire.

Iran, for its part, offers HispanTV, the Spanish-language version of Press TV, the Iranian international service. HispanTV leans heavily into open anti-Semitism and Holocaust denial: One March 2020 headline declared that the “New Coronavirus Is the Result of a Zionist Plot.” Spain banned HispanTV and Google blocked it from its YouTube and Gmail accounts, but the service is easily available across Latin America, just as Al-Alam, the Arabic version of Press TV, is widely available in the Middle East. After the October 7 Hamas attack on Israel, the Institute for Strategic Dialogue, an international group dedicated to fighting disinformation, found that Iran was creating additional hacking groups to target digital, physical, and electoral infrastructure in Israel (where it went after electoral rolls) and the United States. In the future, these hacking operations may be combined with propaganda campaigns.

RT—Russia Today—has a bigger profile than either Telesur or Press TV; in Africa, it has close links to China. Following the invasion of Ukraine, some satellite networks dropped RT. But China’s StarTimes satellite picked it up, and RT immediately began building offices and relationships across Africa, especially in countries run by autocrats who echo its anti-Western, anti-LGBTQ messages, and who appreciate its lack of critical or investigative reporting.

RT—like Press TV, Telesur, and even CGTN—also functions as a production facility, a source of video clips that can be spread online, repurposed and reused in targeted campaigns. Americans got a firsthand view of how the clandestine versions work in 2016, when the Internet Research Agency—now disbanded but based then in St. Petersburg and led by the late Yevgeny Prigozhin, more famous as the mercenary boss of the Wagner Group who staged an aborted march on Moscow—pumped out fake material via fake Facebook and Twitter accounts, designed to confuse American voters. Examples ranged from virulently anti-immigration accounts aimed at benefiting Donald Trump to fake Black Lives Matter accounts that attacked Hillary Clinton from the left.

Since 2016, these tactics have been applied across the globe. The Xinhua and RT offices in Africa and around the world—along with Telesur and HispanTV—create stories, slogans, memes, and narratives promoting the worldview of the autocracies; these, in turn, are repeated and amplified in many countries, translated into many languages, and reshaped for many local markets. The material produced is mostly unsophisticated, but it is inexpensive and can change quickly, according to the needs of the moment. After the October 7 Hamas attack, for example, official and unofficial Russian sources immediately began putting out both anti-Israel and anti-Semitic material, and messages calling American and Western support for Ukraine hypocritical in light of the Gaza conflict. The data-analytics company Alto Intelligence found posts smearing both Ukrainians and Israelis as “Nazis,” part of what appears to be a campaign to bring far-left and far-right communities closer together in opposition to U.S.-allied democracies. Anti-Semitic and pro-Hamas messages also increased inside China, as well as on Chinese-linked accounts around the world. Joshua Eisenman, a professor at Notre Dame and the author of a new book on China’s relations with Africa, told me that during a recent trip to Beijing, he was astonished by how quickly the previous Chinese line on the Middle East—“China-Israel relations are stronger than ever”—changed. “It was a complete 180 in just a few days.”

Not that everyone hearing these messages will necessarily know where they come from, because they often appear in forums that conceal their origins. Most people probably did not hear the American-biolabs conspiracy theory on a television news program, for example. Instead, they heard it thanks to organizations like Pressenza and Yala News. Pressenza, a website founded in Milan and relocated to Ecuador in 2014, publishes in eight languages, describes itself as “an international news agency dedicated to news about peace and nonviolence,” and featured an article on biolabs in Ukraine. According to the U.S. State Department, Pressenza is part of a project, run by three Russian companies, that planned to create articles in Moscow and then translate them for these “native” sites, following Chinese practice, to make them seem “local.” Pressenza denied the allegations; one of its journalists, Oleg Yasinsky, who says he is of Ukrainian origin, responded by denouncing America’s “planetary propaganda machine” and quoting Che Guevara.

Like Pressenza, Yala News also markets itself as independent. This U.K.-registered, Arabic-language news operation provides slickly produced videos, including celebrity interviews, to its 3 million followers every day. In March 2022, as the biolabs allegation was being promoted by other outlets, the site posted a video that echoed one of the most sensational versions: Ukraine was planning to use migratory birds as a delivery vehicle for bioweapons, infecting the birds and then sending them into Russia to spread disease.

Yala did not invent this ludicrous tale: Russian state media, such as the Sputnik news agency, published it in Russian first, followed by Sputnik’s Arabic website and RT Arabic. Russia’s United Nations ambassador addressed the UN Security Council about the biobird scandal, warning of the “real biological danger to the people in European countries, which can result from an uncontrolled spread of bioagents from Ukraine.” In an April 2022 interview in Kyiv, Ukrainian President Volodymyr Zelensky told The Atlantic’s editor in chief, Jeffrey Goldberg, and me that the biobirds story reminded him of a Monty Python sketch. If Yala were truly an “independent” publication, as it describes itself, it would have fact-checked this story, which, like the other biolab conspiracies, was widely debunked.

[Read: Anne Applebaum and Jeffrey Goldberg interview Volodymyr Zelensky]

But Yala News is not a news organization at all. As the BBC has reported, it’s an information laundromat, a site that exists to spread and propagate material produced by RT and other Russian facilities. Yala News has posted claims that the Russian massacre of Ukrainian civilians at Bucha was staged, that Zelensky appeared drunk on television, and that Ukrainian soldiers were running away from the front lines. Although the company is registered to an address in London—a mail drop shared by 65,000 other companies—its “news team” is based in a suburb of Damascus. The company’s CEO is a Syrian businessman based in Dubai who, when asked by the BBC, insisted on the organization’s “impartiality.”

Another strange actor in this field is RRN—the company’s name is an acronym, originally for Reliable Russian News, later changed to Reliable Recent News. Created in the aftermath of Russia’s invasion of Ukraine, RRN, part of a bigger information-laundering operation known to investigators as Doppelganger, is primarily a “typosquatter”: a company that registers domain names that look similar to real media domain names—Reuters.cfd instead of Reuters.com, for example—as well as websites with names that sound authentic (like Notre Pays, or “Our Country”) but are created to deceive. RRN is prolific. During its short existence, it has created more than 300 sites targeting Europe, the Middle East, and Latin America. Links to these sites are then used to make Facebook, Twitter, and other social-media posts appear credible. When someone is quickly scrolling, they might not notice that a headline links to a fake Spiegel.pro website, say, rather than to the authentic German-magazine website Spiegel.de.

Doppelganger’s efforts, run by a clutch of companies in Russia, have varied widely, and seem to have included fake NATO press releases, with the same fonts and design as the genuine releases, “revealing” that NATO leaders were planning to deploy Ukrainian paramilitary troops to France to quell pension protests. In November, operatives who the French government believes are linked to Doppelganger spray-painted Stars of David around Paris and posted them on social media, hoping to amplify French divisions over the Gaza war. Russian operatives built a social-media network to spread the false stories and the photographs of anti-Semitic graffiti. The goal is to make sure that the people encountering this content have little clue as to who created it, or where or why.

Russia and China are not the only parties in this space. Both real and automated social-media accounts geolocated to Venezuela played a small role in the 2018 Mexican presidential election, for example, boosting the campaign of Andrés Manuel López Obrador. Notable were two kinds of messages: those that promoted images of Mexican violence and chaos—images that might make people feel they need an autocrat to restore order—and those that were angrily opposed to NAFTA and the U.S. more broadly. This tiny social-media investment must have been deemed successful. After he became president, López Obrador engaged in the same kinds of smear campaigns as unelected politicians in autocracies, empowered and corrupted the military, undermined the independence of the judiciary, and otherwise degraded Mexican democracy. In office, he has promoted Russian narratives about the war in Ukraine along with Chinese narratives about the repression of the Uyghurs. Mexico’s relationship with the United States has become more difficult—and that, surely, was part of the point.

None of these efforts would succeed without local actors who share the autocratic world’s goals. Russia, China, and Venezuela did not invent anti-Americanism in Mexico. They did not invent Catalan separatism, to name another movement that both Russian and Venezuelan social-media accounts supported, or the German far right, or France’s Marine Le Pen. All they do is amplify existing people and movements—whether anti-LGBTQ, anti-Semitic, anti-Muslim, anti-immigrant, anti-Ukrainian, or, above all, antidemocratic. Sometimes they provide a social-media echo. Sometimes they employ reporters and spokespeople. Sometimes they use the media networks they built for this purpose. And sometimes, they just rely on Americans to do it for them.

Here is a difficult truth: A part of the American political spectrum is not merely a passive recipient of the combined authoritarian narratives that come from Russia, China, and their ilk, but an active participant in creating and spreading them. Like the leaders of those countries, the American MAGA right also wants Americans to believe that their democracy is degenerate, their elections illegitimate, their civilization dying. The MAGA movement’s leaders also have an interest in pumping nihilism and cynicism into the brains of their fellow citizens, and in convincing them that nothing they see is true. Their goals are so similar that it is hard to distinguish between the online American alt-right and its foreign amplifiers, who have multiplied since the days when this was solely a Russian project. Tucker Carlson has even promoted the fear of a color revolution in America, lifting the phrase directly from Russian propaganda. The Chinese have joined in too: Earlier this year, a group of Chinese accounts that had previously been posting pro-Chinese material in Mandarin began posting in English, using MAGA symbols and attacking President Joe Biden. They showed fake images of Biden in prison garb, made fun of his age, and called him a satanist pedophile. One Chinese-linked account reposted an RT video repeating the lie that Biden had sent a neo-Nazi criminal to fight in Ukraine. Alex Jones’s reposting of the lie on social media reached some 400,000 people.

Given that both Russian and Chinese actors now blend in so easily with the MAGA messaging operation, it is hardly surprising that the American government has difficulty responding to the newly interlinked autocratic propaganda network. American-government-backed foreign broadcasters—Voice of America, Radio Free Europe/Radio Liberty, Radio Farda, Radio Martí—still exist, but neither their mandate nor their funding has changed much in recent years. The intelligence agencies continue to observe what happens—there is a Foreign Malign Influence Center under the Office of the Director of National Intelligence—but they are by definition not part of the public debate. The only relatively new government institution fighting antidemocratic propaganda is the Global Engagement Center, but it is in the State Department, and its mandate is to focus on authoritarian propaganda outside the United States. Established in 2016, it replaced the Center for Strategic Counterterrorism Communications, which sought to foil the Islamic State and other jihadist groups that were recruiting young people online. In 2014–15, as the scale of Russian disinformation campaigns in Europe was becoming better known, Congress designated the GEC to deal with Russian as well as Chinese, Iranian, and other propaganda campaigns around the world—although not, again, inside the United States. Throughout the Trump administration, the organization languished under the direction of a president who himself repeated Russian propaganda lines during the 2016 campaign—“Obama founded ISIS,” for example, and “Hillary will start World War III.”

Today the GEC is run by James Rubin, a former State Department spokesperson from the Bill Clinton era. It employs 125 people and has a budget of $61 million—hardly a match for the many billions that China and Russia spend building their media networks. But it is beginning to find its footing, handing out small grants to international groups that track and reveal foreign disinformation operations. It’s now specializing in identifying covert propaganda campaigns before they begin, with the help of U.S. intelligence agencies. Rubin calls this “prebunking” and describes it as a kind of “inoculation”: “If journalists and governments know that this is coming, then when it comes, they will recognize it.”

The revelation in November of the Russian ties to seemingly native left-wing websites in Latin America, including Pressenza, was one such effort. More recently, the GEC published a report on the African Initiative, an agency that had planned a huge campaign to discredit Western health philanthropy, starting with rumors about a new virus supposedly spread by mosquitoes. The idea was to smear Western doctors, clinics, and philanthropists, and to build a climate of distrust around Western medicine, much as Russian efforts helped build a climate of distrust around Western vaccines during the pandemic. The GEC identified the Russian leader of the project, Artem Sergeyevich Kureyev; noted that several employees had come to the African Initiative from the Wagner Group; and located two of its offices, in Mali and Burkina Faso. Rubin and others subsequently spent a lot of time talking with regional reporters about the African Initiative’s plans so that “people will recognize them” when they launch. Dozens of articles in English, Spanish, and other languages have described these operations, as have thousands of social-media posts. Eventually, the goal is to create an alliance of other nations who also want to share information about planned and ongoing information operations so that everyone knows they are coming.

It’s a great idea, but no equivalent agency functions inside the United States. Some social-media companies have made purely voluntary efforts to remove foreign-government propaganda, sometimes after being tipped off by the U.S. government but mostly on their own. In the U.S., Facebook created a security-policy unit that still regularly announces when it discovers “coordinated inauthentic behavior”—meaning accounts that are automated and/or evidently part of a planned operation from (usually) Russian, Iranian, or Chinese sources—and then takes down the posts. It is difficult for outsiders to monitor this activity, because the company restricts access to its data, and even controls the tools that can be used to examine the data. In March, Meta announced that by August, it would phase out CrowdTangle, a tool used to analyze Facebook data, and replace it with a tool that analysts fear will be harder to use.

X (formerly Twitter) also used to look for foreign propaganda activity, but under the ownership of Elon Musk, that voluntary effort has been badly weakened. The new blue-check “verification” process allows users—including anonymous, pro-Russian users—to pay to have their posts amplified; the old “safety team” no longer exists. The result: After the collapse of the Kakhovka dam in Ukraine last summer, a major environmental and humanitarian disaster caused by Russian bombing over many weeks, the false narrative that Ukraine had destroyed it appeared hundreds of thousands of times on X. After the ISIS terrorist attack on a concert hall in Moscow in March, David Sacks, the former PayPal entrepreneur and a close associate of Musk’s, posted on X, with no evidence, that “if the Ukrainian government was behind the terrorist attack, as looks increasingly likely, the U.S. must renounce it.” His completely unfounded post was viewed 2.5 million times. This spring, some Republican congressional leaders finally began speaking about the Russian propaganda that had “infected” their base and their colleagues. Most of that “Russian propaganda” is not coming from inside Russia.

Over the past several years, universities and think tanks have used their own data analytics to try to identify inauthentic networks on the largest websites—but they are also now meeting resistance from MAGA-affiliated Republican politicians. In 2020, teams at Stanford University and the University of Washington, together with the Digital Forensic Research Lab at the Atlantic Council and Graphika, a company that specializes in social-media analytics, decided to join forces to monitor false election information. Renée DiResta, one of the leaders of what became the Election Integrity Partnership, told me that an early concern was Russian and Chinese campaigns. DiResta assumed that these foreign interventions wouldn’t matter much, but she thought it would be useful and academically interesting to understand their scope. “Lo and behold,” she said, “the entity that becomes the most persistent in alleging that American elections are fraudulent, fake, rigged, and everything else turns out to be the president of the United States.” The Election Integrity Partnership tracked election rumors coming from across the political spectrum, but observed that the MAGA right was far more prolific and significant than any other source.

The Election Integrity Partnership was not organized or directed by the U.S. government. It occasionally reached out to platforms, but had no power to compel them to act, DiResta told me. Nevertheless, the project became the focus of a complicated MAGA-world conspiracy theory about alleged government suppression of free speech, and it led to legal and personal attacks on many of those involved. The project has been smeared and mischaracterized by some of the journalists attached to Musk’s “Twitter Files” investigation, and by Representative Jim Jordan’s Select Subcommittee on the Weaponization of the Federal Government. A series of lawsuits alleging that the U.S. government sought to suppress conservative speech, including one launched by Missouri and Louisiana that has now reached the Supreme Court, has effectively tried to silence organizations that investigate both domestic and foreign disinformation campaigns, overt and covert. To state baldly what is happening: The Republican Party’s right wing is actively harassing legitimate, good-faith efforts to track the production and dissemination of autocratic disinformation here in the United States.

Over time, the attack on the Election Integrity Partnership has itself acquired some of the characteristics of a classic information-laundering operation. The most notorious example concerns a reference, on page 183 of the project’s final post-2020-election report, to the 21,897,364 tweets gathered after the election, in an effort to catalog the most viral false rumors. That simple statement of the size of the database has been twisted into another false and yet constantly repeated rumor: the spurious claim that the Department of Homeland Security somehow conspired with the Election Integrity Partnership to censor 22 million tweets. This never happened, and yet DiResta said that “this nonsense about the 22 million tweets pops up constantly as evidence of the sheer volume of our duplicity”; it has even appeared in the Congressional Record.

The same tactics have been used against the Global Engagement Center. In 2021, the GEC gave a grant to another organization, the Global Disinformation Index, which helped develop a technical tool to track online campaigns in East Asia and Europe. For a completely unrelated, separately funded project, the Global Disinformation Index also conducted a study, aimed at advertisers, that identified websites at risk for publishing false stories. Two conservative organizations, finding their names on that latter list, sued the GEC, although it had nothing to do with creating the list. Musk posted, again without any evidence, “The worst offender in US government censorship & media manipulation is an obscure agency called GEC,” and that organization also became caught up in the endless whirlwind of conspiracy and congressional investigations.

As it happens, I was caught up in it too, because I was listed online as an “adviser” to the Global Disinformation Index, even though I had not spoken with anyone at the organization for several years and was not aware that it even had a website. A predictable, and wearisome, pattern followed: false accusations (no, I was not advising anyone to censor anyone) and the obligatory death threats. Of course, my experience was mild compared with the experience of DiResta, who has been accused of being, as she put it, “the head of a censorship-industrial complex that does not exist.”

These stories are symptomatic of a larger problem: Because the American extreme right and (more rarely) the extreme left benefit from the spread of antidemocratic narratives, they have an interest in silencing or hobbling any group that wants to stop, or even identify, foreign campaigns. Senator Mark Warner, the chair of the Senate Intelligence Committee, told me that “we are actually less prepared today than we were four years ago” for foreign attempts to influence the 2024 election. This is not only because authoritarian propaganda campaigns have become more sophisticated as they begin to use AI, or because “you obviously have a political environment here where there’s a lot more Americans who are more distrustful of all institutions.” It’s also because the lawsuits, threats, and smear tactics have chilled government, academic, and tech-company responses.

One could call this a secret authoritarian “plot” to preserve the ability to spread antidemocratic conspiracy theories, except that it’s not a secret. It’s all visible, right on the surface. Russia, China, and sometimes other state actors—Venezuela, Iran, Hungary—work with Americans to discredit democracy, to undermine the credibility of democratic leaders, to mock the rule of law. They do so with the goal of electing Trump, whose second presidency would damage the image of democracy around the world, as well as the stability of democracy in America, even further.

This article appears in the June 2024 print edition with the headline “Democracy Is Losing the Propaganda War.” Anne Applebaum’s new book, Autocracy, Inc.: The Dictators Who Want to Run the World, will be published in July.

Is It Wrong to Tell Kids to Apologize?

The Atlantic

www.theatlantic.com › family › archive › 2024 › 05 › should-kids-apologize-parenting-debate › 678294

Say you’re sorry. For generations, parents have leaned on the phrase during sibling tiffs and playground scuffles. But it has lately become controversial, particularly among a certain subset of Millennial parents—those for whom the hallmark of good parenting is the reverence they show for their kids’ feelings. Under this model, gone are the days of scolding a child for melting down, sending them to a time-out, or ignoring them until they settle. (Joining them for “time-ins” to help them process their emotions? That’s okay.) The guiding principle seems to be to take children’s current or future feelings into consideration at every parental decision point—even when they are the ones who have hurt the feelings of someone else.

At first blush, making a child express remorse would seem an obvious violation of the feelings-informed approach. And indeed, both Big Little Feelings, the tremendously popular Instagram account and parenting course, and Dr. Becky, the internet-appointed headmistress of the school of Millennial parenting, have condemned the practice. Telling your children to apologize, the argument goes, is useless, unnecessary, even harmful. Useless because it will produce an empty apology. Unnecessary because there are other, better ways to teach children to make amends. Harmful because—well, accusations of harm run the gamut: It will train children to lie or to apologize only as a formality to escape punishment; make them “less kind and thoughtful”; alienate them from their feelings; or shame them into never apologizing again.

These points aren’t necessarily wrong. But as is often the case in modern parenting debates, the stakes are lower and the reality is more nuanced than many influencers would have you believe. Instructing a kid to say sorry is sometimes useless, at least in the moment; it could be unnecessary, depending on the child’s temperament; and it might be harmful, depending on how you go about it. But when you account for the emotional complexity stirred up by conflict, you can find as many feelings-informed reasons to insist on an apology as not.

Take the classic anecdote that’s used to illustrate the downsides of “forced apologies”: A child snatches a toy from a friend or pushes him over. A parent barks at him to “say you’re sorry,” which he does, but in a half-hearted manner. He then carries on with his play, having learned nothing and leaving the victim feeling no better for it.

Broadly, those opposed to forced apologies would argue that for an apology to have any value, it must be rooted in genuine remorse. They would say that young kids lack the cognitive capacity to empathize with someone they’ve hurt, and that simply telling them to apologize won’t help them develop empathy. (“You’re not actually teaching your kid to feel sorry,” as Deena Margolin, a child therapist and co-founder of Big Little Feelings, has put it.) Instead, if parents take the time to cultivate empathy through reflection and good example, genuine apologies will naturally flower.

When it comes to toy-snatching or shoving, that could mean modeling an apology on the child’s behalf, engaging your child in a private conversation about what went down, suggesting (not insisting!) that the child find some way to help the harmed party feel better, or some combination of the three. “The goal is to help them recognize that their actions have consequences for others,” Karina Schumann, an associate professor of social psychology at the University of Pittsburgh who specializes in conflict resolution, told me. “In the same way that their actions caused harm, they can also take an active role in repairing that harm by making amends.” These tactics will be more effective if parents themselves, after their own misdeeds, routinely demonstrate what a good apology looks like: one that names the harm and how it affected the other person, and offers a promise to change future behavior. If children “have observed others in their life apologize readily and empathically for their offenses,” Schumann said, “they will learn in time.”

[Read: The fairy-tale promises of Montessori parenting]

Yet apologies are socially and emotionally tricky. Observing my own children, I’ve found that what stops them from apologizing often isn’t an absence of remorse but the presence of other strong emotions—a lingering frustration over whatever precipitated their actions, embarrassment for having publicly messed up, a vague but overblown fear of what will happen if they do apologize. (This last point is true for adults as well: Schumann pointed me to a study noting that adults anticipate that apologizing will feel more humiliating and stressful than it ends up being.) Sometimes, guilt itself seems to be the obstacle; my children feel bad for what they’ve done and want to disappear into my arms rather than call any more attention to it. In other words, the issue isn’t always that a kid doesn’t feel sorry but that, for a variety of reasons, he doesn’t feel like saying so.

And what of the person who was harmed? Surely their feelings matter. The idea that anything less than a freely volunteered apology is worthless is unsupported by research. Especially among the youngest children, both prompted and spontaneous apologies can help repair kids’ relationships. One study found that only when it’s abundantly clear that a child is apologizing against their will does a prompted apology start to lose its value—and even then, kids younger than 7 thought it was better than nothing.

Younger children’s more ready acceptance of shoddy apologies may have something to do with the very fact that they are emotionally underdeveloped. Theory of mind—the ability to recognize that other people have thoughts and feelings different from one’s own—develops gradually in humans, but it’s a process that starts fairly early. Cara Goodwin, a licensed clinical psychologist and the founder of Parenting Translator, a newsletter that breaks down scientific research on parenting, told me that, from infancy, children can express concern for others’ emotions; for instance, when babies see another baby in distress, they look around for help. But even after kids develop a grasp on others’ emotions, they still often struggle with making apologies—because the big challenge for them is regulating their own emotions.

[Read: Why don’t we teach people how to parent?]

Goodwin agreed that modeling apologies and helping children reflect on their actions are essential. But she thinks there’s a place for prompting, or even insisting on, children’s apologies—for the simple reason that apologizing often doesn’t feel good, at least not right away. Nudging a child through an apology, even one that comes out clouded by other emotions, can teach them to cope with discomfort, help dispel any exaggerated fears, and expose them to some of apologies’ upsides—the relief of being forgiven, or the satisfaction of knowing you’ve done something to right a past wrong. Marjorie Ingall, a co-author of Getting to Sorry: The Art of Apology at Work and at Home, compared apologizing to learning to tie your shoes: You can get only so far watching someone else do it. Trying it yourself is awkward and frustrating at first, but fumble through it enough times and eventually it clicks.

As for concerns about harm, there’s little reason to think that making kids apologize will cause enduring emotional damage, as long as parents take an appropriate approach, Goodwin told me. She drew a distinction between psychological and behavioral control. Attempts to psychologically control kids—guilting, shaming, or otherwise emotionally manipulating them—have been linked to a variety of negative outcomes. So you shouldn’t berate children for their lack of remorse or shame them into expressing it. But there’s nothing wrong with establishing ground rules and then enforcing them by setting a behavioral limit. If you’d like your child to apologize when he knocks over someone’s sand castle—or, if you’re opposed to making him say “I’m sorry,” to find some other way to make amends—it’s fine to make him leave the sandbox if he refuses.

Of course, there’s no guarantee that getting your child to apologize will succeed in smoothing over a situation. Perhaps he won’t be forgiven. Perhaps his muffled apology will draw scorn from onlooking peers. There are all manner of ways for conflict resolution to result in emotional bruising—but this is true regardless of your approach.

That brings us to the hard reality of feelings-informed parenting: Children’s emotions are slippery and unpredictable. When you put their feelings in command—especially amid the minefield of childhood conflict—it becomes painfully clear that adults have far less sway than they’d like to believe.


The Atlantic’s June Cover Story: Anne Applebaum on How “Democracy Is Losing the Propaganda War”

The Atlantic

www.theatlantic.com › press-releases › archive › 2024 › 05 › june-2024-cover-anne-applebaum-new-propaganda-war › 678302

For The Atlantic’s June cover story, “Democracy Is Losing the Propaganda War,” staff writer Anne Applebaum reports on how autocrats in China, Russia, and other places around the world are now making common cause with MAGA Republicans to discredit liberalism and freedom everywhere. Applebaum’s story is adapted from her forthcoming book, Autocracy Inc. (publishing July 23), and draws from her exceptional reporting for The Atlantic.

Even in authoritarian states where surveillance is almost total, Applebaum reports, “the experience of tyranny and injustice can radicalize people. Anger at arbitrary power will always lead someone to start thinking about another system, a better way to run society.” This has resulted in autocratic regimes slowly turning their repressive mechanisms outward, into the democratic world. Applebaum writes: “If people are naturally drawn to the image of human rights, to the language of democracy, to the dream of freedom, then those concepts have to be poisoned. That requires more than surveillance, more than close observation of the population, more than a political system that defends against liberal ideas. It also requires an offensive plan: a narrative that damages both the idea of democracy everywhere in the world and the tools to deliver it.”

To accomplish this, Applebaum reports, autocracies are now making systematic efforts to influence both popular and elite audiences, including via the use of state-controlled media—most notably China’s Xinhua news agency and Russia’s RT, but also Venezuela’s Telesur network and Iran’s Press TV, along with numerous others—to create stories, slogans, memes, and narratives promoting the worldview of the autocracies. These, in turn, are repeated and amplified in other countries, translated into multiple languages, and reshaped for local markets around the world.

When these stories make their way to the U.S., Applebaum reports, “a part of the American political spectrum is not merely a passive recipient of the combined authoritarian narratives that come from Russia, China, and their ilk, but an active participant in creating and spreading them. Like the leaders of those countries, the American MAGA right also wants Americans to believe that their democracy is degenerate, their elections illegitimate, their civilization dying. The MAGA movement’s leaders also have an interest in pumping nihilism and cynicism into the brains of their fellow citizens, and in convincing them that nothing they see is true. Their goals are so similar that it is hard to distinguish between the online American alt-right and its foreign amplifiers.” The State Department has in the past decade created a division to preemptively combat (or “prebunk”) foreign disinformation operations. But no such agency exists to combat the spread of Russian and Chinese propaganda within the United States.

“One could call this a secret authoritarian ‘plot’ to preserve the ability to spread antidemocratic conspiracy theories, except that it’s not a secret. It’s all visible, right on the surface,” Applebaum writes. “Russia, China, and sometimes other state actors—Venezuela, Iran, Hungary—work with Americans to discredit democracy, to undermine the credibility of democratic leaders, to mock the rule of law. They do so with the goal of electing Trump, whose second presidency would damage the image of democracy around the world, as well as the stability of democracy in America, even further.”

“Democracy Is Losing the Propaganda War” was published today in The Atlantic. Please reach out with any questions or requests: press@theatlantic.com.