The Search for Earth Look-alikes Is Getting Serious

The Atlantic

www.theatlantic.com/science/archive/2023/03/jwst-earthlike-trappist-1-exoplanet-system/673559/

Updated at 5:30 p.m. ET on March 29, 2023.

Several years ago, astronomers pointed a telescope at another star and discovered something remarkable: seven planets, each one about the same size as Earth. The planets were quite close to their small star—all seven of their orbits would fit inside Mercury’s. And yet, because this star is smaller, cooler, and dimmer than our own, at least three of those rocky worlds are in the habitable zone, at the right temperature for liquid, flowing water. Earthlike size and sunniness don’t guarantee that you’ll find ET, but if you were looking for signs of alien life beyond this solar system, this corner of the universe would be a promising place to start.

The system, centered on a star known as TRAPPIST-1, is unusual; scientists had never found one like it before, nor have they since. We can’t see the exoplanets, which are named b, c, d, e, f, g, and h; from 40 light-years away, they were just tiny blips in telescope data. Artists at NASA have illustrated them, their imaginations guided by details of the worlds in our system, including Earth’s clouds and oceans, but the exoplanets have fundamentally remained a mystery. So when the James Webb Space Telescope, the newest and most powerful telescope out there, was launched, experts and space enthusiasts alike were eager to point it toward this cosmic alphabet and get a real glimpse of the worlds within.

Now the first results are out: The Webb telescope has observed b, the innermost planet, and found … nothing. No signs of carbon dioxide, a key component of our atmosphere and one that Webb is designed to detect even from many light-years away. And good evidence that there is no significant atmosphere at all. “We’re surprised,” Tom Greene, an astrophysicist at NASA who led the team behind the new research, told me. “I was a little disappointed.”

The good news is that we still have six other planets to check out, and the worlds that are farther away from their star might be more likely to have a substantial atmosphere. That means we have six more chances to find an atmosphere around a rocky world, and perhaps even detect the presence of compounds associated with life as we know it. More observations would also give us a richer understanding of whether stars like the one in the TRAPPIST-1 system, known as red dwarfs, are promising candidates in the search for habitable planets in the cosmos. This has big implications: Red dwarfs far outnumber sunlike stars in the Milky Way, and they’re likely to have rocky planets too. If even one TRAPPIST-1 planet has the conditions that we know are needed for life, it would suggest that the galaxy could be teeming with habitable worlds—and Earth might not be so special.

[Read: There is a planet with clouds made of sand]

Other astronomers I spoke with shared Greene’s disappointment at TRAPPIST-1b’s lack of an atmosphere, but some aren’t surprised at all. Since the existence of the system was announced to the public in 2017, scientists have developed countless models for the planets, and the predictions were split. “Some people thought that the planet would have no atmosphere at all, and some folks thought that it would have maybe a Venuslike atmosphere that was mostly made of carbon dioxide,” Jonathan Fortney, an astronomer at UC Santa Cruz who worked with Greene on the new research on b, told me.

Before Webb came along, the Hubble Space Telescope observed most of the TRAPPIST planets, including b, and found no evidence of light and puffy atmospheres made of hydrogen. This was just fine with astronomers, because such a Neptunelike atmosphere wouldn’t be conducive to the kind of life that arose here on Earth. Scientists wanted to detect heavier gases such as carbon dioxide, methane, and oxygen—a trio that, at least on Earth, indicates life respiring beneath the clouds—and for that, they needed the Webb telescope.

Greene and his team used Webb to assess b’s atmosphere in a new way: They measured heat in the form of infrared light radiating from the planet. A cooler result would suggest the presence of an atmosphere, circulating the star’s heat around the globe. A hotter one would mean a bare surface, absorbing the energy and then radiating it back, like asphalt after a warm day. The Webb data revealed the latter case to be true; with a day-side temperature of about 450 degrees Fahrenheit, TRAPPIST-1b is “just about perfect for baking pizza,” as NASA put it, but it’s also an airless ball of rock.

[Read: We’ve found 5,000 exoplanets, and we’re still alone]

The planet might have had an atmosphere many eons ago, but its star likely took it away, Megan Mansfield, an astronomer at the University of Arizona who also uses Webb to study exoplanets, told me. Red-dwarf stars are cool stars, technically speaking—they are far less luminous than the sun—but they love to flare, blasting radiation out into space. “Those kinds of things can strip the atmosphere off a planet,” Mansfield said, especially one orbiting this close. TRAPPIST-1b might still have a very tenuous atmosphere, too ephemeral for Webb to detect, like the wisp of gas that envelops Mercury—but that’s not the kind of Earthlike environment researchers are hoping to discover in that system.

So astronomers will move down the line of planets to c, d, e, f, g, and h. Greene said he was more optimistic about detecting atmospheres around TRAPPIST-1’s other planets—at least before the disappointing discovery on b. But it’s too early to lose hope. Perhaps conditions are more comfortable farther out, where “there’s more space for that intense radiation and flaring from the star to spread out,” Mansfield said.

The Webb telescope has already observed c, and the results should be out soon, Greene told me. If it also turns out to be an atmospheric dud, that might not be a reason for astronomers to worry. Same with d, even, because it orbits at the edge of the habitable zone. But e? Then they’ll be nervous. Planets e, f, and g stand the best chance of being Earthlike, with not only an atmosphere but also an ocean. “Every data point we get, just like the one we just got now, will help to refine those theories of what habitability means for planets in [red dwarf] systems,” Nikole Lewis, an astrophysicist at Cornell, told me. Poor, barefaced b might even help researchers determine whether the more promising planets have water: Lewis said that the lack of atmosphere means that the Webb telescope can study the surface of the planet, searching for the chemical signature of water molecules in the light it reflects. A strong-enough signal would give astronomers hope that the substance exists elsewhere in the TRAPPIST system under better conditions—hope that, perhaps, one of these worlds could be a home.

Not for us, of course. A trek to the TRAPPIST system remains the stuff of science fiction. For the time being, humanity is tied to our calm, bright star, and to the planets and moons around it. We’ll build our fancy telescopes and train them on other worlds in the galaxy, wondering whether they have silky clouds of their own, and something, or someone, gazing up at them from the ground.

How Ivermectin Became a Belief System

The Atlantic

www.theatlantic.com/technology/archive/2023/03/ivermectin-medical-subculture-covid-pandemic/673467/

Since fall 2021, Daniel Lemoi had been a central figure in the online community dedicated to experimental use of the antiparasitic drug ivermectin. “You guys all know I’m not a doctor,” he often reminded them. “I’m a guy that grew up on a farm. I ran equipment all my life. I live on a dirt road and I drive an old truck—a 30-year-old truck. I’m just one of you.” Lemoi’s folksy Rhode Island accent, his avowed regular-guy-ness, and his refusal to take any money in exchange for his advice made him into an alt-wellness influencer and a personal hero for those who followed him. He joked about his tell-it-like-it-is style and liberal use of curse words: “If you don’t like my mouth, go pray to God, because he’s the one that chose me for this mission.”

Last March, during an episode of his biweekly podcast, Dirt Road Discussions, he thanked his audience for their commitment to his ivermectin lifestyle: “I love that you guys are all here trusting my voice.” His group currently has more than 130,000 members and lives on Telegram, a messaging app that has become popular as an alternative social-media network. When Lemoi died earlier this month, at age 50, his followers found out via the chat. As first reported by Vice, Lemoi had given no indication that his health may have been failing. In fact, one of his last posts in the group was from the morning of the day he died: “HAPPY FRIDAY ALL YOU POISONOUS HORSE PASTE EATING SURVIVORS !!!”

Members of Lemoi’s family did not respond to requests for interviews, but according to his obituary, he was a heavy-equipment operator for a naval-engineering company. In the weekly podcast-style chats he hosted on his Telegram channel, he described working on the waterfront of Narragansett Bay. He shared every detail of his ivermectin story with followers, a story that began on a Friday in August 2012, when he first started suffering from vertigolike symptoms. That kicked off a labyrinthine journey through the medical system, involving, he said, repeated heavy courses of antibiotics, bouts of extreme illness and pain, and a significant financial burden. (“And alone, living alone, like this whole thing—it was just me,” he explained in a chat recorded in November 2022.) Finally, in January 2017, a doctor specializing in Lyme disease prescribed Lemoi hydroxychloroquine. He was shocked to learn that it would cost him $288 a month. “So I had no choice,” he told his followers. “I had to go with Plan B.” He got the idea to take ivermectin from a friend’s daughter, who was studying to be a veterinarian and had, according to Lemoi, written a paper about the genetic similarities between horses and people.

After Lemoi’s death, whoever took over the Telegram chat wrote to the group that “his heart was quite literally overworking and overgrowing beyond its capacity, nearly doubled in size from what it should have been.” Previously, Lemoi had claimed to have no side effects from ivermectin except for “herxing”—a term borrowed from the world of chronic Lyme disease, which he used to describe symptoms such as dizziness, chills, fatigue, sweating, headaches, and blurred vision. All of these, he told his audience, were temporary. Although ivermectin has not been cited as a cause of death, Ilan Schwartz, an infectious-disease expert at the Duke University School of Medicine, explained that it could have contributed to Lemoi’s health problems. “Incorrect use—mostly encountered in the last few years when people self-medicate, often with veterinary formulations of the drug—can cause damage to a wide range of organs, most notably the brain and gastrointestinal tract,” he told me. “Cardiovascular effects are occasionally seen, mostly low blood pressure and fast heart rate.” Regardless, the Telegram group has continued its daily routine of pro-ivermectin, antipharma posting—a sign that fringe content will continue to bloom on the fractured social web.

[Read: Twitter has no answers for #DiedSuddenly]

Ivermectin gained national attention during the pandemic, when it was touted by some Republican lawmakers as a possible treatment for the coronavirus—but Lemoi had already spent years self-administering the medication in the version intended for large mammals. “I still haven’t found anything the 1.87% horse paste won’t or can’t handle,” he wrote on the “About” page of his website, referring to a common formulation of the drug. “Except if you break a bone or fall out of a window!” Lemoi said that he’d gone off his prescriptions and that ivermectin was the only thing he needed to feel better than he had in years. He’d mostly kept his treatment to himself, until the pandemic changed everything. “I literally felt hands on my back pushing me forward because the media was talking about how bad ivermectin was,” he said in the chat from last November. He recommended the drug to people he knew, then to people on Facebook. “Facebook turned into Telegram, turned into this chat,” he summarized.

Lemoi’s fans have promised to keep his legacy alive. In the Telegram group, they’ve shared “Dannyisms” like “You have everything you need in the chats.” And in the comments on his online obituary, hundreds of group members have left condolences and thanks: “We were so blessed by his voice and tender heart,” one reads. “Ivermectin forever.” “My whole take on Danny is he’s just like me—he is a truth seeker,” one member, Diana Pilkington Barry, told me, when we spoke after his death. “I hold him in very high regard,” she said. “He was a pretty remarkable man.” She admired him for coming up with his ivermectin regimen and then sharing it with other people, and for the broader anti-establishment worldview he represented. “It’s a belief system I’ve now adopted,” she said.

By the time Lemoi started his Telegram group, in November 2021, ivermectin and its rapid politicization had become inseparable from the pandemic. In April 2020, when an early lab test had seemed to indicate that ivermectin could be used as a possible COVID-19 treatment, the FDA had warned Americans not to self-administer versions of the drug “intended for animals.” Later that year, Republican Senator Ron Johnson of Wisconsin had invited a pro-ivermectin doctor to a Senate hearing, where that doctor referred to the drug as a “miracle.” (Johnson has since emerged as a vocal anti-vaxxer.) Clinical trials never found good evidence that human formulations of ivermectin were useful for treating COVID-19, and experts have continued to warn that formulations created for animals are dangerous to people. Some high-profile Republican lawmakers went to bat for the medication despite clear and consistent warnings from physicians, and many state-level legislators pushed for new laws that would protect doctors who prescribed it from censure or liability. Since then, semi-infamous groups of renegade doctors and nurses have continued pushing it. As reported by The Washington Post, a group of doctors who call themselves the Frontline COVID-19 Critical Care Alliance has recently started recommending ivermectin to treat the flu and RSV as well.

[Read: A major clue to COVID’s origins is just out of reach]

The members of Lemoi’s group are not solely focused on the coronavirus. Many—as Lemoi did—use horse-grade ivermectin in a misguided attempt to treat the symptoms of Lyme disease, cancer, anxiety, depression, and other maladies. Some, like Barry, take it preventively in hopes of strengthening their immune system and avoiding brushes with the “evil” pharmaceutical industry. The chat is also not only about ivermectin. It has an anti-vaccine, right-wing bent—a quick scroll brings up homophobic memes; a graphic, Photoshopped image mocking Nancy Pelosi; and a post explaining how unvaccinated people could inadvertently “contaminate” their blood by having sex with a vaccinated person. But the (incorrect) idea that unites the group is that most diseases are caused by parasites, and that members can prevent almost all illness by following the regimen that Lemoi created.

Though Lemoi’s experience with ivermectin originally had nothing to do with COVID-related conspiracy theories, it seems to have steered him in that direction over time. In the last episode of his podcast, posted on February 26, he spoke about “the biggest red pill the world is ever going to swallow.” He was convinced that the pharmaceutical industry wants to keep people in poor health, and that ivermectin use was considered fringe only because the powers that be want to keep people full of parasites.

At this late stage in the pandemic, ivermectin is still attracting new attention through social platforms. Recently, in a YouTube video with 1.7 million views, the mega-popular podcaster Joe Rogan talked about using it and feeling frustrated that the media keep referring to it as “horse dewormer” (though it literally is one). Tracking the extent of its use is also getting harder. Some of the biggest and most unruly Facebook groups promoting ivermectin have been removed, but many groups remain that are smaller, private, more careful about avoiding automated content moderation, and more selective about who they admit. (My request to join one of them was immediately denied.) The conversation has moved out of mainstream spaces and into more specialized communities that were originally organized around other shared attitudes or experiences. On Reddit, ivermectin discussion mainly appears in the infamous, openly paranoid forum r/conspiracy, or in the newer forum r/covidlonghaulers, populated by people dealing with long-term COVID symptoms and experimenting with whatever treatments sound like they could possibly help. Like the #DiedSuddenly conspiracy theory, ivermectin also has a big presence in the alt-tech ecosystem—Gab, the far-right platform, runs ads for the drug in its main feed.

The continued misuse of ivermectin reminds us that a dangerous idea doesn’t go away when it’s removed from the center of attention on major social-media platforms. In fact, as some researchers have argued, it may become more concentrated—a greater source of identity and of in-group self-definition. “Shared experience that is not acknowledged or appreciated by mainstream communities is a very powerful source of community-building,” Drew Margolin, an associate professor at Cornell who studies online communication and alternative health groups, told me. And though much pressure has been put on social-media companies to prevent the proliferation of medical misinformation in the past three years, a platform like Telegram, which is not end-to-end encrypted by default but does present itself as a place for private, unmoderated messaging, offers an easy alternative.

Robert Aronowitz, a professor of history and the sociology of science at the University of Pennsylvania who has studied the controversy around Lyme disease, has been following the tension between medical authority and anti-authority medical activist groups since the 1970s. A lot of these groups involved improvisational home remedies, influencers who became icons, and a strong sense of community. “Many of us journalists, doctors, blame social media for inciting distrust in medical authority and allowing communities of people to form,” he told me. “I’m not saying social media doesn’t have a role, but in terms of ultimate cause or origins, it has very little to do with it.”

If anything, the internet may have helped different existing groups find one another and commingle. When Aronowitz was studying Lyme disease, he said, there was no overlap between that community and the anti-vaccine movement—“there weren’t obvious alliances or even sympathies.” (The alliance now is not total—many Lyme activists also promote COVID-19 vaccination.) Nor was there a hint of polarized “left-right politics.” Today, the anti-vaccine movement has made so much progress at co-opting other alternative health movements, and has been so thoroughly claimed by the political right, that this is hard to imagine.

It’s even harder to imagine anti-vaxxers engaging productively with a faction of the pro-vaccine mainstream that has begun to build a morally superior identity around its acceptance of science. Just look through the self-satisfied tweets about Lemoi’s death: “I just want to thank Danny Lemoi for his hard work in the extremely competitive field of ‘Natural Selection,’” a typical post reads. Another person wrote: “Here lies Danny Lemoi, who fucked around and found out.”

On YouTube, You Never Know What You Did Wrong

The Atlantic

www.theatlantic.com/ideas/archive/2023/03/youtube-content-moderation-rules/673322/

Recently, on a YouTube channel, I said something terrible, but I don’t know what it was. The main subject of discussion—my reporting on the power of online gurus—was not intrinsically offensive. It might have been something about the comedian turned provocateur Russell Brand’s previous heroin addiction, or child-abuse scandals in the Catholic Church. I know it wasn’t the word Nazi, because we carefully avoided that. Whatever it was, it was enough to get the interview demonetized, meaning no ads could be placed against it, and my host received no revenue from it.

“It does start to drive you mad,” says Andrew Gold, whose channel, On the Edge, was the place where I committed my unknowable offense. Like many full-time YouTubers, he relies on the Google-owned site’s AdSense program, which gives him a cut of revenues from the advertisements inserted before and during his interviews. When launching a new episode, Gold explained to me, “you get a green dollar sign when it’s monetizable, and it goes yellow if it’s not.” Creators can contest these rulings, but that takes time—and most videos receive the majority of their views in the first hours after launch. So it’s better to avoid the yellow dollar sign in the first place. If you want to make money off of YouTube, you need to watch what you say.

[From the November 2018 issue: Raised by YouTube]

But how? YouTube’s list of content guidelines manages to be both exhaustive and nebulous. “Content that covers topics such as child or sexual abuse as a main topic without detailed descriptions or graphic depictions” is liable to be demonetized, as are “personal accounts or opinion pieces related to abortion as a main topic without graphic depiction.” First-person accounts of domestic violence, eating disorders, and child abuse are definite no-no’s if they include “shocking details.” YouTube operates a three-strike policy for infractions: The first strike is a warning; the second prevents creators from making new posts for a week; and the third (if received within 90 days of the second) gets the channel banned.

For the most popular creators, the site can bring in audiences of millions, and financial rewards to match. But for almost everyone else, content production is a grind, as creators are encouraged to post regularly and repackage their content for YouTube’s TikTok rival, Shorts. Although many types of content may never run afoul of the guidelines—if you’re MrBeast giving out money to strangers, to the delight of your 137 million subscribers, rules against hate speech and misinformation are not going to be an issue—political discussions are subject to the whims of algorithms.

Absent enough human moderators to deal with the estimated 500 hours of videos uploaded every minute, YouTube uses artificial intelligence to enforce its guidelines. Bots scan auto-generated transcripts and flag individual words and phrases as problematic, hence the problem with saying heroin. Even though “educational” references to drug use are allowed, the word might snag the AI trip wire, forcing a creator to request a time-consuming review.

Andrew Gold requested such a review for his interview with me, and the dollar sign duly turned green—meaning the site did eventually serve ads alongside the content. “It was a risk,” he told me, “because I don’t know how it affects my rating if I get it wrong … And they don’t tell me if it’s Nazis, heroin, or anything. You’re just left wondering what it was.”

Frustrations like Gold’s rarely receive much attention, because the conversation about content moderation online is dominated by big names complaining about outright bans. Perversely, though, the most egregious peddlers of misinformation are better placed than everyday creators to work within the YouTube rules. A research paper last year from Cornell University’s Yiqing Hua and others found that people making fringe content at high risk of being demonetized—such as content for alt-right or “manosphere” channels—were more likely than other creators to use alternative money-making practices, such as affiliate links or pushing viewers to subscribe on other platforms. They didn’t even attempt to monetize their content on YouTube—sidestepping the strike system—and instead used the platform as a shop window. They then became more productive on YouTube because demonetization no longer affected their ability to make a living.

The other platforms such influencers use include Rumble, a site that bills itself as “immune to cancel culture” and has received investment from the venture capitalist Peter Thiel and Senator J. D. Vance of Ohio. In January, Florida’s Republican governor, Ron DeSantis, announced that Rumble was now his “video-sharing service of choice” for press conferences because he had been “silenced” by Google over his YouTube claims about the coronavirus pandemic. Recently, in a true demonstration of horseshoe theory, Russell Brand (a left-wing, crunchy, COVID-skeptical hater of elites) posed with Donald Trump Jr. (a right-wing, nepo-baby, COVID-skeptical hater of elites) at a party hosted by Rumble, where they are two of the most popular creators. Brand maintains a presence on YouTube, where he has 6 million subscribers, but uses it as exactly the kind of shop window identified by the Cornell researchers. He recently told Joe Rogan that he now relies on Rumble as his main platform because he was tired of YouTube’s “wild algebra.”

[Read: Why is Joe Rogan so popular?]

For mega-celebrities—including highly paid podcasters and prospective presidential candidates—railing against Big Tech moderation is a great way to pose as an underdog or a martyr. But talk with everyday creators, and they are more than willing to work inside the rules, which they acknowledge are designed to make YouTube safer and more accurate. They just want to know what those rules are, and to see them applied consistently. As it stands, Gold compared his experience of being impersonally notified of unspecified infractions to working for HAL 9000, the computer overlord from 2001: A Space Odyssey.

One of the most troublesome areas of content is COVID—about which there is both legitimate debate over treatments, vaccines, and lockdown policies and a great river of misinformation and conspiracy theorizing. “The first video I ever posted to YouTube was a video about ivermectin, which explained why there was no evidence supporting its use in COVID,” the creator Susan Oliver, who has a doctorate in nanomedicine, told me. “YouTube removed the video six hours later. I appealed the removal, but they rejected my appeal. I almost didn’t bother making another video after this.”

Since then, Oliver’s channel, Back to the Science, which has about 7,500 subscribers, has run into a consistent problem—one that other debunkers have also faced. If she cites false information in a video in order to challenge it, she faces being reported for misinformation. This happened with a video referencing the popular creator John Campbell’s false claims about COVID vaccines being linked to brain injuries. Her video was taken down (and restored only on appeal) and his video remained up. “The only things in my video likely to have triggered the algorithm were clips from Campbell’s original video,” Oliver told me. Another problem facing YouTube: COVID skepticism is incredibly popular. Oliver’s content criticizing Campbell’s brain-injury rhetoric has just more than 10,000 views. His original video has more than 800,000.

Oliver wondered if Campbell’s fans were mass-reporting her—a practice known as “brigading.”

“It appears that YouTube allows large, profitable channels to use any loophole to spread misinformation whilst coming down hard on smaller channels without even properly checking their content,” she said. But a Google spokesperson, Michael Aciman, told me that wasn’t the case. “The number of flags a piece of content may receive is not a factor we use when evaluating content against our community guidelines,” he said. “Additionally, these flags do not factor into monetization decisions.”

YouTube is not the only social network where creators struggle to navigate opaque moderation systems with limited avenues for appeal. Users of TikTok—where some contributors are paid from a “creator fund” based on their views—have developed an entire vocabulary to navigate automated censorship. No one gets killed on TikTok; they get “unalived.” There are no lesbians, but instead “le dollar beans” (le$beans). People who sell sex are “spicy accountants.” The aim is to preserve these social networks as both family- and advertiser-friendly; both parents and corporations want these spaces to be “safe.” The result is a strange blossoming of euphemisms that wouldn’t fool a 7-year-old.

Not everyone finds YouTube’s restrictions unduly onerous. The podcaster Chris Williamson, whose YouTube channel has 750,000 subscribers and releases about six videos a week, told me that he now mutes swearing in the first five minutes of videos after receiving a tip from a fellow creator. Even though his channel “brush[es] the edge of a lot of spicy topics,” he said, the only real trouble has been when he “dropped the C-bomb” 85 minutes into a two-and-a-half-hour video, which was then demonetized. “The policy may be getting tighter in other areas which don’t affect me,” he said, “but as long as I avoid C-bombs, my channel seems to be fine.” (While I was reporting this story, YouTube released an update to the guidelines clarifying the rules on swearing, and promised to review previously demonetized videos.)

[Read: Social media’s silent filter]

As a high-profile creator, Williamson has one great advantage: YouTube assigned him to a partner-manager who can help him understand the site’s guidelines. Smaller channels have to rely on impersonal, largely automated systems. Using them can feel like shouting into a void. Williamson also supplements his AdSense income from YouTube’s adverts with sponsorship and affiliate links, making demonetization less of a concern. “Any creator who is exclusively reliant on AdSense for their income is playing a suboptimal game,” he said.

Aciman, the Google spokesperson, told me that all channels on YouTube have to comply with its community guidelines, which prohibit COVID-19 medical misinformation and hate speech—and that channels receiving ad revenue are held to a higher standard in order to comply with the “advertiser-friendly content guidelines.” “We rely on machine learning to evaluate millions of videos on our platform for monetization status,” Aciman added. “No system is perfect, so we encourage creators to appeal for a human review when they feel we got it wrong. As we’ve shown, we reverse these decisions when appropriate, and every appeal helps our systems get smarter over time.”

YouTube is caught in a difficult position, adjudicating between those who claim that it moderates too heavily and others who complain that it doesn’t do enough. And every demonetization is a direct hit to its own bottom line. I sympathize with the site’s predicament, while also noting that YouTube is owned by one of the richest tech companies in the world, and some of that wealth rests on a business model of light-touch, automated moderation. In the last quarter of 2022, YouTube made nearly $8 billion in advertising revenue. There’s a very good reason journalism is not as profitable as that: Imagine if YouTube edited its content as diligently as a legacy newspaper or television channel—even quite a sloppy one. Its great river of videos would slow to a trickle.