On YouTube, You Never Know What You Did Wrong

The Atlantic


Recently, on a YouTube channel, I said something terrible, but I don’t know what it was. The main subject of discussion—my reporting on the power of online gurus—was not intrinsically offensive. It might have been something about the comedian turned provocateur Russell Brand’s previous heroin addiction, or child-abuse scandals in the Catholic Church. I know it wasn’t the word Nazi, because we carefully avoided that. Whatever it was, it was enough to get the interview demonetized, meaning no ads could be placed against it, and my host received no revenue from it.

“It does start to drive you mad,” says Andrew Gold, whose channel, On the Edge, was the place where I committed my unknowable offense. Like many full-time YouTubers, he relies on the Google-owned site’s AdSense program, which gives him a cut of revenues from the advertisements inserted before and during his interviews. When launching a new episode, Gold explained to me, “you get a green dollar sign when it’s monetizable, and it goes yellow if it’s not.” Creators can contest these rulings, but that takes time—and most videos receive the majority of their views in the first hours after launch. So it’s better to avoid the yellow dollar sign in the first place. If you want to make money off of YouTube, you need to watch what you say.
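
To see why creators would rather self-censor than contest a ruling, a back-of-the-envelope sketch helps; the view-decay curve, the ad rate, and the review delay below are illustrative assumptions, not figures from YouTube or from Gold.

    # Illustrative sketch: ad revenue forfeited while a demonetized video awaits review.
    # Every number here is an assumption made for the sake of the example.

    def daily_views(day, total_views=100_000, decay=0.5):
        # Assume half of the video's remaining audience arrives each day.
        return total_views * (1 - decay) ** day * decay

    rpm = 4.0              # assumed ad revenue per 1,000 monetized views, in dollars
    review_delay_days = 2  # assumed wait for a human review to flip yellow back to green

    lost = sum(daily_views(d) for d in range(review_delay_days)) / 1000 * rpm
    total = 100_000 / 1000 * rpm
    print(f"Lost during review: ${lost:,.0f} of ${total:,.0f} (~{lost / total:.0%})")
    # With these assumptions, roughly three-quarters of the video's ad revenue
    # is gone before the appeal is even resolved.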

[From the November 2018 issue: Raised by YouTube]

But how? YouTube’s list of content guidelines manages to be both exhaustive and nebulous. “Content that covers topics such as child or sexual abuse as a main topic without detailed descriptions or graphic depictions” is liable to be demonetized, as are “personal accounts or opinion pieces related to abortion as a main topic without graphic depiction.” First-person accounts of domestic violence, eating disorders, and child abuse are definite no-no’s if they include “shocking details.” YouTube operates a three-strike policy for infractions: The first strike is a warning; the second prevents creators from making new posts for a week; and the third (if received within 90 days of the second) gets the channel banned.
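
The escalation described above can be sketched in a few lines of Python. This is a simplification for illustration, not YouTube's actual enforcement logic, and what happens when a third strike falls outside the 90-day window is my guess.

    from datetime import date, timedelta

    def strike_outcome(strike_dates):
        # Simplified model of the escalation described above (dates in chronological order).
        if len(strike_dates) == 0:
            return "channel in good standing"
        if len(strike_dates) == 1:
            return "warning"
        if len(strike_dates) == 2:
            return "no new posts for one week"
        # A third strike ends the channel only if it lands within 90 days of the second
        # (assumption: otherwise the week-long posting block simply repeats).
        if strike_dates[2] - strike_dates[1] <= timedelta(days=90):
            return "channel banned"
        return "no new posts for one week"

    print(strike_outcome([date(2023, 1, 5), date(2023, 2, 1), date(2023, 3, 10)]))
    # -> "channel banned" (the third strike came 37 days after the second)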

For the most popular creators, the site can bring in audiences of millions, and financial rewards to match. But for almost everyone else, content production is a grind, as creators are encouraged to post regularly and repackage content into YouTube’s TikTok rival, Shorts. Although many types of content may never run afoul of the guidelines—if you’re MrBeast giving out money to strangers, to the delight of your 137 million subscribers, rules against hate speech and misinformation are not going to be an issue—political discussions are subject to the whims of algorithms.

Absent enough human moderators to deal with the estimated 500 hours of videos uploaded every minute, YouTube uses artificial intelligence to enforce its guidelines. Bots scan auto-generated transcripts and flag individual words and phrases as problematic, hence the problem with saying heroin. Even though “educational” references to drug use are allowed, the word might snag the AI trip wire, forcing a creator to request a time-consuming review.
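
In effect, this is a keyword match against a transcript rather than a judgment about context. The snippet below is only a guess at the general shape of such a filter; YouTube's real system is not public, and the flag list here is invented.

    # Crude illustration of keyword-based demonetization flagging.
    FLAGGED_TERMS = {"heroin", "nazi"}  # example terms only

    def monetization_icon(transcript):
        words = {word.strip(".,!?").lower() for word in transcript.split()}
        if words & FLAGGED_TERMS:
            return "yellow"  # limited or no ads until the creator requests a review
        return "green"       # fully monetizable

    # Context makes no difference to a filter like this:
    print(monetization_icon("An educational discussion of a guest's former heroin addiction"))
    # -> "yellow", even though "educational" references to drug use are nominally allowed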

Andrew Gold requested such a review for his interview with me, and the dollar sign duly turned green—meaning the site did eventually serve ads alongside the content. “It was a risk,” he told me, “because I don’t know how it affects my rating if I get it wrong … And they don’t tell me if it’s Nazis, heroin, or anything. You’re just left wondering what it was.”

Frustrations like Gold’s rarely receive much attention, because the conversation about content moderation online is dominated by big names complaining about outright bans. Perversely, though, the most egregious peddlers of misinformation are better placed than everyday creators to work within the YouTube rules. A research paper last year from Cornell University’s Yiqing Hua and others found that people making fringe content at high risk of being demonetized—such as content for alt-right or “manosphere” channels—were more likely than other creators to use alternative money-making practices, such as affiliate links or pushing viewers to subscribe on other platforms. They didn’t even attempt to monetize their content on YouTube—sidestepping the strike system—and instead used the platform as a shop window. They then became more productive on YouTube because demonetization no longer affected their ability to make a living.

The other platforms such influencers use include Rumble, a site that bills itself as “immune to cancel culture” and has received investment from the venture capitalist Peter Thiel and Senator J. D. Vance of Ohio. In January, Florida’s Republican governor, Ron DeSantis, announced that Rumble was now his “video-sharing service of choice” for press conferences because he had been “silenced” by Google over his YouTube claims about the coronavirus pandemic. Recently, in a true demonstration of horseshoe theory, Russell Brand (a left-wing, crunchy, COVID-skeptical hater of elites) posed with Donald Trump Jr. (a right-wing, nepo-baby, COVID-skeptical hater of elites) at a party hosted by Rumble, where they are two of the most popular creators. Brand maintains a presence on YouTube, where he has 6 million subscribers, but uses it as exactly the kind of shop window identified by the Cornell researchers. He recently told Joe Rogan that he now relies on Rumble as his main platform because he was tired of YouTube’s “wild algebra.”

[Read: Why is Joe Rogan so popular?]

For mega-celebrities—including highly paid podcasters and prospective presidential candidates—railing against Big Tech moderation is a great way to pose as an underdog or a martyr. But talk with everyday creators, and they are more than willing to work inside the rules, which they acknowledge are designed to make YouTube safer and more accurate. They just want to know what those rules are, and to see them applied consistently. As it stands, Gold compared his experience of being impersonally notified of unspecified infractions to working for HAL 9000, the computer overlord from 2001: A Space Odyssey.

One of the most troublesome areas of content is COVID—about which there is both legitimate debate over treatments, vaccines, and lockdown policies and a great river of misinformation and conspiracy theorizing. “The first video I ever posted to YouTube was a video about ivermectin, which explained why there was no evidence supporting its use in COVID,” the creator Susan Oliver, who has a doctorate in nanomedicine, told me. “YouTube removed the video six hours later. I appealed the removal, but they rejected my appeal. I almost didn’t bother making another video after this.”

Since then, Oliver’s channel, Back to the Science, which has about 7,500 subscribers, has run into a consistent problem—one that other debunkers have also faced. If she cites false information in a video in order to challenge it, she faces being reported for misinformation. This happened with a video referencing the popular creator John Campbell’s false claims about COVID vaccines being linked to brain injuries. Her video was taken down (and restored only on appeal) and his video remained up. “The only things in my video likely to have triggered the algorithm were clips from Campbell’s original video,” Oliver told me. Another problem facing YouTube: COVID skepticism is incredibly popular. Oliver’s content criticizing Campbell’s brain-injury rhetoric has just more than 10,000 views. His original video has more than 800,000.

Oliver wondered if Campbell’s fans were mass-reporting her—a practice known as “brigading.”

“It appears that YouTube allows large, profitable channels to use any loophole to spread misinformation whilst coming down hard on smaller channels without even properly checking their content,” she said. But a Google spokesperson, Michael Aciman, told me that wasn’t the case. “The number of flags a piece of content may receive is not a factor we use when evaluating content against our community guidelines,” he said. “Additionally, these flags do not factor into monetization decisions.”

YouTube is not the only social network where creators struggle to navigate opaque moderation systems with limited avenues for appeal. Users of TikTok—where some contributors are paid from a “creator fund” based on their views—have developed an entire vocabulary to navigate automated censorship. No one gets killed on TikTok; they get “unalived.” There are no lesbians, but instead “le dollar beans” (le$beans). People who sell sex are “spicy accountants.” The aim is to preserve these social networks as both family- and advertiser-friendly; both parents and corporations want these spaces to be “safe.” The result is a strange blossoming of euphemisms that wouldn’t fool a 7-year-old.
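
The substitution game is trivial to mimic in code, which is part of the point: a lookup table defeats a literal blocklist even though any human reader understands exactly what is meant. The sketch below is illustrative only, with an invented blocklist.

    # Illustrative only: a euphemism table sidesteps a literal keyword filter.
    BLOCKLIST = {"killed", "lesbians"}  # example blocked terms
    EUPHEMISMS = {"killed": "unalived", "lesbians": "le dollar beans"}

    def sanitize(caption):
        for term, euphemism in EUPHEMISMS.items():
            caption = caption.replace(term, euphemism)
        return caption

    safe = sanitize("nobody gets killed on this app")
    assert not any(term in safe for term in BLOCKLIST)
    print(safe)  # -> "nobody gets unalived on this app": opaque to the filter,
                 #    transparent to any 7-year-old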

Not everyone finds YouTube’s restrictions unduly onerous. The podcaster Chris Williamson, whose YouTube channel has 750,000 subscribers and releases about six videos a week, told me that he now mutes swearing in the first five minutes of videos after receiving a tip from a fellow creator. Even though his channel “brush[es] the edge of a lot of spicy topics,” he said, the only real trouble has been when he “dropped the C-bomb” 85 minutes into a two-and-a-half-hour video, which was then demonetized. “The policy may be getting tighter in other areas which don’t affect me,” he said, “but as long as I avoid C-bombs, my channel seems to be fine.” (While I was reporting this story, YouTube released an update to the guidelines clarifying the rules on swearing, and promised to review previously demonetized videos.)

[Read: Social media’s silent filter]

As a high-profile creator, Williamson has one great advantage: YouTube assigned him a partner manager who can help him understand the site’s guidelines. Smaller channels have to rely on impersonal, largely automated systems. Using them can feel like shouting into a void. Williamson also supplements his AdSense income from YouTube’s adverts with sponsorship and affiliate links, making demonetization less of a concern. “Any creator who is exclusively reliant on AdSense for their income is playing a suboptimal game,” he said.

Aciman, the Google spokesperson, told me that all channels on YouTube have to comply with its community guidelines, which prohibit COVID-19 medical misinformation and hate speech—and that channels receiving ad revenue are held to a higher standard in order to comply with the “advertiser-friendly content guidelines.” “We rely on machine learning to evaluate millions of videos on our platform for monetization status,” Aciman added. “No system is perfect, so we encourage creators to appeal for a human review when they feel we got it wrong. As we’ve shown, we reverse these decisions when appropriate, and every appeal helps our systems get smarter over time.”

YouTube is caught in a difficult position, adjudicating between those who claim that it moderates too heavily and others who complain that it doesn’t do enough. And every demonetization is a direct hit to its own bottom line. I sympathize with the site’s predicament, while also noting that YouTube is owned by one of the richest tech companies in the world, and some of that wealth rests on a business model of light-touch, automated moderation. In the last quarter of 2022, YouTube made nearly $8 billion in advertising revenue. There’s a very good reason journalism is not as profitable as that: Imagine if YouTube edited its content as diligently as a legacy newspaper or television channel—even quite a sloppy one. Its great river of videos would slow to a trickle.