X Is a White-Supremacist Site

The Atlantic

https://www.theatlantic.com/technology/archive/2024/11/x-white-supremacist-site/680538/

X has always had a Nazi problem. I’ve covered the site, formerly known as Twitter, for more than a decade and reported extensively on its harassment problems, its verification (and then de-verification) of a white nationalist, and the glut of anti-Semitic hatred that roiled the platform in 2016.

But something is different today. Heaps of unfiltered posts that plainly celebrate racism, anti-Semitism, and outright Nazism are easily accessible and possibly even promoted by the site’s algorithms. All the while, Elon Musk—a far-right activist and the site’s owner, who is campaigning for and giving away millions to help elect Donald Trump—amplifies horrendous conspiracy theories about voter fraud, migrants run amok, and the idea that Jewish people hate white people. Twitter was always bad if you knew where to look, but because of Musk, X is far worse. (X and Musk did not respond to requests for comment for this article.)

It takes little effort to find neo-Nazi accounts that have built up substantial audiences on X. “Thank you all for 7K,” one white-nationalist meme account posted on October 17, complete with a heil-Hitler emoji reference. One week later, the account, which mostly posts old clips of Hitler speeches and content about how “Hitler was right,” celebrated 14,000 followers. One post, a black-and-white video of Nazis goose-stepping, has more than 187,000 views. Another racist and anti-Semitic video about Jewish women and Black men—clearly AI-generated—has more than 306,000 views. It was also posted in late October.

Many who remain on the platform have noticed X decaying even more than usual in recent months. “I’ve seen SO many seemingly unironic posts like this on Twitter recently this is getting insane,” one X user posted in response to a meme that the far-right influencer Stew Peters recently shared. It showed an image of Adolf Hitler holding a telephone with overlaid text reading, “Hello … 2024? Are you guys starting to get it yet?” Peters appended the commentary, “Yes. We’ve noticed.” The idea is simply that Hitler was right, and X users ate it up: As of this writing, the post has received about 67,000 likes, 10,000 reposts, and 11.4 million views.

When Musk took over, in 2022, there were initial reports that hate speech (anti-Black and anti-Semitic slurs) was surging on the platform. By December of that year, one research group described the increase in hate speech as “unprecedented.” And it seems only to have gotten worse. There are far more blatant examples of racism now, even compared with a year ago. In September, the World Bank halted advertising on X after its promoted ads were showing up in the replies to pro-Nazi and white-nationalist content from accounts with hundreds of thousands of followers. Search queries such as “Hitler was right” return posts with tens of thousands of views—they’re indistinguishable from the poison once relegated to the worst sites on the internet, including 4chan, Gab, and Stormfront.

The hatred isn’t just coming from anonymous fringe posters either. Late last month, Clay Higgins, a Republican congressman from Louisiana, published a racist, threatening post about the Haitians in Springfield, Ohio, saying they’re from the “nastiest country in the western hemisphere.” Then he issued an ultimatum: “All these thugs better get their mind right and their ass out of our country before January 20th,” he wrote in the post, referencing Inauguration Day. Higgins eventually deleted the post at the request of his House colleagues on both sides of the aisle but refused to apologize. “I can put up another controversial post tomorrow if you want me to. I mean, we do have freedom of speech. I’ll say what I want,” he told CNN later that day.

And although Higgins did eventually try to walk his initial post back, clarifying that he was really referring to Haitian gangs, the sentiment he shared with CNN is right. The lawmaker can put up another vile post maligning an entire country whenever he desires. Not because of his right to free speech—which exists to protect against government interference—but because of how Musk chooses to operate his platform. Despite the social network’s policy that prohibits “incitement of harassment,” X seemingly took no issue with Higgins’s racist post or its potential to cause real-world harm to Springfield residents. (The town has already closed and evacuated its schools twice because of bomb threats.) And why would X care? The platform, which reinstated thousands of banned accounts following Musk’s takeover, in 2022—accounts that belong to QAnon supporters, political hucksters, conspiracy theorists, and at least one bona fide neo-Nazi—is so inundated with bigoted memes, racist AI slop, and unspeakable slurs that Higgins’s post seemed almost measured by comparison. In the past, when Twitter seemed more interested in enforcing content-moderation standards, the lawmaker’s comments might have resulted in a ban or some other disciplinary response: On X, he found an eager, sympathetic audience willing to amplify his hateful message.

His deleted post is instructive, though, as a way to measure the degradation of X under Musk. The site is a political project run by a politically radicalized centibillionaire. The worthwhile parts of Twitter (real-time news, sports, culture, silly memes, spontaneous encounters with celebrity accounts) have been drowned out by hateful garbage. X is no longer a social-media site with a white-supremacy problem, but a white-supremacist site with a social-media problem.

Musk has certainly bent the social network to support his politics, which has recently involved joking on Tucker Carlson’s show (which streams on X) that “nobody is even bothering to try to kill Kamala” and repurposing the @america handle from an inactive user to turn it into a megaphone for his pro-Trump super PAC. Musk has also quite clearly reengineered the site so that users see him, and his tweets, whether or not they follow him.

When Musk announced his intent to purchase Twitter, in April 2022, the New York Times columnist Ezra Klein aptly noted that “Musk reveals what he wants Twitter to be by how he acts on it.” By this logic, it would seem that X is vying to be the official propaganda outlet not just for Trump generally but also for the “Great Replacement” theory, which states that there is a global plot to eradicate the white race and its culture through immigration. In just the past year, Musk has endorsed multiple posts about the conspiracy theory. In November 2023, in response to a user named @breakingbaht who accused Jews of supporting bringing “hordes of minorities” into the United States, Musk replied, “You have said the actual truth.” Musk’s post was viewed more than 8 million times.

[Read: Musk’s Twitter is the blueprint for a MAGA government]

Though Musk has publicly claimed that he doesn’t “subscribe” to the “Great Replacement” theory, he appears obsessed with the idea that Republican voters in America are under attack from immigrants. Last December, he posted a misleading graph suggesting that the number of immigrants arriving illegally was overtaking domestic birth rates. He has repeatedly referenced a supposed Democratic plot to “legalize vast numbers of illegals” and put an end to fair elections. He has falsely suggested that the Biden administration was “flying ‘asylum seekers’, who are fast-tracked to citizenship, directly into swing states like Pennsylvania, Ohio, Wisconsin and Arizona” and argued that, soon, “everywhere in America will be like the nightmare that is downtown San Francisco.” According to a recent Bloomberg analysis of 53,000 of Musk’s posts, the billionaire has posted more about immigration and voter fraud than any other topic (more than 1,300 posts in total), garnering roughly 10 billion views.

But Musk’s interests extend beyond the United States. This summer, during a period of unrest and rioting in the United Kingdom over a mass stabbing that killed three children, the centibillionaire used his account to suggest that a civil war there was “inevitable.” He also shared (and subsequently deleted) a conspiracy theory that the U.K. government was building detainment camps for people rioting against Muslims. Additionally, X was instrumental in spreading misinformation and fueling outrage among far-right, anti-immigration protesters.

In Springfield, Ohio, X played a similar role as a conduit for white supremacists and far-right extremists to fuel real-world harm. One of the groups taking credit for singling out Springfield’s Haitian community was Blood Tribe, a neo-Nazi group known for marching through city streets waving swastikas. Blood Tribe had been focused on the town for months, but not until prominent X accounts (including Musk’s, J. D. Vance’s, and Trump’s) seized on a Facebook post from the region did Springfield become a national target. “It is no coincidence that there was an online rumor mill ready to amplify any social media posts about Springfield because Blood Tribe has been targeting the town in an effort to stoke racial resentment against ‘subhuman’ Haitians,” the journalist Robert Tracinski wrote recently. Tracinski argues that social-media channels (like X) have been instrumental in transferring neo-Nazi propaganda into the public consciousness—all the way to the presidential-debate stage. He is right. Musk’s platform has become a political tool for stoking racial hatred online and translating it into harassment in the physical world.

The ability to drag fringe ideas and theories into mainstream political discourse has long been a hallmark of X, even back when it was known as Twitter. There’s always been a trade-off with the platform’s ability to narrow the distance between activists and people in positions of power. Social-justice movements such as the Arab Spring and Black Lives Matter owe some of the success of their early organizing efforts to the platform.

Yet the website has also been one of the most reliable mainstream destinations on the internet to see Photoshopped images of public figures (or their family members) in gas chambers, or crude, racist cartoons of Jewish men. Now, under Musk’s stewardship, X seems to run in only one direction. The platform eschews healthy conversation. It abhors nuance, instead favoring constant escalation and engagement-baiting behavior. And it empowers movements that seek to enrage and divide. In April, an NBC News investigation found that “at least 150 paid ‘Premium’ subscriber X accounts and thousands of unpaid accounts have posted or amplified pro-Nazi content on X in recent months.” According to research from the extremism expert Colin Henry, since Musk’s purchase, there’s been a decline in anti-Semitic posts on 4chan’s infamous “anything goes” forum, and a simultaneous rise in posts targeting Jewish people on X.

X’s own transparency reports show that the social network has allowed hateful content to flourish on its site. In its last report before Musk’s acquisition, covering the second half of 2021, Twitter suspended about 105,000 of the more than 5 million accounts reported for hateful conduct (roughly one suspension for every 48 accounts reported). In the first half of 2024, according to X, the social network received more than 66 million hateful-conduct reports but suspended just 2,361 accounts, on the order of one suspension for every 28,000 reports. It’s not a perfect comparison, as the way X reports and analyzes data has changed under Musk, but the company is clearly taking action far less frequently.

[Read: I’m running out of ways to explain how bad this is]

Because X has made it more difficult for researchers to access data by switching to a paid plan that prices out many academics, it is now hard to get a quantitative understanding of the platform’s degradation. The statistics that do exist are alarming. Research from the Center for Countering Digital Hate found that in just the first month of Musk’s ownership, anti–Black American slurs used on the platform increased by 202 percent. The Anti-Defamation League found that anti-Semitic tweets on the platform increased by 61 percent in just two weeks after Musk’s takeover. But much of the evidence is anecdotal. The Washington Post summed up a recent report from the Institute for Strategic Dialogue, noting that pro-Hitler content “reached the largest audiences on X [relative to other social-media platforms], where it was also most likely to be recommended via the site’s algorithm.” Since Musk took over, X has done the following:

- Seemingly failed to block a misleading advertisement post purchased by Jason Köhne, a white nationalist with the handle @NoWhiteGuiltNWG.
- Seemingly failed to block an advertisement calling to reinstate the death penalty for gay people.
- Reportedly run ads on 20 racist and anti-Semitic hashtags, including #whitepower, despite Musk pledging that he would demonetize posts that included hate speech. (After NBC asked about these, X removed the ability for users to search for some of these hashtags.)
- Granted blue-check verification to an account with the N-word in its handle. (The account has since been suspended.)
- Allowed an account that praised Hitler to purchase a gold-check badge, which denotes an “official organization” and is typically used by brands such as Doritos and BlackRock. (This account has since been suspended.)
- Seemingly failed to take immediate action on 63 of 66 accounts flagged for disseminating AI-generated Nazi memes from 4chan. More than half of the posts were made by paid accounts with verified badges, according to research by the nonprofit Center for Countering Digital Hate.

None of this is accidental. The output of a platform tells you what it is designed to do: In X’s case, all of this is proof of a system engineered to give voice to hateful ideas and reward those who espouse them. If one is to judge X by its main exports, then X, as it exists now under Musk, is a white-supremacist website.

You might scoff at this notion, especially if you, like me, have spent nearly two decades willingly logged on to the site, or if you, like me, have had your professional life influenced in surprising, occasionally delightful ways by the platform. Even now, I can scroll through the site’s algorithmic pond scum and find things worth saving—interesting commentary, breaking news, posts and observations that make me laugh. But these exceptional morsels are what make the platform so insidious, in part because they give cover to the true political project that X now represents and empowers.

As I was preparing to write this story, I visited some of the most vile corners of the internet. I’ve monitored these spaces for years, and yet this time, I was struck by how little distance there was between them and what X has become. It is impossible to ignore: The difference between X and a known hateful site such as Gab is people like me. The majority of users are no doubt creators, businesses, journalists, celebrities, political junkies, sports fans, and other perfectly normal people who hold their nose and cling to the site. We are the human shield of respectability that keeps Musk’s disastrous $44 billion investment from being little more than an algorithmically powered Stormfront.

The justifications—the lure of the community, the (now-limited) ability to bear witness to news in real time, and the reach of one’s audience of followers—feel particularly weak today. X’s cultural impact is still real, but its promotional value is minimal. (A recent post linking to a story of mine generated 289,000 impressions and 12,900 interactions, but only 948 link clicks—a click rate of roughly 0.33 percent.) NPR, which left the platform in April 2023, reported almost negligible declines in traffic referrals after abandoning the site.

Continuing to post on X has been indefensible for some time. But now, more than ever, there is no good justification for adding one’s name to X’s list of active users. To leave the platform, some have argued, is to cede an important ideological battleground to the right. I’ve been sympathetic to this line of thinking, but the battle, on this particular platform, is lost. As long as Musk owns the site, its architecture will favor his political allies. If you see posting to X as a fight, then know it is not a fair one. For example: In October, Musk shared a fake screenshot of an Atlantic article, manipulated to show a fake headline—his post, which he never deleted, garnered more than 18 million views. The Atlantic’s X post debunking Musk’s claim received just 28,000 views. Musk is unfathomably rich. He’s used that money to purchase a platform, take it private, and effectively turn it into a megaphone for the world’s loudest racists. Now he’s attempting to use it to elect a corrupt, election-denying felon to the presidency.

To stay on X is not an explicit endorsement of this behavior, but it does help enable it. I’m not at all suggesting—as Musk has previously alleged—that the site be shut down or that Musk should be silenced. But there’s no need to stick around and listen. Why allow Musk to appear even slightly more credible by lending our names, our brands, and our movements to a platform that makes the world more dangerous for real people? To my dismay, I’ve hidden from these questions for too long. Now that I’ve confronted them, I have no good answers.

The Gateway Pundit Is Still Pushing an Alternate Reality

The Atlantic

https://www.theatlantic.com/technology/archive/2024/11/gateway-pundit-ccdh-research/680506/

The Gateway Pundit, a right-wing website with a history of spreading lies about election fraud, recently posted something out of the ordinary. It took a break from its coverage of the 2024 presidential election (sample headlines: “KAMALA IS KOLLAPSING,” “KAMALA FUNDS NAZIS”) to post a three-sentence note from the site’s founder and editor, Jim Hoft, offering some factual information about the previous presidential election.

In his brief statement, presented without any particular fanfare, Hoft writes that election officials in Georgia concluded that no widespread voter fraud took place at Atlanta’s State Farm Arena on Election Day 2020. He notes specifically that they concluded that two election workers processing votes that night, Ruby Freeman and Wandrea Moss, had not engaged “in ballot fraud or criminal misconduct.” And he explains that “a legal matter with this news organization and the two election workers has been resolved to the mutual satisfaction of the parties through a fair and reasonable settlement.”  

Indeed, the blog post appeared just days after the Gateway Pundit settled a defamation lawsuit brought by Freeman and Moss, who sued the outlet for promoting false claims that they had participated in mass voter fraud. (These claims, quickly debunked, were focused on video footage of the mother-daughter pair storing ballots in their appropriate carriers—conspiracy theorists had claimed that they were instead packing them into suitcases for some wicked purpose.) The terms of the settlement were not disclosed, but after it was announced, almost 70 articles previously published on the Gateway Pundit, and cited in the lawsuit, were no longer available, according to an analysis by the Associated Press.

Even so, the site—which has promoted numerous lies and conspiracy theories in the past, and which still faces a lawsuit from Eric Coomer, a former executive at Dominion Voting Systems, for pushing false claims that he helped rig the 2020 election—shows no signs of retreat. (The Gateway Pundit has fought this lawsuit, including by filing a motion to dismiss. Although the site filed for bankruptcy in April, a judge tossed the filing out, concluding that it was made in “bad faith.”) The site has continued to post with impunity, repeatedly promoting the conspiracy theory that Democrats are “openly stealing” the 2024 election with fraudulent overseas votes. A political-science professor recently told my colleague Matteo Wong that this particular claim has been one of the “dominant narratives” this year, as Donald Trump’s supporters seek ways to undermine faith in the democratic process.

This is to be expected: The Gateway Pundit has been around since 2004, and it has always been a destination for those disaffected by the “establishment media.” Comment sections—on any website, let alone those that explicitly cater to the far-right fringe—have never had a reputation for sobriety and thoughtfulness. And the Gateway Pundit’s is particularly vivid. One recent commenter described a desire to see Democratic officials “stripped naked and sprayed down with a firehose like Rambo in First Blood.” Even so, data recently shared with me by the Center for Countering Digital Hate—a nonprofit that studies disinformation and online abuse, and which reports on companies that it believes allow such content to spread—show just how nasty these communities can get. Despite the fracturing of online ecosystems in recent years—namely, the rise and fall of various social platforms and the restructuring of Google Search, both of which have resulted in an overall downturn in traffic to news sites—the Gateway Pundit has remained strikingly relevant on social media, according to the CCDH. And its user base, as seen in the comments, has regularly endorsed political violence in the past few months, despite the site’s own policies forbidding such posts.

Researchers from the CCDH recently examined the comment sections beneath 120 Gateway Pundit articles about alleged election fraud published between May and September. They found that 75 percent of those sections contained “threats or calls for violence.” One comment cited in the report reads: “Beat the hell out of any Democrat you come across today just for the hell of it.”

Another: “They could show/televise the hangings or lined up and executed by firing squad and have that be a reminder not to try to overthrow our constitution.” Overall, the researchers found more than 200 comments with violent content hosted on the Gateway Pundit.

Sites like the Gateway Pundit often attempt to justify the vitriol they host on their platforms by arguing in free-speech terms. But even free-speech absolutists can understand legitimate concerns about incitements to violence. Local election officials in Georgia and Arizona have blamed the site and its comment section for election-violence threats in the past. A 2021 Reuters report found links between the site and more than 80 “menacing” messages sent to election workers. According to Reuters, after the Gateway Pundit published a fake report about ballot fraud in Wisconsin, one election official found herself identified in the comment section, along with calls for her to be killed. “She found one post especially unnerving,” the Reuters reporters Peter Eisler and Jason Szep write. “It recommended a specific bullet for killing her—a 7.62 millimeter round for an AK-47 assault rifle.”

The CCDH researchers used data from a social-media monitoring tool called Newswhip to measure social-media engagement with election-related content from the Gateway Pundit and similar sites. Although the Gateway Pundit was second to Breitbart as a source of election misinformation on social media overall, the researchers found that it was the most popular on X, where its content was shared more than 800,000 times from the start of the year through October 2.

In response to a request for comment, John Burns, a lawyer representing Hoft and the Gateway Pundit, told me that the site relies on users reporting “offending” comments, including those expressing violence or threats. “If a few slipped through the cracks, we’ll look into it,” Burns said. He did not comment on the specifics of the CCDH report, nor the recent lawsuits against the company.

The site uses a popular third-party commenting platform called Disqus, which has taken a hands-off approach to policing far-right, racist content in the past. Disqus offers clients AI-powered, customizable moderation tools that allow them to filter out toxic or inappropriate comments from their site, or ban users. The CCDH report points out that violent comments are against Disqus’s own terms of service. “Publishers monitor and enforce their own community rules,” a Disqus spokesperson wrote in an email statement. “Only if a comment is flagged directly to the Disqus team do we review it against our terms of service. Once flagged, we aim to review within 24 hours and determine whether or not action is required based on our rules and terms of service.”

The Gateway Pundit is just one of a constellation of right-wing sites that offer readers an alternate reality. Emily Bell, the founding director of the Tow Center for Digital Journalism, told me that these sites pushed the range of what’s considered acceptable speech “quite a long way to the right,” and in some cases, away from traditional, “fact-based” media. They started to grow more popular with the rise of the social web, in which algorithmic recommendation systems and conservative influencers pushed their articles to legions of users.

The real power of these sites may come not in their broad reach, but in how they shape the opinions of a relatively small, radical subset of people. According to a paper published in Nature this summer, false and inflammatory content tends to reach “a narrow fringe” of highly motivated users. Sites like the Gateway Pundit are “influential in a very small niche,” Brendan Nyhan, a professor of government at Dartmouth and one of the authors of the paper, told me over email. As my colleague Charlie Warzel recently noted, the effect of this disinformation is not necessarily to deceive people, but rather to help this small subset of people stay anchored in their alternate reality.

I asked Pasha Dashtgard, the director of research for the Polarization and Extremism Research and Innovation Lab at American University, what exactly the relationship is between sites like Gateway Pundit and political violence. “That is such a million-dollar question,” he said. “It’s hard to tell.” By that, he means that it’s hard for researchers and law enforcement to know when online threats will translate into armed vigilantes descending on government buildings. Social-media platforms have only gotten less transparent with their data since the previous cycle, making it more difficult for researchers to suss out what’s happening on them.

“The pathway to radicalization is not linear,” Dashtgard explained. “Certainly I would want to disabuse anyone of the idea that it’s like, you go on this website and that makes you want to kill people.” People could have other risk factors that make them more likely to commit violence, such as feeling alienated or depressed, he said. These sites just represent another potential push mechanism.

And they don’t seem to be slowing down. Three hours after Hoft published the blog post correcting the record in the case of Freeman and Moss, he posted another statement. This one was addressed to readers. “Many of you may be aware that The Gateway Pundit was in the news this week. We settled an ongoing lawsuit against us,” the post reads in part. “Despite their best efforts, we are still standing.”