AI’s Fingerprints Were All Over the Election

The Atlantic

The images and videos were hard to miss in the days leading up to November 5. There was Donald Trump with the chiseled musculature of Superman, hovering over a row of skyscrapers. Trump and Kamala Harris squaring off in bright-red uniforms (McDonald’s logo for Trump, hammer-and-sickle insignia for Harris). People had clearly used AI to create these—an effort to show support for their candidate or to troll their opponents. But the images didn’t stop after Trump won. The day after polls closed, the Statue of Liberty wept into her hands as a drizzle fell around her. Trump and Elon Musk, in space suits, stood on the surface of Mars; hours later, Trump appeared at the door of the White House, waving goodbye to Harris as she walked away, clutching a cardboard box filled with flags.

[Read: We haven’t seen the worst of fake news]

Every federal election since at least 2018 has been plagued with fears about potential disruptions from AI. Perhaps a computer-generated recording of Joe Biden would swing a key county, or doctored footage of a poll worker burning ballots would ignite riots. Those predictions never materialized, but many of them were also made before the arrival of ChatGPT, DALL-E, and the broader category of advanced, cheap, and easy-to-use generative-AI models—all of which seemed much more threatening than anything that had come before. Not even a year after ChatGPT was released in late 2022, generative-AI programs were used to target Trump, Emmanuel Macron, Biden, and other political leaders. In May 2023, an AI-generated image of smoke billowing out of the Pentagon caused a brief dip in the U.S. stock market. Weeks later, Ron DeSantis’s presidential primary campaign appeared to have used the technology to make an advertisement.

And so a trio of political scientists at Purdue University decided to get a head start on tracking how generative AI might influence the 2024 election cycle. In June 2023, Christina Walker, Daniel Schiff, and Kaylyn Jackson Schiff started to track political AI-generated images and videos in the United States. Their work is focused on two particular categories: deepfakes, referring to media made with AI, and “cheapfakes,” which are produced with more traditional editing software, such as Photoshop. Now, more than a week after polls closed, their database, along with the work of other researchers, paints a surprising picture of how AI appears to have actually influenced the election—one that is far more complicated than previous fears suggested.

The most visible generated media this election have not exactly planted convincing false narratives or otherwise deceived American citizens. Instead, AI-generated media have been used for transparent propaganda, satire, and emotional outpourings: Trump, wading in a lake, clutches a duck and a cat (“Protect our ducks and kittens in Ohio!”); Harris, enrobed in a coppery blue, struts before the Statue of Liberty and raises a matching torch. In August, Trump posted an AI-generated video of himself and Musk doing a synchronized TikTok dance; a follower responded with an AI image of the duo riding a dragon. The pictures were fake, sure, but they weren’t pretending otherwise. In their analysis of election-week AI imagery, the Purdue team found that such posts were far more frequently intended for satire or entertainment than for disinformation per se. Trump and Musk have shared political AI illustrations that got hundreds of millions of views. Brendan Nyhan, a political scientist at Dartmouth who studies the effects of misinformation, told me that the AI images he saw “were obviously AI-generated, and they were not being treated as literal truth or evidence of something. They were treated as visual illustrations of some larger point.” And this usage isn’t new: In the Purdue team’s entire database of fabricated political imagery, which includes hundreds of entries, satire and entertainment were the two most common goals.

That doesn’t mean these images and videos are merely playful or innocuous. Outrageous and false propaganda, after all, has long been an effective way to spread political messaging and rile up supporters. Some of history’s most effective propaganda campaigns have been built on images that simply project the strength of one leader or nation. Generative AI offers a low-cost and easy tool to produce huge amounts of tailored images that accomplish just this, heightening existing emotions and channeling them to specific ends.

These sorts of AI-generated cartoons and agitprop could well have swayed undecided minds, driven turnout, galvanized “Stop the Steal” plotting, or fueled harassment of election officials or racial minorities. An illustration of Trump in an orange jumpsuit emphasizes Trump’s criminal convictions and perceived unfitness for the office, while an image of Harris speaking to a sea of red flags, a giant hammer-and-sickle above the crowd, smears her as “woke” and a “Communist.” An edited image showing Harris dressed as Princess Leia kneeling before a voting machine and captioned “Help me, Dominion. You’re my only hope” (an altered version of a famous Star Wars line) stirs up conspiracy theories about election fraud. “Even though we’re noticing many deepfakes that seem silly, or just seem like simple political cartoons or memes, they might still have a big impact on what we think about politics,” Kaylyn Jackson Schiff told me. It’s easy to imagine someone’s thought process: That image of “Comrade Kamala” is AI-generated, sure, but she’s still a Communist. That video of people shredding ballots is animated, but they’re still shredding ballots. That’s a cartoon of Trump clutching a cat, but immigrants really are eating pets. Viewers, especially those already predisposed to find and believe extreme or inflammatory content, may be further radicalized and siloed. The especially photorealistic propaganda might even fool someone if reshared enough times, Walker told me.

[Read: I’m running out of ways to explain how bad this is]

There were, of course, also a number of fake images and videos that were intended to directly change people’s attitudes and behaviors. The FBI has identified several fake videos intended to cast doubt on election procedures, such as false footage of someone ripping up ballots in Pennsylvania. “Our foreign adversaries were clearly using AI” to push false stories, Lawrence Norden, the vice president of the Elections & Government Program at the Brennan Center for Justice, told me. He did not see any “super innovative use of AI,” but said the technology has augmented existing strategies, such as creating fake-news websites, stories, and social-media accounts, as well as helping plan and execute cyberattacks. But it will take months or years to fully parse the technology’s direct influence on 2024’s elections. Misinformation in local races is much harder to track, for example, because there is less of a spotlight on them. Deepfakes in encrypted group chats are also difficult to track, Norden said. Experts had also wondered whether AI might be used to create highly realistic, yet fake, videos of voter fraud in order to discredit a Trump loss. That scenario has not yet been tested.

Although it appears that AI did not directly sway the results last week, the technology has eroded Americans’ overall ability to know or trust information and one another—not deceiving people into believing a particular thing so much as advancing a nationwide descent into believing nothing at all. A new analysis by the Institute for Strategic Dialogue of AI-generated media during the U.S. election cycle found that users on X, YouTube, and Reddit inaccurately assessed whether content was real roughly half the time, and more frequently thought authentic content was AI-generated than the other way around. With so much uncertainty, using AI to convince people of alternative facts seems like a waste of time; it is far more useful to exploit the technology to send a motivated message directly and forcefully. Perhaps that’s why, of the election-week AI-generated media the Purdue team analyzed, pro-Trump and anti-Kamala content was most common.

More than a week after Trump’s victory, the use of AI for satire, entertainment, and activism has not ceased. Musk, who will soon co-lead a new extragovernmental organization, routinely shares such content. The morning of November 6, Donald Trump Jr. put out a call for memes that was met with all manner of AI-generated images. Generative AI is changing the nature of evidence, yes, but also that of communication—providing a new, powerful medium through which to illustrate charged emotions and beliefs, broadcast them, and rally even more like-minded people. Instead of an all-caps thread, you can share a detailed and personalized visual effigy. These AI-generated images and videos are instantly legible and, by explicitly targeting emotions instead of information, obviate the need for falsification or critical thinking at all. No need to refute, or even consider, a differing view—just make an angry meme about it. No need to convince anyone of your adoration of J. D. Vance—just use AI to make him, literally, more attractive. Veracity is beside the point, which makes the technology perhaps the nation’s most salient mode of political expression. In a country where facts have gone from irrelevant to detestable, of course deepfakes—fake news made by deep-learning algorithms—don’t matter; to growing numbers of people, everything is fake but what they already know, or rather, feel.

The Gateway Pundit Is Still Pushing an Alternate Reality

The Atlantic

The Gateway Pundit, a right-wing website with a history of spreading lies about election fraud, recently posted something out of the ordinary. It took a break from its coverage of the 2024 presidential election (sample headlines: “KAMALA IS KOLLAPSING,” “KAMALA FUNDS NAZIS”) to post a three-sentence note from the site’s founder and editor, Jim Hoft, offering some factual information about the previous presidential election.

In his brief statement, presented without any particular fanfare, Hoft writes that election officials in Georgia concluded that no widespread voter fraud took place at Atlanta’s State Farm Arena on Election Day 2020. He notes specifically that they concluded that two election workers processing votes that night, Ruby Freeman and Wandrea Moss, had not engaged “in ballot fraud or criminal misconduct.” And he explains that “a legal matter with this news organization and the two election workers has been resolved to the mutual satisfaction of the parties through a fair and reasonable settlement.”  

Indeed, the blog post appeared just days after the Gateway Pundit settled a defamation lawsuit brought by Freeman and Moss, who sued the outlet for promoting false claims that they had participated in mass voter fraud. (These claims, quickly debunked, were focused on video footage of the mother-daughter pair storing ballots in their appropriate carriers—conspiracy theorists had claimed that they were instead packing them into suitcases for some wicked purpose.) The terms of the settlement were not disclosed, but after it was announced, almost 70 articles previously published on the Gateway Pundit, and cited in the lawsuit, were no longer available, according to an analysis by the Associated Press.

Even so, the site—which has promoted numerous lies and conspiracy theories in the past, and which still faces a lawsuit from Eric Coomer, a former executive at Dominion Voting Systems, for pushing false claims that he helped rig the 2020 election—shows no signs of retreat. (The Gateway Pundit has fought this lawsuit, including by filing a motion to dismiss. Although the site filed for bankruptcy in April, a judge tossed the filing out, concluding that it was made in “bad faith.”) The site has continued to post with impunity, repeatedly promoting the conspiracy theory that Democrats are “openly stealing” the 2024 election with fraudulent overseas votes. A political-science professor recently told my colleague Matteo Wong that this particular claim has been one of the “dominant narratives” this year, as Donald Trump’s supporters seek ways to undermine faith in the democratic process.

This is to be expected: The Gateway Pundit has been around since 2004, and it has always been a destination for those disaffected by the “establishment media.” Comment sections—on any website, let alone those that explicitly cater to the far-right fringe—have never had a reputation for sobriety and thoughtfulness. And the Gateway Pundit’s is particularly vivid. One recent commenter described a desire to see Democratic officials “stripped naked and sprayed down with a firehose like Rambo in First Blood.” Even so, data recently shared with me by the Center for Countering Digital Hate—a nonprofit that studies disinformation and online abuse, and which reports on companies that it believes allow such content to spread—show just how nasty these communities can get. Despite the fracturing of online ecosystems in recent years—namely, the rise and fall of various social platforms and the restructuring of Google Search, both of which have resulted in an overall downturn in traffic to news sites—the Gateway Pundit has remained strikingly relevant on social media, according to the CCDH. And its user base, as seen in the comments, has regularly endorsed political violence in the past few months, despite the site’s own policies forbidding such posts.

Researchers from the CCDH recently examined the comment sections beneath 120 Gateway Pundit articles about alleged election fraud published between May and September. They found that 75 percent of those sections contained “threats or calls for violence.” One comment cited in the report reads: “Beat the hell out of any Democrat you come across today just for the hell of it.” Another: “They could show/televise the hangings or lined up and executed by firing squad and have that be a reminder not to try to overthrow our constitution.” Overall, the researchers found more than 200 comments with violent content hosted on the Gateway Pundit.

Sites like the Gateway Pundit often attempt to justify the vitriol they host on their platforms by arguing in free-speech terms. But even free-speech absolutists can understand legitimate concerns about incitements to violence. Local election officials in Georgia and Arizona have blamed the site and its comment section for election-violence threats in the past. A 2021 Reuters report found links between the site and more than 80 “menacing” messages sent to election workers. According to Reuters, after the Gateway Pundit published a fake report about ballot fraud in Wisconsin, one election official found herself identified in the comment section, along with calls for her to be killed. “She found one post especially unnerving,” the Reuters reporters Peter Eisler and Jason Szep write. “It recommended a specific bullet for killing her—a 7.62 millimeter round for an AK-47 assault rifle.”

The CCDH researchers used data from a social-media monitoring tool called Newswhip to measure social-media engagement with election-related content from the Gateway Pundit and similar sites. Although the Gateway Pundit was second to Breitbart as a source for election misinformation on social media overall, the researchers found that it was the most popular such site on X, where its content was shared more than 800,000 times from the start of the year through October 2.

In response to a request for comment, John Burns, a lawyer representing Hoft and the Gateway Pundit, told me that the site relies on users reporting “offending” comments, including those expressing violence or threats. “If a few slipped through the cracks, we’ll look into it,” Burns said. He did not comment on the specifics of the CCDH report, nor the recent lawsuits against the company.

The site uses a popular third-party commenting platform called Disqus, which has taken a hands-off approach to policing far-right, racist content in the past. Disqus offers clients AI-powered, customizable moderation tools that allow them to filter out toxic or inappropriate comments from their site, or ban users. The CCDH report points out that violent comments are against Disqus’s own terms of service. “Publishers monitor and enforce their own community rules,” a Disqus spokesperson wrote in an email statement. “Only if a comment is flagged directly to the Disqus team do we review it against our terms of service. Once flagged, we aim to review within 24 hours and determine whether or not action is required based on our rules and terms of service.”

The Gateway Pundit is just one of a constellation of right-wing sites that offer readers an alternate reality. Emily Bell, the founding director of the Tow Center for Digital Journalism, told me that these sites pushed the range of what’s considered acceptable speech “quite a long way to the right,” and in some cases, away from traditional, “fact-based” media. They started to grow more popular with the rise of the social web, in which algorithmic recommendation systems and conservative influencers pushed their articles to legions of users.

The real power of these sites may come not in their broad reach, but in how they shape the opinions of a relatively small, radical subset of people. According to a paper published in Nature this summer, false and inflammatory content tends to reach “a narrow fringe” of highly motivated users. Sites like the Gateway Pundit are “influential in a very small niche,” Brendan Nyhan, a professor of government at Dartmouth and one of the authors of the paper, told me over email. As my colleague Charlie Warzel recently noted, the effect of this disinformation is not necessarily to deceive people, but rather to help this small subset of people stay anchored in their alternate reality.

I asked Pasha Dashtgard, the director of research for the Polarization and Extremism Research and Innovation Lab at American University, what exactly the relationship is between sites like Gateway Pundit and political violence. “That is such a million-dollar question,” he said. “It’s hard to tell.” By that, he means that it’s hard for researchers and law enforcement to know when online threats will translate into armed vigilantes descending on government buildings. Social-media platforms have only gotten less transparent with their data since the previous cycle, making it more difficult for researchers to suss out what’s happening on them.

“The pathway to radicalization is not linear,” Dashtgard explained. “Certainly I would want to disabuse anyone of the idea that it’s like, you go on this website and that makes you want to kill people.” People could have other risk factors that make them more likely to commit violence, such as feeling alienated or depressed, he said. These sites just represent another potential push mechanism.

And they don’t seem to be slowing down. Three hours after Hoft posted his blog post correcting the record in the case of Freeman and Moss, he posted another statement. This one was addressed to readers. “Many of you may be aware that The Gateway Pundit was in the news this week. We settled an ongoing lawsuit against us,” the post reads in part. “Despite their best efforts, we are still standing.”