The Road Dogs of the American West

The Atlantic

Photographs by Bryan Schutmaat

Drive far enough into Texas from the Louisiana border, and you’ll see the ground dry, the earth crumble into dust. Eventually, the photographer Bryan Schutmaat told me, the strip malls fade into the rearview mirror, the landscape opens, and the American West begins.

Schutmaat has long been fascinated by the West; as he toured with punk bands in his teens and early 20s, he felt himself drawn to the region and its open space. His new book, Sons of the Living, documents a decade’s worth of more recent journeys through the West and features the hitchhikers and “road dogs” he met along the way.

First in a Subaru Forester and then in a Toyota Tacoma pickup, Schutmaat would set out from his home in Austin and drive toward California. He’d weave from Interstate 10 onto the more isolated two-lane blacktop highways snaking into the remote reaches of Texas, New Mexico, and Arizona. When he sensed he was encroaching on the sprawl of Los Angeles, he’d turn around. All told, he spent more than 150 days on the road; many nights, he slept in his car.

At truck stops and campgrounds, Schutmaat would shoot portraits of people he encountered and offer to ferry them from one place to the next. Behind the wheel or over a shared meal or beer, he’d listen as they told their stories: One man, Tazz, had taken to the road after he’d been released from prison and struggled to find work. He had drifted far from his childhood in Maine, and his thick Down East accent clashed with his surroundings. He claimed to have once played childhood pranks on Stephen King’s home; later, he told Schutmaat, he committed more serious transgressions. Schutmaat spent several hours talking in a New Mexico Denny’s with another man, Walker, a tall traveler with resplendent facial hair; Schutmaat took his portrait in the light of a gas-station pavilion, Walker’s beard swaying in the breeze.

[From the September 1896 issue: Frederick J. Turner on the problem of the west]

Schutmaat’s work challenges a mythology of the West that has long maintained a hold on the American imagination. Frederick Jackson Turner theorized that the country’s democratic culture was forged from its pacification of the western frontier; the novelist Wallace Stegner called the region “a geography of hope.” But like the Depression-era photographers Dorothea Lange and Walker Evans, Schutmaat complicates rosy views of the region and its promise. The newspaper editor Horace Greeley is said to have encouraged one of his charges to “Go west, young man, go west and grow up with the country.” Sons of the Living makes clear that the West contains no guaranteed redemption.

Instead, Schutmaat’s photographs reveal what happens when a country grows old and fractured, its citizens isolated. The travelers Schutmaat photographed—widowers and addicts, migrant workers and survivalists, drifters and divorcées—are resilient, but not exactly hopeful. In Schutmaat’s images of abandoned billboards and collapsing towns, there’s a feeling not of humanity taming the wilderness, but of the wilderness steadily reasserting itself over a crumbling human presence.

When Schutmaat was traveling, he’d pull over on the side of the road at nightfall and hike up the highway embankment. He’d set up his camera somewhere elevated and leave the shutter open for five, even 10 minutes. Through his lens, the sparse sets of headlights on the road below would melt into a river of light: the road erased, a wildness restored.

These photos appear in Bryan Schutmaat’s new book, Sons of the Living.

AI’s Fingerprints Were All Over the Election

The Atlantic

The images and videos were hard to miss in the days leading up to November 5. There was Donald Trump with the chiseled musculature of Superman, hovering over a row of skyscrapers. Trump and Kamala Harris squaring off in bright-red uniforms (McDonald’s logo for Trump, hammer-and-sickle insignia for Harris). People had clearly used AI to create these—an effort to show support for their candidate or to troll their opponents. But the images didn’t stop after Trump won. The day after polls closed, the Statue of Liberty wept into her hands as a drizzle fell around her. Trump and Elon Musk, in space suits, stood on the surface of Mars; hours later, Trump appeared at the door of the White House, waving goodbye to Harris as she walked away, clutching a cardboard box filled with flags.

[Read: We haven’t seen the worst of fake news]

Every federal election since at least 2018 has been plagued with fears about potential disruptions from AI. Perhaps a computer-generated recording of Joe Biden would swing a key county, or doctored footage of a poll worker burning ballots would ignite riots. Those predictions never materialized, but many of them were also made before the arrival of ChatGPT, DALL-E, and the broader category of advanced, cheap, and easy-to-use generative-AI models—all of which seemed much more threatening than anything that had come before. Not even a year after ChatGPT was released in late 2022, generative-AI programs were used to target Trump, Emmanuel Macron, Biden, and other political leaders. In May 2023, an AI-generated image of smoke billowing out of the Pentagon caused a brief dip in the U.S. stock market. Weeks later, Ron DeSantis’s presidential primary campaign appeared to have used the technology to make an advertisement.

And so a trio of political scientists at Purdue University decided to get a head start on tracking how generative AI might influence the 2024 election cycle. In June 2023, Christina Walker, Daniel Schiff, and Kaylyn Jackson Schiff started to track political AI-generated images and videos in the United States. Their work is focused on two particular categories: deepfakes, referring to media made with AI, and “cheapfakes,” which are produced with more traditional editing software, such as Photoshop. Now, more than a week after polls closed, their database, along with the work of other researchers, paints a surprising picture of how AI appears to have actually influenced the election—one that is far more complicated than previous fears suggested.

The most visible generated media this election have not exactly planted convincing false narratives or otherwise deceived American citizens. Instead, AI-generated media have been used for transparent propaganda, satire, and emotional outpourings: Trump, wading in a lake, clutches a duck and a cat (“Protect our ducks and kittens in Ohio!”); Harris, enrobed in a coppery blue, struts before the Statue of Liberty and raises a matching torch. In August, Trump posted an AI-generated video of himself and Musk doing a synchronized TikTok dance; a follower responded with an AI image of the duo riding a dragon. The pictures were fake, sure, but they weren’t feigning otherwise. In their analysis of election-week AI imagery, the Purdue team found that such posts were far more frequently intended for satire or entertainment than false information per se. Trump and Musk have shared political AI illustrations that got hundreds of millions of views. Brendan Nyhan, a political scientist at Dartmouth who studies the effects of misinformation, told me that the AI images he saw “were obviously AI-generated, and they were not being treated as literal truth or evidence of something. They were treated as visual illustrations of some larger point.” And this usage isn’t new: In the Purdue team’s entire database of fabricated political imagery, which includes hundreds of entries, satire and entertainment were the two most common goals.

That doesn’t mean these images and videos are merely playful or innocuous. Outrageous and false propaganda, after all, has long been an effective way to spread political messaging and rile up supporters. Some of history’s most effective propaganda campaigns have been built on images that simply project the strength of one leader or nation. Generative AI offers a low-cost and easy tool to produce huge amounts of tailored images that accomplish just this, heightening existing emotions and channeling them to specific ends.

These sorts of AI-generated cartoons and agitprop could well have swayed undecided minds, driven turnout, galvanized “Stop the Steal” plotting, or fueled harassment of election officials or racial minorities. An illustration of Trump in an orange jumpsuit emphasizes Trump’s criminal convictions and perceived unfitness for the office, while an image of Harris speaking to a sea of red flags, a giant hammer-and-sickle above the crowd, smears her as “woke” and a “Communist.” An edited image showing Harris dressed as Princess Leia kneeling before a voting machine and captioned “Help me, Dominion. You’re my only hope” (an altered version of a famous Star Wars line) stirs up conspiracy theories about election fraud. “Even though we’re noticing many deepfakes that seem silly, or just seem like simple political cartoons or memes, they might still have a big impact on what we think about politics,” Kaylyn Jackson Schiff told me. It’s easy to imagine someone’s thought process: That image of “Comrade Kamala” is AI-generated, sure, but she’s still a Communist. That video of people shredding ballots is animated, but they’re still shredding ballots. That’s a cartoon of Trump clutching a cat, but immigrants really are eating pets. Viewers, especially those already predisposed to find and believe extreme or inflammatory content, may be further radicalized and siloed. The especially photorealistic propaganda might even fool someone if reshared enough times, Walker told me.

[Read: I’m running out of ways to explain how bad this is]

There were, of course, also a number of fake images and videos that were intended to directly change people’s attitudes and behaviors. The FBI has identified several fake videos intended to cast doubt on election procedures, such as false footage of someone ripping up ballots in Pennsylvania. “Our foreign adversaries were clearly using AI” to push false stories, Lawrence Norden, the vice president of the Elections & Government Program at the Brennan Center for Justice, told me. He did not see any “super innovative use of AI,” but said the technology has augmented existing strategies, such as creating fake-news websites, stories, and social-media accounts, as well as helping plan and execute cyberattacks. But it will take months or years to fully parse the technology’s direct influence on 2024’s elections. Misinformation in local races is much harder to track, for example, because there is less of a spotlight on them. Deepfakes in encrypted group chats are also difficult to track, Norden said. Experts had also wondered whether the use of AI to create highly realistic, yet fake, videos showing voter fraud might have been deployed to discredit a Trump loss. This scenario has not yet been tested.

Although it appears that AI did not directly sway the results last week, the technology has eroded Americans’ overall ability to know or trust information and one another—not deceiving people into believing a particular thing so much as advancing a nationwide descent into believing nothing at all. A new analysis by the Institute for Strategic Dialogue of AI-generated media during the U.S. election cycle found that users on X, YouTube, and Reddit inaccurately assessed whether content was real roughly half the time, and more frequently thought authentic content was AI-generated than the other way around. With so much uncertainty, using AI to convince people of alternative facts seems like a waste of time—far more useful to exploit the technology to directly and forcefully send a motivated message, instead. Perhaps that’s why, of the election-week AI-generated media the Purdue team analyzed, pro-Trump and anti-Kamala content was most common.

More than a week after Trump’s victory, the use of AI for satire, entertainment, and activism has not ceased. Musk, who will soon co-lead a new extragovernmental organization, routinely shares such content. The morning of November 6, Donald Trump Jr. put out a call for memes that was met with all manner of AI-generated images. Generative AI is changing the nature of evidence, yes, but also that of communication—providing a new, powerful medium through which to illustrate charged emotions and beliefs, broadcast them, and rally even more like-minded people. Instead of an all-caps thread, you can share a detailed and personalized visual effigy. These AI-generated images and videos are instantly legible and, by explicitly targeting emotions instead of information, obviate the need for falsification or critical thinking at all. No need to refute, or even consider, a differing view—just make an angry meme about it. No need to convince anyone of your adoration of J. D. Vance—just use AI to make him, literally, more attractive. Veracity is beside the point, which makes the technology perhaps the nation’s most salient mode of political expression. In a country where facts have gone from irrelevant to detestable, of course deepfakes—fake news made by deep-learning algorithms—don’t matter; to growing numbers of people, everything is fake but what they already know, or rather, feel.