Amazon Haul Is an Omen

The Atlantic

www.theatlantic.com/technology/archive/2024/11/amazon-haul/680668

No surprise, I thought, as I disposed of the 12-volt charging adapter I had purchased for my car. I’d bought the thing on Temu, the Chinese low-cost-shopping app, as part of a larger haul of random other stuff that the app had marketed to me: chargers to plug into my adapter and car-seat gap-filler crumb-catchers to flank them.

The charger cost $2.43 and took weeks to arrive. Because it came from China, I knew I had no hope of returning it, but $2.43 is less than a Diet Coke these days, so who cares? It turned out I cared, because I wanted to use the gadget to charge things. So I felt disappointment, though not affront, when the gizmo’s plastic pins broke loose mere days after arrival, making the device unusable. I should have just bought a Diet Coke instead.

This week, Amazon announced a new store, Amazon Haul, that hopes to compete with Temu, Shein, and other purveyors of such items. When I opened Haul, which is available only on Amazon’s mobile app, it presented me with an array of “unbelievable finds” at “crazy low prices”: a $3.99 table runner; a pair of blue-and-white zebra-printed women’s swim bottoms for $5.99; a barrage of smartphone cases as low as $2.99; a $2.99 set of foundation brushes; a $2.99 silicone sink strainer; two dozen cork-bottomed chair-leg floor protectors for $6.99.

Temu and Shein have been popular for a long time. But Amazon’s entry into this market officially makes it mainstream. The result isn’t just “low cost” shopping, but a different kind of shopping. Now people buy low-quality goods that they don’t necessarily expect to use, knowing full well that they may be worthless, for the experience of having bought them.

Of course, people have always shopped just to shop: to hang out at the mall, to experience the relief of retail therapy, to adopt the identity of a label or a style, to pass the time between events. But the internet changed shopping. First, e-commerce made it more standardized and efficient. Instead of fingering through the garments on a rack or rummaging through a discount bin, shoppers clicked product images set against stark white backgrounds. They searched for keywords, which assumed that shopping was driven by need rather than desire. Shopping became more rational, more structured.

[Read: Will Americans ever get sick of cheap junk?]

It consolidated, too. Amazon.com became a so-called everything store, and others, including Walmart.com, followed suit. They offered consumers, well, everything; people no longer needed to visit specialized websites. Then online sellers deployed algorithmic recommendations to steer shoppers toward goods that might benefit the sellers or that might lead buyers to buy more. Slowly, over years, online shopping became disorienting. When I recently searched Amazon for a 16x16 gold picture mat, I was shown a family of products, none of which was a 16x16 gold picture mat. The one I finally bought took forever to arrive—it was not eligible for Prime shipping—and was damaged in transit. I wish I’d made different choices, but which ones? I couldn’t find this product in a local store, and I wasn’t willing to pay for a custom-made one from a specialty shop. This experience is now commonplace. I buy things online that I fully expect to be unfit for purpose, necessitating their return (which has become its own kind of hell). Now shopping neither satisfies a need nor sates a desire. It burns up time and moves money around.

Haul is the perfect name for a habit that contributes to this feeling. On early YouTube, circa the mid-aughts, beauty vloggers seeking topics for vlogging started sharing the goods they had recently purchased, online or in person. They produced what became known as “haul videos.” Eventually, as vloggers gave way to influencers on YouTube, Instagram, and elsewhere, direct sponsorships, feed advertisements, and other incentives drove haul or haul-adjacent content: People would make money for posting it.

Shein started recruiting these influencers to promote its service in the West. The products it sold were so cheap, it didn’t really matter if they were any good. One decent fast-fashion top or accessory out of a $20 haul was still cheaper than Abercrombie or American Eagle. Soon enough, you couldn’t even go to those stores anyway, because of pandemic lockdowns; by 2022, Shein accounted for half of fast-fashion sales in the United States. Shopping became a kind of gambling: Roll the dice and hope that you come out a winner, whatever that would mean.

[Read: Amazon returns have gone to hell]

Showing off has always been a part of shopping, but hauls set use aside entirely, replacing it with exhibition. For the YouTuber or Instagram influencer, it wasn’t important if the clothing or skin-care products were useful or even used, just that they afforded the content creator an opportunity to create content—and, potentially, to get paid by sponsors to do so. Not everyone is an influencer, but lots of people wish to be, and dressing for the job you want started to entail hauling as a way of life. Shein, Temu, and now Amazon Haul encourage bulk purchases to justify low costs and minimize freight, while slipping in under the $800 threshold of U.S. import tax. These shops made the haul a basic unit of commerce.

At the same time, Chinese sellers—including some that appear to sell the very same goods found on Shein, Temu, Alibaba, and more—began to dominate Amazon’s third-party-seller platform, known as Marketplace. By 2023, Amazon acknowledged that nearly half of the top 100,000 Marketplace sellers were based in China. If you’ve ever searched for goods and been presented with weird, nonsense-name brands like RECUTMS (it’s “Record Your Times,” not the other thing), these are likely China-based Marketplace sellers. For some time now, cheap products of questionable quality and dubious fitness for purpose have dominated Amazon search results—especially because those sellers can also pay for sponsored ads on Amazon to hawk their wares.  

Amazon Haul closes the gap between normal e-commerce and the haul retail that social-media influencers popularized. Now ordinary people can buy maybe-useful, maybe-garbage goods in bulk for little money.

Great to have the choice, perhaps. But likely also irritating, because the phone case, table runner, or makeup brush you might purchase that way is probably garbage. Nobody is hiding this fact—thus Amazon’s carefully chosen language of “unbelievable finds” and “crazy low prices,” not “high-quality goods.” And consumers are now ready to expect crap anyway, having spent years buying random wares from Instagram ads, TikTok shops, Shein, or the discount manufacturers that dominate Amazon itself. When I open a box that arrives at my door, I don’t really expect delight anymore. Instead, I hope that what’s inside might surprise me by bearing any value at all.

Haul might sound like the latest curiosity of concern only to the very online, but it could be an omen. Over time, Amazon has devolved from an everything store that sold stuff I liked and wanted into a venue for bad things that don’t meet my needs. Haul is just one way to shop, not the only way. But that was also true of Marketplace, which slowly took over Amazon’s listings. For now, you can still buy what you want or think you do. But eventually, hauls could take over entirely, and all shopping could become a novelty-store, mystery-grab-bag experience.

AI’s Fingerprints Were All Over the Election

The Atlantic

www.theatlantic.com/technology/archive/2024/11/ai-election-propaganda/680677

The images and videos were hard to miss in the days leading up to November 5. There was Donald Trump with the chiseled musculature of Superman, hovering over a row of skyscrapers. Trump and Kamala Harris squaring off in bright-red uniforms (McDonald’s logo for Trump, hammer-and-sickle insignia for Harris). People had clearly used AI to create these—an effort to show support for their candidate or to troll their opponents. But the images didn’t stop after Trump won. The day after polls closed, the Statue of Liberty wept into her hands as a drizzle fell around her. Trump and Elon Musk, in space suits, stood on the surface of Mars; hours later, Trump appeared at the door of the White House, waving goodbye to Harris as she walked away, clutching a cardboard box filled with flags.

[Read: We haven’t seen the worst of fake news]

Every federal election since at least 2018 has been plagued with fears about potential disruptions from AI. Perhaps a computer-generated recording of Joe Biden would swing a key county, or doctored footage of a poll worker burning ballots would ignite riots. Those predictions never materialized, but many of them were also made before the arrival of ChatGPT, DALL-E, and the broader category of advanced, cheap, and easy-to-use generative-AI models—all of which seemed much more threatening than anything that had come before. Not even a year after ChatGPT was released in late 2022, generative-AI programs were used to target Trump, Emmanuel Macron, Biden, and other political leaders. In May 2023, an AI-generated image of smoke billowing out of the Pentagon caused a brief dip in the U.S. stock market. Weeks later, Ron DeSantis’s presidential primary campaign appeared to have used the technology to make an advertisement.

And so a trio of political scientists at Purdue University decided to get a head start on tracking how generative AI might influence the 2024 election cycle. In June 2023, Christina Walker, Daniel Schiff, and Kaylyn Jackson Schiff started to track political AI-generated images and videos in the United States. Their work is focused on two particular categories: deepfakes, referring to media made with AI, and “cheapfakes,” which are produced with more traditional editing software, such as Photoshop. Now, more than a week after polls closed, their database, along with the work of other researchers, paints a surprising picture of how AI appears to have actually influenced the election—one that is far more complicated than previous fears suggested.

The most visible generated media this election have not exactly planted convincing false narratives or otherwise deceived American citizens. Instead, AI-generated media have been used for transparent propaganda, satire, and emotional outpourings: Trump, wading in a lake, clutches a duck and a cat (“Protect our ducks and kittens in Ohio!”); Harris, enrobed in a coppery blue, struts before the Statue of Liberty and raises a matching torch. In August, Trump posted an AI-generated video of himself and Musk doing a synchronized TikTok dance; a follower responded with an AI image of the duo riding a dragon. The pictures were fake, sure, but they weren’t feigning otherwise. In their analysis of election-week AI imagery, the Purdue team found that such posts were far more frequently intended for satire or entertainment than false information per se. Trump and Musk have shared political AI illustrations that got hundreds of millions of views. Brendan Nyhan, a political scientist at Dartmouth who studies the effects of misinformation, told me that the AI images he saw “were obviously AI-generated, and they were not being treated as literal truth or evidence of something. They were treated as visual illustrations of some larger point.” And this usage isn’t new: In the Purdue team’s entire database of fabricated political imagery, which includes hundreds of entries, satire and entertainment were the two most common goals.

That doesn’t mean these images and videos are merely playful or innocuous. Outrageous and false propaganda, after all, has long been an effective way to spread political messaging and rile up supporters. Some of history’s most effective propaganda campaigns have been built on images that simply project the strength of one leader or nation. Generative AI offers a low-cost and easy tool to produce huge amounts of tailored images that accomplish just this, heightening existing emotions and channeling them to specific ends.

These sorts of AI-generated cartoons and agitprop could well have swayed undecided minds, driven turnout, galvanized “Stop the Steal” plotting, or fueled harassment of election officials or racial minorities. An illustration of Trump in an orange jumpsuit emphasizes Trump’s criminal convictions and perceived unfitness for the office, while an image of Harris speaking to a sea of red flags, a giant hammer-and-sickle above the crowd, smears her as “woke” and a “Communist.” An edited image showing Harris dressed as Princess Leia kneeling before a voting machine and captioned “Help me, Dominion. You’re my only hope” (an altered version of a famous Star Wars line) stirs up conspiracy theories about election fraud. “Even though we’re noticing many deepfakes that seem silly, or just seem like simple political cartoons or memes, they might still have a big impact on what we think about politics,” Kaylyn Jackson Schiff told me. It’s easy to imagine someone’s thought process: That image of “Comrade Kamala” is AI-generated, sure, but she’s still a Communist. That video of people shredding ballots is animated, but they’re still shredding ballots. That’s a cartoon of Trump clutching a cat, but immigrants really are eating pets. Viewers, especially those already predisposed to find and believe extreme or inflammatory content, may be further radicalized and siloed. The especially photorealistic propaganda might even fool someone if reshared enough times, Walker told me.

[Read: I’m running out of ways to explain how bad this is]

There were, of course, also a number of fake images and videos that were intended to directly change people’s attitudes and behaviors. The FBI has identified several fake videos intended to cast doubt on election procedures, such as false footage of someone ripping up ballots in Pennsylvania. “Our foreign adversaries were clearly using AI” to push false stories, Lawrence Norden, the vice president of the Elections & Government Program at the Brennan Center for Justice, told me. He did not see any “super innovative use of AI,” but said the technology has augmented existing strategies, such as creating fake-news websites, stories, and social-media accounts, as well as helping plan and execute cyberattacks. But it will take months or years to fully parse the technology’s direct influence on 2024’s elections. Misinformation in local races is much harder to track, for example, because there is less of a spotlight on them. Deepfakes in encrypted group chats are also difficult to track, Norden said. Experts had also wondered whether the use of AI to create highly realistic, yet fake, videos showing voter fraud might have been deployed to discredit a Trump loss. This scenario has not yet been tested.

Although it appears that AI did not directly sway the results last week, the technology has eroded Americans’ overall ability to know or trust information and one another—not deceiving people into believing a particular thing so much as advancing a nationwide descent into believing nothing at all. A new analysis by the Institute for Strategic Dialogue of AI-generated media during the U.S. election cycle found that users on X, YouTube, and Reddit inaccurately assessed whether content was real roughly half the time, and more frequently thought authentic content was AI-generated than the other way around. With so much uncertainty, using AI to convince people of alternative facts seems like a waste of time—far more useful to exploit the technology to directly and forcefully send a motivated message, instead. Perhaps that’s why, of the election-week, AI-generated media the Purdue team analyzed, pro-Trump and anti-Kamala content was most common.

More than a week after Trump’s victory, the use of AI for satire, entertainment, and activism has not ceased. Musk, who will soon co-lead a new extragovernmental organization, routinely shares such content. The morning of November 6, Donald Trump Jr. put out a call for memes that was met with all manner of AI-generated images. Generative AI is changing the nature of evidence, yes, but also that of communication—providing a new, powerful medium through which to illustrate charged emotions and beliefs, broadcast them, and rally even more like-minded people. Instead of an all-caps thread, you can share a detailed and personalized visual effigy. These AI-generated images and videos are instantly legible and, by explicitly targeting emotions instead of information, obviate the need for falsification or critical thinking at all. No need to refute, or even consider, a differing view—just make an angry meme about it. No need to convince anyone of your adoration of J. D. Vance—just use AI to make him, literally, more attractive. Veracity is beside the point, which makes the technology perhaps the nation’s most salient mode of political expression. In a country where facts have gone from irrelevant to detestable, of course deepfakes—fake news made by deep-learning algorithms—don’t matter; to growing numbers of people, everything is fake but what they already know, or rather, feel.