
Conspiracy Theories Have a New Best Friend

The Atlantic

www.theatlantic.com/technology/archive/2023/03/generative-ai-disinformation-synthetic-media-history/673260

History has long been a theater of war, the past serving as a proxy in conflicts over the present. Ron DeSantis is warping history by banning books on racism from Florida’s schools; people remain divided about the right approach to repatriating Indigenous objects and remains; the Pentagon Papers revealed how the government had misled the public about the Vietnam War. The Nazis seized power in part by manipulating the past—they used propaganda about the burning of the Reichstag, the German parliament building, to justify persecuting political rivals and assuming dictatorial authority. That specific example weighs on Eric Horvitz, Microsoft’s chief scientific officer and a leading AI researcher, who tells me that the apparent AI revolution could not only provide a new weapon to propagandists, as social media did earlier this century, but entirely reshape the historiographic terrain, perhaps laying the groundwork for a modern-day Reichstag fire.

The advances in question, including language models such as ChatGPT and image generators such as DALL-E 2, loosely fall under the umbrella of “generative AI.” These are powerful and easy-to-use programs that produce synthetic text, images, video, and audio, all of which can be used by bad actors to fabricate events, people, speeches, and news reports to sow disinformation. You may have seen one-off examples of this type of media already: fake videos of Ukrainian President Volodymyr Zelensky surrendering to Russia; mock footage of Joe Rogan and Ben Shapiro arguing about the film Ratatouille. As this technology advances, piecemeal fabrications could give way to coordinated campaigns—not just synthetic media but entire synthetic histories, as Horvitz called them in a paper late last year. And a new breed of AI-powered search engines, led by Microsoft and Google, could make such histories easier to find and all but impossible for users to detect.

Even though similar fears about social media, TV, and radio proved somewhat alarmist, there is reason to believe that AI could really be the new variant of disinformation that makes lies about future elections, protests, or mass shootings both more contagious and immune-resistant. Consider, for example, the raging bird-flu outbreak, which has not yet begun spreading from human to human. A political operative—or a simple conspiracist—could use programs similar to ChatGPT and DALL-E 2 to easily generate and publish a huge number of stories about Chinese, World Health Organization, or Pentagon labs tinkering with the virus, backdated to various points in the past and complete with fake “leaked” documents, audio and video recordings, and expert commentary. A synthetic history in which a government weaponized bird flu would be ready to go if avian flu ever began circulating among humans. A propagandist could simply connect the news to their entirely fabricated—but fully formed and seemingly well-documented—backstory seeded across the internet, spreading a fiction that could consume the nation’s politics and public-health response. The power of AI-generated histories, Horvitz told me, lies in “deepfakes on a timeline intermixed with real events to build a story.”

[Read: AI search is a disaster]

It’s also possible that synthetic histories will change the kind, but not the severity, of the already rampant disinformation online. People are happy to believe the bogus stories they see on Facebook, Rumble, Truth Social, YouTube, wherever. Before the web, propaganda and lies about foreigners, wartime enemies, aliens, and Bigfoot abounded. And where synthetic media or “deepfakes” are concerned, existing research suggests that they offer surprisingly little benefit compared with simpler manipulations, such as mislabeling footage or writing fake news reports. You don’t need advanced technology for people to believe a conspiracy theory. Still, Horvitz believes we are at a precipice: The speed at which AI can generate high-quality disinformation will be overwhelming.

Automated disinformation produced at a heightened pace and scale could enable what he calls “adversarial generative explanations.” In a parallel of sorts to the targeted content you’re served on social media, which is tested and optimized according to what people engage with, propagandists could run small tests to determine which parts of an invented narrative are more or less convincing, and use that feedback along with social-psychology research to iteratively improve that synthetic history. For instance, a program could revise and modulate a fabricated expert’s credentials and quotes to land with certain demographics. Language models like ChatGPT, too, threaten to drown the internet in similarly conspiratorial and tailored Potemkin text—not targeted advertising, but targeted conspiracies.

Big Tech’s plan to replace traditional internet search with chatbots could increase this risk substantially. The AI language models being integrated into Bing and Google are notoriously terrible at fact-checking and prone to falsehoods, which perhaps makes them susceptible to spreading fake histories. Although many of the early versions of chatbot-based search give Wikipedia-style responses with footnotes, the whole point of a synthetic history is to provide an alternative and convincing set of sources. And the entire premise of chatbots is convenience—for people to trust them without checking.

If this disinformation doomsday sounds familiar, that’s because it is. “The claim about [AI] technology is the same claim that people were making yesterday about the internet,” says Joseph Uscinski, a political scientist at the University of Miami who studies conspiracy theories. “Oh my God, lies travel farther and faster than ever, and everyone’s gonna believe everything they see.” But he has found no evidence that beliefs in conspiracy theories have increased alongside social-media use, or even throughout the coronavirus pandemic; the research into common narratives such as echo chambers is also shaky.

People buy into alternative histories not because new technologies make them more convincing, Uscinski says, but for the same reason they believe anything else—maybe the conspiracy confirms their existing beliefs, matches their political persuasion, or comes from a source they trust. He referenced climate change as an example: People who believe in anthropogenic warming, for the most part, have “not investigated the data themselves. All they’re doing is listening to their trusted sources, which is exactly what the climate-change deniers are doing too. It’s the same exact mechanism; it’s just in this case the Republican elites happen to have it wrong.”

Of course, social media did change how people produce, spread, and consume information. Generative AI could do the same, but with new stakes. “In the past, people would try things out by intuition,” Horvitz told me. “But the idea of iterating faster, with more surgical precision on manipulating minds, is a new thing. The fidelity of the content, the ease with which it can be generated, the ease with which you can post multiple events onto timelines”—all are substantive reasons to worry. Already, in the lead-up to the 2020 election, Donald Trump planted doubts about voting fraud that bolstered the “Stop the Steal” campaign once he lost. As November 2024 approaches, like-minded political operatives could use AI to create fake personas and election officials, fabricate videos of voting-machine manipulation and ballot-stuffing, and write false news stories, all of which would come together into an airtight synthetic history in which the election was stolen.

[Read: The difference between speaking and thinking]

Deepfake campaigns could send us further into “a post-epistemic world, where you don’t know what’s real or fake,” Horvitz said. A businessperson accused of wrongdoing could call incriminating evidence AI-generated; a politician could plant documented but entirely false character assassinations of rivals. Or perhaps, in the same way Truth Social and Rumble provide conservative alternatives to Twitter and YouTube, a far-right alternative to AI-powered search, trained on a wealth of conspiracies and synthetic histories, will ascend in response to fears about Google, Bing, and “WokeGPT” being too progressive. “There’s nothing in my mind that would stop that from happening in search capacity,” says Renée DiResta, the research manager of the Stanford Internet Observatory, who recently wrote a paper on language models and disinformation. “It’s going to be seen as a fantastic market opportunity for somebody.” RightWingGPT and a conservative-Christian AI are already under discussion, and Elon Musk is reportedly recruiting talent to build a conservative rival to OpenAI.

Preparing for such deepfake campaigns, Horvitz said, will require a variety of strategies, including media-literacy efforts, enhanced detection methods, and regulation. Most promising might be creating a standard to establish the provenance of any piece of media—a log of where a photo was taken and all the ways it has been edited attached to the file as metadata, like a chain of custody for forensic evidence—which Adobe, Microsoft, and several other companies are working on. But people would still need to understand and trust that log. “You have this moment of both proliferation of content and muddiness about how things are coming to be,” says Rachel Kuo, a media-studies professor at the University of Illinois at Urbana-Champaign. Provenance, detection, or other debunking methods might still rely largely on people listening to experts, whether it be journalists, government officials, or AI chatbots, who tell them what is and isn’t legitimate. And even with such silicon chains of custody, simpler forms of lying—over cable news, on the floor of Congress, in print—will continue.

Framing technology as the driving force behind disinformation and conspiracy implies that technology is a sufficient, or at least necessary, solution. But emphasizing AI could be a mistake. If we’re primarily worried “that someone is going to deep-fake Joe Biden, saying that he is a pedophile, then we’re ignoring the reason why a piece of information like that would be resonant,” Alice Marwick, a media-studies professor at the University of North Carolina at Chapel Hill, told me. And to argue that new technologies, whether social media or AI, are primarily or solely responsible for bending the truth risks reifying the power of Big Tech’s advertisements, algorithms, and feeds to determine our thoughts and feelings. As the reporter Joseph Bernstein has written: “It is a model of cause and effect in which the information circulated by a few corporations has the total power to justify the beliefs and behaviors of the demos. In a way, this world is a kind of comfort. Easy to explain, easy to tweak, and easy to sell.”

The messier story might contend with how humans, and maybe machines, are not always very rational; with what might need to be done for writing history to no longer be a war. The historian Jill Lepore has said that “the footnote saved Wikipedia,” suggesting that transparent sourcing helped the website become, or at least appear to be, a premier source for fairly reliable information. But maybe now the footnote, that impulse and impetus to verify, is about to sink the internet—if it has not done so already.

Radio Atlantic: AI Is Here. What Now?

The Atlantic

www.theatlantic.com/podcasts/archive/2023/03/ai-future-google-bing-chatbots-tools/673256

With Google and Microsoft releasing new AI tools, it feels like the future is now with artificial intelligence. But how transformative are products like ChatGPT? Should we be worried about their impact? Are they a new Skynet or just a new Clippy?

Staff writers Charlie Warzel and Amanda Mull discuss.

Subscribe here: Apple Podcasts | Spotify | Stitcher | Google Podcasts | Pocket Casts

The following is a transcript of the episode:

Charlie Warzel: It feels like we’re moving toward this search paradigm where search is going to get a little more fun, a little more entertaining, and, possibly, not any more accurate.

And instead, the debates that we’re going to have about who gets to decide what are being entrusted to huge complicated machine-learning models that we have no idea how they’re trained or what they’re gonna do. It kind of sounds like a nightmare.

Amanda Mull: Welcome to Radio Atlantic. I’m staff writer Amanda Mull and I’m here with fellow staff writer Charlie Warzel, who writes about technology for us. Lately, Charlie’s been considering the nascent AI arms race between Google and Microsoft and why, as he put it in a recent piece, Clippy has touched the face of God. Charlie, it’s great to have you.

Warzel: Happy to be here. Happy to talk about Clippy.

Mull: Great. Great. So, Charlie, lately you’ve been writing about and reporting on AI now that Bing and Google have mutually decided to make all of us think about AI. But first I want to talk about finding a bookcase.

Warzel: Let’s do it. Let’s let Bing be our search butler here.

Mull: What I would like to find out is if the Ikea Kallax bookcase—a classic—can fit in the back of a 2002 Volkswagen Jetta.

Warzel: Okay. Alright, it’s searching, it’s still kind of creaking and groaning under the weight of your search. Okay.

Hello. This is Bing. The Ikea Kallax bookshelf has a height of 30 inches or 77 centimeters, a width of 58 inches or 147 centimeters, and a shelf depth of 15 inches or 39 centimeters. The 2002 Volkswagen Jetta has a trunk space of 47 inches wide, 42 inches deep and 22 inches tall. Based on these dimensions, it seems unlikely that the bookshelf can fit inside the trunk of the car. You may need to disassemble the bookshelf or use another vehicle to transport it.

Mull: It seems like we need to ask Bing to use the packaging dimensions and not the assembled dimensions.

Warzel: Ooh. Okay. It’s searching …

Hello, this is Bing, I apologize for my mistake. The Kallax packaging dimensions are not clearly stated on the IKEA website. However, based on some online reviews and forums, it seems the package size is about 60 inches long, 16 inches wide and six inches thick. The weight of the package is about 70 pounds. I hope this helps you plan your transportation better.

Well, I’m gonna say, “So, can it fit in the back of my 2002 Jetta?” Okay.

It might be possible to fit the package in the back of your 2002 Jetta if you fold down the rear seats and slide it in diagonally. However, it might be a tight fit and you should measure the space more carefully before attempting it. Alternatively, you could use a roof rack or trailer to transport the package more safely and easily.

Based off of me reading that back to you, and that being your first, presumably, AI-chatbot experience: Is that underwhelming? Is that overwhelming? Is that whelming?

Mull: Honestly, it did work a little bit better than I expected for that kind of query.

Warzel: But I also didn’t feel like I was living in the future, right? Like I was kind of just like, Okay, cool.

Mull: Right. But I didn’t expect it to quite so efficiently respond to the query. Honestly. Just straight up.

Warzel: Yeah. So you know, points for Bing.

Mull: Yeah, it’s strange, after hearing so much hype about AI and about the promise of AI search and about how this could change the internet and change our experience with the internet, to finally go through the process of using it and end up like, Oh, that was okay.

Warzel: Yeah.

Mull: Can you tell me in basic, straightforward terms, what it is that these services are doing?

Warzel: So it’s kind of a couple of things that are happening at once that give it this feeling of the smart robot who’s returning your every query. You have the GPT-3, GPT-4, whatever style text model, which is completing based off of all of this information it’s absorbed, completing what’s coming next. Then layered on top of that, you have all that Bing can do or that Google can do with its chatbot in search in general, and it’s kind of mashing those two together. So when we say “Can I fit this IKEA bookshelf into my Jetta?” it’s just doing kind of raw Google searches.

Mull: Yeah.

Warzel: And instead of even giving you links, we’re just going to give you the definitive answer. And that to me feels a little like, Yes, this is helpful. Are we also sort of missing the bigger picture of what people are actually looking for when they’re trying to search?

Mull: Right. It seems like Microsoft and Google in their AI-search efforts are perhaps trying to provide us with God’s perfect Redditor. This is the guy who knows the answer and it’s an AI chatbot who has looked at all of Reddit and condensed it for you.

Warzel: This is honestly one of my big concerns about AI-enabled search in general, which is that we’ve been doing this since, I don’t know, like 2016. Having these really furious debates over content moderation and who gets to say what. And in the world of search, there are all these different ideas about what websites Google is going to elevate. You know, “Is it going to be third parties that Google hires to evaluate the quality of different links when it comes to like financial information, medical information, news, or is it going to be engineers at Google themselves? Is Google going to be the arbiter of truth here?”

That was in an environment with search where it was still, “We’re going to give you a slew of links,” right? And then it’s choose-your-own-adventure. This is a new paradigm that essentially says, “We’re going to give you a canonical, definitive answer.” The machine is going to scroll through and it’s going to talk to you like a human being, it’s going to speak really confidently, and it’s going to give you the information to the best of its ability and as we know, these machines right now are really good at confidently lying or bullshitting.

Mull: Yeah. It seems like this has enormous capacity to sort of obscure what is happening in a way that seems not helpful. What do you mean by B.S.? Can you give us an example?

Warzel: There are so many examples of people typing stuff in. “Write me a book report about Animal Farm,” [or] some book, and it will come back with very confidently stated false facts. Someone described it as—and I really like it—“spicy autocomplete.” Which is just that these models have scanned billions and billions of pieces of information from books to commentary to literally everything and it’s just predicting, based off of that, what should come next. And so it’s not necessarily trained to give you these canonical definitive answers. The confidence and the sophistication with which it gives you an answer leads you to believe that the machine’s telling you the truth when it’s stating something that’s false.

Mull: We’re going to take a short break and we’ll be right back.

Mull: Are there any good examples that we’ve seen so far in the extremely early days of these tools that demonstrate what some of the potential pitfalls are?

Warzel: A good early example is that in Google’s demo of its own search chatbot tool, it apparently, in one of its promos, surfaced something about the James Webb Space Telescope that was incorrect.

And Google’s stock price slid by like 8 percent on that day. But then online there was this whole series of people—like astrophysicists—who were debating, “Actually like technically … maybe the Google answer was right.”

And so you kind of got this preview almost of what the future of all this is going to be, which is that, you have the machine confidently stating something and then all these different people arguing about whether or not this was true or false and meanwhile, everyone’s gaining and losing fortunes in the stock market based off of it. It was a very kind of dystopian preview.

Mull: Yeah, yeah, absolutely. And I think that when it comes to shopping, recommendations, confident recommendations, [are] a big existential problem in online commerce. Because we have this vast array of options on the internet and they are too many and too confusing for people to make meaningful sense of them and feel confident in purchasing things, often. So the tone of AI and the confidence with which it says things gives it an opportunity to be especially manipulative in a lot of ways, but with commerce in particular, because often what people go to Google for when they’re trying to figure out if they should buy something is to hear someone confidently tell you, “This will work for you.” It seems like this can be exploited by all the same forces making your Google results now absolutely unusable in a lot of ways.

Warzel: Yeah, I mean, to me what’s really tough to wrap my head around is that the whole promise of these large language models that generative AIs are built off of is that it has read the internet and it has read every great book and it has read every … it’s trained off of more information than you could ever even imagine cramming into a million people’s brains.

And if you start monetizing this, what you’re doing then is you’re, in essence, retraining this thing that is supposedly transformative to say, “Yes, we know that like you are the modern Library of Alexandria. But we would really love it if you put this canine chew toy at the top of the stack,” right?

There’s something that’s so corrupt about that premise that you have the totality of the world’s information at your disposal. And this product is actually the thing that you need to show above everything else because somebody paid for it.

And it’s also, to me, we’re adding a whole other level of opacity to this. These models are really hard to understand. And then it’s gonna be very difficult to unravel where the money is coming from, and Google right now, it’s just like, “Hey, this thing, this box is green around it and it says paid or it says ad.”

You just, even if you click on it, you kind of know, Oh yeah, that’s how the game works. This could be totally different and sort of insidious and maybe not nefarious, but it’s like the virus of capitalism, infecting the host of the AI or something like that, you know?

Mull: Right, right. When some fundamental structure of the internet—and how regular people use the internet—changes, there’s always so many unintended and unpredictable consequences. Even people who are not particularly interested in tech can look at social media and go, Yeah, the advent of these platforms fundamentally changed the way that a lot of people interact with their friends, interact with politics, interact with information.

It’s hard to imagine a scenario in which this doesn’t have just sort of all kinds of messy, unintended consequences that we probably can’t predict right now. The signal that I think these companies are picking up on is that in order to solve problems, people generally want to interact with other people. You want to be able to walk into a store and ask a question of a salesperson. You want to be able to get someone on the phone to fix a problem for you. You want to ask a knowledgeable individual about something you’re unsure of and benefit from their expertise. It seems like they’re just fundamentally misdiagnosing the actual solutions that people want, which is to have more of their life mediated by interactions with other people than interactions with layers of technology.

Warzel: This is, I mean really and truly always what you butt up against with people who are pushing technological change forward. It is seen as a challenge, right? To create an artificial general intelligence.

That has been the stuff of computer-science dreams for decades and decades and it’s sort of this quest to do this thing that feels almost unthinkable, right? And then of course it’s like, Oh well we need to commercialize it, and then you get into this kind of territory.

Mull: Yeah.

Warzel: One of the true fears that a lot of people have, that I think is very well-founded if we are actually entering a new epoch or whatever you wanna call it of life dominated by artificial intelligences, is that there’s going to be this stratification, this class stratification, where it’s like, “Congratulations, you will be able to see an AI doctor anywhere, as soon as you want and it’s going to do like a fairly reasonably good job of diagnosing you with whatever,” right? It’s going to be a really competent thing.

Meanwhile, the people who have the means and the power and the money, they’re going to have the in-person doctor experience, right? They’re going to actually have the person lay hands on them and look at them and talk to them and ask them those questions and use that expertise and have the bedside manner and all those things. Whereas the rest of us are going to get …

Mull: Doc bot.

Warzel: Doc bot. Exactly, and so I think that that’s a very real thing that goes just beyond even the experience of online consumption to buy products or to get information. It goes to like the very heart of very important services, right?

I would love to not trust my taxes to an AI accountant no matter how good it gets. Or have an AI financial adviser who is responding faster than any other human could to algorithmic triggers in the stock market to buy, sell, and trade. But also, a great thing about a financial adviser who’s not responding instantaneously to algorithmic shocks is they have … they have the ability to be patient, to wait, to use their creativity or their knowledge, all these things. I think that’s the scariest part about the era, is that it does take so much humanity out of what we’re headed toward.

And the human interaction becomes even more and more of a luxury. The fanciest technology billionaires don’t let their kids have screen time because they’re terrified of what the screens are doing to them. That kind of thing repeats, and with the AI it’s weird and possibly quite consequential.

Mull: Yeah, nobody should be going to an AI doctor that’s not widely used by billionaires themselves. I’ll let the VCs go first.

Charlie, thank you so much. This was fascinating and unsettling. I appreciate you walking me through all this.

Warzel: Fascinating and unsettling is my wheelhouse. Thank you for having me. I appreciate it.

Mull: This episode of Radio Atlantic was produced by A.C. Valdez, Theo Balcomb, and Kevin Townsend. Claudine Ebeid is the executive producer of audio at The Atlantic. Thanks for listening.