Itemoids

Joe Biden

Conspiracy Theories Have a New Best Friend

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 03 › generative-ai-disinformation-synthetic-media-history › 673260

History has long been a theater of war, the past serving as a proxy in conflicts over the present. Ron DeSantis is warping history by banning books on racism from Florida’s schools; people remain divided about the right approach to repatriating Indigenous objects and remains; the Pentagon Papers exposed the government’s attempts to twist narratives about the Vietnam War. The Nazis seized power in part by manipulating the past—they used propaganda about the burning of the Reichstag, the German parliament building, to justify persecuting political rivals and assuming dictatorial authority. That specific example weighs on Eric Horvitz, Microsoft’s chief scientific officer and a leading AI researcher, who tells me that the apparent AI revolution could not only provide a new weapon to propagandists, as social media did earlier this century, but entirely reshape the historiographic terrain, perhaps laying the groundwork for a modern-day Reichstag fire.

The advances in question, including language models such as ChatGPT and image generators such as DALL-E 2, loosely fall under the umbrella of “generative AI.” These are powerful and easy-to-use programs that produce synthetic text, images, video, and audio, all of which can be used by bad actors to fabricate events, people, speeches, and news reports to sow disinformation. You may have seen one-off examples of this type of media already: fake videos of Ukrainian President Volodymyr Zelensky surrendering to Russia; mock footage of Joe Rogan and Ben Shapiro arguing about the film Ratatouille. As this technology advances, piecemeal fabrications could give way to coordinated campaigns—not just synthetic media but entire synthetic histories, as Horvitz called them in a paper late last year. And a new breed of AI-powered search engines, led by Microsoft and Google, could make such histories easier to find and all but impossible for users to detect.

Even though similar fears about social media, TV, and radio proved somewhat alarmist, there is reason to believe that AI could really be the new variant of disinformation that makes lies about future elections, protests, or mass shootings both more contagious and more resistant to our defenses. Consider, for example, the raging bird-flu outbreak, which has not yet begun spreading from human to human. A political operative—or a simple conspiracist—could use programs similar to ChatGPT and DALL-E 2 to easily generate and publish a huge number of stories about Chinese, World Health Organization, or Pentagon labs tinkering with the virus, backdated to various points in the past and complete with fake “leaked” documents, audio and video recordings, and expert commentary. That synthetic history of a government-weaponized bird flu would be ready to go if avian flu ever began circulating among humans. A propagandist could simply connect the news to their entirely fabricated—but fully formed and seemingly well-documented—backstory seeded across the internet, spreading a fiction that could consume the nation’s politics and public-health response. The power of AI-generated histories, Horvitz told me, lies in “deepfakes on a timeline intermixed with real events to build a story.”

[Read: AI search is a disaster]

It’s also possible that synthetic histories will change the kind, but not the severity, of the already rampant disinformation online. People are happy to believe the bogus stories they see on Facebook, Rumble, Truth Social, YouTube, wherever. Before the web, propaganda and lies about foreigners, wartime enemies, aliens, and Bigfoot abounded. And where synthetic media or “deepfakes” are concerned, existing research suggests that they offer surprisingly little benefit compared with simpler manipulations, such as mislabeling footage or writing fake news reports. You don’t need advanced technology for people to believe a conspiracy theory. Still, Horvitz believes we are at a precipice: The speed at which AI can generate high-quality disinformation will be overwhelming.

Automated disinformation produced at a heightened pace and scale could enable what he calls “adversarial generative explanations.” In a parallel of sorts to the targeted content you’re served on social media, which is tested and optimized according to what people engage with, propagandists could run small tests to determine which parts of an invented narrative are more or less convincing, and use that feedback along with social-psychology research to iteratively improve that synthetic history. For instance, a program could revise and modulate a fabricated expert’s credentials and quotes to land with certain demographics. Language models like ChatGPT, too, threaten to drown the internet in similarly conspiratorial and tailored Potemkin text—not targeted advertising, but targeted conspiracies.

Big Tech’s plan to replace traditional internet search with chatbots could increase this risk substantially. The AI language models being integrated into Bing and Google are notoriously bad at fact-checking and prone to stating falsehoods, which could make them susceptible to spreading fake histories. Although many of the early versions of chatbot-based search give Wikipedia-style responses with footnotes, the whole point of a synthetic history is to provide an alternative and convincing set of sources. And the entire premise of chatbots is convenience—for people to trust them without checking.

If this disinformation doomsday sounds familiar, that’s because it is. “The claim about [AI] technology is the same claim that people were making yesterday about the internet,” says Joseph Uscinski, a political scientist at the University of Miami who studies conspiracy theories. “Oh my God, lies travel farther and faster than ever, and everyone’s gonna believe everything they see.” But he has found no evidence that beliefs in conspiracy theories have increased alongside social-media use, or even throughout the coronavirus pandemic; the research into common narratives such as echo chambers is also shaky.

People buy into alternative histories not because new technologies make them more convincing, Uscinski says, but for the same reason they believe anything else—maybe the conspiracy confirms their existing beliefs, matches their political persuasion, or comes from a source they trust. He referenced climate change as an example: People who believe in anthropogenic warming, for the most part, have “not investigated the data themselves. All they’re doing is listening to their trusted sources, which is exactly what the climate-change deniers are doing too. It’s the same exact mechanism; it’s just in this case the Republican elites happen to have it wrong.”

Of course, social media did change how people produce, spread, and consume information. Generative AI could do the same, but with new stakes. “In the past, people would try things out by intuition,” Horvitz told me. “But the idea of iterating faster, with more surgical precision on manipulating minds, is a new thing. The fidelity of the content, the ease with which it can be generated, the ease with which you can post multiple events onto timelines”—all are substantive reasons to worry. Already, in the lead-up to the 2020 election, Donald Trump planted doubts about voting fraud that bolstered the “Stop the Steal” campaign once he lost. As November 2024 approaches, like-minded political operatives could use AI to create fake personas and election officials, fabricate videos of voting-machine manipulation and ballot-stuffing, and write false news stories, all of which would come together into an airtight synthetic history in which the election was stolen.

[Read: The difference between speaking and thinking]

Deepfake campaigns could send us further into “a post-epistemic world, where you don’t know what’s real or fake,” Horvitz said. A businessperson accused of wrongdoing could call incriminating evidence AI-generated; a politician could plant documented but entirely false character assassinations of rivals. Or perhaps, in the same way Truth Social and Rumble provide conservative alternatives to Twitter and YouTube, a far-right alternative to AI-powered search, trained on a wealth of conspiracies and synthetic histories, will ascend in response to fears about Google, Bing, and “WokeGPT” being too progressive. “There’s nothing in my mind that would stop that from happening in search capacity,” says Renée DiResta, the research manager of the Stanford Internet Observatory, who recently wrote a paper on language models and disinformation. “It’s going to be seen as a fantastic market opportunity for somebody.” RightWingGPT and a conservative-Christian AI are already under discussion, and Elon Musk is reportedly recruiting talent to build a conservative rival to OpenAI.

Preparing for such deepfake campaigns, Horvitz said, will require a variety of strategies, including media-literacy efforts, enhanced detection methods, and regulation. Most promising might be creating a standard to establish the provenance of any piece of media—a log of where a photo was taken and all the ways it has been edited attached to the file as metadata, like a chain of custody for forensic evidence—which Adobe, Microsoft, and several other companies are working on. But people would still need to understand and trust that log. “You have this moment of both proliferation of content and muddiness about how things are coming to be,” says Rachel Kuo, a media-studies professor at the University of Illinois at Urbana-Champaign. Provenance, detection, or other debunking methods might still rely largely on people listening to experts, whether it be journalists, government officials, or AI chatbots, who tell them what is and isn’t legitimate. And even with such silicon chains of custody, simpler forms of lying—over cable news, on the floor of Congress, in print—will continue.

Framing technology as the driving force behind disinformation and conspiracy implies that technology is a sufficient, or at least necessary, solution. But emphasizing AI could be a mistake. If we’re primarily worried “that someone is going to deep-fake Joe Biden, saying that he is a pedophile, then we’re ignoring the reason why a piece of information like that would be resonant,” Alice Marwick, a media-studies professor at the University of North Carolina at Chapel Hill, told me. And to argue that new technologies, whether social media or AI, are primarily or solely responsible for bending the truth risks reifying the power of Big Tech’s advertisements, algorithms, and feeds to determine our thoughts and feelings. As the reporter Joseph Bernstein has written: “It is a model of cause and effect in which the information circulated by a few corporations has the total power to justify the beliefs and behaviors of the demos. In a way, this world is a kind of comfort. Easy to explain, easy to tweak, and easy to sell.”

The messier story might contend with how humans, and maybe machines, are not always very rational; with what might need to be done for writing history to no longer be a war. The historian Jill Lepore has said that “the footnote saved Wikipedia,” suggesting that transparent sourcing helped the website become, or at least appear to be, a premier source for fairly reliable information. But maybe now the footnote, that impulse and impetus to verify, is about to sink the internet—if it has not done so already.

Biden tells Democratic senators he won't veto effort to rescind DC crime law

CNN

www.cnn.com › 2023 › 03 › 02 › politics › dc-crime-bill-biden-not-vetoing › index.html

President Joe Biden told Democratic senators Thursday that he won't veto Republican legislation to rescind a Washington, DC, crime law, West Virginia Democrat Sen. Joe Manchin told reporters.

Opinion: Fox News was never a real news network

CNN

www.cnn.com › 2023 › 03 › 02 › opinions › fox-news-rupert-murdoch-election-zurawik › index.html

Since Fox Chairman Rupert Murdoch's deposition in the Dominion Voting Systems lawsuit was recently made public, there has been an avalanche of speculation about the effect his words will have on the network. Murdoch acknowledged that some Fox News hosts, including Sean Hannity, Jeanine Pirro, Maria Bartiromo, and former host Lou Dobbs, endorsed lies that the 2020 presidential election was stolen, and that he gave Jared Kushner confidential information about President Joe Biden's ads and debate strategy in 2020.

Jill Biden says she offers 'good balance' of insight to President Biden

CNN

www.cnn.com › videos › politics › 2023 › 03 › 02 › jill-biden-advice-president-biden-sot-saenz-cnntm-vpx.cnn

First lady Jill Biden sits down with CNN White House correspondent Arlette Saenz offering a window into her marriage with President Joe Biden and how they both help each other.

The FBI Desperately Wants to Let Trump Off the Hook

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 03 › fbi-trump-mar-a-lago-raid-prosecution › 673251

The way conservatives tell it, the Federal Bureau of Investigation is a hive of anti-Trump villainy, filled with agents looking for any excuse to hound the former president with investigative witch hunts. But the thing to understand about Donald Trump’s legal troubles is that they exist not because federal agents are out to get him, but despite the fact that the FBI is full of Trump supporters who would really like to leave him alone.

This morning, The Washington Post reported that FBI investigators clashed with federal prosecutors over the decision to search the former president’s residence, where highly classified documents were found despite Trump’s insistence that he had none.

“Some of those field agents wanted to shutter the criminal investigation altogether in early June,” the Post reported, adding that FBI agents were “simply afraid” and “worried taking aggressive steps investigating Trump could blemish or even end their careers.” The FBI did not exhibit this worry in 2016, when it publicly announced that it was reopening the investigation into Hillary Clinton’s handling of classified documents, an announcement that, even with all the other mistakes her campaign made, likely cost Clinton the election. That decision was made in part because then-Director James Comey feared that pro-Trump FBI agents would leak the details if he did not announce them publicly. The federal investigation into the Trump campaign, by contrast, was properly kept confidential until after the election. As one agent told the reporter Spencer Ackerman in 2016, “The FBI is Trumpland.”

[Adam Serwer: If it were anyone else, they’d be prosecuted]

President Joe Biden is also under investigation for his mishandling of classified documents, but for now the two situations are distinguished by Biden’s attorneys discovering and voluntarily handing those documents over, as opposed to lying about having them and then insisting that they were his to keep. Neither man, however, should be above prosecution if the circumstances call for it.

A simple but obvious fact has been lost over the past few years, amid Trump’s direct attacks on the FBI, and liberal defenses of the FBI against those attacks: FBI agents are cops. Law-enforcement officers, including the FBI, have long been disproportionately conservative, but in the past few decades, like the rest of the nation, they have also become far more polarized by party, a reality reflected in the rhetoric and positioning of advocacy groups such as the Fraternal Order of Police. There are liberal and moderate cops, but they come nowhere close to a majority. Simply put, the FBI is full of people who would prefer not to investigate Donald Trump. He remains under federal investigation only because of his own inability to stop criming.

Michael Fanone, a former Metropolitan Police officer who was injured by the mob that attempted to overthrow the government on Trump’s behalf on January 6, became disillusioned by the lack of support he received from fellow officers. “What it is is Trumpism,” Fanone told Politico in 2022. “And it’s a loyalty to Donald Trump because he says things like, ‘We love our law enforcement officers.’ And, you know, there’s a lot of police officers at the Metropolitan Police Department and other law enforcement agencies that participated in the defense of the U.S. Capitol on January 6, that still do not accept the reality of what January 6th was.”

Steven D’Antuono, one of the former top FBI officials described in the Post story as reluctant to carry out the search, also said a few days after January 6 that there had been “no indication” of potential violence that day. A moderately active news consumer would have understood that the risk of violence was real; perhaps the only people unaware of that potential worked at the FBI or as regular columnists for elite publications.

[Quinta Jurecic: The classified-files scandal is the most Trumpy scandal of all]

I am not alleging any malignant intent here. But the partisan lean of law-enforcement officers has consequences, producing ideological blind spots and an institutional bias in favor of conservative individuals. They are also more sensitive to criticism from the right, not only because it comes from powerful people, but because it is always more painful to be attacked by people you perceive as being on your side. The stakes here are not simply political; as the debacle of January 6 showed, such blind spots affect the bureau’s ability to fulfill its duties.

In theory, proper, vigorous oversight by Congress might check this kind of bias, among other benefits. But having recently lost one presidential election to FBI intervention, the Democratic Party appears reluctant to engage in such oversight, and the Republican Party is interested only in confirming its base’s conspiratorial explanations for why the benevolent Mr. Trump continues to come under investigative scrutiny. This has merely reinforced to both the bureau’s leadership and its rank and file that the only political danger they need to heed comes from the right, further exacerbating the underlying ideological dynamic.

The irony of all Trump’s legal problems, however, is that the FBI desperately wants to leave him alone—if only he would let them.