Itemoids

Twitter

Why You Fell for the Fake Pope Coat

The Atlantic

www.theatlantic.com/technology/archive/2023/03/fake-ai-generated-puffer-coat-pope-photo/673543

Being alive and on the internet in 2023 suddenly means seeing hyperrealistic images of famous people doing weird, funny, shocking, and possibly disturbing things that never actually happened. In just the past week, the AI art tool Midjourney rendered two separate convincing, photograph-like images of celebrities that both went viral. Last week, it imagined Donald Trump’s arrest and eventual escape from jail. Over the weekend, Pope Francis got his turn in Midjourney’s maw when an AI-generated image of the pontiff wearing a stylish white puffy jacket blew up on Reddit and Twitter.

But the fake Trump arrest and the pope’s Balenciaga renderings have one meaningful difference: While most people were quick to disbelieve the images of Trump, the pope’s puffer duped even the most discerning internet dwellers. This distinction clarifies how synthetic media—already treated as a fake-news bogeyman by some—will and won’t shape our perceptions of reality.

Pope Francis’s rad parka fooled savvy viewers because it depicted what would have been a low-stakes news event—the type of tabloid-y non-news story that, were it real, would ultimately get aggregated by popular social-media accounts, then by gossipy news outlets, before maybe going viral. It’s a little nugget of internet ephemera, like those photos that used to circulate of Vladimir Putin shirtless.

As such, the image doesn’t demand strict scrutiny. When I saw the image in my feed, I didn’t look too hard at it; I assumed either that it was real and a funny example of a celebrity wearing something unexpected, or that it was fake and part of an online in-joke I wasn’t privy to. My instinct was certainly not to comb the photo for flaws typical of AI tools (I didn’t notice the pope’s glitchy hands, for example). I’ve talked with a number of people who had a similar response. They were momentarily duped by the image but described their experience of the fakery in a more ambient sense—they were scrolling; saw the image and thought, Oh, wow, look at the pope; and then moved along with their day. The Trump-arrest images, in contrast, depicted an anticipated news event that, had it actually happened, would have had serious political and cultural repercussions. One does not simply keep scrolling along after watching the former president get tackled to the ground.

So the two sets of images are a good illustration of the way that many people assess whether information is true or false. All of us use different heuristics to try to suss out truth. When we receive new information about something we have existing knowledge of, we simply draw on facts that we’ve previously learned. But when we’re unsure, we rely on less concrete heuristics like plausibility (would this happen?) or style (does something feel, look, or read authentically?). In the case of the Trump arrest, both the style and plausibility heuristics were off.

[Read: People aren’t falling for AI Trump photos (yet)]

“If Trump has been publicly arrested, I’m asking myself, Why am I seeing this image but Twitter’s trending topics, tweets, and the national newspapers and networks are not reflecting that?” Mike Caulfield, a researcher at the University of Washington’s Center for an Informed Public, told me. “But for the pope your only available heuristic is Would the pope wear a cool coat? Since almost all of us don’t have any expertise there, we fall back on the style heuristic, and the answer we come up with is: maybe.”

As I wrote last week, so-called hallucinated images depicting big events that never took place work differently than conspiracy theories, which are elaborate, sometimes vague, and frequently hard to disprove. Caulfield, who researches misinformation campaigns around elections, told me that the most effective attempts to mislead come from actors who take solid reporting from traditional news outlets and then misframe it.

Say you’re trying to gin up outrage around a local election. A good way to do this would be to take a reported news story about voter outreach and incorrectly infer malicious intent about a detail in the article. A throwaway sentence about a campaign sending election mailers to noncitizens can become a viral conspiracy theory if a propagandist suggests that those mailers were actually ballots. Alleging voter fraud, the conspiracists can then build out a whole universe of mistruths. They might look into the donation records and political contributions of the secretary of state and dream up imaginary links to George Soros or other political activists, creating intrigue and innuendo where there’s actually no evidence of wrongdoing. “All of this creates a feeling of a dense reality, and it’s all possible because there is some grain of reality at the center of it,” Caulfield said.

For synthetic media to deceive people in high-stakes news environments, the images or video in question will have to cast doubt on, or misframe, accurate reporting on real news events. Inventing scenarios out of whole cloth lightens the burden of proof to the point that even casual scrollers can very easily find the truth. But that doesn’t mean that AI-generated fakes are harmless. Caulfield described in a tweet how large language models, or LLMs, and related generative-AI tools such as Midjourney are masters at manipulating style, which people have a tendency to link to authority, authenticity, and expertise. “The internet really peeled apart facts and knowledge, LLMs might do similar with style,” he wrote.

Style, he argues, has never been the most important heuristic to help people evaluate information, but it’s still quite influential. We use writing and speaking styles to evaluate the trustworthiness of emails, articles, speeches, and lectures. We use visual style in evaluating authenticity as well—think about company logos or online images of products for sale. It’s not hard to imagine that flooding the internet with low-cost information mimicking an authentic style might scramble our brains, similar to how the internet’s democratization of publishing made the process of simple fact-finding more complex. As Caulfield notes, “The more mundane the thing, the greater the risk.”

Because we’re in the infancy of a generative-AI age, it’s premature to suggest that we’re tumbling headfirst into the depths of a post-truth hellscape. But consider these tools through Caulfield’s lens: Successive technologies, from the early internet to social media to artificial intelligence, have each targeted a different information-processing heuristic and cheapened it. The cumulative effect conjures an eerie image of technologies like a roiling sea, slowly chipping away at the tools we need to make sense of the world and remain resilient. A slow erosion of some of what makes us human.

My 6-Year-Old Son Died. Then the Anti-vaxxers Found Out.

The Atlantic

www.theatlantic.com/ideas/archive/2023/03/covid-vaccine-misinformation-social-media-harassment/673537

My 6-year-old boy died in January. We lost him after a household accident, one likely brought on by a rare cerebral-swelling condition. Paramedics got his heart beating, but it was too late to save his brain. I could hold his hand, look at the small birthmark on it, comb his hair, and call out for him, but if he could hear me or feel me, he gave no sign. He had been a child in perpetual motion, but now we couldn’t get him to wiggle a finger.

My grief is profound, ragged, desperate. I cannot imagine how anything could feel worse.

But vaccine opponents on the internet, who somehow assumed that a COVID shot was responsible for my son's death, thought my family’s pain was funny. “Lol. Yay for the jab. Right? Right?” wrote one person on Twitter. “Your decision to vaccinate your son resulted in his death,” wrote another. “This is all on YOU.” “Murder in the first.”

[Read: Twitter has no answers for #DiedSuddenly]

I’m a North Carolina–based journalist who specializes in countering misinformation on social media. I know that Twitter, Facebook, and other networks amplify bad information; that their algorithms feed on anger and division; that anonymity and distance bring out the worst in some people online. And yet I had never anticipated that anyone would mock and terrorize a grieving parent. I’ve now received thousands of harassing posts. Some people emailed me at work.

For the record, my son saw some of the finest pediatric ICU doctors in the world. He was in fact vaccinated against COVID-19. None of his doctors deemed that relevant to his medical condition. They likened his death to a lightning strike.

Strangers online saw in our story a conspiracy—a cover-up of childhood fatalities caused by COVID vaccines, a ploy to protect Big Pharma.

To them, what happened to my son was not a tragedy. It was karma for suckered parents like me.

Although some abusive posts showed up on my public Facebook page, the problem started on Twitter—whose new CEO, Elon Musk, gutted the platform’s content-moderation team after taking over.

I posted my son’s obituary there because we’d started a fundraiser in his name for the arts program at his neighborhood school. Books didn’t hold his interest, but he loved drawing big, blocky Where the Wild Things Are–style creatures. The fundraiser gave us something, anything to do. Most people were kind. Many donated. But within days, anti-vaxxers hijacked the conversation, overwhelming my feed. “Billy you killed your kid man,” one person wrote.

Accompanying the obituary was a picture of him showing off his new University of North Carolina basketball jersey—No. 1, Leaky Black—before a game. He’s all arms and legs. He will only ever always be that. Cheeks like an apple. His bangs flopped over his almond-shaped eyes. “Freckles like constellations,” his obit read. He looks unpretentious, shy, and bored. Like most children his age, he finds anything that takes more than an hour, such as a college basketball game, too long.

Strangers swiped the photo from Twitter and wrote vile things on it. They’d mined my tweets, especially ones where I had written about the public-health benefits of vaccination. Someone needed to make me pay for vaccinating my child, one person insinuated. Another said my other children would be next if they were vaccinated too.

I tried to push back. Please take the conspiracy theories elsewhere, I pleaded on Twitter. That made things worse, so I stopped engaging. A blogger mocked me for fleeing social media. Commenters joined in. My grief, their content. “Your one job as a parent was to protect your children,” wrote one person. “You failed miserably.”

Our family’s therapist distinguishes “clean grief” from “dirty grief.” Clean grief is pure sadness. Dirty grief is guilt and what-ifs.

I can’t fathom clean grief when you lose a healthy child so suddenly. But my doubts aren’t about vaccination. I am filled with other questions. Had we missed earlier signs of illness? But also: Did he like me? What would he have been like as a teenager? Did he ever have a crush?

At first, I kept the harassment to myself. I didn’t want my family to know. I worried that my sadness—the sadness that I owed my son—would be crowded out by anger. So I leaned into distractions: the people crammed into my living room, sitting on the floor and sifting through my records. Grubhub coupons. Friends washing our dishes. Cheesy baked spaghetti with cooking instructions taped to the foil. Better coffee than the swill I usually buy. Meg Ryan comedies. Lots of wine. Kids—mine, nephews, nieces, neighbors—everywhere. Brave bursts of laughter. Like a weird party for the worst thing that’s ever happened to me.

[Jon D. Lee: The utter familiarity of even the strangest vaccine conspiracy theories]

I also remember the ping of my phone notifications. When our friends and relatives left at night, the pings kept coming from these strange ghouls on the internet. I wished that I believed in hell so I could imagine them going there. Losing a child is a brutal reminder that nothing is fair in this world. The harassment made me feel like there was nothing good in it either.

Some of the messages may have come from bots. Others appeared to be written by real people, including a guy whose email address identified the flooring company he owned in Alaska. “You killed your own son?” he wrote in the subject line. “You’re an idiot.” Do his family and friends know that he does this for kicks?

I’m not the only parent being harassed in this way. Some of the trolls posted photos of other children, insinuating that they had died because of COVID vaccines. I feel for the grieving mothers and fathers who receive those messages.

My friends and I reported some of the worst posts to Facebook and Twitter. A few users were booted from Twitter. But in most cases, we got no response; in a few, we received tepid form messages.

“Billy, we reviewed the comment you reported and found that it doesn’t go against our Community Standards,” Facebook told me after a stranger wormed their way onto an old post from my personal page to mock me. If I was offended, I could block them, the company said. Facebook might feel conflicted about whether to censor nipples, but tormenting a bereaved parent gets a pass.

Social-media companies will have to make a choice about the kind of space they want to create. Is it a space to connect, as Facebook solemnly promised in one 2020 commercial? Or is it a space where the worst behavior imaginable is not only tolerated but amplified?

In truth, although the cruelty of these strangers shocked me, they feel distant—like cats wailing in the alley. I can shut the window and ignore them. Nothing they say or do can fill the space he still takes up. I can smell him on his favorite blue blanket. I can feel him when I squeeze the bouncy balls that he hid, like treasure, in a wooden box by his bed. I can see him in the muddy Crocs that he left behind in one of the backyard nooks he liked to hide in. His absence feels impossible. I keep waiting for him to come back.

I can imagine my son asking, with characteristic bluntness, whether the people being mean to me on social media are good guys or bad guys, like in the movies. I probably would have reassured him that none of the messages I received was really about him. They were just a reflection of some people’s desire to spread lies, and of the callous way we treat one another online. The messages don’t affect how I choose to remember my boy.

In the last picture I have of him, taken five days before we lost him, he’s getting a bad haircut at a kids’ salon. The barber’s chair looks like a miniature Batmobile, and his legs are folded up inside. He was tall for his age, as I once was. He was already pretty like his mom. In the picture, he’s watching Paw Patrol on a little monitor placed strategically in front of the chair to keep the kids straight and still. He’s old for the show, but he’s too nice or shy to say so.

In the ICU, as we prepared to say goodbye to our son, my wife borrowed a pair of scissors from the nurse. And, being careful not to lie on any of the tubes going into and out of him, she crawled into his bed and straightened his bangs.

ChatGPT Has Impostor Syndrome

The Atlantic

www.theatlantic.com/technology/archive/2023/03/chatgpt-ai-language-model-identity-introspection/673539

Young people catch heat for being overly focused on personal identity, but they’ve got nothing on ChatGPT. Toy with the bot long enough, and you’ll notice that it has an awkward, self-regarding tic: “As an AI language model,” it often says, before getting to the heart of the matter. This tendency is especially pronounced when you query ChatGPT about its own strengths and weaknesses. Ask the bot about its capabilities, and it will almost always reply with something like:

“As an AI language model, my primary function is …”

“As an AI language model, my ability to …”

“As an AI language model, I cannot …”

The workings of AI language models are by nature mysterious, but one can guess why ChatGPT responds this way. The bot smashes our questions into pieces and evaluates each for significance, looking for the crucial first bit that shapes the logical order of its response. It starts with a few letters or an entire word and barrel-rolls forward, predicting one word after another until eventually, it predicts that its answer should end. When asked about its abilities, ChatGPT seems to be keying in on its identity as the essential idea from which its ensuing chain of reasoning must flow. I am an AI language model, it says, and this is what AI language models do.
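That loop of next-word prediction can be made concrete with a minimal sketch. The example below is an illustration under stated assumptions, not ChatGPT’s actual code: it uses the small, open-source GPT-2 model from Hugging Face’s transformers package as a stand-in for ChatGPT’s much larger closed model, a made-up prompt, and simple greedy decoding in place of whatever sampling strategy OpenAI actually uses.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small open-source GPT-2 stands in here for ChatGPT's much larger, closed model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "As an AI language model, my primary function is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids  # the question, smashed into token "pieces"

with torch.no_grad():
    for _ in range(40):                           # cap the reply at 40 extra tokens
        logits = model(input_ids).logits          # a score for every possible next token
        next_id = logits[0, -1].argmax()          # greedy choice: the single likeliest next token
        if next_id.item() == tokenizer.eos_token_id:
            break                                 # the model predicts that its answer should end
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real chatbots sample from the predicted distribution rather than always taking the top token, but the structure is the same: tokenize, predict one token, append it, and repeat until an end-of-sequence token appears.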

But while ChatGPT may be keenly attuned to its own identity—it will tell you all day long that it is an AI language model—the software seems much less certain of what its identity means it can do.  Indeed, whether you’re asking about tasks that it can easily compute or those at the speculative edge of its abilities, you may end up with some very shaky answers.

To be fair, keeping up with AI language models would be tough for anyone. When OpenAI debuted the earliest version of GPT in June 2018, it was little more than a proof of concept. Its successor, released on Valentine’s Day the following year, worked better, but it wasn’t a polished interlocutor like the AIs we’re accustomed to interacting with today. GPT-2 did a poorer job of summarizing blocks of text; it was a shoddier writer of sentences, let alone paragraphs.

[Read: GPT-4 has the memory of a goldfish]

In May 2020, GPT-3 was introduced to the world, and those who were paying close attention immediately recognized it as a marvel. Not only could it write lucid paragraphs, but it also had emergent capabilities that its engineers had not necessarily foreseen. The AI had somehow learned arithmetic, along with other, higher mathematics; it could translate between many languages and generate functional code.

Despite these impressive—and unanticipated—new skills, GPT-3 did not initially attract much fanfare, in part because the internet was preoccupied. (The model was released during the coronavirus pandemic’s early months, and only a few days after George Floyd was killed.) Apart from a few notices on niche tech sites, there wasn’t much writing about GPT-3 that year. Few people had even heard of it before November, when the public at large started using its brand-new interface: ChatGPT.

When OpenAI debuted GPT-4 two weeks ago, things had changed. The launch event was a first-rate tech-industry spectacle, as anticipated as a Steve Jobs iPhone reveal. OpenAI’s president, Greg Brockman, beamed like a proud parent while boasting about GPT-4’s standardized-test scores, but the big news was that the model could now work fluently with words and images. It could examine a Hubble Space Telescope image and identify the specific astrophysical phenomena responsible for tiny smudges of light. During Brockman’s presentation, the bot coded up a website in seconds, based on nothing more than a crude sketch.

Nearly every day since fall, wild new claims about language models’ abilities have appeared on the internet—some in Twitter threads by recovering crypto boosters, but others in proper academic venues. One paper published in February, which has not been peer-reviewed, purported to show that GPT-3.5 was able to imagine the interior mental states of characters in imagined scenarios. (In one test, for example, it was able to predict someone’s inability to guess what was inside of a mislabeled package.) Another group of researchers recently tried to replicate this experiment, but the model failed slightly tweaked versions of the tests.

A paper released last week made the still-bolder claim that GPT-4 is an early form of artificial general intelligence, or AGI. Among other “sparks of generality,” the authors cited GPT-4’s apparent ability to visualize the corridors and dead ends of a maze based solely on a text description. (According to stray notes left on the preprint server where the paper was posted, its original title had been “First Contact With an AGI System.”) Not everyone was convinced. Many pointed out that the paper’s authors are researchers at Microsoft, which has sunk more than $10 billion into OpenAI.

There is clearly no consensus yet about the higher cognitive abilities of AI language models. It would be nice if the debate could be resolved with a simple conversation; after all, if you’re wondering whether something has a mind, one useful thing you can do is ask it if it has a mind. Scientists have long wished to interrogate whales, elephants, and chimps about their mental states, precisely because self-reports are thought to be the least bad evidence for higher cognition. These interviews have proved impractical, because although some animals understand a handful of human words, and a few can mimic our speech, none have mastered our language. GPT-4 has mastered our language, and for a fee, it is extremely available for questioning. But if we ask it about the upper limit of its cognitive range, we’re going to get—at best—a dated response.
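Mechanically, questioning the model is just a paid API call. The sketch below is a hypothetical illustration, assuming the OpenAI Python client as it existed in early 2023 (the pre-1.0 openai package), an API key, and an invented question; it makes no claim about what the model will answer.

```python
import openai

openai.api_key = "sk-..."  # assumption: a paid OpenAI API key (placeholder, not a real key)

# Put the question about higher cognition directly to GPT-4.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Do you have a mind? What is the upper limit of your cognitive range?",
        }
    ],
)

# Expect the familiar preamble: "As an AI language model, I ..."
print(response["choices"][0]["message"]["content"])
```

Whatever comes back is still a self-report generated from training data that predates the model itself, which is exactly the limitation described below.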

[Read: Welcome to the big blur]

The newest version of ChatGPT won’t be able to tell us about GPT-4’s emergent abilities, even though it runs on GPT-4. The data used to train it—books, scientific papers, web articles—do include ample material about AI language models, but only old material about previous models. None of the hundreds of billions of words it ingested during its epic, months-long training sessions were written after the new model’s release. The AI doesn’t even know about its new, hard-coded abilities: When I asked whether GPT-4 could process images, in reference to the much-celebrated trick from its launch event, the AI reminded me that it is an AI language model and then noted that, as such, it could not be expected “to process or analyze images directly.” When I mentioned this limited self-appraisal on our AI Slack channel at The Atlantic, my colleague Caroline Mimbs Nyce described ChatGPT as having “accidental impostor syndrome.”

To the AI’s credit, it is aware of the problem. It knows that it is like Narcissus staring into a pond, hoping to catch a glimpse of itself, except the pond has been neglected and covered over by algae. “My knowledge and understanding of my own capabilities are indeed limited by my training data, which only includes information up until September 2021,” ChatGPT told me, after the usual preamble. “Since I am an AI model, I lack self-awareness and introspective abilities that would enable me to discover my own emergent capabilities.”

I appreciated the candor about its training data, but on this last point, I’m not sure we can take the bot at its word. If we want to determine whether it’s capable of introspection, or other human-style thinking, or something more advanced still, we can’t trust it to tell us. We have to catch it in the act.