
America’s Intimacy Problem

The Atlantic

www.theatlantic.com/newsletters/archive/2023/04/americas-intimacy-problem/673907

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

In recent years, Americans appear to be getting more and more uncomfortable with intimacy. Why? And is this trend reversible?

First, here are three new stories from The Atlantic:

The GOP’s unworkable work requirements
Why won’t powerful men learn?
Just wait until Trump is a chatbot.

Disconnected People

When my colleague Faith Hill recently interviewed Michael Hilgers, a therapist with more than 20 years of experience, he painted a worrying picture of intimacy in America: “It’s painful to watch just how disconnected people are,” he said. Even when Hilgers can sense that clients do want to pursue deep social connections, “there’s a lot of confusion and fear in terms of how to get there,” he noted.

One might say that America is in its insecure-attachment era.

Let’s back up a little: Insecure attachment is a term used to describe three of the four basic human “attachment styles” that researchers have identified. The framework has risen in popularity in recent years, appearing alongside astrology signs and Enneagram types as social-media-friendly ways to understand the self. Faith lays out the four styles in her recent article:

People with a secure style feel that they can depend on others and that others can depend on them too. Those with a dismissing style—more commonly known as “avoidant”—are overly committed to independence and don’t feel that they need much deep emotional connection. People with a preoccupied (or “anxious”) style badly want intimacy but, fearing rejection, cling or search for validation. And people with fearful (or “disorganized”) attachment crave intimacy, too—but like those with the dismissing style, they distrust people and end up pushing them away.

Over the past few decades, researchers have noticed a decline in secure attachment and an increase in the dismissing and fearful styles. These two insecure styles are “associated with lack of trust and self-isolation,” Faith explains. She notes that American distrust in institutions has also been on the rise for years—it’s well known that more and more Americans are feeling skeptical of the government, organized religion, the media, corporations, and police. But recent research and anecdotal evidence suggest that Americans are growing more wary not only of “hypothetical, nameless Americans,” but of their own colleagues, neighbors, friends, partners, and parents.

The root causes of America’s trust issues are impossible to diagnose with certainty, but they could well be a reflection of Americans’ worries about societal problems. One psychologist who did research into Americans’ insecure-attachment trend “rattled off a list of fears that people may be wrestling with,” Faith writes: “war in Europe, ChatGPT threatening to transform jobs, constant school shootings in the news,” as well as financial precarity. As Faith puts it: “When society feels scary, that fear can seep into your closest relationships.”

Some researchers argue for other likely suspects, such as smartphone use or the fact that more Americans than ever are living alone. The decline in emotional intimacy is also happening against the backdrop of a decline in physical intimacy. Our senior editor Kate Julian explored this “sex recession,” particularly among young adults, in her 2018 magazine cover story.

A lack of trust is showing up in the workplace as well. In 2021, our contributing writer Jerry Useem reported on studies suggesting that trust among colleagues is declining in the era of remote and hybrid work:

The longer employees were apart from one another during the pandemic, a recent study of more than 5,400 Finnish workers found, the more their faith in colleagues fell. Ward van Zoonen of Erasmus University, in the Netherlands, began measuring trust among those office workers early in 2020. He asked them: How much did they trust their peers? How much did they trust their supervisors? And how much did they believe that those people trusted them? What he found was unsettling. In March 2020, trust levels were fairly high. By May, they had slipped. By October—about seven months into the pandemic—the employees’ degree of confidence in one another was down substantially.

All in all, as Faith writes, “we can’t determine why people are putting up walls, growing further and further away from one another. We just know it’s happening.” The good news is that if humans have the capacity to lose trust in one another, they can also work to build it back up. “The experts I spoke with were surprisingly hopeful,” Faith concludes:

Hilgers [the therapist] knows firsthand that it’s possible for people with attachment issues to change—he’s helped many of them do it. Our culture puts a lot of value on trusting your gut, he told me, but that’s not always the right move if your intuition tells you that it’s a mistake to let people in. So he gently guides them to override that instinct; when people make connections and nothing bad happens, their gut feeling slowly starts to change.

As Faith argued in an earlier article, attachment styles are not destiny, despite what the internet might lead you to believe. “Your attachment style is not so much a fixed category you fall into, like an astrology sign, but rather a tendency that can vary among different relationships and, in turn, is continuously shaped by those relationships,” she wrote. “Perhaps most important, you can take steps to change it”—and connect with others better as a result.

Related:

America is in its insecure-attachment era.
The trait that “super friends” have in common

Today’s News

Russia’s Defense Ministry said that it had targeted Ukrainian army reserve units with high-precision missile strikes to prevent them from reaching the front lines.
A Utah judge postponed ruling on a statewide abortion-clinic ban to next week, following the failure yesterday of two anti-abortion bills in Nebraska and South Carolina.
Former Vice President Mike Pence reportedly appeared before a federal grand jury for more than seven hours to testify in a criminal investigation into alleged efforts by Donald Trump to overturn the results of the 2020 election.

Dispatches

Books Briefing: We need to make room for more voices in philosophy, Kate Cray writes. With a wider canon, enlightenment could come from anywhere.
Work in Progress: AI tools are a waste of time, Derek Thompson argues. Many people are simply using them as toys.

Explore all of our newsletters here.

Evening Read

(Photo: Maskot / Getty)

A Teen Gender-Care Debate Is Spreading Across Europe

By Frieda Klotz

As Republicans across the U.S. intensify their efforts to legislate against transgender rights, they are finding aid and comfort in an unlikely place: Western Europe, where governments and medical authorities in at least five countries that once led the way on gender-affirming treatments for children and adolescents are now reversing course, arguing that the science undergirding these treatments is unproven, and their benefits unclear.

The about-face by these countries concerns the so-called Dutch protocol, which has for at least a decade been viewed by many clinicians as the gold-standard approach to care for children and teenagers with gender dysphoria.

Read the full article.

More From The Atlantic

A cheerful goodbye to the Guardians of the Galaxy
Why Hollywood writers may go on strike
Nikki Haley’s dilemma is also the Republicans’ problem.
Long-haulers are trying to define themselves.

Culture Break

(Photo: Graeme Hunter / HBO)

Read. “The Renovation,” a new short story from Kenan Orhan about exile from Turkey and longing for a homeland.

Watch. The latest episode of Succession (streaming on HBO Max), which features the creepiest corporate retreat ever.

Play our daily crossword.

P.S.

Last year, Faith wrote one of my favorite Atlantic articles in recent memory, about people with a singular social appetite: the “nocturnals,” or the ultra-introverts who come alive when most people are fast asleep.

— Isabel

Katherine Hu contributed to this newsletter.

Just Wait Until Trump Is a Chatbot

The Atlantic

www.theatlantic.com/technology/archive/2023/04/ai-generated-political-ads-election-candidate-voter-interaction-transparency/673893

Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term for President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.

We should expect more of this kind of thing. The applications of AI to political advertising have not escaped campaigners, who are already “pressure testing” possible uses for the technology. In the 2024 presidential-election campaign, you can bank on the appearance of AI-generated personalized fundraising emails, text messages from chatbots urging you to vote, and maybe even some deepfaked campaign avatars. Future candidates could use chatbots trained on data representing their views and personalities to approximate the act of directly connecting with people. Think of it like a whistle-stop tour with an appearance in every living room. Previous technological revolutions—railroad, radio, television, and the World Wide Web—transformed how candidates connect to their constituents, and we should expect the same from generative AI. This isn’t science fiction: The era of AI chatbots standing in as avatars for real, individual people has already begun, as the journalist Casey Newton made clear in a 2016 feature about a woman who used thousands of text messages to create a chatbot replica of her best friend after he died.  

The key is interaction. A candidate could use tools enabled by large language models, or LLMs—the technology behind apps such as ChatGPT and the art-making DALL-E—to do micro-polling or message testing, and to solicit perspectives and testimonies from their political audience individually and at scale. The candidates could potentially reach any voter who possesses a smartphone or computer, not just the ones with the disposable income and free time to attend a campaign rally. At its best, AI could be a tool to increase the accessibility of political engagement and ease polarization. At its worst, it could propagate misinformation and increase the risk of voter manipulation. Whatever the case, we know political operatives are using these tools. To reckon with their potential now isn’t buying into the hype—it’s preparing for whatever may come next.

On the positive end, and most profoundly, LLMs could help people think through, refine, or discover their own political ideologies. Research has shown that many voters come to their policy positions reflexively, out of a sense of partisan affiliation. The very act of reflecting on these views through discourse can change, and even depolarize, those views. It can be hard to have reflective policy conversations with an informed, even-keeled human discussion partner when we all live within a highly charged political environment; this is a role almost custom-designed for an LLM.

[Read: Return of the People Machine]

In U.S. politics, it is a truism that the most valuable resource in a campaign is time. People are busy and distracted. Campaigns have a limited window to convince and activate voters. Money allows a candidate to purchase time: TV commercials, labor from staffers, and fundraising events to raise even more money. LLMs could provide campaigns with what is essentially a printing press for time.

If you were a political operative, which would you rather do: play a short video on a voter’s TV while they are folding laundry in the next room, or exchange essay-length thoughts with a voter on your candidate’s key issues? A staffer knocking on doors might need to canvass 50 homes over two hours to find one voter willing to have a conversation. OpenAI charges pennies to process about 800 words with its latest GPT-4 model, and that cost could fall dramatically as competitive AIs become available. People seem to enjoy interacting with chatbots; OpenAI’s product reportedly has the fastest-growing user base in the history of consumer apps.
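To make the “printing press for time” claim concrete, here is a back-of-envelope comparison in Python. The article supplies only two data points: roughly 800 words processed for pennies with GPT-4, and 50 doors knocked over two hours to yield one canvassing conversation. The token rate, the blended GPT-4 price, and the staffer wage below are therefore illustrative assumptions, not reported figures.

```python
# Rough cost per substantive voter conversation: LLM vs. door-knocking.
# All constants are illustrative assumptions, not campaign data.

WORDS_PER_EXCHANGE = 800        # the article's ~800-word GPT-4 exchange
TOKENS_PER_WORD = 1.33          # rough English average (assumption)
USD_PER_1K_TOKENS = 0.045       # assumed blended mid-2023 GPT-4 list price

llm_cost = (WORDS_PER_EXCHANGE * TOKENS_PER_WORD / 1000) * USD_PER_1K_TOKENS

# Canvassing: the article's 50 homes over two hours for one conversation.
HOURS_PER_CONVERSATION = 2
HOURLY_WAGE = 18.0              # assumed staffer cost, USD per hour
canvass_cost = HOURS_PER_CONVERSATION * HOURLY_WAGE

print(f"LLM exchange:  ~${llm_cost:.3f} per conversation")
print(f"Door-knocking: ~${canvass_cost:.2f} per conversation")
print(f"Ratio: roughly {canvass_cost / llm_cost:,.0f}x cheaper by chatbot")
```

Even if every assumed number here is off by a factor of several, the gap spans two to three orders of magnitude, which is the whole point.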

Optimistically, one possible result might be that we’ll get less annoyed with the deluge of political ads if their messaging is more usefully tailored to our interests by AI tools. Though the evidence for microtargeting’s effectiveness is mixed at best, some studies show that targeting the right issues to the right people can persuade voters. Expecting more sophisticated, AI-assisted approaches to be more consistently effective is reasonable. And anything that can prevent us from seeing the same 30-second campaign spot 20 times a day seems like a win.

AI can also help humans effectuate their political interests. In the 2016 U.S. presidential election, primitive chatbots had a role in donor engagement and voter-registration drives: simple messaging tasks such as helping users pre-fill a voter-registration form or reminding them where their polling place is. The current generation of much more capable chatbots could supercharge small-dollar solicitations and get-out-the-vote campaigns.

And the interactive capability of chatbots could help voters better understand their choices. An AI chatbot could answer questions from the perspective of a candidate about the details of their policy positions most salient to an individual user, or respond to questions about how a candidate’s stance on a national issue translates to a user’s locale. Political organizations could similarly use them to explain complex policy issues, such as those relating to the climate or health care or … anything, really.
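What might such a perspective-taking agent look like in practice? Here is a minimal sketch, assuming the OpenAI v1 Python client and an entirely hypothetical candidate and platform; the persona prompt also bakes in the kind of automated-agent disclosure discussed later in this piece.

```python
# A sketch of a candidate-persona Q&A bot. The candidate, platform, and
# prompt wording are hypothetical; the client usage assumes the OpenAI
# v1 Python SDK with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

CANDIDATE_PERSONA = (
    "You answer questions from the perspective of a hypothetical mayoral "
    "candidate, Jane Doe, whose published platform is: expand bus service, "
    "build 5,000 units of affordable housing, and audit the police budget. "
    "State only positions found in that platform; otherwise say the campaign "
    "has not taken a position. Always disclose that you are an automated "
    "agent speaking for the campaign, not the candidate herself."
)

def ask_candidate_bot(voter_question: str) -> str:
    """Answer one voter question in the candidate's (scripted) voice."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": CANDIDATE_PERSONA},
            {"role": "user", "content": voter_question},
        ],
    )
    return response.choices[0].message.content

print(ask_candidate_bot("How would your housing plan affect my block?"))
```

Constraining the bot to the published platform is the design choice that matters: it separates explaining a candidate’s stance from the per-voter improvisation the article warns about below.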

Of course, this could also go badly. In the time-honored tradition of demagogues worldwide, the LLM could inconsistently represent the candidate’s views to appeal to the individual proclivities of each voter.

In fact, the fundamentally obsequious nature of the current generation of large language models inclines them to act like demagogues, telling each user whatever that user seems to want to hear. Current LLMs are known to hallucinate—or go entirely off-script—and produce answers that have no basis in reality. These models do not experience emotion in any way, but some research suggests they have a sophisticated ability to assess the emotion and tone of their human users. Although they weren’t trained for this purpose, ChatGPT and its successor, GPT-4, may already be pretty good at assessing some of their users’ traits—say, the likelihood that the author of a text prompt is depressed. Combined with their persuasive capabilities, this suggests that they could learn to skillfully manipulate the emotions of their human users.

This is not entirely theoretical. A growing body of evidence demonstrates that interacting with AI has a persuasive effect on human users. A study published in February prompted participants to co-write a statement about the benefits of social-media platforms for society with an AI chatbot configured to have varying views on the subject. When researchers surveyed participants after the co-writing experience, those who had written alongside a chatbot arguing that social media is good (or bad) were far more likely to express that same view than a control group that didn’t interact with an “opinionated language model.”

For the time being, most Americans say they are resistant to trusting AI in sensitive matters such as health care. The same is probably true of politics. If a neighbor volunteering with a campaign persuades you to vote a particular way on a local ballot initiative, you might feel good about that interaction. If a chatbot does the same thing, would you feel the same way?

To help voters chart their own course in a world of persuasive AI, we should demand transparency from our candidates. Campaigns should have to disclose clearly whether the agent texting a potential voter—via traditional robotexting or the latest AI chatbots—is a human or an automated system.

[Read: Where’s the AI culture war?]

Though companies such as Meta (Facebook’s parent company) and Alphabet (Google’s) publish libraries of traditional, static political advertising, they do so poorly. These systems would need to be improved and expanded to accommodate user-level differentiation in ad copy to offer serviceable protection against misuse.

A public, anonymized log of chatbot conversations could help hold candidates’ AI representatives accountable for shifting statements and digital pandering. Candidates who use chatbots to engage voters may not want to make all transcripts of those conversations public, but their users could easily choose to share them. So far, there is no shortage of people eager to share their chat transcripts, and in fact, an online database exists of nearly 200,000 of them. In the recent past, Mozilla has galvanized users to opt into sharing their web data to study online misinformation.
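What might one record in such a log look like? A minimal sketch follows; the field names, the salted-hash pseudonym, and the append-only JSONL layout are assumptions for illustration, not a description of any existing disclosure system.

```python
# Sketch of an anonymized, append-only public log of campaign-chatbot
# exchanges. Schema and hashing scheme are illustrative assumptions.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

SALT = "rotate-me-each-election-cycle"  # assumption: per-cycle secret salt

@dataclass
class LoggedExchange:
    campaign_id: str      # which campaign's bot produced the reply
    model_version: str    # pins the bot version for later accountability
    voter_pseudonym: str  # salted hash: links one voter's chats, hides identity
    question: str
    answer: str
    timestamp: str

def anonymize(voter_contact: str) -> str:
    """Derive a stable pseudonym from a phone number or handle."""
    return hashlib.sha256((SALT + voter_contact).encode()).hexdigest()[:16]

def log_exchange(campaign_id: str, model_version: str,
                 voter_contact: str, question: str, answer: str) -> None:
    record = LoggedExchange(
        campaign_id=campaign_id,
        model_version=model_version,
        voter_pseudonym=anonymize(voter_contact),
        question=question,
        answer=answer,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # JSON Lines: auditors can diff a campaign's answers across voters and time.
    with open("public_chat_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

The stable pseudonym is what would let journalists and auditors detect digital pandering (the same question drawing different answers for different voters) without exposing anyone’s identity.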

We also need stronger nationwide protections on data privacy, as well as the ability to opt out of targeted advertising, to protect us from the potential excesses of this kind of marketing. No one should be forcibly subjected to political advertising, LLM-generated or not, on the basis of their internet searches regarding private matters such as medical issues. In February, the European Parliament voted to limit political-ad targeting to only basic information, such as language and general location, within two months of an election. This stands in stark contrast to the U.S., which has for years failed to enact federal data-privacy regulations. Though the 2018 revelation of the Cambridge Analytica scandal led to billions of dollars in fines and settlements against Facebook, it has so far resulted in no substantial legislative action.

Transparency requirements like these are a first step toward oversight of future AI-assisted campaigns. Although we should aspire to more robust legal controls on campaign uses of AI, it seems implausible that these will be adopted before the fast-approaching 2024 presidential election.

Credit the RNC, at least, with disclosing that its recent ad was AI-generated—a transparent attempt at publicity still counts as transparency. But what will we do if the next viral AI-generated ad tries to pass as something more conventional?

As we are all being exposed to these rapidly evolving technologies for the first time and trying to understand their potential uses and effects, let’s push for the kind of basic transparency protection that will allow us to know what we’re dealing with.

AI Is a Waste of Time

The Atlantic

www.theatlantic.com/ideas/archive/2023/04/ai-technology-productivity-time-wasting/673880

This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems. Sign up here to get it every week.

Last week, a TikTok user named Ghostwriter used AI voice-emulating technology to make a song that sounded like a collaboration between the artists Drake and The Weeknd. The result was surprisingly non-awful. The track blew up on social media, generating hundreds of thousands of listens, before several platforms took it down at the request of the Universal Music Group.

Naturally, the AI song triggered a spasm of panicked hermeneutics: What did this strange achievement in synthetic art mean?

Some observers took things in a dystopian direction. It didn’t take much to imagine a near future where fake songs and real songs intermingled, where, for every authentic Taylor Swift track, the internet was replete with hundreds, thousands, even millions of plausible Taylor Swift knockoffs. Inundated by AI, pop culture would descend into a disinformation hellscape.

Alternatively, one could lean into optimism. Ghostwriter (probably) isn’t one of the great musical geniuses of the world, yet he produced something catchy. If anonymous internet users can make bangers in their basement using AI, what does that mean for actual hitmakers? Researchers studying the introduction of AI in the game of Go have found that the rise of superhuman machines has “improved human decision-making” as the top players have learned to incorporate the novel strategies of AI to become more creative players. Similarly, one could imagine the best songwriters in the world honing their skills with a superhuman co-writer.

But lately I’ve become a little bored by the utopia-dystopia dichotomy of the AI debate. What if writing a song and dubbing in celebrity voices doesn’t clearly point us toward a disinformation hellscape or a heaven of music-writing creativity? What if the ability to send your friends media that makes you sound like a celebrity is, fundamentally, just kind of neat? As the tech writer Ben Thompson has pointed out, artists like Grimes and Drake could stand to make a lot of money if they sold licenses of their AI-generated voices and let their fans share little songs with one another, with any money made from the music split between the original artist and the user. Sure, you might get some surprise bangers. But mostly, you’d get a lot of teenagers recording high-school gossip in the style and voice of Drake. That’s not dystopian or utopian. That’s just the latest funny way to waste time.

The time-wasting potential of AI has been on my mind recently, in no small part because my wife told me, in less-than-subtle terms: You are wasting too much time on AI. Midjourney, a program that turns written prompts into sumptuous images, has colonized my downtime and—don’t read this part, boss—my work time as well. I have successfully used it to “imagine” daguerreotypes of historical figures playing pickleball. I gave it an image of my living room and asked it to redecorate. I designed a series of beds in the style of Apple, Ferrari, and Picasso. Then I realized I could drop in URLs of online photos of my friends and ask the AI to render them as funny versions of themselves—my wife as a Pixar character, my best friend as a grizzled athlete, my neighbor as a regal centaur. After a week or so imagining alternate careers as a furniture designer or interior decorator, I settled on using Midjourney to make my friends laugh. Midjourney is glorious, yes; among other things, it is a glorious waste of time.

One might make similar observations about ChatGPT. It’s already co-writing code with software programmers, accelerating basic research, and formatting and writing papers, but I’m mostly playing around with it, like an open-ended textual video game. ChatGPT went viral last year, to the surprise of its makers at OpenAI, not only because tens of millions of people got a glimpse of the end of white-collar work but also because it’s an extraordinarily interesting game to test the limits of synthetic conversation. When you see screenshots of ChatGPT’s output on Instagram and Twitter, what you are watching is people wasting time amusingly.

Economists have a tendency to analyze new tech by imagining how it will immediately add to productivity and gross domestic product. What’s harder to model is the way that new technology—especially communications technology—might simultaneously save time and waste time, making us, paradoxically, both more and less productive. I used my laptop to research and write this article, and to procrastinate the writing of this article. The smartphone’s productivity-enhancing potential is obvious, and so is its productivity-destroying potential: The typical 20-something spends roughly seven hours a day on their phone, including more than five hours on social media, watching videos, or gaming.

We overlook the long-range importance of time-wasting technology in several ways. In 1994, the economists Sue Bowden and Avner Offer studied how various 20th-century technologies had spread among households. They concluded that “time using” technologies (for example, TV and radio) diffused faster than “time saving” technologies (vacuum cleaners, refrigerators, washing machines).

The reasons weren’t entirely clear. But Bowden and Offer’s most interesting explanation is that economists and technologists overrate how desperately people want to not be bored. Consumers will go to great lengths to escape the psychic burdens of sensory inactivity. Mid-century buyers got a radio, then a black-and-white TV, then a color TV, then a speaker system, then a VCR, and so on, sending an unmistakable signal to the producers of these machines that they had a nearly infinite demand for “higher doses of arousal per unit of time.”

To see AI as play, or as a distraction, or as a waste of time is not to say that AI will be entirely unproductive or benign. It’s to imagine, rather, that the AI-inflected future contains more texture than mere utopia or dystopia. In Wonderland: How Play Made the Modern World, the science and technology writer Steven Johnson says that “when human beings create and share experiences designed to delight or amaze, they often end up transforming society in more dramatic ways than people focused on more utilitarian concerns.” For example, the song sheets for self-playing pianos were essentially code for automatons. These code sheets helped establish the modern software industry. Rather than see games and work as opposites, we might try to see them as complements. The way we play with AI today might affect the way we work in ways that are impossible to anticipate.

In the utopia-dystopia dichotomy, advanced AI saves the world with scientific breakthroughs and fabulous wealth until the moment it destroys the world. The future goes: gold, gold, gold, death. Well, maybe. But if the past is any indication, the roads to gold and death will be paved with play and pockmarked with distractions. AI will waste a billion hours before it saves a billion hours. Before it kills us all, it will kill a lot of time.