Just Wait Until Trump Is a Chatbot

The Atlantic

www.theatlantic.com/technology/archive/2023/04/ai-generated-political-ads-election-candidate-voter-interaction-transparency/673893

Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term for President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.

We should expect more of this kind of thing. The applications of AI to political advertising have not escaped campaigners, who are already “pressure testing” possible uses for the technology. In the 2024 presidential-election campaign, you can bank on the appearance of AI-generated personalized fundraising emails, text messages from chatbots urging you to vote, and maybe even some deepfaked campaign avatars. Future candidates could use chatbots trained on data representing their views and personalities to approximate the act of directly connecting with people. Think of it like a whistle-stop tour with an appearance in every living room. Previous technological revolutions—railroad, radio, television, and the World Wide Web—transformed how candidates connect to their constituents, and we should expect the same from generative AI. This isn’t science fiction: The era of AI chatbots standing in as avatars for real, individual people has already begun, as the journalist Casey Newton made clear in a 2016 feature about a woman who used thousands of text messages to create a chatbot replica of her best friend after he died.  

The key is interaction. A candidate could use tools enabled by large language models, or LLMs—the technology behind apps such as ChatGPT and the art-making DALL-E—to do micro-polling or message testing, and to solicit perspectives and testimonies from their political audience individually and at scale. The candidates could potentially reach any voter who possesses a smartphone or computer, not just the ones with the disposable income and free time to attend a campaign rally. At its best, AI could be a tool to increase the accessibility of political engagement and ease polarization. At its worst, it could propagate misinformation and increase the risk of voter manipulation. Whatever the case, we know political operatives are using these tools. To reckon with their potential now isn’t buying into the hype—it’s preparing for whatever may come next.

On the positive end, and most profoundly, LLMs could help people think through, refine, or discover their own political ideologies. Research has shown that many voters come to their policy positions reflexively, out of a sense of partisan affiliation. The very act of reflecting on these views through discourse can change, and even depolarize, those views. It can be hard to have reflective policy conversations with an informed, even-keeled human discussion partner when we all live within a highly charged political environment; this is a role almost custom-designed for an LLM.

[Read: Return of the People Machine]

In U.S. politics, it is a truism that the most valuable resource in a campaign is time. People are busy and distracted. Campaigns have a limited window to convince and activate voters. Money allows a candidate to purchase time: TV commercials, labor from staffers, and fundraising events to raise even more money. LLMs could provide campaigns with what is essentially a printing press for time.

If you were a political operative, which would you rather do: play a short video on a voter’s TV while they are folding laundry in the next room, or exchange essay-length thoughts with a voter on your candidate’s key issues? A staffer knocking on doors might need to canvass 50 homes over two hours to find one voter willing to have a conversation. OpenAI charges pennies to process about 800 words with its latest GPT-4 model, and that cost could fall dramatically as competitive AIs become available. People seem to enjoy interacting with chatbots; OpenAI’s product reportedly has the fastest-growing user base in the history of consumer apps.

Optimistically, one possible result might be that we’ll get less annoyed with the deluge of political ads if their messaging is more usefully tailored to our interests by AI tools. Though the evidence for microtargeting’s effectiveness is mixed at best, some studies show that targeting the right issues to the right people can persuade voters. Expecting more sophisticated, AI-assisted approaches to be more consistently effective is reasonable. And anything that can prevent us from seeing the same 30-second campaign spot 20 times a day seems like a win.

AI can also help humans effectuate their political interests. In the 2016 U.S. presidential election, primitive chatbots had a role in donor engagement and voter-registration drives: simple messaging tasks such as helping users pre-fill a voter-registration form or reminding them where their polling place is. If it works, the current generation of much more capable chatbots could supercharge small-dollar solicitations and get-out-the-vote campaigns.

And the interactive capability of chatbots could help voters better understand their choices. An AI chatbot could answer questions from the perspective of a candidate about the details of their policy positions most salient to an individual user, or respond to questions about how a candidate’s stance on a national issue translates to a user’s locale. Political organizations could similarly use them to explain complex policy issues, such as those relating to the climate or health care or … anything, really.

Of course, this could also go badly. In the time-honored tradition of demagogues worldwide, the LLM could inconsistently represent the candidate’s views to appeal to the individual proclivities of each voter.

In fact, the fundamentally obsequious nature of the current generation of large language models results in them acting like demagogues. Current LLMs are known to hallucinate—or go entirely off-script—and produce answers that have no basis in reality. These models do not experience emotion in any way, but some research suggests they have a sophisticated ability to assess the emotion and tone of their human users. Although they weren’t trained for this purpose, ChatGPT and its successor, GPT-4, may already be pretty good at assessing some of their users’ traits—say, the likelihood that the author of a text prompt is depressed. Combined with their persuasive capabilities, that means that they could learn to skillfully manipulate the emotions of their human users.

This is not entirely theoretical. A growing body of evidence demonstrates that interacting with AI has a persuasive effect on human users. A study published in February prompted participants to co-write a statement about the benefits of social-media platforms for society with an AI chatbot configured to have varying views on the subject. When researchers surveyed participants after the co-writing experience, those who interacted with a chatbot that expressed that social media is good or bad were far more likely to express the same view than a control group that didn’t interact with an “opinionated language model.”

For the time being, most Americans say they are resistant to trusting AI in sensitive matters such as health care. The same is probably true of politics. If a neighbor volunteering with a campaign persuades you to vote a particular way on a local ballot initiative, you might feel good about that interaction. If a chatbot does the same thing, would you feel the same way?

To help voters chart their own course in a world of persuasive AI, we should demand transparency from our candidates. Campaigns should have to clearly disclose whether a text agent interacting with a potential voter—through traditional robotexting or the latest AI chatbots—is human or automated.

[Read: Where’s the AI culture war?]

Though companies such as Meta (Facebook’s parent company) and Alphabet (Google’s) publish libraries of traditional, static political advertising, they do so poorly. These systems would need to be improved and expanded to accommodate user-level differentiation in ad copy to offer serviceable protection against misuse.

A public, anonymized log of chatbot conversations could help hold candidates’ AI representatives accountable for shifting statements and digital pandering. Candidates who use chatbots to engage voters may not want to make all transcripts of those conversations public, but their users could easily choose to share them. So far, there is no shortage of people eager to share their chat transcripts, and in fact, an online database exists of nearly 200,000 of them. Mozilla has previously galvanized users to opt into sharing their web data to study online misinformation.

We also need stronger nationwide protections on data privacy, as well as the ability to opt out of targeted advertising, to protect us from the potential excesses of this kind of marketing. No one should be forcibly subjected to political advertising, LLM-generated or not, on the basis of their internet searches regarding private matters such as medical issues. In February, the European Parliament voted to limit political-ad targeting to only basic information, such as language and general location, within two months of an election. This stands in stark contrast to the U.S., which has for years failed to enact federal data-privacy regulations. Though the 2018 revelation of the Cambridge Analytica scandal led to billions of dollars in fines and settlements against Facebook, it has so far resulted in no substantial legislative action.

Transparency requirements like these are a first step toward oversight of future AI-assisted campaigns. Although we should aspire to more robust legal controls on campaign uses of AI, it seems implausible that these will be adopted in advance of the fast-approaching 2024 general presidential election.

Credit the RNC, at least, with disclosing that its recent ad was AI-generated—a transparent attempt at publicity still counts as transparency. But what will we do if the next viral AI-generated ad tries to pass as something more conventional?

As we are all being exposed to these rapidly evolving technologies for the first time and trying to understand their potential uses and effects, let’s push for the kind of basic transparency protection that will allow us to know what we’re dealing with.

Hollywood Is Nothing Without Writers

The Atlantic

www.theatlantic.com/ideas/archive/2023/04/writers-guild-of-america-strike-residuals-pay-streaming/673876

Early one morning in March, I was at the Holloway House in West Hollywood meeting a writer friend for breakfast. When I arrived the place was empty, but 90 minutes later, it was positively vibrating with anxious energy. Walking to the door on my way out, I could hear them, at table after table: my fellow writers. Pitching ideas for TV shows and arcs for feature films to junior executives and studio executives and independent producers. To anyone who might listen and possibly have the power to buy a script. The writers were leaving it all on the dance floor, as if their lives—or at least their livelihoods—depended on it.

If you’ve ever watched an American feature film or scripted television show or laughed at a late-night comic’s jokes, you’ve encountered work from members of the Writers Guild of America, the collective that represents entertainment writers’ interests. On Monday, the guild’s contract with the Alliance of Motion Picture and Television Producers—the association that represents streaming platforms and television networks—is set to expire. Writers, who have watched their income erode during the “streaming wars,” are prepared to strike if they don’t get some of what they want at the bargaining table.

Hollywood is always a little manic, but that day at the Holloway House was excessively so. It was a pitch-a-palooza, speed dating for “content.” A strike means that no one in the guild can write, sell, or even negotiate scripts; the last strike, in 2007 and 2008, shut Hollywood down for 100 days. The studios, or so the rumors had it, were anxious to stockpile scripts. The writers were anxious to stockpile cash.

Screenwriting is a “cool job,” but making it as a full-time writer in show business has never been harder, particularly if you’re doing the day-to-day work on the television programs we all love to binge. Breaking into the business has always been tough, but for generations, once you were through the gate with that hard-fought big break, there was a pathway to a creative middle-class career. One job as a script assistant for a network show that ran for 22 episodes could lead to a spot as a writer and then another as a story editor. The sheer volume of episodes would keep you employed for the bulk of the year. If you got lucky, you could ride residuals from reruns through dry spells. Write for a megahit like Seinfeld or Friends or The Office, and you could be contemplating retirement plans. Streaming has upended all of this.

The pivot to streaming, and the subsequent corporate consolidation, resulted in seismic changes: fewer buyers for content, shorter season orders for television programs, fewer feature films, the removal of older content from platforms, more licensed content from overseas. Above all, shows are being developed in new ways that have reduced the number of available jobs for screenwriters and cut into their salaries. Residuals, guild members say, have become nearly nonexistent. A handful of creators are raking in big bucks, but nearly half of all guild writers for TV write “at scale”—the writers’ equivalent of minimum wage—up from a third a decade ago.

“This career is turning from something that you do all year round, something you could depend on, to a gig career,” Jordan Carlos, a WGA writer-actor-comedian-podcaster and a soon-to-be-published author, told me. “And that is really tough.” It means you have to “patch together, at minimum, two or three writing jobs a year.”

That’s why Carlos, like many other screenwriters (myself included), has become so multihyphenate. If a podcast or Substack that you love happens to be written by a writer from one of your favorite shows, well, it’s no coincidence. “​​Hey,” Carlos said, “you got to dance. You got to hustle, man.”

Last week, I, along with the overwhelming majority of guild members, voted to authorize a strike. The vote doesn’t immediately trigger a strike, but it gives union leadership the authority to do so at any point after our contract expires. In addition to raising the pay scale, the guild’s platform of demands intends to correct for the ways in which evolving business practices have reduced writers’ incomes, as well as install some protections against the looming threat of artificial intelligence.

“I don’t know a single person who wants a strike,” Chris Hazzard, a screenwriter, told me. Yet everyone that I spoke with saw this moment as an existential crisis in the profession, and recognized that a strike might be the only way to remind Hollywood that, as Hazzard put it, “the whole thing starts with us.”

In the beginning, there is an idea. Of a housewife married to a Cubano band leader. Of a Black couple who made a small fortune in dry cleaning and moved from Queens to the Upper East Side. Of a New Jersey mob boss with depression. Of a nurse addicted to Percocet. Of an awkward Black girl and her friends navigating life in Los Angeles. Of a high-school girls’ soccer team stranded in the Canadian wilderness. An idea, and a question: What would happen if … ?

Days, weeks, months later, the person with the idea emerges, bleary-eyed but invigorated, having answered that question over 30, 60, 120 pages of magic called a script. I don’t say “magic” to be hyperbolic. I write articles; I write books; I have written some bad poems. But a script is magic because it is the only form of writing that, if it is alive and vivid enough, can come to physical, three-dimensional life.

It is also a persuasive document. It needs to persuade nervous executives to fight to buy it or fight to make it or fight to get it more money. It needs to persuade actors to sign on, along with other talented artists: directors, set designers, costume designers, composers.

The script is a pragmatic document. It can be read and broken down and evaluated by the costs associated with scenes and shoot days. It is a road map for a budget and a shooting schedule and a location scout. It sets the preliminary shopping list for a costume designer and a prop master.

And it is a living document: If an actor has issues with a line, or a guest star is out because of COVID, or you’ve hit your budget and need to lose a scene, the story has to change. Screenwriters are the ones who make it work. The script is the beginning. The script is the source.

As Hazzard said, it starts with us. That should be worth something, right? “If we write a script for a movie and that movie becomes successful, then that movie could turn into a theme-park ride and into a whole line of merchandise and kids’ toys and all these other things that writers see zero” profits from, Hazzard added. “People watch Shark Tank. They understand that if you write something and it becomes wildly successful, you should have a piece of that.”

“If I came out now trying to be a television writer,” Dailyn Rodriguez, a screenwriter and showrunner on shows such as The Lincoln Lawyer, told me, “I think I would pack up my shit and move back to New York. That’s how much the business has changed.”

Rodriguez got her start back in the early aughts after doing a fellowship funded by Disney. Her first job was in a writers’ room for The George Lopez Show; it was the first time she could pay her bills and afford her own apartment. She remembers that, during the 2007 strike, most people thought it was just a bunch of whiny writers on the picket line: “You rich little snots. What are you complaining about?”

They were complaining about the rise of the internet, and asking to be compensated for work sold or streamed online—a nascent practice at the time that the writers saw as a bellwether. The strike worked. After 2010, when Netflix transitioned to streaming and later began producing its own content, it was obliged to hire guild writers and pay them guild minimums and residuals. “If we hadn’t gone on strike,” Rodriguez told me, “we wouldn’t have gotten jurisdiction over the internet.”

The guild’s history of reminding large, profitable studio systems that their products depend on the skilled craft of a network of writers is very much on the minds of WGA members now.

The list of demands is complex, but top priorities include protecting writers’ work against AI, addressing the abuses of “mini-rooms,” and creating a better model for writers to share in the long-term profits generated from their creative labor.

Feature writers hired to create films for theatrical release, for example, are paid fees pegged to things like the delivery of the script to the studio and the theatrical release. A nervous development team that wants to perfect a script can force writers to spend weeks or even months revising while waiting to get paid. A last-minute decision to send the film straight to streaming could result in lower income for the same work. “Traditionally when you made a feature, it would come out in theaters and then it would go to pay-cable and planes, and then it would just get sold off to various different distributors,” Hazzard said. “And largely now with streamers, that just doesn’t exist. So when you make a movie for a streamer, in large part, you’re making whatever your up-front pay is, and then that’s kind of it.”

On the TV side, the big issue is the controversial use of mini-rooms. When a studio or streamer is excited about an idea but not totally sure it will work, rather than order a pilot—which could cost millions of dollars to produce—it will offer the creator a chance to put together a mini-room: a group of writers that will produce some scripts to “see where the series is going.” Because the show doesn’t necessarily exist yet, most writers are paid the minimum rate. If the show never goes anywhere, the writers walk away without a writing credit. If the show is green-lit, the studio has a season’s worth of usable scripts delivered at a fraction of what they would have cost for a show in production. The writers never see the difference.

These changes have been particularly devastating for writers of color, who made up about a quarter of TV writers in 2019. Almost all of the established screenwriters of color I know got their start through an entry-level “diversity spot” that the studios funded to encourage predominantly white showrunners to diversify their staff.

If you performed well, as Rodriguez did, you would go from script assistant to writer to story editor and so on. The rise of the mini-room has made this harder. Not only does the mini-room lack a diversity seat; it is also more likely to be staffed quickly, with the showrunner or creator throwing together a group of writers they are already familiar with—often their friends. This reliance on social networks often puts writers of color—many of whom are in an earlier stage of their career—at a disadvantage.

Rodriguez understands why studios see the mini-room as an expedient tool, but thinks it will cost them down the line. “Because of the way that these mini-rooms have divorced themselves from production, you have an entire generation of writers that are not getting experience on the set.” They are not “being trained to be the next generation of showrunners. And so that’s bad.”

My path to screenwriting was anomalous. When I was selling the rights to my debut novel—about a Nuyorican wedding planner from working-class South Brooklyn—I wanted to write the adaptation myself. I was advised that this might make it harder to sell, because I had no experience in screenwriting, but I wanted to take my chances. I felt fairly certain that it would be hard to find another screenwriter who understood the hyperspecific world of the book. Eventually the rights and my script were green-lit to pilot, which isn’t a guarantee that the show will get made, but it did secure my membership in the WGA. Suddenly I was a TV writer.

I got to see my script, which I had painstakingly worked on with the project’s director and co-producer, transformed into a budget, a schedule, paint colors for set pieces, shoes and handbags and watches for actors, water towers to create a “rainy day”—real-life, tangible things. Things that cost money. I learned quickly that my script needed to be flexible. Rewriting a daytime scene as a nighttime scene kept the shooting schedule on track; reusing a location saved an outrageous sum of money. On set I realized that screenwriting was not just an art, but a skilled craft with commercial implications.

And this aspect of the craft is in part what is being dismantled by the studios and what the WGA is trying to preserve—for the writers, for the studios, and for audiences.

No one wants a strike. No one wants L.A.’s economy to suffer or hardworking actors and crew members to take a hit. But stories don’t come from nowhere. This past March, Charlie Kaufman—who wrote films such as Being John Malkovich and Eternal Sunshine of the Spotless Mind—was honored at the WGA Awards. He used his moment in the spotlight as a rallying cry. “They’ve tricked us into thinking we can’t do it without them,” Kaufman said. “The truth is they can’t do anything of value without us.”

AI Is a Waste of Time

The Atlantic

www.theatlantic.com/ideas/archive/2023/04/ai-technology-productivity-time-wasting/673880

This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems. Sign up here to get it every week.

Last week, a TikTok user named Ghostwriter used AI voice-emulating technology to make a song that sounded like a collaboration between the artists Drake and The Weeknd. The result was surprisingly non-awful. The track blew up on social media, generating hundreds of thousands of listens, before several platforms took it down at the request of the Universal Music Group.

Naturally, the AI song triggered a spasm of panicked hermeneutics: What did this strange achievement in synthetic art mean?

Some observers took things in a dystopian direction. It didn’t take much to imagine a near future where fake songs and real songs intermingled, where, for every authentic Taylor Swift track, the internet was replete with hundreds, thousands, even millions of plausible Taylor Swift knockoffs. Inundated by AI, pop culture would descend into a disinformation hellscape.

Alternatively, one could lean into optimism. Ghostwriter (probably) isn’t one of the great musical geniuses of the world, yet here he had produced something catchy. If anonymous internet users can make bangers in their basement using AI, what does that mean for actual hitmakers? Researchers studying the introduction of AI in the game Go have found that the rise of superhuman machines has “improved human decision-making” as the top players have learned to incorporate the novel strategies of AI to become more creative players. Similarly, one could imagine the best songwriters in the world honing their skills with a superhuman co-writer.

But lately I’ve become a little bored by the utopia-dystopia dichotomy of the AI debate. What if writing a song and dubbing in celebrity voices doesn’t clearly point us toward a disinformation hellscape or a heaven of music-writing creativity? What if the ability to send media that make you sound like a celebrity to your friends is, fundamentally, just kind of neat? As the tech writer Ben Thompson has pointed out, artists like Grimes and Drake could stand to make a lot of money if they sold licenses of their AI-generated voices and let their fans share little songs with one another, provided that any money made from the music would be split between the original artist and the user. Sure, you might get some surprise bangers. But mostly, you’d get a lot of teenagers recording high-school gossip in the style and voice of Drake. That’s not dystopian or utopian. That’s just the latest funny way to waste time.

The time-wasting potential of AI has been on my mind recently, in no small part because my wife told me, in less-than-subtle terms: You are wasting too much time on AI. Midjourney, a program that turns written prompts into sumptuous images, has colonized my downtime and—don’t read this part, boss—my work time as well. I have successfully used it to “imagine” daguerreotypes of historical figures playing pickleball. I gave it an image of my living room and asked it to redecorate. I designed a series of beds in the style of Apple, Ferrari, and Picasso. Then I realized I could drop in URLs of online photos of my friends and ask the AI to render them as funny versions of themselves—my wife as a Pixar character, my best friend as a grizzled athlete, my neighbor as a regal centaur. After a week or so imagining alternate careers as a furniture designer or interior decorator, I settled on using Midjourney to make my friends laugh. Midjourney is glorious, yes; among other things, it is a glorious waste of time.

One might make similar observations about ChatGPT. It’s already co-writing code with software programmers, accelerating basic research, and formatting and writing papers, but I’m mostly playing around with it, like an open-ended textual video game. ChatGPT went viral last year, to the surprise of its founders at OpenAI, not only because tens of millions of people got a glimpse of the end of white-collar work but also because it’s an extraordinarily interesting game to test the limits of synthetic conversation. When you see screenshots of ChatGPT’s output on Instagram and Twitter, what you are watching is people wasting time amusingly.

Economists have a tendency to analyze new tech by imagining how it will immediately add to productivity and gross domestic product. What’s harder to model is the way that new technology—especially communications technology—might simultaneously save time and waste time, making us, paradoxically, both more and less productive. I used my laptop to research and write this article, and to procrastinate the writing of this article. The smartphone’s productivity-enhancing potential is obvious, and so is its productivity-destroying potential: The typical 20-something spends roughly seven hours a day on their phone, including more than five hours on social media, watching videos, or gaming.

We overlook the long-range importance of time-wasting technology in several ways. In 1994, the economists Sue Bowden and Avner Offer studied how various 20th-century technologies had spread among households. They concluded that “time-using” technologies (for example, TV and radio) diffused faster than “time-saving” technologies (vacuum cleaners, refrigerators, washing machines).

The reasons weren’t entirely clear. But Bowden and Offer’s most interesting explanation is that economists and technologists overrate how desperately people want to not be bored. Consumers will go to great lengths to escape the psychic burdens of sensory inactivity. Mid-century buyers got a radio, then a black-and-white TV, then a color TV, then a speaker system, then a VCR, and so on, sending an unmistakable signal to the producers of these machines that they had a nearly infinite demand for “higher doses of arousal per unit of time.”

To see AI as play, or as a distraction, or as a waste of time is not to say that AI will be entirely unproductive or benign. It’s to imagine, rather, that the AI-inflected future contains more texture than mere utopia or dystopia. In Wonderland: How Play Made the Modern World, the science and technology writer Steven Johnson says that “when human beings create and share experiences designed to delight or amaze, they often end up transforming society in more dramatic ways than people focused on more utilitarian concerns.” For example, the song sheets for self-playing pianos were essentially code for automatons. These code sheets helped establish the modern software industry. Rather than see games and work as opposites, we might try to see them as complements. The way we play with AI today might affect the way we work in ways that are impossible to anticipate.

In the utopia-dystopia dichotomy, advanced AI saves the world with scientific breakthroughs and fabulous wealth until the moment it destroys the world. The future goes: gold, gold, gold, death. Well, maybe. But if the past is any indication, the roads to gold and death will be paved with play and pockmarked with distractions. AI will waste a billion hours before it saves a billion hours. Before it kills us all, it will kill a lot of time.