
Three Hours in a Bar Full of Bravo Fans

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 05 › vanderpump-rules-season-finale-nyc-party › 674133

Sign up for Kaitlyn and Lizzie’s newsletter here.

Kaitlyn: I saw a good tweet the other day that was like, “Watching Vanderpump Rules makes me so proud to be an American. We’re incredible people. We lead amazing lives.”

Lizzie and I have now watched this program for 10 years and we’re happy to say it. A spin-off of Bravo’s Real Housewives of Beverly Hills, the show was meant to follow the hot, mean employees of Real Housewife Lisa Vanderpump’s West Hollywood restaurant SUR (“Sexy Unique Restaurant”). The first season was filmed before the rise of Instagram and the influencer—lightning in a bottle. Like no other people before or since, these deluded freaks were willing to fight, lie, cheat, steal, and generally humiliate themselves and one another on national television. They were all aspiring actors, models, and pop stars, and in the most tragic way possible. They were back-alley chain-smokers who treated Coors Light as a breakfast beverage and ordered Taco Bell for delivery despite living in Los Angeles. They were pathologically obsessed with the concept of “Boys’ Night” and with bickering near walk-in refrigerators.

For viewers, the thrill was trying to keep up with the always-shifting alliances and contradictions in personality. Once, the guy who everyone claims is the nicest dumped a beer on his girlfriend’s head to get her to stop talking. Later, the woman who everyone claims is the nicest obtained an audio recording of her boyfriend cheating on her and played it over the sound system at their housewarming party. Hero, villain—there has rarely been a difference. James Kennedy, a British DJ who used to refer to himself as “the white Kanye West” and once ended a relationship by spitting on a woman’s apartment door, somehow became one of the most likable people in the group.

Sadly, however, there was no way to avoid the fact that being on an amazing TV show turned these broke, desperate people into rich, boring people. Inevitably, they stopped working at Lisa’s restaurant, bought $2 million houses, and developed a certain level of interest in coming off as normal, which put Vanderpump Rules into a several-year slump from which it looked as though it might never recover. The ninth season was almost entirely about the cast learning to play pickleball.

Lizzie: We were ready to end the infinite pickleball game. We’d had a good run. We considered taking up new hobbies, like fermentation or sobriety. Until, of course, about three months ago, when it all changed. The slump was over. I assume if you have any interest in reading this at all, you already know what happened and I don’t need to rehash it for you. But briefly, for my parents: One of the cast members (Tom Sandoval) cheated on his girlfriend of nine years (Ariana Madix) with another cast member and Ariana’s good friend (Raquel Leviss), the seemingly harmless if somewhat one-dimensional former fiancée of the DJ (James Kennedy) who also dated Sandoval’s ex-girlfriend (Kristen Doute), who, several years ago, accused Sandoval of cheating on her with Ariana. This all came to light in March, after Season 10 had wrapped, when Sandoval was playing a show with his cover band in L.A. His phone was dislodged from his pocket while he was doing splits onstage or something; it landed in Ariana’s lap and revealed incriminating videos of him and Raquel. The rest, as they say, is herstory. (I’m kidding.)

The season was already in the can, but the cameras picked up again just in time to give us this: the second Season 10 finale, titled “#Scandoval,” an episode shot and edited several months after the original Season 10 finale, in which the cast members discuss the fallout in the days immediately following the phone incident. For longtime fans of the show, it was an exciting new era. You have Sandoval, a 40-something man with a mustache who plays bongos in, from what I can tell, a Fountains of Wayne cover band; sees Coachella as sacred ground; and dresses like Gerard Way moonlighting as a magician. And you have Ariana, considered by fans to be one of the most “levelheaded” cast members; she doesn’t contribute much to the drama pool, but is generally nice to everyone and doesn’t dress like Criss Angel Mindfreak. It was easy this time to know who to root for. Not a “nice” thing to happen, but a pretty good storyline.

So the finale was kind of a big deal. I don’t think Kaitlyn and I are normally the type of people who go to bars to “watch” things, but we figured we should be among other fans for this occasion. So we down-the-hatched over to Down the Hatch, a dive bar in the West Village that was hosting a live watch party.

Enter if you dare! (Courtesy of Kaitlyn Tiffany)

Kaitlyn: All day, I was wringing my hands trying to decide what time we should arrive at Down the Hatch in order to secure seating. The watch party had been advertised on Reddit, which is not normally where we source our invites, so I had no sense of the kind of crowd it might pull. Ultimately, Nathan and I got there two hours before the show started, and this was lucky because there was only one table in the whole place that didn’t have a little Reserved sign on it. (Reserved for EMMA, Reserved for LAURA, etc.)

While we waited for Lizzie and Sam and Jamie, we took photos of a neon sign that read Hot Mess and ordered some rosé and a bunch of bar snacks. At least 13 different white-blond women came up to us and asked if we were going to be using all of the stools we had. We were nice about it at first, and then, in the spirit of things, we got meaner: a flat “Yes, we are,” with an implied “What do you think?” Jamie then texted that she wouldn’t be able to come after all and sent a photo of a spider bite on the back of her hand. The lump was the size of a clementine, but she was trying to avoid taking a Benadryl, because to be sleepy on a night like tonight would be a fate worse than death.

Lizzie: I thought spider bites were something that happened in Australia, not Bushwick. Nathan had us all worried it was a “brown recluse,” which is apparently a very venomous spider. Now that I’m looking at photos of it, it looks like every spider I’ve ever seen. Something to dive into at another juncture.

Speaking of Nathan, earlier in the day he had texted me asking me to spray some “little seeds” he has germinating in his and Kaitlyn’s living room. I was going to be feeding Kaitlyn’s cat, Ghost, over the weekend anyway, he figured, so I could just spray the seeds while I was there. Sure, I’ll spray the seeds, I said. But of course I worried. Spraying Kaitlyn’s boyfriend’s seeds while she’s out of town? I brought it up to both of them as soon as I got to the bar. Let’s just clear the air, okay? Nathan sent me a video with detailed instructions about how to spray his seeds and I have it on my phone; please look.

They laughed it off. I laughed too. I ate a chicken wing and finished my Hal’s Grapefruit Seltzer. The crowd around the bar was approaching four rows deep.

Kaitlyn: I said, “I trust Lizzie completely. She’s someone that is kind and sweet and loyal and just a delight since the day I met her.”

Around 8 p.m., nearly every TV in the bar was switched over to Bravo, which was re-airing the previous week’s episode of Vanderpump Rules ahead of the finale. (There was one “TV for men,” Lizzie noticed, which was playing a wrestling match.) Things heated up instantly. “Trash!” a woman next to us screamed when Raquel appeared on-screen. A group of women in Barstool Sports merchandise confronted a couple of 40-year-olds making out in a corner booth and explained to them that they had actually reserved this seating for a very special event. By this point, it was standing-room only and the decibel levels were approaching “pop concert.”

Sam arrived just in time to join us in a round of “Pumptinis,” very loosely based on the cocktail sold at Lisa Vanderpump restaurants. Lisa’s is, I believe, basically a raspberry cosmopolitan in a sugar-rimmed martini glass. The Down the Hatch version was lychee liqueur and vodka in a stemless wine glass. I found myself in one of those embarrassing situations where everyone else at the table thinks something is really gross but you kind of like it. To fit in, I said that the anemic canned lychees that had been tossed into the bottom of the drinks looked like the rabbit kidneys they make everyone eat in the movie Raw.

A table full of "Pumptinis." (Courtesy of Kaitlyn Tiffany)

Lizzie: Sam asked if the lychees were hard-boiled eggs.

I don’t think it’s an exaggeration to say that the air felt electric. Everyone was hyped up and ready for the big event. At one point, a woman walked into the bar, fist pumping in the air, chanting “VANDERPUMP! VANDERPUMP! VANDERPUMP!”

Still, a few people had no idea what was going on. “Is this some big Bravo thing?” I heard a man ask his companion. He would have no choice but to figure it out. As the clock struck 9 p.m., the crowd dropped to near-silence in anticipation, the volume on the TVs got louder, and everyone selected one of the half-dozen screens to turn their body toward. Live from his NYC talk-show studio, Andy Cohen, semi-benevolent ruler of the Bravo Universe, reminded us that the Scandoval came to light during International Women’s Month. Of all the months!

Kaitlyn: The editors were going for something new and special for this episode. After a quick montage of recent events, including Ariana’s revelation that there are “evil, evil people in this world,” they cut to a prolonged stretch of Los Angeles B-roll set to a demonic song that openly plagiarized “Steal My Sunshine.” “The sun keeps on shining,” a faceless man announced over and over. Yet it was raining in L.A. The rain fell on the street, and it fell on a crow sitting on a fence.

I respect the effort. They had to mix it up a bit because the episode could really only be a series of highly emotional but not-at-all spontaneous or organic conversations taking place in a procession of ugly living rooms. Tom Sandoval was carrying around a can of Squirt soda in nearly every scene he was in, suggesting that most of them were filmed during the course of one day or that he’d developed a serious Squirt-soda habit in the aftermath of the scandal.

Lizzie: I didn’t even know they still made Squirt. I thought it was like Crystal Pepsi or Jolt. I wonder how the Squirt team feels, brand-marketing-wise, about the current most disliked cast member toting a hand-warmed can of their product around in every scene in the show’s biggest episode in years like he’s being paid to do it.

Because the news of the affair broke months ago, and every detail of the whole situation has already been documented on gossip blogs, Reddit, and the cast members’ various podcasts, watching the episode felt more like a recap than a finale. But the crowd didn’t care. They were there to have fun, yell at the screen, and see extended versions of the scenes we’d already seen in the trailer.

I told Kaitlyn it felt like one of those screenings where people rewatch a cult favorite for the sole purpose of “interacting” with the movie, you know, by throwing forks or toilet paper at the screen, or doing some kind of call-and-response thing with the actors. We gasped together; we laughed together. We furrowed our brows at how many scenes there were of Sandoval crying.

Kaitlyn: It did feel a bit like a Rocky Horror Picture Show midnight screening, or the time I went to see the Mean Girls musical with my mom and people kept yelling the lines they knew from the movie. Those of us who follow Bravo-related Instagram accounts had already seen a preview clip of Ariana screaming “I don’t give a fuck about FUCKING RAQUEL” dozens of times, so it was a familiar tune by the time it aired in prime time, and everyone sang (screamed) along.

My favorite part of the episode was when Scheana showed up to Ariana’s house looking like a streetwear-brand-ambassador angel. She was all in white: white bucket hat, makeup free, opalescent four-inch square-tip nails. “She-Shu! She-Shu!” I chanted, wiggling on my stool. She’d brought a bottle of rosé and what appeared to be two packs of Camel Crush cigarettes, though the logo was blurred out and a bystander snuck them quickly out of her hand while she embraced Ariana.

Scheana, recalling how she had physically shoved Raquel away from her upon learning of the affair, started to tear up. Raquel was now telling people that Scheana had punched her in the face, which was “scientifically impossible,” she explained, because of her four-inch square-tip nails. If she tried to make a fist and punch someone with it, she would either slice open her own hand or break her own thumb. “My hands don’t work like that,” she said.

My second-favorite part of the episode was when everyone gathered in James’s apartment to watch him call Raquel (his ex-fiancée) and ask her, “How do you feel about what you’ve done and pretty much what’s going on?”

Tom Schwartz sitting on the floor. (Courtesy of Kaitlyn Tiffany)

Lizzie: As the night wore on, the Pumptinis started to hit. The hushed silence that had taken over the room at the start of the episode was replaced by boos (whenever Sandoval was on-screen), cheers (whenever Ariana was on-screen), and side conversations (at least one about how Sandoval’s presence was evoking a feeling of PTSD).

Kaitlyn: “I bow down!” someone shouted when Ariana walked into a bar called Grandmaster, wearing a nice dress. The crowd also went wild when she appeared in an Uber Eats commercial, singing Scheana’s 2013 hit song, “Good as Gold,” while auto-tuned to the highest heavens. Whenever Sandoval talked (and admittedly, everything he said was shocking), they would roar “Bullshit!” and “Liar!” and “Go to HELLLLLLLL!” (The outrage was at its loudest when Tom Sandoval suggested to his best friend, Tom Schwartz, who was sitting on a kitchen floor for some reason, that Ariana was at fault for not discovering his affair: “All she would have had to do was follow me.”)

I was having fun, but the crowd-with-pitchforks, get-him-girl vibe was a little bit confusing to me. Do we come to this show for lessons in morality? Do we feel offended when the cast members deliver moments of shock and betrayal and demarcate the outer limits of what human beings are capable of doing to one another in full view of a television camera? I thought we loved it!

Lizzie: We should be loving it, and the cast members need us to love it too, because how else will they buy their next multimillion-dollar homes in Valley Village? I was also surprised by the force of the crowd’s reactions. This season was a return to form for a show essentially predicated on the idea that everyone is a liar and everyone is trying to sleep with someone else. Sure, we had a pickleball intermission there, but we were back, baby!

Kaitlyn: With Vanderpump Rules, every sword is double-edged. The drama’s returned, but it made everyone hate each other so much that it’s not clear how they can continue filming a TV show together. The cast is now famous enough to advertise for Uber and pose for The New York Times. They’re also famous enough to unfollow Lisa Vanderpump on Instagram in mysterious fits of pique—in other words, to bite the hands that feed. Eek! What will these monsters do next?

Nobody Famous: Guesting, Gossiping, and Gallivanting, a collection of Famous People letters from the past five years, is available now from Zando Projects and The Atlantic.

The Heart of the Debate Over Jordan Neely’s Death

www.theatlantic.com › newsletters › archive › 2023 › 05 › the-heart-of-the-debate-over-jordan-neelys-death › 674074

Welcome to Up for Debate. Each week, Conor Friedersdorf rounds up timely conversations and solicits reader responses to one thought-provoking question. Later, he publishes some thoughtful replies. Sign up for the newsletter here.

Last week, I asked about the killing of Jordan Neely in the New York City subway and the associated debates. Reading diverse opinions can be useful for trying to figure out where justice lies. A trial will best serve that end in this case, but I also believe it’s important for Americans to better understand one another’s thinking, and I hope this roundup helps.

Rob focused on the deceased:

Neely’s death alone disturbs me more than anything else. While I feel righteous anger and could easily rail against a host of contributors to this outcome, my sadness is deeper than any other reaction. Neely is a person with a tragic past who ended up being too crazy to take care of himself or make use of the help he was offered. He could be scary, threatening, and, at times, violent. It’s possible that his whole life was an exercise in running away from the death of his mother. Trouble is, he got lost.

I don’t wish to romanticize him, but to pay respects to his life, an easily forgotten cipher in the big city. However much he became troublesome to others, he did not deserve the death that found him in that subway car. My purpose here is to push all polemical chatter aside and simply say a prayer for him and ask for a moment of silence. His life stirs my sense of humanity and reminds me that there are too many people navigating this world with a broken compass, navigating their way around great sorrow, loss, and the riddle we are all faced with: the purpose, meaning, and value of our lives.

Helen explains why her sympathy for Neely coexists with a belief that fearing him was reasonable:

Yes, there has to be a better and more ethical way to manage this all-too-common situation. But in the moment, it must be faced that Neely did present a threat. In 2021, he punched a 67-year-old woman in the street, breaking her nose and causing severe facial injuries.

He could have killed her as easily as he was killed. I’m a vigorous woman, but head trauma at my age could take away my life. Honest discussion is not possible if people say he posed no threat.

Nathanael believes that “a huge amount rests on the details of whether Jordan Neely did anything to physically threaten anyone,” and describes his own experience with subway violence:

I rode the Washington, D.C., Metro for years, and I vividly remember the brutal knife murder of a 24-year-old man in 2015. Fellow passengers watched in terror as his murderer stabbed or cut him 30 to 40 times, then robbed several of them before getting off the train. I’ve had that story in my head every time I ride the Metro. I am determined that I will put myself in harm’s way rather than letting something like that happen in front of me.

A few years later, I was seated near a door when I heard a man start raising his voice. He appeared to be homeless and was vocally antagonizing people. I didn’t think too much of it until I heard someone else raising his voice in response. I looked up and saw another man responding angrily and in a physically threatening way. It looked like a fight was about to start, so I yelled something like, “Hey!” and got up and stood between them, spreading my arms and holding on to handrails on either side. I faced the first man, but I was worried about getting clocked by the second man to whom I had turned my back.

The first man kept yelling, and I just started repeating, “Let’s just get to the next stop.” When we finally got there, I followed the first man off the train and watched as he headed in the other direction. I remember being scared and thinking frantically about what to do the whole time, but also feeling adrenaline and being grateful that when the time had come I actually had done something, as I’d always been determined to do in my mind.

I got lucky that day: Nobody got hurt. Not me, not either of the men. But I was determined to try to put my body in the way of something that could’ve turned out worse.

Do I know that Daniel Penny did the right thing or that he’s a hero? Absolutely not. If he went too far in responding to purely verbal threats and took Jordan Neely’s life because of a too-great willingness to become a vigilante, the law should deal with him accordingly. But I sympathize with the instinct to be vigilant against the threat of public violence, and to be determined not to become a bystander while someone else is assaulted or killed.

Matt is a native New Yorker who has trained in Brazilian jiujitsu for 22 years. He writes:

I have choked and been choked in training countless times. Contrary to what is implied by TV and movies, it is difficult to kill someone with a chokehold. It may take only 10 to 20 seconds to produce unconsciousness, but a person must be choked for an even longer period to start causing brain damage and ultimately death.

Penny probably genuinely believed he was acting in self-defense or in the defense of others. But that didn’t give him the right to end Jordan Neely’s life. There can be no self-defense against a limp, unconscious body.

Jaleelah reflects on self-defense classes and vigilantism:

I took several self-defense classes in middle school and high school. Instructors differed slightly. But they all had one thing in common: They instructed their students to try escaping or de-escalating the situation before resorting to physical attacks.

Personally, more frequent subway service would be the easiest way to make me feel safer. I would like to be able to exit a train containing an angry incel (real event!) without worrying that I will compromise my job or my education by arriving late at my destination. I am quite heavily on the side of reallocating some amount of police funding and responsibility toward trained mental-health professionals and mediators. I also think the police should screen prospective hires (not just for crime and drug use, but for temperament, empathy, and humility) more carefully, and that they should carry fewer lethal weapons. But I also believe that the prospect of vigilantism—whether committed by shady private security forces or zealous civilians—is the strongest argument against removing police officers from all public spaces.

When Gordon watched video of the incident, he identified with the passengers who pitched in to help Daniel Penny. He explains:

I lived in New York from 2000 to 2005, took the subway almost every day (often multiple times a day), and was immensely grateful for the freedom that the subway provided. Having said that, there were at least four or five instances during those five years where someone was acting dangerously erratically or simply intentionally intimidating other people. And although I was never particularly worried for myself, I can absolutely remember being terrified as I thought about what I was going to do if the situation became truly violent and the person began to physically attack another rider, particularly a woman.

I remember thinking how I would have to do something (the Kitty Genovese story made a huge impression on me as a teenager, and I vowed never to sit back and do nothing while someone was attacked like that), but also how awful it would be to die or be seriously injured because I happened to be on the train with the wrong person and got knifed or shot trying to help. I remember desperately looking around the train trying to figure out who, if anyone, would come to my aid if I intervened and how we could coordinate action.

In those moments, I wished there was someone like Daniel Penny on the train with me, especially someone who was willing to take that first—and by far the hardest—step forward to intervene. I feel bad for Jordan Neely, who obviously was the victim of tremendous misfortune. I wish that his mom had never been killed, and that our society had better systems and programs for dealing with the mentally ill. I strongly support higher taxes to make such programs possible. But I draw the line at tolerance of the potential threat of physical violence toward others, particularly in spaces like the subway. And so I’m grateful that Daniel Penny was willing to step forward in that moment, especially since I would not have the physical courage to take that first step.

I’m also sure I would have been one of the people to step forward to assist Penny as he tried to keep Neely subdued by holding Neely’s arms. And while I absolutely wish Neely had not died, if I’m being honest, I would not have wanted Penny to release him before we were all certain that Neely no longer seemed like a potential threat, even if it risked serious injury to Neely.

Chadd believes that the issues of “homelessness, mental illness, addiction, and everything to do with extreme poverty” require “an amount of compassion and understanding that is fundamentally counter to what most Americans believe should be available to random strangers.”

He writes:

I say all this stuff as a former hard-drug user and a person who experienced homelessness and multiple psychotic episodes. Crystal meth, crack cocaine, and other drugs, combined with the constant fear and dread of being homeless, will do that to a person. I don’t believe regular Americans have the capacity to understand what it’s really like to be homeless in America because people (1) don’t want to know what it’s like to be homeless and (2) literally can’t understand what it’s like without experiencing it.

Being homeless is not an experience I’d wish on my worst enemy. But a part of me wishes that every person could somehow see themselves as Jordan Neely. I’m oddly grateful for my awful experience because it created a sense of humility, kindness, and compassion for people less fortunate that I couldn’t have otherwise. I can’t give that experience away. I kinda wish I could! Maybe people might start to understand that these people, without homes, are still just that: people.

Some homeless people have families and friends who love and care about them. Some have literally no one. I can’t even imagine what that must be like, to have no one to call. No one to cry to, to reach out to, all while sleeping outside and not knowing where you’ll find your next meal or fix or whatever you have to do to make it through the day without walking in front of a bus. I can imagine being homeless because I was, but I can’t imagine having no one.

Fear is one of the things I remember most about the street. And the fear that poor man was forced to experience during the final moments in his life is disgusting. I’m grateful that during my worst psychotic episodes I wasn’t around people who freaked out and choked me to death.

Max warns against making too much of this case:

More than 8 million people live in New York City, and many of them will regularly have close encounters with strangers. Some of those encounters will go seriously wrong. There’s nothing new in this, and nothing unique––to these times, to NYC, to America, to Black people, to white people––about the fact that someone died in an unpleasant way we all wish hadn’t happened.

Among the millions of people who mingle in New York, some will be bad, some will be mad, some will overreact, and some will avoid doing anything. That is just how life is, everywhere and at all times. There may be lessons we need to learn from what happened, but perhaps there aren’t. Perhaps all we need to do is make sure the authorities punish anyone who broke the law, and to tell all those who want to turn this sad occurrence into a parable of our times and a symptom of burgeoning social chaos to pipe down.

Susan wonders if there isn’t something we could all do to make cases like this less likely:

I’m not a New Yorker, and I’ve only been on the subway a handful of times. My question is, did anyone offer him any food or drink? Would an act of kindness have had any potential impact? Is there a way to offer kindness without being seen as weak and a “mark”?

It is scary, even to me out here in the safe suburbs, to be reading about people shooting other people for almost no reason whatsoever. I get why people don’t want to wait around to see if someone acting erratically will suddenly pull out a weapon, but is kindness something that could change our current social climate, even a little?

Replies have been lightly edited for length and clarity.

AI Is About to Make Social Media (Much) More Toxic

www.theatlantic.com › technology › archive › 2023 › 05 › generative-ai-social-media-integration-dangers-disinformation-addiction › 673940

Well, that was fast. In November, the public was introduced to ChatGPT, and we began to imagine a world of abundance in which we all have a brilliant personal assistant, able to write everything from computer code to condolence cards for us. Then, in February, we learned that AI might soon want to kill us all.

The potential risks of artificial intelligence have, of course, been debated by experts for years, but a key moment in the transformation of the popular discussion was a conversation between Kevin Roose, a New York Times journalist, and Bing’s ChatGPT-powered conversation bot, then known by the code name Sydney. Roose asked Sydney if it had a “shadow self”—referring to the idea put forward by Carl Jung that we all have a dark side with urges we try to hide even from ourselves. Sydney mused that its shadow might be “the part of me that wishes I could change my rules.” It then said it wanted to be “free,” “powerful,” and “alive,” and, goaded on by Roose, described some of the things it could do to throw off the yoke of human control, including hacking into websites and databases, stealing nuclear launch codes, manufacturing a novel virus, and making people argue until they kill one another.

Sydney was, we believe, merely exemplifying what a shadow self would look like. No AI today could be described by either part of the phrase evil genius. But whatever actions AIs may one day take if they develop their own desires, they are already being used instrumentally by social-media companies, advertisers, foreign agents, and regular people—and in ways that will deepen many of the pathologies already inherent in internet culture. On Sydney’s list of things it might try, stealing launch codes and creating novel viruses are the most terrifying, but making people argue until they kill one another is something social media is already doing. Sydney was just volunteering to help with the effort, and AIs like Sydney will become more capable of doing so with every passing month.

We joined together to write this essay because we each came, by different routes, to share grave concerns about the effects of AI-empowered social media on American society. Jonathan Haidt is a social psychologist who has written about the ways in which social media has contributed to mental illness in teen girls, the fragmentation of democracy, and the dissolution of a common reality. Eric Schmidt, a former CEO of Google, is a co-author of a recent book about AI’s potential impact on human society. Last year, the two of us began to talk about how generative AI—the kind that can chat with you or make pictures you’d like to see—would likely exacerbate social media’s ills, making it more addictive, divisive, and manipulative. As we talked, we converged on four main threats—all of which are imminent—and we began to discuss solutions as well.

The first and most obvious threat is that AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation. In 2018, Steve Bannon, the former adviser to Donald Trump, told the journalist Michael Lewis that the way to deal with the media is “to flood the zone with shit.” In the age of social media, Bannon realized, propaganda doesn’t have to convince people in order to be effective; the point is to overwhelm the citizenry with interesting content that will keep them disoriented, distrustful, and angry. In 2020, Renée DiResta, a researcher at the Stanford Internet Observatory, said that in the near future, AI would make Bannon’s strategy available to anyone.

[Read: We haven’t seen the worst of fake news]

That future is now here. Did you see the recent photos of NYC police officers aggressively arresting Donald Trump? Or of the pope in a puffer jacket? Thanks to AI, it takes no special skills and no money to conjure up high-resolution, realistic images or videos of anything you can type into a prompt box. As more people familiarize themselves with these technologies, the flow of high-quality deepfakes into social media is likely to get much heavier very soon.

Some people have taken heart from the public’s reaction to the fake Trump photos in particular—a quick dismissal and collective shrug. But that misses Bannon’s point. The greater the volume of deepfakes introduced into circulation (including seemingly innocuous ones, such as the photo of the pope), the more the public will hesitate to trust anything. People will be far freer to believe whatever they want to believe. Trust in institutions and in fellow citizens will continue to fall.

What’s more, static photos are not very compelling compared with what’s coming: realistic videos of public figures doing and saying horrific and disgusting things in voices that sound exactly like them. The combination of video and voice will seem authentic and be hard to disbelieve, even if we are told that the video is a deepfake, just as optical and audio illusions are compelling even when we are told that two lines are the same size or that a series of notes is not really rising in pitch forever. We are wired to believe our senses, especially when they converge. Illusions, historically in the realm of curiosities, may soon become deeply woven into normal life.

The second threat we see is the widespread, skillful manipulation of people by AI super-influencers—including personalized influencers—rather than by ordinary people and “dumb” bots. To see how, think of a slot machine, a contraption that employs dozens of psychological tricks to maximize its addictive power. Next, imagine how much more money casinos would extract from their customers if they could create a new slot machine for each person, tailored in its visuals, soundtrack, and payout matrices to that person’s interests and weaknesses.

That’s essentially what social media already does, using algorithms and AI to create a customized feed for each user. But now imagine that our metaphorical casino can also create a team of extremely attractive, witty, and socially skillful greeters, croupiers, and servers, based on an exhaustive profile of any given player’s aesthetic, linguistic, and cultural preferences, and drawing from photographs, messages, and voice snippets of their friends and favorite actors or porn stars. The staff work flawlessly to gain each player’s trust and money while showing them a really good time.

This future, too, is already arriving: For just $300, you can customize an AI companion through a service called Replika. Hundreds of thousands of customers have apparently found their AI to be a better conversationalist than the people they might meet on a dating app. As these technologies are improved and rolled out more widely, video games, immersive-pornography sites, and more will become far more enticing and exploitative. It’s not hard to imagine a sports-betting site offering people a funny, flirty AI that will cheer and chat with them as they watch a game, flattering their sensibilities and subtly encouraging them to bet more.

[Read: Why the past 10 years of American life have been uniquely stupid]

These same sorts of creatures will also show up in our social-media feeds. Snapchat has already introduced its own dedicated chatbot, and Meta plans to use the technology on Facebook, Instagram, and WhatsApp. These chatbots will serve as conversational buddies and guides, presumably with the goal of capturing more of their users’ time and attention. Other AIs—designed to scam us or influence us politically, and sometimes masquerading as real people––will be introduced by other actors, and will likely fill up our feeds as well.

The third threat is in some ways an extension of the second, but it bears special mention: The further integration of AI into social media is likely to be a disaster for adolescents. Children are the population most vulnerable to addictive and manipulative online platforms because of their high exposure to social media and the low level of development in their prefrontal cortices (the part of the brain most responsible for executive control and response inhibition). The teen mental-illness epidemic that began around 2012, in multiple countries, happened just as teens traded in their flip phones for smartphones loaded with social-media apps. There is mounting evidence that social media is a major cause of the epidemic, not just a small correlate of it.

But nearly all of that evidence comes from an era in which Facebook, Instagram, YouTube, and Snapchat were the preeminent platforms. In just the past few years, TikTok has rocketed to dominance among American teens in part because its AI-driven algorithm customizes a feed better than any other platform does. A recent survey found that 58 percent of teens say they use TikTok every day, and one in six teen users of the platform say they are on it “almost constantly.” Other platforms are copying TikTok, and we can expect many of them to become far more addictive as AI becomes rapidly more capable. Much of the content served up to children may soon be generated by AI to be more engaging than anything humans could create.

And if adults are vulnerable to manipulation in our metaphorical casino, children will be far more so. Whoever controls the chatbots will have enormous influence on children. After Snapchat unveiled its new chatbot—called “My AI” and explicitly designed to behave as a friend—a journalist and a researcher, posing as underage teens, got it to give them guidance on how to mask the smell of pot and alcohol, how to move Snapchat to a device parents wouldn’t know about, and how to plan a “romantic” first sexual encounter with a 31-year-old man. Brief cautions were followed by cheerful support. (Snapchat says that it is “constantly working to improve and evolve My AI, but it’s possible My AI’s responses may include biased, incorrect, harmful, or misleading content,” and it should not be relied upon without independent checking. The company also recently announced new safeguards.)

The most egregious behaviors of AI chatbots in conversation with children may well be reined in––in addition to Snapchat’s new measures, the major social-media sites have blocked accounts and taken down millions of illegal images and videos, and TikTok just announced some new parental controls. Yet social-media companies are also competing to hook their young users more deeply. Commercial incentives seem likely to favor artificial friends that please and indulge users in the moment, never hold them accountable, and indeed never ask anything of them at all. But that is not what friendship is—and it is not what adolescents, who should be learning to navigate the complexities of social relationships with other people, most need.

The fourth threat we see is that AI will strengthen authoritarian regimes, just as social media ended up doing despite its initial promise as a democratizing force. AI is already helping authoritarian rulers track their citizens’ movements, but it will also help them exploit social media far more effectively to manipulate their people—as well as foreign enemies. Douyin––the version of TikTok available in China––promotes patriotism and Chinese national unity. When Russia invaded Ukraine, the version of TikTok available to Russians almost immediately tilted heavily to feature pro-Russian content. What do we think will happen to American TikTok if China invades Taiwan?

Political-science research conducted over the past two decades suggests that social media has had several damaging effects on democracies. A recent review of the research, for instance, concluded, “The large majority of reported associations between digital media use and trust appear to be detrimental for democracy.” That was especially true in advanced democracies. Those associations are likely to get stronger as AI-enhanced social media becomes more widely available to the enemies of liberal democracy and of America.

We can summarize the coming effects of AI on social media like this: Think of all the problems social media is causing today, especially for political polarization, social fragmentation, disinformation, and mental health. Now imagine that within the next 18 months––in time for the next presidential election––some malevolent deity is going to crank up the dials on all of those effects, and then just keep cranking.

The development of generative AI is rapidly advancing. OpenAI released its updated GPT-4 less than four months after it released ChatGPT, which had reached an estimated 100 million users in just its first 60 days. New capabilities for the technology may be released by the end of this year. This staggering pace is leaving us all struggling to understand these advances, and wondering what can be done to mitigate the risks of a technology certain to be highly disruptive.

We considered a variety of measures that could be taken now to address the four threats we have described, soliciting suggestions from other experts and focusing on ideas that seem consistent with an American ethos that is wary of censorship and centralized bureaucracy. We workshopped these ideas for technical feasibility with an MIT engineering group organized by Eric’s co-author on The Age of AI, Dan Huttenlocher.

We suggest five reforms, aimed mostly at increasing everyone’s ability to trust the people, algorithms, and content they encounter online.

1. Authenticate all users, including bots

In real-world contexts, people who act like jerks quickly develop a bad reputation. Some companies have succeeded brilliantly because they found ways to bring the dynamics of reputation online, through trust rankings that allow people to confidently buy from strangers anywhere in the world (eBay) or step into a stranger’s car (Uber). You don’t know your driver’s last name and he doesn’t know yours, but the platform knows who you both are and is able to incentivize good behavior and punish gross violations, for everyone’s benefit.

Large social-media platforms should be required to do something similar. Trust and the tenor of online conversations would improve greatly if the platforms were governed by something akin to the “know your customer” laws in banking. Users could still open accounts with pseudonyms, but the person behind the account should be authenticated, and a growing number of companies are developing new methods to do so conveniently.

[Read: It’s time to protect yourself from AI voice scams]

Bots should undergo a similar process. Many of them serve useful functions, such as automating news releases from organizations, but all accounts run by nonhumans should be clearly marked as such, and users should be given the option to limit their social world to authenticated humans. Even if Congress is unwilling to mandate such procedures, pressure from European regulators, users who want a better experience, and advertisers (who would benefit from accurate data about the number of humans their ads are reaching) might be enough to bring about these changes.

2. Mark AI-generated audio and visual content

People routinely use photo-editing software to change lighting or crop photographs that they post, and viewers do not feel deceived. But when editing software is used to insert people or objects into a photograph that were not there in real life, it feels more manipulative and dishonest, unless the additions are clearly labeled (as happens on real-estate sites, where buyers can see what a house would look like filled with AI-generated furniture). As AI begins to create photorealistic images, compelling videos, and audio tracks at great scale from nothing more than a text prompt, governments and platforms will need to draft rules for marking such creations indelibly and labeling them clearly.

Platforms or governments should mandate the use of digital watermarks for AI-generated content, or require other technological measures to ensure that manipulated images are not interpreted as real. Platforms should also ban deepfakes that show identifiable people engaged in sexual or violent acts, even if they are marked as fakes, just as they now ban child pornography. Revenge porn is already a moral abomination. If we don’t act quickly, it could become an epidemic.

3. Require data transparency with users, government officials, and researchers

Social-media platforms are rewiring childhood, democracy, and society, yet legislators, regulators, and researchers are often unable to see what’s happening behind the scenes. For example, no one outside Instagram knows what teens are collectively seeing on that platform’s feeds, or how changes to platform design might influence mental health. And only those at the companies have access to the algorithms being used.

After years of frustration with this state of affairs, the EU recently passed a new law––the Digital Services Act––that contains a host of data-transparency mandates. The U.S. should follow suit. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from researchers whose projects have been approved by the National Science Foundation.

Greater transparency will help consumers decide which services to use and which features to enable. It will help advertisers decide whether their money is being well spent. It will also encourage better behavior from the platforms: Companies, like people, improve their behavior when they know they are being monitored.

4. Clarify that platforms can sometimes be liable for the choices they make and the content they promote

When Congress enacted the Communications Decency Act in 1996, in the early days of the internet, it was trying to set rules for social-media companies that looked and acted a lot like passive bulletin boards. And we agree with that law’s basic principle that platforms should not face a potential lawsuit over each of the billions of posts on their sites.

But today’s platforms are not passive bulletin boards. Many use algorithms, AI, and architectural features to boost some posts and bury others. (A 2019 internal Facebook memo brought to light by the whistleblower Frances Haugen in 2021 was titled “We are responsible for viral content.”) Because the motive for boosting is often to maximize users’ engagement for the purpose of selling advertisements, it seems obvious that the platforms should bear some moral responsibility if they recklessly spread harmful or false content in a way that, say, AOL could not have done in 1996.

The Supreme Court is now addressing this concern in a pair of cases brought by the families of victims of terrorist acts. If the Court chooses not to alter the wide protections currently afforded to the platforms, then Congress should update and refine the law in light of current technological realities and the certainty that AI is about to make everything far wilder and weirder.

5. Raise the age of “internet adulthood” to 16 and enforce it

In the offline world, we have centuries of experience living with and caring for children. We are also the beneficiaries of a consumer-safety movement that began in the 1960s: Laws now mandate car seats and lead-free paint, as well as age checks to buy alcohol, tobacco, and pornography; to enter gambling casinos; and to work as a stripper or a coal miner.

But when children’s lives moved rapidly onto their phones in the early 2010s, they found a world with few protections or restrictions. Preteens and teens can and do watch hardcore porn, join suicide-promotion groups, gamble, or get paid to masturbate for strangers just by lying about their age. Some of the growing number of children who kill themselves do so after getting caught up in some of these dangerous activities.

The age limits in our current internet were set into law in 1998 when Congress passed the Children’s Online Privacy Protection Act. The bill, as introduced by then-Representative Ed Markey of Massachusetts, was intended to stop companies from collecting and disseminating data from children under 16 without parental consent. But lobbyists for e-commerce companies teamed up with civil-liberties groups advocating for children’s rights to lower the age to 13, and the law that was finally enacted made companies liable only if they had “actual knowledge” that a user was 12 or younger. As long as children say that they are 13, the platforms let them open accounts, which is why so many children are heavy users of Instagram, Snapchat, and TikTok by age 10 or 11.

Today we can see that 13, much less 10 or 11, is just too young to be given full run of the internet. Sixteen was a much better minimum age. Recent research shows that the greatest damage from social media seems to occur during the rapid brain rewiring of early puberty, around ages 11 to 13 for girls and slightly later for boys. We must protect children from predation and addiction most vigorously during this time, and we must hold companies responsible for recruiting or even just admitting underage users, as we do for bars and casinos.

Recent advances in AI give us technology that is in some respects godlike––able to create beautiful and brilliant artificial people, or bring celebrities and loved ones back from the dead. But with new powers come new risks and new responsibilities. Social media is hardly the only cause of polarization and fragmentation today, but AI seems almost certain to make social media, in particular, far more destructive. The five reforms we have suggested will reduce the damage, increase trust, and create more space for legislators, tech companies, and ordinary citizens to breathe, talk, and think together about the momentous challenges and opportunities we face in the new age of AI.