
The Global Town Square Is in Ruins

The Atlantic

https://www.theatlantic.com/technology/archive/2023/10/social-media-infrastructure-news-algorithms/675614/

Social media has, once again, become the window through which the world is witnessing unspeakable violence and cruelty in an active war zone. Thousands of people, including children and the elderly, have been killed or injured in Israel and the Gaza Strip since Hamas launched its surprise attack on Saturday—you have probably seen the carnage yourself on X, TikTok, or Instagram.

These scenes are no less appalling for their familiarity. But they are familiar. As my colleague Kaitlyn Tiffany wrote last year, the history of war is a history of media. The Gulf War demonstrated the power of CNN and the 24/7 cable-news format, foreshadowing the way infotainment would permeate politics and culture for the next 20 years. A series of contentious election cycles from 2008 to 2020, as well as the Arab Spring, the Syrian civil war, and the rise of the Islamic State, showed how social-media platforms democratized punditry and journalism, for better and worse. Commentators were quick to dub Russia’s invasion of Ukraine the “first TikTok war,” as the internet filled with videos from Ukrainians documenting the horrors of war in profoundly personal, often surreal ways.

[Read: The myth of the “first TikTok war”]

If such conflicts are lenses through which we can understand an information environment, then ours, at present, is broken. It relies on badly maintained social-media infrastructure and is presided over by billionaires who have given up on the premise that their platforms should inform users. In the first days of the Israel-Hamas war, X's owner, Elon Musk, has himself interacted with doctored videos published to his platform. He has also explicitly endorsed accounts that are known to share false information and express vile anti-Semitism. In an interview with The New York Times, a Hamas official said that the organization has been exploiting the lack of moderation on X to post violent, graphic videos intended to terrorize Israeli citizens. Meanwhile, Adam Mosseri, the head of Instagram and the unofficial lead on the company's Twitter clone, Threads, has received requests from journalists, academics, and news junkies to make his product more useful for following the war. He has responded by saying that his team won't "amplify" news media on the platform: "To do so would be too risky given the maturity of the platform, the downsides of over-promising, and the stakes," he wrote. (Neither Meta nor X responded to requests for more information regarding their platforms' plans for handling conflict-related posts.)

These are new cracks in an already crumbling foundation, as major social platforms have grown less and less relevant in the past year. In response, some users have left for smaller competitors such as Bluesky or Mastodon. Some have simply left. The internet has never felt more dense, yet there seem to be fewer reliable avenues to find a signal in all the noise. One-stop information destinations such as Facebook or Twitter are a thing of the past. The global town square—once the aspirational destination that social-media platforms would offer to all of us—lies in ruins, its architecture choked by the vines and tangled vegetation of a wild informational jungle. This may be for the best in the long run, although the immediate effect for those of us still glued to these ailing platforms is one of complete chaos.

Their transformation has not been an accident. For nearly a year, Musk has worked to dismantle his site's previous architecture, including the platform's verification system for public figures, journalists among them. His antics and layoffs have hollowed out its trust-and-safety team. Now anyone can pay for a verification badge to make their posts more visible. (Some of the site's new blue-check users are scam artists or disinformation peddlers, a number of whom are passing off fake, old, or misleading footage as verified reports from Gaza.) Musk has also reinstated accounts that were banned for rules violations. And last week, in a spectacularly ill-timed move, the platform stripped auto-populating headlines from news stories; the result has been a substantial loss of legibility and the further erosion of trusted media sources on the platform. Musk has turned X into a deepfake version of Twitter—a facsimile of the once-useful social network, altered just enough to be disorienting, even terrifying.

Since 2018, Facebook and its parent company, Meta, have changed their news-feed algorithm to emphasize personal posts over news media. After the January 6 insurrection, the company deemphasized political news links from publishers; the move, according to The Wall Street Journal, caused an influx of complaints about misinformation. At the same time, Facebook’s user base began to erode, and the company’s transparency reports revealed that the most popular content circulating on the platform was little more than viral garbage—a vast wasteland of CBD promotional content and foreign tabloid clickbait. What’s left, across all platforms, is fragmented. News and punditry are everywhere online, but audiences are siloed; podcasts are more popular than ever, and millions of younger people online have turned to influencers and creators on Instagram and especially TikTok as trusted sources of news.

[Read: Should you delete your kid’s TikTok this week?]

The previous status quo was deeply flawed, of course. Social media, especially Twitter, has sometimes been an incredible news-gathering tool; it has also been terrible and inefficient, a game of do-your-own-research that involves batting away bullshit, parsing half-truths, hyperbole, and outright lies, and sifting out invaluable context from experts on the fly. Social media's greatest strength is thus its original sin: These sites are excellent at making you feel connected and informed, frequently at the expense of actually being informed. That's to say nothing of the psychological toll that comes from staring at the raw feed. I've personally witnessed beheadings and war crimes through my screen—an experience no person should endure merely to stay informed about the world.

The back-and-forth with Mosseri over news on Threads illustrates the awkwardness of the moment. Mosseri's position is reasonable enough, and there's a genuine cognitive dissonance in asking Meta—a company with an atrocious track record of having its platform used to foment political unrest and supercharge propaganda—to build a safe space for journalism. And yet it is also understandable that, in turbulent moments, we want something from the organizations that begged for our attention, monetized it, and, over time, influenced the way we found information. At the center of these pleas for a Twitter alternative is a feeling that a fundamental promise has been broken. In exchange for our time, our data, and even our well-being, we uploaded our most important conversations onto platforms designed for viral advertising—all under the implicit understanding that social media could provide an unparalleled window to the world.

Social media is not just a vector for information. Or misinformation. It’s a place to bear witness, to express solidarity, and to fight for change. All of that is harder now than it was just a year ago. What comes next is impossible to anticipate, but it’s worth considering the possibility that the centrality of social media as we’ve known it for the past 15 years has come to an end—that this particular window to the world is being slammed shut.

Should You Delete Your Kid’s TikTok This Week?

The Atlantic

https://www.theatlantic.com/technology/archive/2023/10/graphic-content-children-social-media-use/675619/

This week, a teenager might open up their TikTok feed and immediately be served a video about a hairbrush that promises to gently detangle the roughest of tangles. Or a clip about Travis Kelce and Taylor Swift’s rumored romance. Or the app could show them a scene from the Israeli Supernova music festival, where on Saturday a young woman named Noa Argamani was put on the back of a motorcycle as her boyfriend was held by captors.

Footage from Hamas’s surprise attack on Israel, and the retaliatory strikes it has prompted, is appearing in social-media feeds across the world. Videos about the conflict have drawn billions of views on TikTok alone, according to The Washington Post, and queries related to it have appeared in the app’s trending searches. Hamas reportedly posted the murder of one grandmother to her own Facebook page.

Hamas reportedly captured about 150 hostages and has threatened to execute them. Some schools in Israel and the United States have asked parents to preemptively delete social-media apps from their children's devices to protect them from the possibility of seeing clips in which hostages beg for their lives. "Together with other Jewish day schools, we are warning parents to disable social media apps such as Instagram, X, and TikTok from their children's phones," reads one such statement, posted by The Wall Street Journal's Joanna Stern. "Graphic and often misleading information is flowing freely, augmenting the fears of our students."

Parents have good reason to be concerned. Psychologists don’t fully know how watching graphic content online can affect kids. But “there’s enough circumstantial evidence suggesting that it’s not healthy from a mental-health standpoint,” Meredith Gasner, a psychologist at Boston Children’s Hospital, told me, citing research on the viral videos of George Floyd’s death in police custody.

Of course, kids have long been at risk of encountering disturbing or graphic content on social media. But the current era of single feeds serving short videos selected by algorithms, sometimes with little apparent logic, potentially changes the calculus. Firing up TikTok feels like pulling the lever of a content slot machine; every time a user opens the app, they don't necessarily know whether they'll find comedy or horror. Lots of kids are pulling the lever many times a day, sometimes spending hours in the app. Nor is this just a TikTok problem: Instagram and YouTube, among other platforms, have their own TikTok-like feeds. Much of the material on these platforms is benign, but in weeks like this one, when even adults may have trouble stomaching the visuals they encounter, the idea that children are all over social media is particularly unsettling.

If hostage videos appear, the social-media platforms are hypothetically in a position to prevent them from going viral. A spokesperson for TikTok did not respond to a request for comment, but the platform's community guidelines forbid use of the platform "to threaten or incite violence, or to promote violent extremism," and the website says that the company works to detect and remove such content. Instagram, for its part, also moderates "videos of intense, graphic violence" and has established a special-operations center staffed with experts to monitor the situation in Israel, a spokesperson for Meta said in an email. Both platforms offer safety tools for parents. Still, social-media platforms' track record on content moderation is abysmal. Some videos that upset children may still find their way onto the apps, especially those posted by reputable news outlets.

I talked to eight experts on children and the internet who told me that unilaterally deleting social-media apps might not work. For one thing, TikTok and Instagram videos are often cross-posted on other platforms, such as YouTube Shorts, so you'd have to delete a lot of apps to create a true bubble. (And even then, the bubble might not be impenetrable.) Kicking your teen off social media, even temporarily, may also feel like a punishment to a kid who did nothing wrong.

But that doesn’t mean that parents are helpless. A better approach, experts told me, is for parents to be more open and communicative with their kids. “Having that open dialogue is key because they’re not really going to be able to escape what’s going on,” Laura Ordoñez, head of digital content and curation at Common Sense Media, a nonprofit that advocates for a safer digital world for kids and families, told me. Even if children can avoid videos of violence, the realities those videos represent still exist.

Families with a direct connection to the region may have a tougher time navigating the next few days than those without one. And age matters a lot, the experts said. Younger kids, particularly those in second grade or below, should be protected from watching upsetting videos as much as possible, said Heather Kirkorian, the director of the Cognitive Development and Media Lab at the University of Wisconsin at Madison. They're too young to understand what's happening. "They don't have the cognitive and emotional skills to understand and process," she told me.

At those younger ages, parents can realistically keep kids in a bubble, away from certain platforms and sites. That's not to say children won't hear about the war at school or have questions about it. When discussing it with younger children, experts advise talking in kid-friendly language and, when appropriate, letting them know that they personally are safe. If the child is under 7, Ordoñez advises using "very simple and concrete explanations," such as "Someone was hurt" or "People are fighting." She also recommends that adults avoid watching or listening to the news in front of children, who may overhear material that upsets them.

For older children, quarantining them from life online is rarely plausible. If you do delete TikTok from their phone, kids may just download it again or find another way to view it—by, say, using another kid's device or a school computer. As Diana Graber, the author of Raising Humans in a Digital World, pointed out: "The minute you tell a child you can't look at something, guess what they're going to do?" Experts told me that a more productive approach is to ask kids questions about what they know, what they've seen, and how they feel. Warn them that the content they encounter may upset them, and talk to them about how it might affect them; Graber notes that a lot of kids these days are fluent in the language of mental health. If you've seen graphic content on your feeds, you can assume that your kid might see it, too. Julianna Miner, the author of Raising a Screen-Smart Kid, notes that "it's important to give your kids a heads up" and to "prepare them for what they might see." After that, you can "give them the choice of logging off or changing settings or taking some steps to potentially limit the types of things they could be exposed to." This way, you're on the same team.

In tense moments like this one, kids—like everyone else—are likely to encounter misinformation and disinformation, some of which began circulating even as the attacks were first being carried out. Bloomberg reported that a video from a different music festival in September was making the rounds on TikTok and had gotten almost 200,000 likes. For this reason, Sarita Schoenebeck, a professor at the University of Michigan who directs its Living Online Lab, recommends reminding kids that we don’t always know whether what we see online is real or fake.

In general, experts advise parents to personalize their approach to their children. Some kids are more sensitive than others, and parents know best what their kids can handle. More broadly, watch for signs that they're upset, which might look different depending on the child. One good rule of thumb Schoenebeck gives when advising parents on whether kids are ready for smartphones is to consider how well a child can self-regulate around technology. "When you say, 'Oh, time to turn the TV off!' or whatever, are they able to self-regulate and do that without having a fit?" she asked. Are they capable of getting through dinner without phones, or do they sneak a peek under the table? The same questions may show how ready kids are to self-regulate their social-media use in upsetting times.