Beyond Doomscrolling

The Atlantic

The image that really got me on social media this week was a faded photo of a man and woman, standing on what looks like the front steps of their home. It’s a candid shot—both are focusing their attention on an infant cradled in the mother’s arm. It is likely one of the first photos of a new family, and the caption broke my heart: “This photo was blown into our yard during the Eaton Canyon fire. Anyone from Pasadena/Altadena recognize these people?”

The picture is perfectly intact, not singed or torn, yet it seems to represent an entire universe of loss. Staring at the photo, a piece of family history scattered by the same winds that fuel the Los Angeles fires, you can just begin to see the contours of what is gone. The kind of grief that cannot be inventoried in an insurance claim.

And then you scroll. A satellite photo of a charred, leveled neighborhood sits beside someone’s career news. On Instagram, I see a GoFundMe for a woman who is nine months pregnant and just lost her house; it’s followed immediately by someone else’s ebullient ski-vacation photos and a skin-care advertisement. I proceed through the “For You” feed on X and find Elon Musk replying to a video in which Alex Jones claims the fires are part of a globalist plot to ruin the United States (“True,” Musk replied) and blaming the fires on DEI initiatives; then a shitpost about Meta’s content-moderation changes (“On my way to comment ‘retard’ on every facebook post,” it reads, with 297,000 views). I scroll again: “Celebrities Reveal How They REALLY Feel About Kelly Clarkson,” another post teases. This is followed by a post about a new red-flag warning in L.A.: The fire is not relenting.

[Read: The unfightable fire]

To watch the destruction in Los Angeles through the prism of our fractured social-media ecosystem is to feel acutely disoriented. The country is burning; your friends are going on vacation; next week Donald Trump will be president; the government is setting the fires to stage a “land grab”; a new cannabis-infused drink will help you “crush” Dry January. Mutual-aid posts stand alongside those from climate denialists and doomers. Stay online long enough and it’s easy to get a sense that the world is simultaneously ending and somehow indifferent to that fact. It all feels ridiculous. A viral post suggests that “climate change will manifest as a series of disasters viewed through phones with footage that gets closer and closer to where you live until you’re the one filming it.” You scroll some more and learn that the author of that post wrote the line while on the toilet (though the author has since deleted the confession).

Call it doomscrolling, gawking, bearing witness, or whatever you want, but there is an irresistible pull in moments of disaster to consume information. This is coupled with the bone-deep realization that the experience of staring at our devices while others suffer rarely provides the solidarity one might hope for. Amanda Hess captured this distinctly modern feeling in a 2023 article about watching footage of dead Gazan children on Instagram: “I am not a survivor or a responder. I’m a witness, or a voyeur. The distress I am feeling is shame.”

For those on the ground, these networks mean something different. These people do not need to bear witness: They need specific information about their circumstances, and they need help. But the chaos of our social platforms and the splintered nature of a hollowed-out media industry extend the disorientation to them as well. “This time, I’m a civilian,” Matt Pearce, a Los Angeles–based journalist, wrote last week. “And this time, the user experience of getting information about a disaster unfolding around me was dogshit.” Anna Merlan, a reporter for Mother Jones, chronicled the experience of sifting through countless conspiracy theories and false-flag posts while watching the fires encroach on her home and packing her car to evacuate.

As I read these dispatches and watch helplessly from afar, the phrase time on site bangs around in my head. This is the metric that social-media companies optimize for, and it means what it sounds like: the amount of time that people spend on these apps. In recent years, there has been much hand-wringing over how much time users are spending on site; tech-industry veterans such as Tristan Harris have made lucrative second careers warning of the addictive, exploitative nature of tech platforms and their algorithms. Harris’s crusade began in 2016, when he suggested a healthier metric of “time well spent,” which sought to reverse the “digital attention crisis.” This became its own kind of metric, adopted by Mark Zuckerberg in 2018 as Facebook’s north star for user satisfaction. Since then, the phrase has fallen out of favor. Harris rebranded his effort away from time well spent to a focus on “humane” technology.

But the worries persist. Parents obsess over the vague metric of “screen time,” while researchers write best-selling books and debate what, exactly, phones and social media are doing to kids and how to prove it. American politicians are so worried about time on site—especially when its by-product, metadata, is being collected by foreign governments—that the United States may very well ban TikTok, an app used by roughly one-third of the country’s adults. (In protest, many users have simply started spending time on another Chinese site, Xiaohongshu.) Many people suspect that time on site can’t be good for us, yet time on site is also how many of us learn about the world, form communities, and entertain ourselves. The experience of logging on and consuming information through the algorithmic morass of our feeds has never felt more dispiriting, commoditized, chaotic, and unhelpful than it does right now.

[Read: No one knows exactly what social media is doing to teens]

It is useful, then, to juxtapose this information ecosystem—one that’s largely governed by culture-warring tech executives and populated by attention seekers—with a true technological public good. Last week, I downloaded Watch Duty, a free app that provides evacuation notices, up-to-date fire maps, and information such as wind direction and air-quality alerts. The app, launched in 2021 after fires ravaged Sonoma County, California, has become a crucial piece of information infrastructure for L.A. residents and first responders. It is run by a nonprofit as a public service, with volunteer reporters and full-time staff who help vet information. Millions have downloaded the app just this month.

Watch Duty appears to be saving lives at a time when local-government services have been less than reliable, sending out incorrect evacuation notices to residents. It is a shining example of technology at its best and most useful, and so I was struck by something one of its co-founders, David Merritt, told The Verge over the weekend: “We don’t want you to spend time in the app,” he said. “You get information and get out. We have the option of adding more photos, but we limit those to the ones that provide different views of a fire we have been tracking. We don’t want people doom scrolling.” This, he rightly argues, is “the antithesis of what a lot of tech does.”

The contrast between Watch Duty and broad swaths of the internet feels especially stark in the early days of 2025. The toxic incentives and environments of our other apps are as visible as ever, and the men behind these services—Musk and Zuckerberg especially—seem intent on making the experience of using them worse than ever. It’s all in service of engagement, of more time on site. Musk, who has transformed X into a Superfund site of conspiracy theorizing, crypto ads, hateful posts, and low-rent memes, has been vehement that he wants his users to come to the platform and never leave. He has allegedly deprioritized hyperlinks that would take people away from the platform to other sites. (Musk did not deny that this is happening when confronted by Paul Graham, a Y Combinator co-founder.) He has his own name for the metric he wants X to optimize for: unregretted user seconds.

Zuckerberg recently announced his own version of the Muskian playbook, which seeks to turn his Meta platforms into a more lawless posting zone, including getting rid of fact-checkers and turning off its automated moderation systems on all content but “illegal and high-severity violations.” That system kept spam and disinformation content from flooding the platform. Make no mistake: This, too, is its own play for time on site. In an interview last month with the Financial Times, a Meta executive revealed that the company plans to experiment with introducing generative-AI-powered chatbots that behave like regular users into its services. Connor Hayes, vice president of product for generative AI at Meta, said that this feature—which, I should add, nobody asked for—is a “priority” for the company over the next two years. This is supposed to align with another goal, which is to make its apps “more entertaining and engaging.”

This should feel more than disheartening for anyone who cares about or still believes in the promise of the internet and technology to broaden our worldview, increase resilience, and expose us to the version of humanity that is always worth helping and saving. Spending time on site has arguably never felt this bad; the forecast suggests that it will only get worse.

In recent days, I’ve been revisiting some of the work of the climate futurist Alex Steffen, who has a knack for putting language to our planetary crisis. The unprecedented disasters that appear now with more frequency are an example of discontinuity, where “past experience loses its value as a guide to decision-making about the future.” Steffen argues that we have no choice but to adapt to this reality and anticipate how we’ll survive it. He offers no panaceas or bromides. The climate crisis will come for each of us, but will affect us unevenly. We are not all in this together, he argues. But action is needed—specifically, proactive fixes that make our broken systems more effective and durable.

Clearly our information systems are in need of such work. They feel like they were built for a world we no longer inhabit. Most of them are run by billionaires who can afford to insulate themselves from reality, at least for now. I don’t see an end to the discontinuity or brokenness of our internet. But there are glimpses of resilience. Maybe platforms like Watch Duty offer a template. “I don’t want to sell this,” John Clarke Mills, the company’s CEO, told The Hollywood Reporter on Monday. He went further: “No one should own this. The fact that I have to do this with my team is not OK. Part of this is out of spite. I’m angry that I’m here having to do this, and the government hasn’t spent the money to do this themselves.” Mills’s anger is righteous, but it could also be instructive. Instead of building things that make us feel powerless, Mills is building tools that give people information that can be turned into agency.

There’s no tidy conclusion to any of this. There is loss, fear, anger, but also hope. Days later, I went to check back on the post that contained that photo of the man and woman with a child. I’d hoped that the internet would work its magic to reunite the photo with those who’d lost it. Throughout the replies are people trying to signal-boost the post. In one reply, a local news producer asks for permission to do a story about the photograph. Another person thinks they have a lead on the family. So far, there’s no happy ending. But there is hope.

We’re All Trying to Find the Guy Who Did This

Mark Zuckerberg is sick of the woke politics governing his social feeds. He’s tired of the censorship and social-media referees meddling in free speech. We’re in a “new era” now, he said in a video today, announcing that he plans to replace Facebook and Instagram fact-checkers with a system of community notes similar to the one on X, the rival platform owned by Elon Musk. Meta will also now prioritize “civic content,” a.k.a. political content, rather than hide it.

The social-media hall monitors have been so restrictive on “topics of immigration and gender that they’re out of touch with mainstream discourse,” Zuckerberg said with the zeal of an activist. He spoke about “a cultural tipping point towards once again prioritizing speech” following “nonstop” concerns about misinformation from the “legacy media” and four years of the United States government “pushing for censorship.” It is clear from Zuckerberg’s announcement that he views establishment powers as having tried and failed to solve political problems by suppressing his users. That message is sure to delight Donald Trump and the incoming administration. But there’s one tiny hitch. Zuckerberg is talking about himself and his own policies. The establishment? That’s him.

The changes to Meta’s properties, including Facebook, Instagram, and Threads, are being framed by the CEO as a return “to our roots around free expression.” This bit of framing is key, painting him as having been right all along. It also conveniently elides nearly a decade of decisions made by Zuckerberg, who not only is Meta’s founder but also holds a majority of voting power in the company, meaning the board cannot vote him out. He is Meta’s unimpeachable king.

[From the March 2024 issue: The rise of techno-authoritarianism]

I don’t have access to Zuckerberg’s brain, so I can’t know the precise reasons for his reversal. Has he been genuinely red-pilled by UFC founder (and new Meta board member) Dana White and his jiu-jitsu friends? Is he jealous of Musk, who seems to be having a good time palling around with Trump and turning X into 4chan? Is he simply an opportunist cozying up to the incoming administration? Or is he terrified that Trump—who not long ago threatened to send him to jail—will follow through on his promises of retribution against tech executives who don’t bend to his whims? Is this indeed just an opportunity for Meta to get back to its relatively unmoderated roots? My money is that Zuckerberg’s new posture—visiting Mar-a-Lago, donating $1 million to Trump’s inaugural fund, and elevating Joel Kaplan, a longtime Republican insider, to the top policy job at Meta—is motivated by all of the above.

Zuckerberg’s personal politics have always been inextricably linked to his company’s political and financial interests. Above all else, the Facebook founder seems compelled by any ideology that allows the company to grow rapidly and make money without having to take too much responsibility for what happens on its platforms. Zuckerberg knows which way the political wind is blowing and appears to be trying to ride it while, simultaneously, being at least a little bit afraid of it. When a reporter today asked Trump if he thought Meta’s policy changes were driven by his previous threats, he replied, “Probably.”

Zuckerberg’s motives are less important than his actions, which, at least right now, are inarguably MAGA-coded. (He said that he’s moving the content-review teams away from the biased, blue shores of California to the supposedly neutral land of Texas, for one.) They are also deeply cynical. After years of arguing that its users don’t want to see political content (unless they explicitly follow political accounts or pages), Meta is now arguing that it is time to promote “civic” material. The company is pandering to the right and a skewed definition of free speech after having spent the past few months actively restricting teens from seeing LGBTQ-related content on its platforms, as User Mag reported earlier this week. Just this morning, 404 Media reported that Meta’s human-resources team has been deleting criticism of White from Facebook Workplace, the internal platform where Meta employees communicate.

Such hypocrisy ought to be expected from Zuckerberg, whose announcement carries the energy of a guy complaining about a problem he’s responsible for. Zuckerberg has a rich history of making editorial decisions for Meta’s platforms, watching them play out, and then reacting to them as if they were the result of some outside force. In 2013, I watched as Facebook flooded publishers with traffic, thanks to a deliberate algorithmic change to prioritize news. I watched the company build a news division and product and hire a big name to run it. And after the 2016 election, when the company came under intense scrutiny from many of the same outlets that had previously benefited from its platform, I watched the company argue that it was reducing the visibility of publishers in favor of posts from “friends and family.”

Meta’s history is littered with similar about-faces. In 2017, Zuckerberg gave a speech extolling Facebook’s groups and pages. The company changed its mission statement from “Making the world more open and connected” to “Give people the power to build community and bring the world closer together.” The company prioritized groups over other content. As usual, Zuckerberg said he was reacting to the desires of his users (that this was also a way to increase engagement across the company’s platforms was surely a happy coincidence). But then, in 2021, after QAnon and Stop the Steal groups were found to operate unchecked on the platform, Zuckerberg announced that the company would stop recommending political groups to users, citing a need to “turn down the temperature” of the national conversation after the January 6 insurrection.

One way to look at this is that Meta has always been deeply, if begrudgingly, reactive in its moderation decisions. The company is hands-off until it ends up in a public-relations crisis and gets dragged in front of Congress. The company has argued that it is a neutral actor, that it has no interest in presiding over what people can and cannot say. And yet, this is the same company that, in 2020, declared that it was taking “new steps to protect the U.S. elections.” The contradictions abound. Facebook is averse to being an editorial entity, but it hired fact-checkers. It does not wish to be political, but it has an election war room (but please, don’t call it a war room). Zuckerberg is done with politics, but he’s flying down to Mar-a-Lago. You get the gist.

The end result of being so deeply reactive is that Zuckerberg ends up rather awkwardly at war with his own company. Currently, Meta’s new Trump-administration content free-for-all seems to be motivated by a sense of shame or sheepishness for how Meta responded to world events from March 2020 to January 7, 2021, the day Facebook banned Trump from its platforms for his role in inciting the rioters the day before. Despite speaking with clarity and conviction at the time, Zuckerberg seems to be letting the revisionist narratives of COVID and January 6 influence his thinking. As I wrote last year, “Decisions that seemed rational in 2020 and 2021 may seem irrational to him today—the product of a kind of pandemic anxiety.”

[Read: Mark Zuckerberg will never win]

I take Zuckerberg at his word that he feels the discourse has changed, especially when it’s consumed on platforms like X. That discourse is profoundly anti-institutional—less mainstream media, more Joe Rogan. (Rogan, of course, is now as mainstream as they come.) Zuckerberg may even be right that fact-checkers ultimately eroded trust among the skeptical more than they preserved the truth. But Meta is not an insurgent force—it’s a global behemoth with lobbyists and corporate interests. Zuckerberg is himself one of the world’s richest men. The sclerotic, slop-ridden wasteland of stale memes on Facebook, the bloodless posts on Threads—a blatant clone of X—and the hot people linking out to their OnlyFans profiles on Instagram are all products of a legacy institution that he presides over. That Zuckerberg should look out over his kingdom and see it as “out of touch” isn’t a criticism of “woke” Democrats or a regulation-crazy government: It’s a criticism of the way he himself capitulates.

Maybe this is Zuckerberg’s final pivot. Perhaps he’s wanted these changes all along and this moment will bring about a Muskian renaissance that is, at last, true to his own internal politics. But if one is searching for truisms to better understand Zuckerberg, I’m not sure there’s a more apt one than this quote, from a Facebook employee interviewed by BuzzFeed News in 2020. “He seems truly incapable of taking personal responsibility for decisions and actions at Facebook,” the employee said. The employee offered the quote in response to political violence in Kenosha, Wisconsin, during the George Floyd protests, a conflict that Facebook groups played a role in inflaming. But the quote speaks to something more fundamental about the CEO. For as long as he’s been running his company, Zuckerberg has been anxiously gazing in the rearview mirror, unaware or unwilling to recognize the Mark Zuckerberg–size blind spot over his shoulder.