We Should Fix Silicon Valley, Not Tear It Down

The Atlantic

Consider a proposal: Stanford should give its more than 8,000 acres to the Muwekma Ohlone, the land’s original people. After all, the university would still have $36 billion in the bank. (U.S. colleges and universities have amassed enormous wealth—more than $800 billion in endowment assets, according to a recent survey of 678 institutions.) Even more outrageously endowed is the surrounding region of Silicon Valley, which is Malcolm Harris’s real target when he makes this suggestion at the end of his new book, Palo Alto: A History of California, Capitalism, and the World. It’s precisely Stanford’s land, Harris explains, that has “nurtured the Silicon Valley extraction machine,” one he believes is wreaking havoc on the planet and immiserating so many of its people.

Deeply researched and richly detailed, Palo Alto is a prehistory of today’s all-too-familiar Valley of oligarchs and Big Brother brogrammers who seem to taint everything they touch, including housing, transportation, and democracy. At the same time, it distills and expresses a stark new techno-pessimism, growing especially fast on the left. Under the Palo Alto System, a term Harris uses to trace the history of Silicon Valley—particularly the obsession with productivity and economic value that he sees as a constant—technology has been hopelessly poisoned by the drive for profit. “Competition and domination, exploitation and exclusion, minority rule and class hate: These aren’t problems capitalist technology will solve,” Harris, who is a self-proclaimed Marxist, writes. “That’s what it’s for. In the proper language, they are features, not bugs.”

Harris wants to wipe the hard drive clean. He makes no calls to protest, divest, or boycott. He is not interested in seizing the means of digital production (and reproduction), organizing tech workers, or “socializing social media.” Harris instead argues that returning the land to the Ohlone could help “draw a new path, away from exhaustion and toward recovery, repair, and renewal.” (The tribe is currently focused on regaining federal recognition, and Harris joined its delegation in D.C. this month.) But he entirely bypasses another way forward: reclaiming Silicon Valley for the public.

A modern Marx in Palo Alto, crashing (of course) at one of Stanford’s seven cooperative houses, wouldn’t give up on such an important site of struggle. Silicon Valley’s mystique may be evaporating fast, but its infrastructure still holds enormous public potential. It is, after all, a collection of utilities (encompassing not just chips, cables, and servers but also digital infrastructure) that should be considered as much a part of the public domain as water and electricity—not least because, as Harris, the historian Margaret O’Mara, and others have shown, that infrastructure was built at almost every step with public money.

Besides transforming our daily lives, Silicon Valley infrastructure, especially mobile phones and social media, has been justifiably hailed as helping drive major social movements, including the one that led to Barack Obama’s election, the Arab Spring, Occupy Wall Street, #MeToo, and Black Lives Matter. Rather than dismantling it, as Palo Alto suggests, wouldn’t governing, developing, and harnessing it make more sense? The Valley is more than just a few monopoly platforms; it’s everything we put into them and everything they took from us. When the Valley falters or collapses one day, as happened with the railroads in the 1970s or Wall Street in 2008, there could be a onetime chance to usher in “people’s community control of modern technology,” as the Black Panthers put it. Earlier this month, venture capitalists and start-up founders triggered a run on Silicon Valley Bank, requiring a federal takeover. The rescue should come with terms and conditions.

[Annie Lowrey: Silicon Valley Bank’s failure is now everyone’s problem]

Popular control of technology should be the ultimate goal, through whatever combination of law, code, and direct action may be necessary. Among other things, it would mean people, not companies, controlling their own data. Treating essential technological services like water and electricity would mean regulation and legislation to ensure that they are universally accessible and open source, and subject to democratic deliberation. Technologies built with any substantial public funding—MRI and GPS, the Human Genome Project and self-driving cars, Google and the internet itself—should in turn fund and serve the public. Forget buzzy black-box bots like ChatGPT, Bing, and Bard impersonating human language and behavior for corporate profits. These new forms of text prediction should be developed openly and carefully to improve public services.

Unlike related critiques of Silicon Valley, which usually highlight its libertarian and dystopian dimensions, Palo Alto is a takedown grounded in the long-term history of an actual place. That place is really a series of nested dolls, starting with Stanford and the small, adjacent city of Palo Alto, which it dominates. Beyond lies the Valley, itself just one part of the Bay Area, and beyond that California, the fifth-largest economy in the world. The influence of California, of course, can now be felt everywhere.

Obituaries for California are also now everywhere. Banking on conservative Florida and Texas to take its place at the center of the nation’s social, economic, and cultural life, many on the right are gleeful about the deep-blue state’s demographic slowdown and frequently point to its litany of disasters: wildfires, homelessness, inequality. For his part, Harris, though attuned to the Bay’s radical history, skewers Palo Alto as the “belly of the capitalist beast” and impugns the entire state by extension.

Absent in both cases is actually existing California, the glorious stew of contradictions stirred up in Kevin Starr’s encyclopedic eight-volume history of the state. Today’s Golden State is still one of the most diverse societies in human history, and the Bay’s massive Chinese, Vietnamese, Mexican, Mayan, South Asian, Pacific Islander, and other communities are not just pawns on Silicon Valley’s chessboard. Forged by a mass middle class, modern California has been an engine of economic uplift for millions, with a unique if embattled system of public higher education.

California is worth fighting for, and so is Silicon Valley. If not at Stanford and in Palo Alto, the dynamic and destructive love triangle between technology, capitalism, and higher education would surely be happening somewhere else. (An Austin System might be even worse.) California can draw from a broadly liberal, and even radical, inheritance. Among all the different institutions and interests involved, potential reformers have leverage, not least with disenchanted tech workers themselves.

In 1876, the transcontinental railroad chief and first Republican governor of California, Leland Stanford Sr., bought a farm and built a town near a millennium-old sequoia tree, a palo alto (“tall stick”) that still stands. The original Palo Alto System, Harris writes, was a method the governor designed on that farm for breeding and training horses, which identified and quantified talent as early as possible, with brutal efficiency. In 1891, the farm, which had recently become a university to honor Leland Jr., dead at 15 of typhoid, welcomed its first students. “Still a breeding and training project,” as Harris argues, it was now focused on human beings, though nondenominational, coeducational, and channeling a spirit of invention and progressivism.

[Read: Palo Alto’s first tech giant was a horse farm]

As Harris writes, Silicon Valley is home to some of the “most productive workers in the history of the world”—a handful of haves throwing the have-nots deep into the shade—whose “productivity” is destroying (“disrupting”) industry after established industry. Their companies are becoming some of the world’s most valuable, not only by creating jobs or social goods but by attracting enormous global flows of capital that chase unsustainable returns and inflate gigantic bubbles.

Yet as a callow undergrad in the trough between the dot-com bust of 2000 and the ascent of social media around 2005, I found Stanford and the surrounding area genuinely open to outsiders, deluged with money but also weird ideas and alternative currents both cultish and corporate. For every dorm-room start-up and back-of-the-napkin business plan, there were people curing diseases, contending with the origins of the universe, advancing clean energy, and pioneering irrigation techniques, not to mention all the eternally overshadowed artistic, humanistic, and social-scientific work being done on and around campus. Palo Alto misses the core of curiosity and experimentation that still exists there, fuel for a less profit-obsessed future Valley. Realizing it, however, may take a tech crash or a new antitrust movement.

But for Harris, who grew up in Palo Alto, this future is exceedingly unlikely. With the Palo Alto System, he names a revealing but rigid through line in the region’s history, connecting early faculty forays into eugenics to Cold War military research to the venture-backed Valley of today, where it’s “progress by victory, defeat, and ruthless elimination, full speed from day one.” Or as the Stanford sports chant has it: “Give ’em the axe, the axe, the axe! … Right in the neck, the neck, the neck!”

Tracing the system’s genealogy through the most virulent figures—such as Stanford’s founding president, David Starr Jordan, and the semiconductor pioneer William Shockley, both eugenicists—Harris captures crucial continuities but forecloses some difficult questions. How did an industry and a region stocked with liberals and leftists become the bleeding edge of capitalism? And isn’t the future of many potentially liberatory or at least neutral technologies still up for grabs, just as people in the ’90s saw the internet as a commons for freedom and experimentation?

The Palo Alto System today encompasses rampant law-breaking, long-term loss-making, and “big exits” (whether IPOs or departures from Earth’s atmosphere), along with dependence on despots and workers employed by a thousand external contractors. But for now most people are still hooked on the hype and glued to their screens. No sooner were Americans bound thumbs-first to Apple, Google, and Facebook than they started succumbing to Uber, Airbnb, and Zoom. Data and control, backed by big money and scaled to the nth degree, keep yielding results that enchant and entrap, and people keep handing over their money and information, minds and moods, lives and societies.

Please Get Me Out of Dead-Dog TikTok

A brown dog, muzzle gone gray—surely from a life well lived—tries to climb three steps but falters. Her legs give out, and she twists and falls. A Rottweiler limps around a kitchen. A golden retriever pants in a vet’s office, then he’s placed on a table, wrapped in medical tubes. “Bye, buddy,” a voice says off camera. Nearby, a hand picks up a syringe.

This is Dead-Dog TikTok. It is an algorithmic loop of pet death: of sick and senior dogs living their last day on Earth, of final hours spent clinging to one another in the veterinarian’s office, of the brutal grief that follows in the aftermath. One related trend invites owners to share the moment they knew it was time—time unspecified, but clear: Share the moment you decided to euthanize your dog.

The result is wrenching. A dog is always dying, and someone is always hurting. Likes and sympathetic comments amass. The video goes viral. Engage with one—or even just watch it to completion—and you may be served another, and another. Suddenly, you’re stuck in a corner of TikTok you’d rather not see.

“TikTok has to figure out a way to separate dog content from ‘my dog died’ content,” one user observes in a video from February. He says he can’t stand watching the latter, and his comment section is filled with people agreeing. “The amount of dogs I’ve never met that I’ve cried over is unreal,” one writes.  

Dead-Dog TikTok gets at a core tension of the platform writ large. TikTok collapses social media and entertainment, and gives an outsize power to its “For You”–feed algorithm: The user has limited control over what shows up on their feed. Unlike, say, on Reddit, where you might enter a rabbit hole by choice (maybe because you’ve subscribed to the True Crime forum), TikTok’s algorithm might throw you down one based on metrics that may not signal your actual interest.

And in the case of Dead-Dog TikTok, the algorithm can’t know what it means to plop a stranger’s pet loss next to a teen bopping to the latest viral trend or a snippet from late-night television. It can’t recognize that a user’s intention behind posting their dog’s last moments—for catharsis, for validation, to find other people who have felt that same loss—may not be a match for many viewers on the other side who are just trying to pass some time. “We often ascribe all sorts of intentions to the algorithm, like, Oh, it knows,” Nick Seaver, an anthropology professor at Tufts University who studies algorithms, told me. “But it really doesn’t.”

[Read: What happens when everything becomes TikTok]

The tension is unresolvable, which is possibly why TikTok rolled out a feature last week allowing users to “start fresh” with a new feed. TikTok, for its part, sees the solution as diversifying the content. “In addition, we work to carefully apply limits to some content that doesn’t violate our policies, but may impact the viewing experience if viewed repeatedly, particularly when it comes to content with themes of sadness, extreme exercise or dieting, or that’s sexually suggestive,” the company wrote in a blog post.

Whatever equation powers TikTok’s For You feed appears to have picked up that videos about dead dogs engage users. But it doesn’t seem to know when to stop serving them, and it tends to go too far, perhaps even by design. “When it finds something that works, it will go and try to push that—both at the individual level and the overall ecosystem level—pretty far,” Kevin Munger, a political scientist at Penn State who has studied the TikTok algorithm, explained to me. “It’s not going to stop at the right level.” To use a positive analogy, it’s as if the algorithm has figured out that you like cake, and so it’s serving you cake for breakfast, lunch, and dinner.

An algorithmic reset may not totally solve the problem, either: in theory, the app will simply relearn what you like and serve videos accordingly. Some of the researchers I spoke with said that they very intentionally—even aggressively—signal to the platform what they do and don’t like. When they see a video of a type they don’t want more of, they take action: swiping away quickly, seeking out positive videos, reporting the upsetting content, even closing the app altogether. Other options include blocking specific users or hashtags, or pressing the “not interested” button.

As Robyn Caplan of the Data & Society Research Institute pointed out, an algorithm “can’t necessarily tell the difference between something that is making you cry and something that is making you laugh.” She told me she once asked a friend for funny videos to help “cheer up” her feed.

Grief is a nuanced human experience. “There’s not an obvious context in which you might want to watch videos about pet grief,” Seaver said. “And so it totally makes sense that these systems do these kind of clunky moves, because I don’t think there’s a non-clunky way to do it.” At its best, Dead-Dog TikTok may offer a support community to people suffering and normalize their pain.

[Read: There are no ‘five stages’ of grief]

Take Blaine Weeks. Weeks thought she had more time with her dog Indica—a few weeks or months, maybe. He was old, and his body seemed to be failing. Then one day, he didn’t want to get up. “I felt like I just didn’t have enough,” she told me. “I didn’t have enough pictures, I didn’t have enough videos, and I was distraught about that.” Weeks decided to record Indica’s last day, worried that otherwise she might block it out entirely with grief.

In the video montage of that day, which Weeks posted to TikTok, she loads Indica into her truck, and they get McDonald’s burgers as a final treat. Weeks tells him that she loves him as he licks tears from her face. Later, on the floor of the vet office, Indica perks up enough to eat a few fries, before resting his head in Weeks’s lap. It ends there.  

The post has been viewed 13 million times and climbing. “Randomly last night that video started going crazy again and got, like, another 400,000 views,” she told me when we talked earlier this month. Weeks said that she’d had to turn off her phone for a bit because of negative comments on the video (detractors questioned Weeks’s decision to euthanize) but that overall she’d found comfort from the experience. The video, she said, connected her with more than a dozen people whom she can talk with about her grief. “We kind of check on each other back and forth, saying, ‘Hey, are you doing okay today?’ ‘Yes, I’m doing okay. How are you?’” A stranger made a painting of Indica and sent it to her.

Stefanie Renee Salyers’s TikTok saying goodbye to Princess, her Shih Tzu, has been viewed 28 million times and has nearly 90,000 comments. Salyers got so many messages after posting that she created a Google Form for other people to share their dog-grief stories, offering to read them privately or—with their permission—create TikToks about their lost pets. “I felt, I guess, glad that, even though my video is of a very sad event, that there were people who saw it and felt like, I’m not alone in feeling this grief. And I’m not crazy for feeling like I lost a family member,” she told me.

Crystal Abidin, the founder of the TikTok Cultures Research Network—a group that connects scholars doing qualitative research about TikTok—and a professor at Curtin University, in Perth, Australia, has been studying the comment sections on TikTok grief posts at large. She has found “a really beautiful ethos of care work happening” there: people comforting one another, resource sharing, and more.

Videos like Salyers’s and Weeks’s may inspire others to post their own pet-loss stories. Abidin believes that the pandemic really mainstreamed videos about grief and death on the platform—videos from individuals, videos from health-care professionals. “There is a whole collision of these histories and people of different standpoints and expertise, all on GriefTok,” she told me. “It’s not bad; it’s not good. It’s just that you cannot choose what you want on your feed. And that can be arresting for a viewer.”

Dead-Dog TikTok may be a genuinely helpful space for some, and an upsetting one for others. The platform can’t perfectly sort who’s who. “But if we think about your personal ethos, principles, and morality, do we really want platforms to be the arbiter of what we should and shouldn’t see?” Abidin asked. Maybe TikTok could be smarter about not circulating distressing content, but should it really decide who grieves online and how?

Grief is messy and complicated and hits different people in different ways. So it is only natural that its manifestations online would likewise be messy and complicated. To grieve is to be human—one thing that algorithms, no matter how eerily attuned to our interests and desires, never can be.