The End of an Internet Era

The Atlantic

https://www.theatlantic.com/newsletters/archive/2023/04/buzzfeed-news-internet-era/673822/

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

The internet of the 2010s was chaotic, delightful, and, most of all, human. What happens to life online as that humanity fades away?

First, here are three new stories from The Atlantic:

Silicon Valley’s favorite slogan has lost all meaning.
Too many Americans are missing out on the best kitchen gadget.
Elon Musk revealed what Twitter always was.

Chaotically Human

My colleague Charlie Warzel worked at BuzzFeed News in the 2010s. He identifies those years as a specific era of the internet—one that symbolically died yesterday with the news that the website is shutting down. In an essay published the same day, Charlie offered a glimpse of what those years felt like for people working in digital media:

I worked at BuzzFeed News for nearly six years—from March 2013 until January 2019. For most of that time, it felt a bit like standing in the eye of the hurricane that is the internet. Glorious chaos was everywhere around you, yet it felt like the perfect vantage to observe the commercial web grow up. I don’t mean to sound self-aggrandizing, but it is legitimately hard to capture the cultural relevance of BuzzFeed to the media landscape of the mid-2010s, and the excitement and centrality of the organization’s approach to news. There was “The Dress,” a bit of internet ephemera that went so viral, we joked that that day might have been the last good one on the internet.

Charlie goes on, and his essay is worth reading in full, but today I’d like to focus on the point he ends on: that the internet of the 2010s was human in a way that today’s is not. Charlie doesn’t just mean human in the sense of not generated by a machine. He’s referring to chaos, unpredictability, delight—all of the things that made spending time on the internet fun.

Charlie explains how BuzzFeed News’s ethos emphasized paying attention to the joyful and personal elements of life online:

BuzzFeed News was oriented around the mission of finding, celebrating, and chronicling the indelible humanity pouring out of every nook and cranny of the internet, so it makes sense that any iteration that comes next will be more interested in employing machines to create content. The BuzzFeed era of media is now officially over. What comes next in the ChatGPT era is likely to be just as disruptive, but I doubt it’ll be as joyous and chaotic. And I guarantee it’ll feel less human.

The shrinking humanity of the internet is a theme that Charlie’s been thinking about for a while. Last year, he wrote about why many observers feel that Google Search is not as efficient as it used to be—some argue that the tool returns results that are both drier and less useful than they once were. Charlie learned in his reporting that some of the changes the Search tool has rolled out are likely the result of Google’s crackdowns on misinformation and low-quality content. But these changes might also mean that Google Search has stopped delivering interesting results, he argues:

In theory, we crave authoritative information, but authoritative information can be dry and boring. It reads more like a government form or a textbook than a novel. The internet that many people know and love is the opposite—it is messy, chaotic, unpredictable. It is exhausting, unending, and always a little bit dangerous. It is profoundly human.

It’s also worth remembering the downsides of this humanity, Charlie notes: The unpredictability that some people are nostalgic for also opened the door to conspiracy theories and hate speech in Google Search results.

The Google Search example raises its own set of complex questions, and I encourage those interested to read Charlie’s essay and the corresponding edition of his newsletter, Galaxy Brain. But the strong reactions to Google Search and the ways it is changing are further evidence that many people crave an old internet that now feels lost.

If the internet is becoming less human, then something related is happening to social media in particular: It’s becoming less familiar. Social-media platforms such as Friendster and Myspace, and then Facebook and Instagram, were built primarily to connect users with friends and family. But in recent years, this goal has given way to an era of “performance” media, as the internet writer Kate Lindsay put it in an Atlantic article last year. Now, she wrote, “we create online primarily to reach people we don’t know instead of the people we do.”

Facebook and Instagram are struggling to attract and retain a younger generation of users, Lindsay notes, because younger users prefer video. They’re on TikTok now, most likely watching content created by people they don’t know. And in this new phase of “performance” media, we lose some humanity too. “There is no longer an online equivalent of the local bar or coffee shop: a place to encounter friends and family and find person-to-person connection,” Lindsay wrote.

I came of age in the Tumblr era of the mid-2010s, and although I was too shy to put anything of myself on display, I found joy in lurking for hours online. Now those of us looking for a place to have low-stakes fun on the internet are struggling to find one. The future of social-media platforms could surprise us: iOS downloads of the Tumblr app were up by 62 percent the week after Elon Musk took control of Twitter, suggesting that the somewhat forgotten platform could see a resurgence as some users leave Twitter.

I may not have personally known the bloggers I was keeping up with on Tumblr, but my time there still felt human in a way that my experiences online have not since. The feeling is tough to find words for, but maybe that’s the point: As the internet grows up, we won’t know what we’ve lost until it’s gone.

Related:

The internet of the 2010s ended today.
Instagram is over.

Today’s News

Less than a year after overturning Roe v. Wade, the Supreme Court is expected to decide tonight whether the abortion pill mifepristone should remain widely available while litigation challenging the FDA’s approval of the drug continues.
The Russian military stated that one of its fighter jets accidentally bombed Belgorod, a Russian city near the Ukrainian border.
Dominic Raab stepped down from his roles as deputy prime minister and justice secretary of Britain after an official inquiry found that he had engaged in intimidating behavior on multiple occasions, one of which involved a misuse of power.

Dispatches

Work in Progress: America has failed the civilization test, writes Derek Thompson.
The Books Briefing: Elise Hannum rounds up books about celebrity—and observes how difficult it can be to appear both otherworldly and relatable.
Up for Debate: Conor Friedersdorf explores how the gender debate veered off track.

Explore all of our newsletters here.

Evening Read

Vermeer’s Revelations

By Susan Tallman

Of all the great painters of the golden age when the small, soggy Netherlands arose as an improbable global power, Johannes Vermeer is the most beloved and the most disarming. Rembrandt gives us grandeur and human frailty, Frans Hals gives us brio, Pieter de Hooch gives us busy burghers, but Vermeer issues an invitation. The trompe l’oeil curtain is pulled back, and if the people on the other side don’t turn to greet us, it’s only because we are always expected.

Vermeer’s paintings are few in number and scattered over three continents, and they rarely travel. The 28 gathered in Amsterdam for the Rijksmuseum’s current, dazzling exhibition represent about three-quarters of the surviving work—“a greater number than the artist might have ever seen together himself,” a co-curator, Pieter Roelofs, notes—and make this the largest Vermeer show in history. The previous record holder took place 27 years ago at the National Gallery in Washington, D.C., and at the Mauritshuis, in The Hague. Prior to that, the only chance to see anything close would have been the Amsterdam auction in May 1696 that dispersed perhaps half of everything he’d painted in his life.

Read the full article.

More From The Atlantic

Murders are spiking in Memphis.
A memoir about friendship and illness
Gavin Newsom is not governing.

Culture Break

Read. Journey, a wordless picture book, is about the expedition of a girl with a magical red crayon. It’s one of seven books that you should read as a family.

Watch. Ari Aster’s newest movie, Beau Is Afraid, invites you into the director’s anxious fantasies.

Play our daily crossword.

While you’re over on Charlie’s Galaxy Brain page, check out the November newsletter in which he comes up with a great term for our evolving internet age: geriatric social media. (It’s not necessarily a bad thing.)

— Isabel

Did someone forward you this email? Sign up here.

Katherine Hu contributed to this newsletter.

Moore’s Law Is Not for Everything

The Atlantic

https://www.theatlantic.com/technology/archive/2023/04/moores-law-defining-technological-progress/673809/

In early 2021, long before ChatGPT became a household name, OpenAI CEO Sam Altman self-published a manifesto of sorts, titled “Moore’s Law for Everything.” The original Moore’s Law, formulated in 1965, describes the development of microchips, the tiny slivers of silicon that power your computer. More specifically, it predicted that the number of transistors that engineers could cram onto a chip would roughly double every year. As Altman sees it, something like that astonishing rate of progress will soon apply to housing, food, medicine, education—everything. The vision is nothing short of utopian. We ride the exponential curve all the way to paradise.

In late February, Altman invoked Moore again, this time proposing “a new version of moore’s law that could start soon: the amount of intelligence in the universe doubles every 18 months.” This claim did not go unchallenged: “Oh dear god what nonsense,” replied Grady Booch, the chief scientist for software engineering at IBM Research. But whether astute or just absurd, Altman’s comment is not unique: Technologists have been invoking and adjusting Moore’s Law to suit their own ends for decades. Indeed, when Gordon Moore himself died last month at the age of 94, the legendary engineer and executive, who in his lifetime built one of the world’s largest semiconductor companies and made computers accessible to hundreds of millions of people, was remembered most of all for his prediction—and also, perhaps, for the optimism it inspired.

Which makes sense: Moore’s Law defined at least half a century of technological progress and, in so doing, helped shape the world as we know it. It’s no wonder that all manner of technologists have latched on to it. They want desperately to believe—and for others to believe—that their technology will take off in the same way microchips did. In this impulse, there is something telling. To understand the appeal of Moore’s Law is to understand how a certain type of Silicon Valley technologist sees the world.

The first thing to know about Moore’s Law is that it isn’t a law at all—not in a legalistic sense, not in a scientific sense, not in any sense, really. It’s more of an observation. In an article for Electronics magazine published 58 years ago this week, Moore noted that the number of transistors on each chip had been doubling every year. This remarkable progress (and associated drop in costs), he predicted, would continue for at least the next decade. And it did—for much longer, in fact. Depending on whom you ask and how they choose to interpret the claim, it may have held until 2005, or the present day, or some point in between.
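To make the arithmetic behind that observation concrete, the prediction can be written as a simple exponential (my notation, not Moore’s): a chip that starts with $N_0$ transistors and doubles every $T$ years holds

\[ N(t) = N_0 \cdot 2^{\,t/T} \]

transistors after $t$ years. With Moore’s original annual doubling ($T = 1$), a single decade multiplies the count by $2^{10} \approx 1{,}000$; even with the doubling time later stretched to two years, the same decade still yields a roughly thirtyfold increase. That compounding is the exponential curve the rest of this essay keeps returning to.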

Carver Mead, an engineering professor at the California Institute of Technology, was the first to call Moore’s observation a “law.” By the early 1980s, that phrase—Moore’s Law—had become the slogan for a nascent industry, says Cyrus Mody, a science historian at Maastricht University, in the Netherlands, and the author of The Long Arm of Moore’s Law. With the U.S. economy having spent the better part of the past decade in the dumps, he told me, a message of relentless progress had PR appeal. Companies could say, “‘Look, our industry is so consistently innovative that we have a law.’”

[Read: AI is like … nuclear weapons?]

This wasn’t just spin. Microchip technology really had developed according to Moore’s predicted schedule. As the tech got more and more intricate, Moore’s Law became a sort of metronome by which the industry kept time. That rhythm was a major asset. Silicon Valley executives were making business-strategy decisions on its basis, David C. Brock, a science historian who co-wrote a biography of Gordon Moore, told me.

For a while, the annual doubling of transistors on a chip seemed like magic: It happened year after year, even though no one was shooting for that specific target. At a certain point, though, when the industry realized the value of consistency, Moore’s Law morphed into a benchmark to be reached through investment and planning, and not simply a phenomenon to be taken for granted, like gravity or the tides. “It became a self-fulfilling prophecy,” Paul Ceruzzi, a science historian and a curator emeritus at the National Air and Space Museum, told me.

Still, for almost as long as Moore’s Law has existed, people have foretold its imminent demise. If they were wrong, that’s in part because Moore’s original prediction has been repeatedly tweaked (or outright misconstrued), whether by extending his predicted doubling time, or by stretching his meaning of a single chip, or by focusing on computer power or performance instead of the raw number of transistors. Once Moore’s Law had been fudged in all these ways, the floodgates opened to more extravagant and brazen reinterpretations. Why not apply the law to pixels, to drugs, to razor blades?

An endless run of spin-offs ensued. Moore’s Law of cryptocurrency. Moore’s Law of solar panels. Moore’s Law of intelligence. Moore’s Law for everything. Moore himself used to quip that his law had come to stand for just about any supposedly exponential technological growth. That’s another law, I guess: At every turn of the technological-hype cycle, Moore’s Law will be invoked.

The reformulation of Moore’s observation as a law, and then its application to a new technology, creates an air of Newtonian precision—as if that new technology could only grow in scale. It transforms something you want to happen into something that will happen—technology as destiny.

For decades, that shift has held a seemingly irresistible appeal. More than 20 years ago, the computer scientist Ray Kurzweil fit Moore’s Law into a broad argument for the uninterrupted exponential progress of technology over the past century—a trajectory that he still believes is drawing us toward “the Singularity.” In 2011, Elon Musk professed to be searching for a “Moore’s Law of Space.” A year later, Mark Zuckerberg posited a “social-networking version of Moore’s Law,” whereby the rate at which users share content on Facebook would double every year. (Look how that turned out.) More recently, in 2021, Changpeng Zhao, the CEO of the cryptocurrency exchange Binance, cited Moore’s Law as evidence that “blockchain performance should at least double every year.” But no tech titan has been quite as explicit in their assertions as Sam Altman. “This technological revolution,” he says in his essay, “is unstoppable.” No one can resist it. And no one can be held responsible.

Moore himself did not think that technological progress was inevitable. “His whole life was a counterexample to that idea,” Brock told me. “Quietly measuring what was actually happening, what was actually going on with the technology, what was actually going on with the economics, and acting accordingly”—that was what Moore was about. He constantly checked and rechecked his analysis, making sure everything still held up. You don’t do that if you believe you have hit upon an ironclad law of nature. You don’t do that if you believe in the unstoppable march of technological progress.

Moore recognized that his law would eventually run up against a brick wall, some brute fact of physics that would halt it in its tracks—the size of an atom, the speed of light. Or worse, it would cause catastrophe before it did. “The nature of exponentials is that you push them out,” he said in a 2005 interview with Techworld magazine, “and eventually disaster happens.”

Exactly what sort of disaster Moore envisioned is unclear. Brock, his biographer, suspects that it might have been ecological ruin; Moore was, after all, a passionate conservationist. Perhaps he viewed microchips as a sort of invasive species, multiplying and multiplying at the expense of the broader human ecosystem. Whatever the particulars, he was an optimist, not a utopian. And yet, the law bearing his name is now cited in support of a worldview that was not his own. That is the tragedy of Moore’s Law.

America Fails the Civilization Test

The Atlantic

https://www.theatlantic.com/ideas/archive/2023/04/america-mortality-rate-guns-health/673799/

This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems. Sign up here to get it every week.

The true test of a civilization may be the answer to a basic question: Can it keep its children alive?

For most of recorded history, the answer everywhere was plainly no. Roughly half of all people—tens of billions of us—died before finishing puberty until about the 1700s, when breakthroughs in medicine and hygiene led to tremendous advances in longevity. In Central Europe, for example, the mortality rate for children fell from roughly 50 percent in 1750 to 0.3 percent in 2020. You will not find more unambiguous evidence of human progress.

How’s the U.S. doing on the civilization test? When graded on a curve against its peer nations, it is failing. The U.S. mortality rate is much higher, at almost every age, than that of most of Europe, Japan, and Australia. That is, compared with the citizens of these nations, American infants are less likely to turn 5, American teenagers are less likely to turn 30, and American 30-somethings are less likely to survive to retirement.

Last year, I called the U.S. the rich death trap of the modern world. The “rich” part is important to observe and hard to overstate. The typical American spends almost 50 percent more each year than the typical Brit, and a trucker in Oklahoma earns more than a doctor in Portugal.

This extra cash ought to buy us more years of living. For most countries, higher incomes translate automatically into longer lives. But not for today’s Americans. A new analysis by John Burn-Murdoch, a data journalist at the Financial Times, shows that the typical American is 100 percent more likely to die than the typical Western European at almost every age from birth until retirement.

What if I offered you a pill and told you that taking this mystery medication would have two effects? First, it would increase your disposable income by almost half. Second, it would double your odds of dying in the next 365 days. To be an average American is to fill a lifetime prescription of that medication and take the pill nightly.

According to data collected by Burn-Murdoch, a typical American baby is about 1.8 times more likely to die in her first year than the average infant from a group of similarly rich countries: Australia, Austria, Switzerland, Germany, France, the U.K., Japan, the Netherlands, and Sweden. Let’s think of this 1.8 figure as “the U.S. death ratio”—the annual mortality rate in the U.S., as a multiple of similarly rich countries.

By the time an American turns 18, the U.S. death ratio surges to 2.8. By 29, the U.S. death ratio rockets to its peak of 4.22, meaning that the typical American is more than four times more likely to die than the average resident in our basket of high-income nations. In direct country-to-country comparisons, the ratio is even higher. The average American my age, in his mid-to-late 30s, is roughly six times more likely to die in the next year than his counterpart in Switzerland.
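Written out as a formula (my shorthand, not Burn-Murdoch’s), the death ratio at age $a$ is simply

\[ R(a) = \frac{m_{\text{US}}(a)}{m_{\text{peers}}(a)}, \]

where $m_{\text{US}}(a)$ is the annual U.S. mortality rate at age $a$ and $m_{\text{peers}}(a)$ is the corresponding average across the comparison countries listed above. The figures in this piece translate to $R(0) \approx 1.8$, $R(18) \approx 2.8$, and a peak of $R(29) \approx 4.22$.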

The average U.S. death ratio stays higher than three for practically the entire period between ages 30 and 50, meaning that the typical middle-aged American is roughly three times more likely to die within the year than his counterpart in Western Europe or Australia. Only in our late 80s and 90s are Americans statistically on par with, or even slightly better off than, residents of other rich nations.

“One in 25 American five-year-olds today will not make it to their 40th birthday,” Burn-Murdoch observed. On average, a representative U.S. kindergarten class will lose one member before their fifth decade of life.

What is going on here? The first logical suspect might be guns. According to a recent Pew analysis of CDC data, gun deaths among U.S. children and teens have doubled in the past 10 years, reaching the highest level of gun violence against children recorded this century. In March, a 20-something shooter fired 152 rounds at a Christian school in Nashville, Tennessee, killing three children and three adults, before being killed by police. In April, a 20-something shooter killed six people at a Louisville, Kentucky, bank, before he, too, was killed by police.

People everywhere suffer from mental-health problems, rage, and fear. But Americans have more guns to channel those all-too-human emotions into a bullet fired at another person. One could tell a similar story about drug overdoses and car deaths. In all of these cases, America suffers not from a monopoly on despair and aggression, but from an oversupply of instruments of death. We have more drug-overdose deaths than any other high-income country because we have so much more fentanyl, even per capita. Americans also drive more than people in other countries do, which leads to our higher-than-average death rate from road accidents. Even on a per-miles-driven basis, our death rate is extraordinary.

When I reached out to Burn-Murdoch, I expected that these three culprits—guns, drugs, and cars—would explain most of our death ratio. However, on my podcast, Plain English, he argued that Americans’ health (and access to health care) seems to be the most important factor. America’s prevalence of cardiovascular and metabolic disease is so high that it accounts for more of our early mortality than guns, drugs, and cars combined.

Disentangling America’s health issues is complicated, but I can offer three data points. First, American obesity is unusually high, which likely leads to a larger number of early and middle-aged deaths. Second, Americans are unusually sedentary. We take at least 30 percent fewer steps a day than people do in Australia, Switzerland, and Japan. Finally, U.S. access to care is unusually unequal—and our health-care outcomes are unusually tied to income. As the Northwestern University economist Hannes Schwandt found, Black teens in the poorest U.S. areas are roughly twice as likely to die before they turn 20 as teenagers in the richest counties. This outcome is logically downstream of America’s paucity of universal care and our shortage of physicians, especially in low-income areas.

There is no single meta-explanation for America’s death ratio that’s capacious enough to account for our higher rates of death from guns, drugs, cars, infant mortality, diet, exercise, and unequal access to care. I’ll try to offer one anyway—only to immediately contradict it.

Let’s start with the idea, however simplistic, that voters and politicians in the U.S. care so much about freedom in that old-fashioned ’Merica-lovin’ kind of way that we’re unwilling to promote public safety if those rules constrict individual choice. That’s how you get a country with infamously laissez-faire firearms laws, more guns than people, lax and poorly enforced driving laws, and a conservative movement that has repeatedly tried to block, overturn, or limit the expansion of universal health insurance on the grounds that it impedes consumer choice. Among the rich, this hyper-individualistic mindset can manifest as a smash-and-grab attitude toward life, with surprising consequences for the less fortunate. For example, childhood obesity is on the rise at the same time that youth-sports participation is in decline among low-income kids. What seems to be happening at the national level is that rich families, seeking to burnish their child’s résumé for college, are pulling their kids out of local leagues so that they can participate in prestigious pay-to-play travel teams. At scale, these decisions devastate the local youth-sports leagues for the benefit of increasing by half a percentage point the odds of a wealthy kid getting into an Ivy League school.

The problem with the Freedom and Individualism Theory of Everything is that, in many cases, America’s problem isn’t freedom-worship, but actually something quite like its opposite: overregulation. In medicine, excessive regulation and risk aversion on the part of the FDA and Institutional Review Boards have very likely slowed the development and adoption of new lifesaving treatments. This has created what the economist Alex Tabarrok calls an “invisible graveyard” of people killed by regulators preventing access to therapies that would have saved their life. Consider, in the same vein, the problem of diet and exercise. Are Americans unusually sedentary because they love freedom so very much? It’s possible, I guess. But the more likely explanation is that restrictive housing policies have made it too hard for middle- and low-income families to live near downtown business districts, which forces many of them to drive more than they would like, thus reducing everyday walking and exercise.

America lurches between too little regulation and too much: sometimes promoting individual freedom, with luridly fatal consequences, and sometimes blocking policies and products, with subtly fatal consequences. That’s not straightforward, and it’s damn hard to solve. But mortality rates are the final test of civilization. Who said that test should be easy?