Itemoids

Silicon Valley

Goodbye to the Dried Office Mangos

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 04 › tech-company-perks-free-food-google › 673855

Even as the whole of Silicon Valley grapples with historic inflation, a bank crash, and mass layoffs, Google’s woes stand apart. The explosion of ChatGPT and artificial intelligence more broadly has produced something of an existential crisis for the company, a “code red” moment for the business. “Am I concerned? Yes,” Sundar Pichai, Google’s CEO, told The New York Times. But Google employees are encountering another problem: “They took away the dried mango,” says a project manager at Google’s San Francisco office, whom I agreed not to name to protect the employee from reprisal. At least at that office, the project manager said, workers are seeing less of long-cherished food items—not just the mango, but also the Maui-onion chips and the fun-size bags of M&Ms.

Cost-cutting measures have gutted some of Google’s famous perks. In a company-wide email last month, Chief Financial Officer Ruth Porat announced rollbacks on certain in-office amenities, including company-sponsored fitness classes, massages, and the availability of so-called microkitchens: pantries stocked with everything from low-calorie pork rinds to spicy Brazilian flower buds. These perks have long been an inextricable part of Google’s culture, even in an industry flush with nap pods and coffee bars—a way to recruit top talent and keep coders happy during long days in the office. “The idea was ‘We’re going to make it so wonderful to be here that you never need to leave,’” Peter Cappelli, a professor of management at the University of Pennsylvania’s Wharton School, told me. “Are they giving up on that idea?”

Google told me they’re still committed to perks, and indeed, the free meals are still around. “As we’ve consistently said, we set a high bar for industry-leading perks, benefits and office amenities, and will continue that into the future,” Google spokesperson Ryan Lamont said in an email. But the cutbacks are seemingly coming at an inopportune time: If there was ever a moment when Google needed to recruit top talent, it’s now. Although overall demand for software engineers has slowed, money and jobs are still flocking to a buzzy new breed of generative-AI companies. OpenAI, after all, makes a point of matching Google’s daily meals and handing out “fresh-baked cookies.” Google’s new attitude toward perks may be an admission of what was true all along: Perks are perks—just expendable add-ons. They’re nice to have in the good times but hardly essential in the bad.

The world of HR has long claimed that happy workers are productive workers, but Google treated that idea like a mantra, creating offices that were less like cubicle-packed grids and more like adult playgrounds (complete with in-office slides and rock-climbing walls). As part of what the company refers to as “Googley extras,” it has given employees free yoga and pilates classes, fully comped team trips, and even once-a-week eyebrow shaping. Other big companies, and even start-ups flush with venture-capital cash, realized that to have a shot at competing for talent, they’d need to start subsidizing the same sort of lifestyle. Massages and macchiatos were just the start: Apple has hosted private concerts with artists such as Stevie Wonder and Maroon 5; Dropcam, a start-up Google bought in 2014 (whose tech it has recently decided to phase out), reportedly offered each employee a free helicopter ride, piloted by the CEO, to a destination of their choosing. Others, such as WeWork, simply handed out tequila around the clock.

The Googley extras aren’t gone, by any means, but they’re no longer guaranteed. Google’s infamous shuttle buses, known to clog San Francisco streets as they ferry employees to and from the office, are running less frequently, and traditional laptops have become a privilege reserved for employees in engineering roles. Everyone else must now make do with slightly wimpier netbooks. Part of this reduction in amenities has to do with the new reality of hybrid work, which has itself become a perk. It makes sense to trim the shuttle-bus schedule if fewer people are taking the bus to work every day. Same goes for the reported reduction in in-office muffins, although understanding the rationale behind the crackdown doesn’t necessarily make it sting any less.

It’s not just Google, either. “My sense is that [perks] are being pulled back broadly,” Cappelli said. “So many public companies feel that they have to look like they’re belt-tightening for investors.” After just a year, Salesforce has abandoned its “Trailblazer Ranch,” a 75-acre retreat meant to host guided nature walks, group cooking classes, sessions for meditation, and “art journaling.” Over at Meta, already a year out from its decision to cancel free laundry and dry-cleaning services, employees are expressing similar frustrations over snacks.

Still, it all cuts a little deeper at Google. That’s in part because Google has taken such care to cement its reputation as the best place in the world to work, the plushest employer in a sea of plush. As any Google employee will insist, the lunches were never as good at Apple or Microsoft. The message is perhaps symbolic as much as practical. Muffins are not a real financial concern for Alphabet, Google’s $1.3 trillion parent company, which could very much still cash in on the new AI boom. But for the company’s workers, it’s not the muffins themselves, but their absence, that may end up having the greatest impact. “The way it is conveyed to people matters as much as the perks themselves,” Cappelli said. If an abundance of perks signals care and intention, what might a lack of perks represent? “You’re sending the opposite signal: ‘We don’t really care about you so much, and that’s why we’re taking it away.’”

Flashy perks helped produce an illusion of safety that couldn’t last. Surface-level penny-pinching is ultimately about assuring investors that costs are under control; employees’ annoyance is just part of the bargain. You’ll know your employer really means business when it lays off your whole team. And if Google is willing to cut down on some of its most visible perks just as generative AI threatens to upend its business, then maybe it’s not too concerned about OpenAI outdoing it in the snack department. The end of muffins and dried-mango slices amounts to a gesture more than anything else—a way of reminding current employees that these are lean times, and they should start acting like it.

The End of an Internet Era

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 04 › buzzfeed-news-internet-era › 673822

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

The internet of the 2010s was chaotic, delightful, and, most of all, human. What happens to life online as that humanity fades away?

First, here are three new stories from The Atlantic:

Silicon Valley’s favorite slogan has lost all meaning.
Too many Americans are missing out on the best kitchen gadget.
Elon Musk revealed what Twitter always was.

Chaotically Human

My colleague Charlie Warzel worked at BuzzFeed News in the 2010s. He identifies those years as a specific era of the internet—one that symbolically died with yesterday’s news that the site is shutting down. Charlie offered a glimpse of what those years felt like for people working in digital media:

I worked at BuzzFeed News for nearly six years—from March 2013 until January 2019. For most of that time, it felt a bit like standing in the eye of the hurricane that is the internet. Glorious chaos was everywhere around you, yet it felt like the perfect vantage to observe the commercial web grow up. I don’t mean to sound self-aggrandizing, but it is legitimately hard to capture the cultural relevance of BuzzFeed to the media landscape of the mid-2010s, and the excitement and centrality of the organization’s approach to news. There was “The Dress,” a bit of internet ephemera that went so viral, we joked that that day might have been the last good one on the internet.

Charlie goes on, and his essay is worth reading in full, but today I’d like to focus on the point he ends on: that the internet of the 2010s was human in a way that today’s is not. Charlie doesn’t just mean human in the sense of not generated by a machine. He’s referring to chaos, unpredictability, delight—all of the things that made spending time on the internet fun.

Charlie explains how BuzzFeed News’s ethos emphasized paying attention to the joyful and personal elements of life online:

BuzzFeed News was oriented around the mission of finding, celebrating, and chronicling the indelible humanity pouring out of every nook and cranny of the internet, so it makes sense that any iteration that comes next will be more interested in employing machines to create content. The BuzzFeed era of media is now officially over. What comes next in the ChatGPT era is likely to be just as disruptive, but I doubt it’ll be as joyous and chaotic. And I guarantee it’ll feel less human.

The shrinking humanity of the internet is a theme that Charlie’s been thinking about for a while. Last year, he wrote about why many observers feel that Google Search is not as efficient as it used to be—some argue that the tool returns results that are both drier and less useful than they once were. Charlie learned in his reporting that some of the changes the Search tool has rolled out are likely the result of Google’s crackdowns on misinformation and low-quality content. But these changes might also mean that Google Search has stopped delivering interesting results, he argues:

In theory, we crave authoritative information, but authoritative information can be dry and boring. It reads more like a government form or a textbook than a novel. The internet that many people know and love is the opposite—it is messy, chaotic, unpredictable. It is exhausting, unending, and always a little bit dangerous. It is profoundly human.

It’s also worth remembering the downsides of this humanity, Charlie notes: The unpredictability that some people are nostalgic for also gave way to conspiracy theories and hate speech in Google Search results.

The Google Search example raises its own set of complex questions, and I encourage those interested to read Charlie’s essay and the corresponding edition of his newsletter, Galaxy Brain. But the strong reactions to Google Search and the ways it is changing are further evidence that many people crave an old internet that now feels lost.

If the internet is becoming less human, then something related is happening to social media in particular: It’s becoming less familiar. Social-media platforms such as Friendster and Myspace, and then Facebook and Instagram, were built primarily to connect users with friends and family. But in recent years, this goal has given way to an era of “performance” media, as the internet writer Kate Lindsay put it in an Atlantic article last year. Now, she wrote, “we create online primarily to reach people we don’t know instead of the people we do.”

Facebook and Instagram are struggling to attract and retain a younger generation of users, Lindsay notes, because younger users prefer video. They’re on TikTok now, most likely watching content created by people they don’t know. And in this new phase of “performance” media, we lose some humanity too. “There is no longer an online equivalent of the local bar or coffee shop: a place to encounter friends and family and find person-to-person connection,” Lindsay wrote.

I came of age in the Tumblr era of the mid-2010s, and although I was too shy to put anything of myself on display, I found joy in lurking for hours online. Now those of us looking for a place to have low-stakes fun on the internet are struggling to find one. The future of social-media platforms could surprise us: iOS downloads of the Tumblr app were up by 62 percent the week after Elon Musk took control of Twitter, suggesting that the somewhat forgotten platform could see a resurgence as some users leave Twitter.

I may not have personally known the bloggers I was keeping up with on Tumblr, but my time there still felt human in a way that my experiences online have not since. The feeling is tough to find words for, but maybe that’s the point: As the internet grows up, we won’t know what we’ve lost until it’s gone.

Related:

The internet of the 2010s ended today.
Instagram is over.

Today’s News

Less than a year after overturning Roe v. Wade, the Supreme Court is expected to decide tonight on whether the abortion pill mifepristone should remain widely available while litigation challenging the FDA’s approval of the drug continues.
The Russian military stated that one of its fighter jets accidentally bombed Belgorod, a Russian city near the Ukrainian border.
Dominic Raab stepped down from his roles as deputy prime minister and justice secretary of Britain after an official inquiry found that he had engaged in intimidating behavior on multiple occasions, one of which involved a misuse of power.

Dispatches

Work in Progress: America has failed the civilization test, writes Derek Thompson.
The Books Briefing: Elise Hannum rounds up books about celebrity—and observes how difficult it can be to appear both otherworldly and relatable.
Up for Debate: Conor Friedersdorf explores how the gender debate veered off track.

Explore all of our newsletters here.

Evening Read


Vermeer’s Revelations

By Susan Tallman

Of all the great painters of the golden age when the small, soggy Netherlands arose as an improbable global power, Johannes Vermeer is the most beloved and the most disarming. Rembrandt gives us grandeur and human frailty, Frans Hals gives us brio, Pieter de Hooch gives us busy burghers, but Vermeer issues an invitation. The trompe l’oeil curtain is pulled back, and if the people on the other side don’t turn to greet us, it’s only because we are always expected.

Vermeer’s paintings are few in number and scattered over three continents, and they rarely travel. The 28 gathered in Amsterdam for the Rijksmuseum’s current, dazzling exhibition represent about three-quarters of the surviving work—“a greater number than the artist might have ever seen together himself,” a co-curator, Pieter Roelofs, notes—and make this the largest Vermeer show in history. The previous record holder took place 27 years ago at the National Gallery in Washington, D.C., and at the Mauritshuis, in The Hague. Prior to that, the only chance to see anything close would have been the Amsterdam auction in May 1696 that dispersed perhaps half of everything he’d painted in his life.

Read the full article.

More From The Atlantic

Murders are spiking in Memphis.
A memoir about friendship and illness
Gavin Newsom is not governing.

Culture Break


Read. Journey, a wordless picture book, is about the expedition of a girl with a magical red crayon. It’s one of seven books that you should read as a family.

Watch. Ari Aster’s newest movie, Beau Is Afraid, invites you into the director’s anxious fantasies.

Play our daily crossword.

While you’re over on Charlie’s Galaxy Brain page, check out the November newsletter in which he comes up with a great term for our evolving internet age: geriatric social media. (It’s not necessarily a bad thing.)

— Isabel

Did someone forward you this email? Sign up here.

Katherine Hu contributed to this newsletter.

Moore’s Law Is Not for Everything

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 04 › moores-law-defining-technological-progress › 673809

In early 2021, long before ChatGPT became a household name, OpenAI CEO Sam Altman self-published a manifesto of sorts, titled “Moore’s Law for Everything.” The original Moore’s Law, formulated in 1965, describes the development of microchips, the tiny silicon wafers that power your computer. More specifically, it predicted that the number of transistors that engineers could cram onto a chip would roughly double every year. As Altman sees it, something like that astonishing rate of progress will soon apply to housing, food, medicine, education—everything. The vision is nothing short of utopian. We ride the exponential curve all the way to paradise.

In late February, Altman invoked Moore again, this time proposing “a new version of moore’s law that could start soon: the amount of intelligence in the universe doubles every 18 months.” This claim did not go unchallenged: “Oh dear god what nonsense,” replied Grady Booch, the chief scientist for software engineering at IBM Research. But whether astute or just absurd, Altman’s comment is not unique: Technologists have been invoking and adjusting Moore’s Law to suit their own ends for decades. Indeed, when Gordon Moore himself died last month at the age of 94, the legendary engineer and executive, who in his lifetime built one of the world’s largest semiconductor companies and made computers accessible to hundreds of millions of people, was remembered most of all for his prediction—and also, perhaps, for the optimism it inspired.

Which makes sense: Moore’s Law defined at least half a century of technological progress and, in so doing, helped shape the world as we know it. It’s no wonder that all manner of technologists have latched on to it. They want desperately to believe—and for others to believe—that their technology will take off in the same way microchips did. In this impulse, there is something telling. To understand the appeal of Moore’s Law is to understand how a certain type of Silicon Valley technologist sees the world.

The first thing to know about Moore’s Law is that it isn’t a law at all—not in a legalistic sense, not in a scientific sense, not in any sense, really. It’s more of an observation. In an article for Electronics magazine published 58 years ago this week, Moore noted that the number of transistors on each chip had been doubling every year. This remarkable progress (and associated drop in costs), he predicted, would continue for at least the next decade. And it did—for much longer, in fact. Depending on whom you ask and how they choose to interpret the claim, it may have held until 2005, or the present day, or some point in between.
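To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (an illustration, not figures from Moore’s paper) of what a yearly doubling implies: ten doublings multiply a chip’s transistor count by 2^10, or roughly a thousandfold.

```python
# Back-of-the-envelope illustration of Moore's 1965 observation: the number of
# transistors per chip roughly doubling every year. The starting count and the
# ten-year horizon are illustrative assumptions, not data from the paper.

def projected_transistors(start_count: int, years: float, doubling_period_years: float = 1.0) -> int:
    """Project a transistor count under an assumed doubling period."""
    return int(start_count * 2 ** (years / doubling_period_years))


if __name__ == "__main__":
    start = 64  # hypothetical mid-1960s chip
    for year in range(0, 11, 2):
        print(f"year {year:2d}: ~{projected_transistors(start, year):,} transistors")
```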

Carver Mead, an engineering professor at the California Institute of Technology, was the first to call Moore’s observation a “law.” By the early 1980s, that phrase—Moore’s Law—had become the slogan for a nascent industry, says Cyrus Mody, a science historian at Maastricht University, in the Netherlands, and the author of The Long Arm of Moore’s Law. With the U.S. economy having spent the better part of the past decade in the dumps, he told me, a message of relentless progress had PR appeal. Companies could say, “Look, our industry is so consistently innovative that we have a law.”

[Read: AI is like … nuclear weapons?]

This wasn’t just spin. Microchip technology really had developed according to Moore’s predicted schedule. As the tech got more and more intricate, Moore’s Law became a sort of metronome by which the industry kept time. That rhythm was a major asset. Silicon Valley executives were making business-strategy decisions on its basis, David C. Brock, a science historian who co-wrote a biography of Gordon Moore, told me.

For a while, the annual doubling of transistors on a chip seemed like magic: It happened year after year, even though no one was shooting for that specific target. At a certain point, though, when the industry realized the value of consistency, Moore’s Law morphed into a benchmark to be reached through investment and planning, and not simply a phenomenon to be taken for granted, like gravity or the tides. “It became a self-fulfilling prophecy,” Paul Ceruzzi, a science historian and a curator emeritus at the National Air and Space Museum, told me.

Still, for almost as long as Moore’s Law has existed, people have foretold its imminent demise. If they were wrong, that’s in part because Moore’s original prediction has been repeatedly tweaked (or outright misconstrued), whether by extending his predicted doubling time, or by stretching his meaning of a single chip, or by focusing on computer power or performance instead of the raw number of transistors. Once Moore’s Law had been fudged in all these ways, the floodgates opened to more extravagant and brazen reinterpretations. Why not apply the law to pixels, to drugs, to razor blades?

An endless run of spin-offs ensued. Moore’s Law of cryptocurrency. Moore’s Law of solar panels. Moore’s Law of intelligence. Moore’s Law for everything. Moore himself used to quip that his law had come to stand for just about any supposedly exponential technological growth. That’s another law, I guess: At every turn of the technological-hype cycle, Moore’s Law will be invoked.

The reformulation of Moore’s observation as a law, and then its application to a new technology, creates an air of Newtonian precision—as if that new technology could only grow in scale. It transforms something you want to happen into something that will happen—technology as destiny.

For decades, that shift has held a seemingly irresistible appeal. More than 20 years ago, the computer scientist Ray Kurzweil fit Moore’s Law into a broad argument for the uninterrupted exponential progress of technology over the past century—a trajectory that he still believes is drawing us toward “the Singularity.” In 2011, Elon Musk professed to be searching for a “Moore’s Law of Space.” A year later, Mark Zuckerberg posited a “social-networking version of Moore’s Law,” whereby the rate at which users share content on Facebook would double every year. (Look how that turned out.) More recently, in 2021, Changpeng Zhao, the CEO of the cryptocurrency exchange Binance, cited Moore’s Law as evidence that “blockchain performance should at least double every year.” But no tech titan has been quite as explicit in their assertions as Sam Altman. “This technological revolution,” he says in his essay, “is unstoppable.” No one can resist it. And no one can be held responsible.

Moore himself did not think that technological progress was inevitable. “His whole life was a counterexample to that idea,” Brock told me. “Quietly measuring what was actually happening, what was actually going on with the technology, what was actually going on with the economics, and acting accordingly”—that was what Moore was about. He constantly checked and rechecked his analysis, making sure everything still held up. You don’t do that if you believe you have hit upon an ironclad law of nature. You don’t do that if you believe in the unstoppable march of technological progress.

Moore recognized that his law would eventually run up against a brick wall, some brute fact of physics that would halt it in its tracks—the size of an atom, the speed of light. Or worse, it would cause catastrophe before it did. “The nature of exponentials is that you push them out,” he said in a 2005 interview with Techworld magazine, “and eventually disaster happens.”

Exactly what sort of disaster Moore envisioned is unclear. Brock, his biographer, suspects that it might have been ecological ruin; Moore was, after all, a passionate conservationist. Perhaps he viewed microchips as a sort of invasive species, multiplying and multiplying at the expense of the broader human ecosystem. Whatever the particulars, he was an optimist, not a utopian. And yet, the law bearing his name is now cited in support of a worldview that was not his own. That is the tragedy of Moore’s Law.

The Rice Cooker Has Been Perfect Since 1955

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 04 › rice-cooker-design-history › 673795

In January, Timothy Wu’s electric rice cooker started ailing. His Zojirushi NS-ZCC10—a white, shoe-box-size machine that plays a cheerful jingle when its contents have been steamed to fluffy excellence—wasn’t keeping rice warm for as long as it used to. Following a quarter century of almost daily service, the machine was so loved that his two young daughters (one of whom had years ago dubbed herself “rice monster”) requested a funeral. A few nights after the rice cooker’s demise, the family gathered around the machine, lit candles, and made speeches about what it had done for them. This faithful companion had accompanied Wu through at least four cities, a marriage, the birth of two children, and jobs in both the Obama and Biden administrations, outliving as many as 10 phones, several computers, and multiple cars. “There are not that many things in life which are utterly reliable, in some ways completely selfless, and so giving,” Wu, a professor at Columbia Law School and a prominent critic of Big Tech, told me.

The rice cooker, after all, is a perfect appliance in basically every way: a tabletop device that tells you what it does (cooks rice) and does what it says it will (cooks rice) with ease and without fail. You measure grains and water in a ratio provided by the cooker, pour everything into its inner pot, close the lid, and press a button. Within 30 minutes or so, you will have the ideal bowl of rice—pleasantly chewy, with grains that are not clumpy or dry. The machine automates an otherwise fiendish process: “If you’re cooking rice with a stovetop and a pot, you either have to use a timer or you have to really carefully notice when the water has stopped simmering,” the chef and author J. Kenji López-Alt told me. “And it’s really difficult to do that by eye.” Just a bit too much or too little water, rice, heat, or cooking time can produce a gloopy or burnt mess.

Not only is the automatic electric rice cooker perfect, but it has been so for decades—perhaps since the first model went on sale, in 1955, and certainly since engineers harnessed more advanced technologies in the ’70s and ’80s. Many models on the market today work in functionally the same way as the ones sold generations ago, and in some cases, the similarities go even further. Wu’s new rice cooker, also a Zojirushi NS-ZCC10, is utterly indistinguishable from the now-deceased one he bought in the ’90s: a spitting image in shape, buttons, elephant logo, and all. The finished rice is just as good. So much modern technology, especially in disruption-obsessed Silicon Valley, promises that over time it will improve dramatically and inevitably—a computer that was the size of a room in 1955 can now fit into your pocket. But the rice cooker hasn’t changed much at all, because it hasn’t needed to.

The fact that this rice cooker worked for 25+ years of constant usage without fail makes me want to praise its engineers -- and the fact that the new model is identical to the old suggests they knew they got it right. pic.twitter.com/C13sxEveQC

— Tim Wu (@superwuster) January 31, 2023

The simple, static elegance of rice cookers is not especially common in the United States, the self-proclaimed home of innovation and progress where so many other gadgets have made it big. The average American does not cook much rice compared with much of Asia, and only 13 percent of American homes use a rice cooker. But these marvelous machines are near ubiquitous in much of East and Southeast Asia, where rice is a staple: In the rice cooker’s birthplace, Japan, 89 percent of multi-person households own one.

[Read: J. Kenji López-Alt thinks you’ll be fine with an induction stove]

This kitchen masterpiece was developed as the country was rebuilding after World War II, when a Toshiba salesman advertising a washing machine to housewives learned that preparing rice three times a day was more arduous than doing laundry. The traditional Japanese method of cooking rice, in earthenware pots known as kama over a stove called a kamado, required constantly watching and adjusting the heat. Realizing a business opportunity, the salesman proposed that an engineer design something for Toshiba that could cook rice automatically. The engineer knew little about cooking rice, but he asked his wife, Fumiko Minami, to help. She spent two years studying her kama, other rice-cooking appliances, and various prototypes, as the historian Helen Macnaughtan has documented, eventually arriving at the technique that still powers the simplest models today.

At its core, the greatest kitchen appliance requires just a thermometer and a heat source. Assuming your proportions are right, rice is fully cooked when all of the water in the pot has been absorbed or evaporated. To track that, the first Toshiba rice cookers used a bimetallic strip that senses when the pot surpasses 212 degrees Fahrenheit, the boiling point of water, and turns off the machine. The appliance’s internal temperature can surpass that point only when all the liquid is gone and, therefore, the rice is finished. “It’s a foolproof way of cooking rice that’s way more reliable than anything you could do in a pot on the stove,” López-Alt said.
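For readers who want the mechanism spelled out, here is a minimal sketch of that shutoff logic in Python; the sensor readings and function names are hypothetical illustrations, not an actual appliance’s firmware.

```python
# Minimal sketch of the bimetallic-switch logic described above.
# The temperature sensor and its readings are made up for illustration.

BOILING_POINT_F = 212  # boiling point of water at sea level


def cook_rice(read_pot_temperature):
    """Keep heating until the pot climbs past the boiling point.

    While liquid water remains, the pot stays pinned near 212 degrees F;
    only once the water has been absorbed or evaporated can the temperature
    rise above it, which is the cue that the rice is done.
    """
    while True:
        if read_pot_temperature() > BOILING_POINT_F:
            return "rice is done; switch off the heating element"
        # otherwise: keep the element on and let the water do the work


# Example run with a fake sensor that "runs dry" after a few readings.
if __name__ == "__main__":
    readings = iter([150, 200, 212, 212, 212, 216])
    print(cook_rice(lambda: next(readings)))
```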

After testing the final prototype near a steaming bathroom, under a scorching sun, and in an ice warehouse, Toshiba released the first rice cooker in December 1955. In Japan, the technology was immediately miraculous. Within a year, Toshiba was producing 200,000 rice cookers a month. By 1960, half of Japanese households had one—and the appliance was spreading to neighboring countries. After acquiring a rice cooker, “people felt like they were not that poor anymore,” Yoshiko Nakano, a professor of management at the Tokyo University of Science and the author of Where There Are Asians, There Are Rice Cookers, told me. In her research, Nakano found that for working-class households across East Asia, the new machines were more life-altering than televisions or refrigerators, freeing many women from time-consuming drudgery.

The electric rice cooker has evolved from Minami’s original design. Manufacturers quickly added a function to keep rice warm for many hours, obviating the need to cook multiple batches a day. In 1979, they introduced microchips, which could modulate temperature and cook time based on factors including the volume and type of rice. Then came induction heating in 1988 and pressure cooking in 1992. Many of these steps forward in technology have really brought the rice cooker back in time—making it better emulate the traditional kamado cooking method, says Marilyn Matsuba, a marketing manager at Zojirushi. Microchips modulate temperature in a method similar to what people used to do manually; induction heating and pressure cooking mimic the traditional earthenware pot and its double lid. Over the years, rice cookers have also become better at handling some varieties not commonly found in East Asia, such as long-grain basmati.

Manufacturers have continued to tweak and improve their most advanced models, which can cost more than $700. Zojirushi’s most expensive rice cooker accepts feedback on the quality of each batch of rice and uses AI to personalize its cooking cycle to each user’s tastes. And local variations exist, such as a machine that makes tahdig, the crispy-bottom Iranian rice dish. But many popular rice cookers on the market today, especially in the U.S., still use the decades-old thermometer or microchip methods. And even the microchips may be unnecessary. The highest-rated, cheapest models on Amazon, which run about $20, are thermometer-based, and various comparisons from food writers and publications find that the simple models work great. López-Alt, who eats rice many times a week and is known for testing recipes and equipment with scientific rigor, owns an old-fashioned rice cooker. Even Matsuba, of Zojirushi, told me that while the company’s latest technologies do make better rice, “perhaps the cost-benefit isn’t as clear to the consumer,” especially to American consumers who don’t scrutinize the minutiae of cooked rice as people do in Japan.

As an American who eats plenty of rice, I had to decide for myself. This weekend, I tested an old, bimetallic-switch-based rice cooker against a microchip-wielding Zojirushi, which sells for more than $200. The fancy machine’s rice was a bit fluffier, the simple one’s rice just barely mushier. But the far-cheaper technology cooked rice almost as well in 19 minutes versus the Zojirushi’s 46-minute cycle, which soaks the rice beforehand and lets it steam briefly once finished. Without several side-by-side samples, I’m not sure I would have noticed a difference. My verdict: perfect since 1955.

[Read: The Instant Pot will not solve all of life’s problems]

That’s possible because the rice cooker is a modest tool, aspiring to a simple, millennia-old task. Not only are its mechanics an anachronism, then, but so is its spirit—it’s not trying to cram several functions into a single product, nor is it maddening to use. Compare the rice cooker’s simplicity to the seven-in-one Instant Pot, the Omni Cook (a blender that can sous vide, self-clean, and knead, among 18 other functions), or the Ninja Foodi (an air fryer–pressure cooker chimera)—a class of kitchen appliances that seek to replace your entire kitchen. In the pursuit of doing everything, these gadgets rarely do any one thing as well as we would like, which is perhaps why the Instant Pot’s popularity is plummeting. “Many other technologies in our life are frustrating and often have their own agendas; they want to advertise products to us or do other things,” Wu told me. “The rice cooker is just selflessly serving.” Having a product that is straightforward and works well every time is a vanishingly rare experience, in the kitchen or outside it.

A few months after the funeral, over Easter weekend, Wu and his family took out their retired Zojirushi. His daughters thought “it was dead,” he said, “but it’s not”—only the keep-warm function had degraded. When he successfully cooked a pot of rice with the old rice cooker, “the children were overjoyed, and they cheered.” It was a resurrection, if only of sorts: A single rice cooker can falter, but the rice cooker can never really die.

The End of BuzzFeed News Means the Coming of a New Internet

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 04 › buzzfeed-news-end-political-influence-cultural-impact › 673803

If you’re curious to know what it was like to work at BuzzFeed News in the salad days of the mid-2010s, here is a representative anecdote: I was sitting at my desk one morning, dreadfully hungover and editing a story titled “The Definitive Oral History of the Wikipedia Photo for ‘Grinding,’” when the sounds of a screaming man broke my trance. I looked up to see Tracy Morgan three feet away, surrounded by a small entourage of handlers.

Morgan was barreling through the office, lifting his shirt up, smacking his belly, and cracking jokes about how pale all of us internet writers looked. I remember our lone investigative reporter, Alex Campbell, scurrying away from his desk a row away from mine to continue his reporting call in silence. A few months later, the story he’d been working on would help free an innocent woman from prison. Morgan’s chattering faded, and the newsroom returned to its ambient humming of frenetic keyboard clacking—the sound of the internet being made. Hardly anyone had batted an eye.

I worked at BuzzFeed News for nearly six years—from March 2013 until January 2019. For most of that time, it felt a bit like standing in the eye of the hurricane that is the internet. Glorious chaos was everywhere around you, yet it felt like the perfect vantage to observe the commercial web grow up. I don’t mean to sound self-aggrandizing, but it is legitimately hard to capture the cultural relevance of BuzzFeed to the media landscape of the mid-2010s, and the excitement and centrality of the organization’s approach to news. There was “The Dress,” a bit of internet ephemera that went so viral, we joked that that day might have been the last good one on the internet. There was the Facebook Live experiment in which two bored staffers got 800,000 people to concurrently watch them put rubber bands on a watermelon until it exploded—a piece of content that will live in “pivot to video” infamy.

And for an offshoot of a place (somewhat unfairly) known as a listicle and cat-video factory, BuzzFeed News had an outsize political influence. It published a Donald Trump profile so scathing that it very well may have goaded him into running for president. We got Barack Obama to use a selfie stick and also published the Steele Dossier. Once, I got assigned to follow exotic dancers around at a predawn chicken-wing-eating contest. During Trump’s first press conference as president-elect, I stood next to our editor in chief and watched the soon-to-be leader of the free world single us out as a “failing pile of garbage.” Within an hour, we were selling shirts plastered with the phrase. BuzzFeed News contained multitudes.

I ran into Ben Smith's office to make calls. I couldn't hear a damn thing. Nemtsov's friends were crying to me over the phone about his brutal murder. Everyone else in the office was chanting, "DRESS! DRESS! DRESS!"

— max seddon (@maxseddon) April 20, 2023

One can attribute the site’s cultural relevance, the industry enthusiasm around the work, and even the rivalries and haters, to BuzzFeed News’s unofficial mission: to report on the internet like it was a real place, and to tell stories in the honest, casual tone of the web. At the time I joined, this was, if not a new kind of journalism, certainly an updated model for seeking out stories—one that’s now been fully absorbed by the mainstream. At its simplest, it might have meant mining a viral tweet or Reddit thread for ideas, but more often than not, it meant bearing witness to the joy, chaos, and horrors that would pour across our timelines every day and using them as a starting point for real reporting. It meant realizing, as I and my colleagues did, during the on- and offline manhunt for the Boston Marathon bombers, that a new culture of internet vigilantism was beginning to take hold in digital communities and that the media no longer unilaterally shaped broad news narratives.

Reporting on the internet like it was a real place led some of my colleagues to peer around corners of our politics and culture. In 2015, Joseph Bernstein outlined the way that “various reactionary forces have coalesced into a larger, coherent counterculture”—a phenomenon bubbling up in message boards such as 4chan that he called a “Chanterculture.” To read the piece now is to see the following half decade—reactionary MAGA politics, Trump’s troll armies, our current digital culture warring—laid out plainly. The Chanterculture story is a BuzzFeed News archetype: Movements like this weren’t hard to see if you were spending time in these communities and taking the people in them seriously. Most news organizations, however, weren’t doing that.

People afflicted with Business School Brain who didn’t understand BuzzFeed News (including one of the company’s lead investors) often described it as a tech start-up. This was true only in the sense that the company had an amazing, dynamic publishing platform—a content-management system that updated almost daily with new features based on writer input. But the secret behind BuzzFeed News had nothing to do with technology (or even moving fast). The secret was cultural. Despite the site’s constant bad reputation as a click farm, I was never once told to chase traffic. No editor ever discussed referrals or clicks. The emphasis was on doing the old-fashioned thing: finding an original story that told people something new, held people to account, or simply delighted. The traffic would come.

The place was obsessed with story, not prestige, and its ambition was nearly boundless. It wasn’t afraid of devoting considerable resources to being silly as long as the narrative was good. (The company enabled me to spend weeks reporting an oral history of one day on the internet, sent me to cover political campaigns and rallies, agreed to let me stay in the guest room of a porn producer’s New Hampshire BDSM cabin, and allowed me to fly to Sweden to get a microchip implanted in my hand.) And the company supported hard, serious journalism around the world. As one of my colleagues reminded me today, a common refrain during the BuzzFeed News heyday was that it felt like a fake job. Not because it wasn’t serious work, but because getting paid to work there often felt like getting away with something.

The legacy of BuzzFeed News has two components. The first I described above. This legacy lives on in the stories, as well as the alumni network of brilliant writers, reporters, editors, and artists, who now work in every newsroom on the planet. (There are five of us here at The Atlantic.) The second part is, sadly, much more familiar: It is the tragic story of the digital media industry writ large. It is a familiar tale of mismanagement, low interest rates, unrealistic expectations, greedy, extractive venture capitalists, and the impossibility of exponential growth.

If it felt like a fake job, that’s because, in the harshest financial terms, it was: In 2014, the venture firm Andreessen Horowitz invested $50 million into BuzzFeed News, a number which makes my stomach drop now. I was the technology editor at the time and remember getting pulled into a meeting about it, mostly as a heads-up and an assurance that the investment from Silicon Valley’s buzziest firm would not influence how we covered tech. This turned out to be true. Reporting on tech platforms while working at BuzzFeed News always felt like living in the town whose local politics you covered—you lived it and you wrote about it.

It all would’ve made at least some sense to you, too, if you were 28 and living a millennial subsidy life, taking cheap Ubers and watching Silicon Valley grow invincible. Those next few years were a blur. The new hire emails came in so fast that I stopped opening them. It all made sense then, but today, it looks like the inevitable fate-sealing that comes from making a deal with the venture capital devil.

BuzzFeed News was not, as Andreessen Horowitz’s Chris Dixon once said, a “full-stack startup.” This should’ve been blindingly obvious. The business of news gathering—not content creation—is expensive, and it does not scale. BuzzFeed News’s bread and butter—telling the internet’s stories and leveraging its systems to promote them—was only nominally a technology strategy, and one that was yoked to the success of other venture-funded social-media companies like Facebook. The fate of the entire digital-media ecosystem was dependent on the line going up and to the right in perpetuity—or at least until the money men saw their returns. Just how infectious was this “perpetual growth” mindset? In the mid-2010s, BuzzFeed turned down a rumored $500 million acquisition from Disney, perhaps in part because it wanted to become Disney.

Around the time I left in 2019, it became clear that browsing and attention habits were shifting, turning places like Facebook into ghost towns for politically radicalized Boomers. This was the first time I heard internal rumblings of investor concern. I started hearing people whispering the word profitability—a term I’d never had occasion to hear around the office—a lot more. It took less than four years to fully internalize the lesson that venture capitalism is just a form of gambling: You invest in 10 companies to make money off one, and employees are the chips. News, no matter how much technology you wrap around it, may be a public good, but, if you’re looking for Facebook-level exits, it’s a bad bet.

I am sad and angry that the extractive practices of modern finance, the whims of rich and powerful investors, and the race-to-the-bottom economics of the digital media industry have stripped BuzzFeed for parts. I’m worried, on a practical level, about what might happen to the site’s archives, as well as the nearly 200 people the company plans to lay off. What’s left of the company (including the good, hard-working employees who are not fired) will have to navigate the wreckage created by an industry with a broken economic model. It seems likely that a zombified form of BuzzFeed will become the embodiment of everything the previous version wasn’t: terrified, obsessed with squeezing every ounce of shareholder value from its employees, and constantly bending to the forces of new technology like artificial intelligence, rather than harnessing and growing alongside them.

BuzzFeed News was oriented around the mission of finding, celebrating, and chronicling the indelible humanity pouring out of every nook and cranny of the internet, so it makes sense that any iteration that comes next will be more interested in employing machines to create content. The BuzzFeed era of media is now officially over. What comes next in the ChatGPT era is likely to be just as disruptive, but I doubt it’ll be as joyous and chaotic. And I guarantee it’ll feel less human.

Ali Wong Has Never Been Funnier—Or More Heartbreaking

The Atlantic

www.theatlantic.com › culture › archive › 2023 › 04 › beef-netflix-review-ali-wong-steven-yeun › 673674

The first time I saw Amy, Ali Wong’s character in Beef, I found myself sitting up a little straighter and leaning a little closer toward my TV. I knew Wong had a starring role, but Amy caught me off guard. Wearing a cream-colored bucket hat, her hands gripping the steering wheel and her face frozen in fear, she looked nothing like what I expected of the faceless driver I’d just watched in the show’s opening minutes—the one who’d careened recklessly across lanes, taunting, threatening, and throwing trash at a stranger.

Then again, Beef likes toying with assumptions of who its characters might be and where its story might veer next. The half-hour-episode Netflix series from the first-time showrunner Lee Sung Jin (Silicon Valley) is hard to categorize; it’s simultaneously a black comedy, a domestic drama, and a psychological thriller. It starts with a road-rage incident that Amy sets off when she flips the bird at Danny (Steven Yeun) in a parking lot after he nearly backs his truck into her Benz. Like a gnarlier Changing Lanes, their ensuing feud leads to an escalating series of vengeful acts that build from petty pranks into horrifying, morally questionable schemes. That the show feels balanced at all is down to how well drawn both leads are. Amy is a wealthy entrepreneur with a loving husband, a cute daughter, and a state-of-the-art mansion. Danny is a contractor barely making rent who shares a cramped apartment with his slacker brother. Both are deeply, desperately unhappy.

Yet of the two, Amy is less immediately sympathetic. Danny lives a difficult paycheck-to-paycheck lifestyle, his every failure deepening his belief that the world works against him. Amy, meanwhile, has no obvious reason to be miserable. She has it all—if “all” is defined as a stellar career and a nuclear family. Lee, who was inspired to create the series after getting caught in a road-rage incident himself, initially conceived of the character as a white man, matching the identity of the driver he’d encountered in real life. But quickly—in “maybe half a day,” Lee told me over the phone—he dropped the idea; he didn’t want the series to be merely about racial dynamics or to boil down to a culture clash. Later, with Wong in mind, he envisioned a new character: a woman whose self-made success is the cause of her downfall. Not that Beef tears Amy apart; instead, the series grants her more and more achievements, dissecting how her suffocating ambition pushes her to act on her worst impulses against a complete stranger. She is TV’s most compelling antiheroine of late: someone who knows she’s her own worst enemy and who, as Lee explained, “feels very much trapped in a maze of her own creation.”

[Read: A comedy special about wanting to cheat on your husband]

Consider how Amy constantly questions her power and instinctively tries to hide that self-doubt. She may appear to be a Strong Modern Woman—she agrees to photos with fans and participates in glitzy panels about female entrepreneurs, where she says things like “Despite what everybody tells you, you can have it all!”—but she’s uncomfortable with the image. The show doesn’t place her in a male-dominated field; she owns an artsy, minimalist plant business, and she’s working on selling her company to the female owner of a retail chain. In the presence of similarly well-off women, she wears a permanent smile through gritted teeth. She dresses in soft knits and unwrinkled silks, as if to distance herself from the girlboss uniform of power suits and pencil skirts. “There was something interesting to us as writers about someone who has so much chaos going on inside but [who’s] trying to cover that with as much calm and people-appeasing energy as possible,” Lee said. Amy knows that expressing her discontent with her apparently perfect life would ruin people’s impression of her as a role model. And despite her reluctance to play the part, she likes knowing that she is considered an inspiration.

Besides, when she does try to explain how she feels, the people closest to her can’t understand why she’s uneasy. In one wrenching scene, Amy divulges her malaise to her husband, George (Joseph Lee). “There’s this feeling I’ve had for a long time,” she says, squeezing out her words between pauses. “I don’t remember when it started; I can’t pinpoint exactly when or why … It feels like the ground, but, like, right here.” She gestures to her chest as she begins to cry. George reacts in a supportive manner: “I know a lot of people who battled depression and won,” he says—but the statement only causes Amy to shut their conversation down. His words are too positive, too insistent that she beat whatever she’s got. Through her, Beef highlights a complicated twist on loneliness: Amy has a healthy network of loved ones, but the more encouraging they are, the worse she feels. She’s fortunate to have a doting husband and the means to seek help. So why can’t she do what’s expected of her and feel better?

The idea that existential sadness can come for anyone is personal for Lee: He told me that the scene of Amy’s confession came directly from a moment in the writers’ room during which he attempted to describe his own anxiety, and ended up weeping in front of the staff. Like Amy, Lee hasn’t been able to shake off the weight in his chest: “That feeling is still very much there. It doesn’t go away … Writing this character was figuring out a way to accept that—that for some of us, that feeling is just permanent.” Amy’s attempts to find catharsis lead her to make decisions that range from farcical to frightening, if not outright criminal. In her, Lee conveys the thrill and desperation of that never-ending search for release—a journey that pushes Beef forward, step by fascinating step. Wong sells each of them. She’s never been funnier, or more heartbreaking.

Tim Cook and Bob Iger to meet with House China committee members

CNN

www.cnn.com › 2023 › 04 › 05 › tech › house-chine-committee-big-tech-hollywood-meeting › index.html

Members of a House panel focused on US-China competition are set to meet with leaders from Silicon Valley and Hollywood during a multi-day tour of California beginning today, according to a source close to the committee.

AI Isn’t Omnipotent. It’s Janky.

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 04 › artificial-intelligence-government-amba-kak › 673586

In the past few months, artificial intelligence has managed to pass the bar exam, create award-winning art, and diagnose sick patients better than most physicians. Soon it might eliminate millions of jobs. Eventually it might usher in a post-work utopia or civilizational apocalypse.

At least those are the arguments being made by its boosters and detractors in Silicon Valley. But Amba Kak, the executive director of the AI Now Institute, a New York–based group studying artificial intelligence’s effects on society, says Americans should view the technology with neither a sense of mystery nor a feeling of awed resignation. The former Federal Trade Commission adviser thinks regulators need to analyze AI’s consumer and business applications with a shrewd, empowered skepticism.  

Kak and I discussed how to understand AI, the risks it poses, whether the technology is overhyped, and how to regulate it. Our conversation has been condensed and edited for clarity.

Annie Lowrey: Let’s start off with the most basic question: What is AI?

Amba Kak: AI is a buzzword. The FTC has described the term artificial intelligence as a marketing term. They put out a blog post saying that the term has no discernible, definite meaning! That said, what we are talking about are algorithms that take large amounts of data. They process that data. They generate outputs. Those outputs could be predictions, about what word is going to come next or what direction a car needs to turn. They could be scores, like credit-scoring algorithms. They could be algorithms that rank content, like in your news feed.

Lowrey: That sounds like technology that we already had. What’s different about AI in the past year or two?

Kak: You mean “generative AI.” Colloquially understood, these systems generate text, images, and voice outputs. Like many other kinds of AI, generative AI relies on large and often complex models trained on massive data sets—huge amounts of text scraped from sites like Reddit or Wikipedia, or images downloaded from Flickr. There are image generators, where you put in a text prompt and the output is an image. There are also text generators, where you put in a text prompt and you get back text.
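For a concrete sense of the prompt-in, text-out pattern Kak describes, here is a minimal sketch using the open-source Hugging Face transformers library and a small public model; the model choice and settings are assumptions for illustration, not the proprietary systems discussed in this interview.

```python
# Illustrative "text prompt in, text out" example using an open-source model.
# The model (gpt2) and generation settings are assumptions for demonstration;
# they are not the systems discussed above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The internet of the 2010s was"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The model continues the prompt by predicting likely next tokens,
# based on patterns in its training data.
print(outputs[0]["generated_text"])
```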

[Read: What have humans just unleashed?]

Lowrey: Do these systems “think”? Are they more “human” or more “intelligent” than past systems working with huge amounts of data?

Kak: The short answer is no. They don’t think. They’re not intelligent. They are “haphazardly stitching together sequences of linguistic forms” they observe in the training data, as the AI researchers Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell put it. There are vested interests that want us to see these systems as being intelligent and a stepping stone to the singularity and “artificial general intelligence.”

Lowrey: What does singularity mean in this context?

Kak: It has no clear meaning. It’s this idea that machines will be so intelligent that they will be a threat to the human race. ChatGPT is the beginning. The end is we’re all going to die.

These narratives are purposefully distracting from the fact that these systems aren’t like that. What they are doing is fairly banal, right? They’re taking a ton of data from the web. They’re learning patterns, spitting out outputs, replicating the learning data. They’re better than what we had before. They’re much more effective at mimicking the kind of interaction you might have with a human, or the kind of result you might get from a human.

Lowrey: When you look at the AI systems out there, what do you see as the most immediate, concrete risk for your average American?

Kak: One broad, big bucket of concerns is the generation of inaccurate outputs. Bad advice. Misinformation, inaccurate information. This is especially bad because people think these systems are “intelligent.” They’re throwing medical symptoms into ChatGPT and getting inaccurate diagnoses. As with other applications of algorithms—credit scoring, housing, criminal justice—some groups feel the pinch worse than others. The people who might be most at risk are people who can’t afford proper medical care, for instance.

A second big bucket of concerns has to do with security and privacy. These systems are very susceptible to being gamed and hacked. Will people be prompted to disclose personal information in a dangerous way? Will outputs be manipulated by bad actors? If people are using these as search engines, are they getting spammed? In fact, is ChatGPT the most effective spam generator we’ve ever seen? Will the training data be manipulated? What about phishing at scale?

A third big bucket is competition. Microsoft and Google are well poised to corner this market. Do we want them to have control over an even bigger swath of the digital economy? If we believe—or are being made to believe—that these large language models are the inevitable future, are we accepting that a few companies have a first-mover advantage and might dominate the market? The chair of the FTC, Lina Khan, has already said the government is going to scrutinize this space for anticompetitive behavior. We’re already seeing companies engage in potentially anticompetitive behavior.

Lowrey: One issue seems to be that these models are being created with vast troves of public data—even if people never intended that data to be used for this purpose. And the creators of the models are a small elite—a few thousand people, maybe. That seems like an ideal way to amplify existing inequalities.

[Read: Why are we letting the AI crisis just happen?]

Kak: OpenAI is the company that makes ChatGPT. In an earlier version of the model, some of the training data was sourced from Reddit, a site whose user-generated content is known for being abusive and biased against gender minorities and members of racial and ethnic minority groups. It should be no surprise that the AI system reflects that reality.

Of course the risk is that it perpetuates dominant viewpoints. Of course the risk is that it reinforces power asymmetries and inequalities that already exist. Of course these models are going to reflect the data that they’re trained on, and the worldviews that are embedded in that data. More than that, Microsoft and Google are now going to have a much wider swath of data to work from, as they get these inputs from the public.

Lowrey: How much is regulating AI like regulating social media? Many of the concerns seem the same: the viral spread of misinformation and disinformation, the use and misuse of truly enormous quantities of personal information, and so on.   

Kak: It took a few tech-driven crisis cycles to bring people to the consensus that we need to hold social-media companies accountable. With Cambridge Analytica, countries that had moved one step in 10 years on privacy laws all of a sudden moved 10 steps in one year. There was finally momentum across political ideologies. With AI, we’re not there. We need to galvanize the political will. We do not need to wait for a crisis.

In terms of whether regulating AI is like regulating other forms of media or tech: I get tired of saying this, but this is about data protection, data privacy, and competition policy. If we have good data-privacy laws and we implement them well, if we protect consumers, if we force these companies to compete and do not allow them to consolidate their advantages early—these are key components. We’re already seeing European regulators step in using existing data-privacy laws to regulate AI.  

Lowrey: But we don’t do a lot of tech regulation, right? Not compared with, say, the regulation of energy utilities, financial firms, providers of health care.

Kak: Big banks are actually a useful way of thinking about how we should be regulating these firms. The actions of large financial firms can have diffuse, unpredictable effects on the broader financial system, and thus the economy. We cannot predict the particular harm that they will cause, but we know they can. So we put the onus on these companies to demonstrate that they are safe enough, and we have a lot of rules that apply to them. That’s what we need to have for our tech companies, because their products have diffuse, unpredictable effects on our information environment, creative industries, labor market, and democracy.

Lowrey: Are we starting from scratch?

Kak: Absolutely not. We are not starting with a blank slate. We already have enforcement tools. This is not the Wild West.

Generative AI is being used for spam, fraud, plagiarism, deepfakes, that kind of stuff. The FTC is already empowered to tackle these issues. It can force companies to substantiate their claims, including the claim that they’ve mitigated risks to users. Then there are the sectoral regulators. Take the Consumer Financial Protection Bureau. It could protect consumers from being harmed by chatbots in the financial sector.

Lowrey: What about legislative proposals?

Kak: There are bills that have been languishing on the Hill regarding algorithmic accountability, algorithmic transparency, and data privacy. This is the moment to strengthen them and pass them. Everybody’s talking about futuristic risks, the singularity, existential risk. They’re distracting from the fact that the thing that really scares these companies is regulation. Regulation today.

This would address questions like: What training data are you using? Where does it come from? How are you mitigating against discrimination? How are you ensuring that certain types of data aren’t being exploited, or used without consent? What security vulnerabilities do you have and how are you protecting against them? It’s a checklist, almost. It sounds boring. But you get these companies to put their answers on paper, and that empowers the regulators to hold them accountable and initiate enforcement when things go wrong.

In some legislative proposals, these rules won’t apply to private companies; they cover only the government’s use of algorithms. But they give us a framework we can strengthen and amend to cover private businesses. And I would say we should go much further on the transparency and documentation elements. Until these companies do due diligence, they should not be on the market. These tools should not be public. They shouldn’t be able to sell them.

Lowrey: Does Washington really have its head around this?

Kak: It’s always tempting to put the blame on lawmakers and regulators. They’re slow to understand this technology! They’re overwhelmed! That misses the point, and it isn’t true. It works in the interest of industry. OpenAI and Anthropic and all these companies are telling lawmakers and the public that nobody’s as worried about this as they are: We’re capable of fixing it. But these are magic, unknowable systems. Nobody but us understands them. Maybe we don’t even understand them.

There are promising signs that regulators aren’t buying that narrative. Regulators at the FTC and elsewhere are saying, We’re going to ask questions. You’re going to answer. We’re going to set the terms of the debate, not you. That’s the crucial move. We need to place the burden on companies to assure regulators and lawmakers and the public. Lawmakers don’t need to understand these systems perfectly. They just need to require the companies to prove that they’re not unleashing systems on the public that they think might do harm.

Lowrey: Let’s talk about the hypothetical long-range risk. A recent public letter called for a six-month halt on AI development. Elon Musk and hundreds of other tech leaders signed it. It asked, and I quote: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” Are these concerns you share? What do you make of those questions?

Kak: Yeah, no. This is a perfect example of a narrative meant to frighten people into complacency and inaction. It shifts the conversation away from the harm these AI systems are creating in the present. The issue is not that they’re omnipotent. It is that they’re janky now. They’re being gamed. They’re being misused. They’re inaccurate. They’re spreading disinformation.

Lowrey: If you were a member of Congress and you had Sam Altman, the head of OpenAI, testifying before you, what would you ask him?

Kak: Apart from the laundry list of gaps in knowledge on training data, I would ask for details about the relationship between OpenAI and Microsoft, information about what deals they have under way—who’s actually buying this system and how are they using it? Why did the company feel confident enough that it had mitigated enough risk to go forward with commercial release? I would want him to show us documentation, receipts of internal company processes.

Let’s really put him on the spot: Is OpenAI following the laws that exist? My guess is he’d answer that he doesn’t know. That’s exactly the problem. We’re seeing these systems being rolled out with minimal internal or external scrutiny. This is key, because we’re hearing a lot of noise from these executives about their commitments to safety and so on. But surprise! Conspicuously little support for actual, enforceable regulation.

Let’s not stop at Sam Altman, just because he’s all over the media right now. Let’s call Satya Nadella of Microsoft, Sundar Pichai of Google, and other Big Tech executives too. These companies are competing aggressively in this market and control the infrastructure that the whole ecosystem depends on. They’re also significantly more tight-lipped about their policy positions.

Lowrey: I guess a lot of this will become more concrete when folks are using AI technologies to make money. Companies are going to be using this stuff to sell cars soon.

Kak: This is an expensive business, whether it’s the computing costs or the cost of human labor to train these AI systems to be more sophisticated or less toxic or abusive. And this is at a time when financial headwinds are affecting the tech industry. What happens when these companies are squeezed for profit? Regulation becomes more important than ever, to prevent the bottom line from dictating irresponsible choices.

Lowrey: Let’s say we don’t regulate these companies very well. What does the situation look like 20 years from now?

Kak: I can definitely speculate about the unreliable and unpredictable information environment we’d find ourselves in: misinformation, fraud, cybersecurity vulnerabilities, and hate speech.

Here’s what I know for sure. If we don’t use this moment to reassert public control over the trajectory of the AI industry, in 20 years we’ll be on the back foot, responding to the fallout. We didn’t just wake up one morning with targeted advertising as the business model of the internet, or suddenly find that tech infrastructure was controlled by a handful of companies. It happened because regulators didn’t move when they needed to. And the companies told us they would not “be evil.”

With AI, we’re talking about the same companies. Rather than take their word that they’ve got it covered, rather than getting swept up in their grand claims, let’s use this moment to set guardrails. Put the burden on the companies to prove that they’re going to do no harm. Prevent them from concentrating power in their hands.

A Stylish Spy Caper

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 04 › a-stylish-spy-caper › 673602

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Good morning, and welcome back to The Daily’s Sunday culture edition, in which one Atlantic writer reveals what’s keeping them entertained.

Today’s special guest will be familiar to readers of The Daily: the Atlantic staff writer Tom Nichols. Tom’s incisive current-events analysis and swashbuckling prose are most frequently found in weekday editions of this very newsletter. His writing on Russia, national security, and, of course, American politics also regularly appears elsewhere in our magazine.

Anyone who knows Tom, either personally or through his writing, is likely aware that he’s just a bit of a 1980s film and TV buff. But he’s been known to dip a toe into the 21st century too. These days, he’s engrossed in the fourth and final season of Succession, eagerly anticipating the return of the Star Trek prequel series Strange New Worlds, and treasuring a Robert Lowell poem that was first published—as it happens—in The Atlantic.

First, here are three Sunday reads from The Atlantic:

Something odd is happening with handbags.

Why Americans care about work so much

There’s exactly one good reason to buy a house.

The Culture Survey: Tom Nichols

The upcoming entertainment event I’m most looking forward to: Well, the honest answer is that I’m glued to the final season of Succession because I’m in it. (I have a very small part as a cranky right-wing pundit. I know: “Nice reach, Tom.”) And Succession, of course, is an incredible series.

But I’m very excited to hear that Strange New Worlds, the Star Trek prequel series, is coming back for at least two more seasons. Of course, I’m already familiar with SNW; the debut that has me most fascinated, however, is the upcoming Amazon Prime series Fallout, based on the immensely popular game franchise. (The first Fallout game debuted in 1997, so that tells you how long I’ve been playing it.) The Fallout world is a weird place; if you’ve seen the series Hello Tomorrow!, where the 1950s are reimagined with floating cars and space travel and malfunctioning robot bartenders, it’s something like that.

Except it all takes place after a nuclear war. So I’m hoping they get that right. [Related: The real Succession endgame]

The television show I’m most enjoying right now: I just discovered A Spy Among Friends, a limited series based on a book about the infamous Kim Philby espionage affair of the early 1960s. It’s beautifully done. I began my career in Soviet and Russian affairs, and so I’m familiar with the details of the Philby spy caper—which is good, because the series assumes a lot of familiarity with the history. But it’s the kind of period drama you can enjoy watching just for the fine details of its production and re-creation of an era. [Related: Washington—the fifth man (from 1988)]

A quiet song that I love, and a loud song that I love: I’m going to be clever here and say that I have always loved a song that is both quiet and loud: “Don’t Want to Wait Anymore” by The Tubes. You’ll have to hear it to get that comment, I think.

A musical artist who means a lot to me: I have a particular attachment to Joe Jackson. Most people will know him only from a few hits back in the ’80s, such as “Steppin’ Out,” but I feel like he’s one of those artists whose work I have been able to appreciate at every stage of my life. I enjoyed his autobiography, A Cure for Gravity, which is a memoir of growing up and falling in love with music, rather than some trashy rock tell-all. There’s a self-awareness and sly humor and even an awkwardness in his songs that can still make me as pensive now as when I first heard them 30 or 40 years ago.

I suppose I’d add Al Stewart here too. His songs about history are both beautiful and nerdy: He’s a perfectionist, and I have to love a guy who once lamented that he accidentally referred to Henry Tudor as Henry Plantagenet. I recently saw him do a small concert where he performed his album Year of the Cat in its entirety, and at my age, I appreciate a rock star who can perform well while aging gracefully. (Mick Jagger: Take a lesson.)

A painting, sculpture, or other piece of visual art that I cherish: “The Oath of the Horatii,” by Jacques-Louis David. Don’t ask me why; I saw it as a teenager in a bookstore in Boston, and I couldn’t take my eyes off of it. There was something about the stilted drama of the scene, the valiant backstory about the defenders of Rome, that made me stare. (Also, I am slightly color-blind, so maybe the vivid reds and silver in the painting got through my defective eyeballs.) When I began teaching military officers, my understanding of the painting changed: I came to see it as both a celebration of military loyalty and, at least to me, a warning about the seductive glorification of war. For some 20 years, I kept a print of it on the wall of my office at the Naval War College.

A cultural product I loved as a teenager and still love, and something I loved but now dislike: One of the lousier jobs I had as a teenager was as a janitor at the old Spalding sports-equipment company, which back then was headquartered in my hometown. But one of the perks was that some of the offices I had to clean were air-conditioned, so I’d goof off while working the evening shift by reading the books that the art department had strewn around their desks. That’s where I discovered Cape Light, a book of photographs by Joel Meyerowitz. I fell in love with that book at 18 years old, and I still keep a copy right next to my desk for when I need a soothing mental and visual break. My house is decorated with several large prints from the book.

The thing I loved as a teen that I hate now? Vintage arena rock. I was driving along the other day and the band Kansas came on the radio, and I thought: Wait—didn’t I used to love this stuff? The days when I would hear Asia or Kansas and turn the volume to 11 are long over for me. (Some things haven’t changed, however: I am infamous on social media for my love of the group Boston, and my disdain—which I have had since childhood—for Led Zeppelin.) [Related: More than an album cover (from 2015)]

The last debate I had about culture: I cannot pinpoint the last debate I had about culture, because so many people think my taste is so awful on so many things that it’s more like an ongoing project than a single debate. [Related: The complex psychology of why people like things (from 2016)]

A poem, or line of poetry, that I return to: I’m not literate enough to fully appreciate most poetry, but I was introduced to the work of Robert Lowell in college, and it stuck. Perhaps I feel a connection to him as a New Englander; I reread “For the Union Dead”—published in The Atlantic in 1960, the year of my birth—every year. But the line that kept coming back to me over the years, and now occurs to me more often as I age, is from “Terminal Days at Beverly Farms,” a very short poem in which Lowell paints a spare, melancholy, almost Edward Hopper–like portrait in words of his father’s last days as a retired naval officer. The old man, restless and in declining health, lived in Beverly Farms, on the North Shore of Massachusetts, an area where I had family and that I have loved since childhood. I have been to the “Maritime Museum in Salem” where his father spent many leisurely hours, and I have ridden the commuter trains to Boston whose tracks shone “like a double-barrelled shotgun through the scarlet late August sumac.”

But it’s the last line that gets to me, because it’s such a simple observation about the penultimate moments before death. I don’t mean to end here on a morbid note, because oddly, this line does not depress me. But I’ve often thought of it because it’s likely how most people die—without speeches or final declarations or drama.

Father’s death was abrupt and unprotesting.

His vision was still twenty-twenty.

After a morning of anxious, repetitive smiling,

his last words to Mother were:

“I feel awful.”

[Related: The difficult grandeur of Robert Lowell (from 1975)]

Read past editions of the Culture Survey with Amy Weiss-Meyer, Kaitlyn Tiffany, Bhumi Tharoor, Amanda Mull, Megan Garber, Helen Lewis, Jane Yong Kim, Clint Smith, John Hendrickson, Gal Beckerman, Kate Lindsay, Xochitl Gonzalez, Spencer Kornhaber, Jenisha Watts, David French, Shirley Li, David Sims, Lenika Cruz, Jordan Calhoun, Hannah Giorgis, and Sophie Gilbert.

The Week Ahead

1. Pretty Baby: Brooke Shields, a two-part documentary series on the former child model and actress (begins streaming Monday on Hulu)

2. A Living Remedy, a meditation on American inequality and the second memoir by the best-selling author and Atlantic contributing writer Nicole Chung (on sale Tuesday)

3. Air, from the director Ben Affleck, traces the blockbuster footwear collaboration between Nike and Michael Jordan that would cement both of their legacies (in theaters Wednesday)

Essay

Illustration by Daniel Zender / The Atlantic. Source: Getty

A Tale of Maternal Ambivalence

By Daphne Merkin

Motherhood has always been a subject ripe for mythmaking, whether vilification or idealization. Although fictional accounts, from antiquity until today, have offered us terrible, even treacherous mothers, including Euripides’s Medea and Livia Soprano, depictions of unrealistically all-good mothers, such as Marmee from Little Women, are more common and provide a sense of comfort. Maternal characters on the dark end of the spectrum provoke our unease because their monstrous behavior so clearly threatens society’s standards for mothers. They show that mother love isn’t inevitable, and that veering off from the expected response to a cuddly new infant isn’t inconceivable.

If motherhood brings with it the burden of our projected hopes, new mothers are especially hemmed in by wishful imagery, presumed to be ecstatically bonding with their just-emerged infants as they suckle at milk-filled breasts, everything smelling sweetly of baby powder. The phenomenon of postpartum depression, for instance, a condition that affects 10 to 15 percent of women, has been given short shrift in literature and other genres when not ignored entirely. This is true as well when it comes to the evocation of maternal ambivalence, the less-than-wholehearted response to the birth of a child, which is mostly viewed as a momentary glitch in the smooth transition from pregnancy to childbirth to motherhood instead of being seen as a sign of internal conflict.

Read the full article.

More in Culture

What California means to writers

A romantic comedy you never want to end

Is Silicon Valley beyond redemption?

Dungeons & Dragons and the return of the sincere blockbuster

Seven books the critics were wrong about

The real Succession endgame

‘Rock and roll ain’t what it used to be.’

Catch Up on The Atlantic

An astonishing, frightening first for the country

Childbirth is no fun. But an extremely fast birth can be worse.

Ron DeSantis chose the wrong college to take over

Photo Album

VCG / Getty

Tourists pick tea leaves in Fujian province, China; demonstrators convene in Israel, France, and the Texas State Capitol in Austin; and more, in our editor’s photo selections of the week.
