
Elon Musk’s Twitter Is a Disaster for Disaster Planning

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 02 › elon-musk-twitter-blue-natural-disaster-crisis-emergency-response › 673209

For years, Twitter was at its best when bad things happened. Before Elon Musk bought it last fall, before it was overrun with scammy ads, before it amplified fake personas, and before its engineers were told to get more eyeballs on the owner’s tweets, Twitter was useful in saving lives during natural disasters and man-made crises. Emergency-management officials have used the platform to relay timely information to the public—when to evacuate during Hurricane Ian, in 2022; when to hide from a gunman during the Michigan State University shootings earlier this month—while simultaneously allowing members of the public to transmit real-time data. The platform didn’t just provide a valuable communications service; it changed the way emergency management functions.

That’s why Musk-era Twitter alarms so many people in my field. The platform has been downgraded in multiple ways: Service is glitchier; efforts to contain misleading information are patchier; the person at the top seems largely dismissive of outside input. But now that the platform has embedded itself so deeply in the disaster-response world, it’s difficult to replace. The rapidly deteriorating situation raises questions about platforms’ obligation to society—questions that prickly tech execs generally don’t want to consider.

[Read: I watched Elon Musk kill Twitter from the inside]

From the beginning, Twitter executives wanted users to rely on their service in moments of crisis. The company’s co-founder Jack Dorsey told 60 Minutes a decade ago that he got the idea for Twitter in part from listening to a police scanner when he was a child. In a subsequent interview, he suggested that he first understood the platform’s power after a tremor in the Bay Area: “I was in the office on a Saturday, and my phone buzzed, and it was a tweet, and it said simply, ‘Earthquake.’” By 2015, the U.S. Geological Survey was using Twitter to better monitor earthquakes and people’s reactions to those earthquakes in areas where the agency lacked sufficient sensors.

Perhaps the USGS and other agencies were naive to depend so much on a private company’s willingness to continue providing a free communications service. But Twitter clearly relished its own importance in times of crisis, which presumably contributed to the platform’s overall popularity. The company provided guidance and best practices to emergency-response agencies. According to Twitter’s website, “crisis and emergency response” is one of its five stated areas of focus.

Successful relief efforts focus on deploying the people, processes, and technology necessary to deliver information and resources quickly. Twitter captured the disaster market, so to speak, because it was a technology with no equal. In a crisis, time is the most sacred commodity; in 1906, the writer Alfred Henry Lewis remarked that “there are only nine meals between mankind and anarchy.” That notion might sound familiar to residents of hurricane-prone areas: “The first 72 are on you” is a well-known slogan reminding citizens to prepare enough home provisions to last at least that many hours after a storm passes.

Unfortunately, the platform is becoming less useful as a way of monitoring chatter about developing events. Twitter announced on February 2 that it would end free access for researchers to its application programming interface—a mechanism that allows people outside the company to gather and analyze large quantities of data from the social-media platform. Relief workers have frequently used API access to determine where supplies and other resources are needed most.
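For readers who have never touched it, here is a minimal sketch of the kind of query a relief researcher might run against Twitter’s v2 recent-search endpoint. The search terms are invented for illustration, and the call assumes a valid researcher bearer token in the environment.

```python
import os
import requests

# Hypothetical disaster-monitoring query: recent tweets that mention a storm
# and a request for rescue, excluding retweets. Assumes a bearer token is
# set in the TWITTER_BEARER_TOKEN environment variable.
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]

resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": '(hurricane OR flood) ("need rescue" OR trapped) -is:retweet',
        "tweet.fields": "created_at,geo",  # timestamps plus any attached location
        "max_results": 100,
    },
    timeout=30,
)
resp.raise_for_status()

for tweet in resp.json().get("data", []):
    print(tweet["created_at"], tweet["text"][:120])
```

Scripts of roughly this shape, run continuously and aggregated on a map, are what “determine where supplies are needed most” means in practice; ending free access breaks them.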

Four days after the company’s API announcement, a massive earthquake hit Turkey and Syria, killing at least 46,000 people. In an enormous geographic area, API data can help narrow down who is saying what, who is stuck where, and where limited supplies should be delivered first. Amid complaints about what abandoning free API access would mean in that crisis, Twitter postponed the restriction. Still, its long-term intentions are uncertain, and some public-spirited deployments of the API by outside researchers—such as a ProPublica bot tracking politicians’ deleted tweets—appear to be breaking down.

[Read: Elon Musk is a far-right activist]

Meanwhile, Musk’s policy of offering “verified” status to all paying customers is making information on the platform less dependable. Twitter’s blue checks originally signified that the company had made some effort to verify an account owner’s identity. Soon after Musk made them available to Twitter Blue subscribers, an enterprising jokester bought a handle impersonating the National Weather Service. That was witty but not very funny—not when so many people depend on the agency’s tweets about snow, ice storms, and hurricanes.

Tweets were a mechanism for people to seek help. They were a mechanism for public-safety agencies to provide information on what or what not to do. They were a mechanism for legacy-blue-check sources to amplify essential plans. They were a mechanism for crisis managers to, through the API, drive resources where they were needed. Relief-and-response entities came to rely on the company, believing that its mastery of speed was a public service Twitter itself valued.

Dorsey started a company that claimed to have a social mission. Musk’s Twitterverse is a chorus of “lol”s and “whatever”s. He recently joked that he acquired the “world’s largest non-profit,” and his focus appears to be on cutting costs and making Twitter profitable. But in the process, he has disrupted an emergency-management system meant to be reliable during disruptions.

In contract law, the term reliance interest describes what arises when one party conditions its own choices on statements or promises made by the other party. Even without a formal contract, the former has a legitimate grievance if the latter breaks its promises. To some degree, that idea applies to crisis communications on Twitter: A public-safety apparatus came to rely on a platform that actively courted such reliance. The harm from Twitter’s recent changes may not be measurable in dollars, but it is nevertheless real harm.

Twitter fired more employees after Elon Musk said layoffs had ended

Quartz

qz.com › twitter-layoffs-2023-elon-musk-crawford-de-kuijper-1850162252

Dozens of Twitter employees reportedly lost their jobs last week, even though CEO Elon Musk had promised that layoffs ended in November, after he made drastic cuts to the company’s workforce shortly after completing his $44 billion takeover.


Facebook Is Taking the Worst Ideas From the Airline Industry

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 02 › meta-verified-facebook-instagram-subscription-service › 673207

It’s been a rough few months for the technology industry. Stock prices have plummeted. Meta, Amazon, Google, Spotify, and Twitter have all laid off a sizable chunk of their workforce (the list goes on, too). Everybody is talking about how ChatGPT and other generative-AI chatbots are role-playing as Skynet, and the older tech giants are feeling out of step. But whereas Google and Microsoft are deep into the chatbot arms race, Meta looks like a late-aughts tech dinosaur.

It’s time to shake things up, to turn the ship around. To innovate. Meta’s big, new idea: Charge people for basic support features and … a blue check mark.

On Sunday, Facebook and Instagram announced Meta Verified, a subscription service that will give benefits to people who pay a fee and confirm their identity. The perks include algorithmic boosts to posts, human customer service, and added protection from impersonation. Meta’s paid verification follows Elon Musk’s controversial decision last year to fold Twitter’s famous blue check marks into the Twitter Blue subscription package. Not long after Twitter’s decision, Tumblr launched its own paid verification plan, which was initially meant as a joke mocking Musk’s ham-fisted business strategy but ended up increasing the company’s revenue. Netflix is also looking to squeeze extra money out of its viewers with its plan to end password sharing across different households.

Taken together, the vibe feels a bit like trying to use a familiar service and getting hit with a pop-up that says, “Thank you for using Web 2.0. Your free-trial period has ended!”

[Read: The end of the Silicon Valley myth]

I am not a Meta power user, and I certainly won’t be paying for a blue check mark. Still, the Verified announcement depressed me. It felt at first like Meta had gone full Spirit Airlines, as if paying for customer service were akin to ponying up for a glass of water or any carry-on larger than a purse.

But the Spirit comparison isn’t quite right. Spirit has always operated as a budget experience, intended to undercut the competition at the expense of creature comforts. Facebook, though, is following the trajectory of the airline industry writ large. It is a once-revolutionary service that, over time, has transformed into something more soul-sucking. And although Meta still churns out tens of billions in profit each year, real signs of trouble are on the horizon. Just like the airline industry before it, when faced with a rocky economy, Meta decided to nickel-and-dime its users by asking them to pay for things one should reasonably expect to come standard. (A Meta spokesperson said in an email that the feature is “specifically focused on the top requests we get from up-and-coming creators. In this case, because we know creator accounts have or are looking to grow a large following, this then puts them at an increased risk for impersonation attempts.”)

Though it feels like they’ve been a scourge since the birth of aviation, checked-bag fees were introduced in 2008. According to a 2013 profile, an Australian consultant named John Thomas came up with the idea in response to rising fuel prices that threatened to sink the airline industry. United Airlines was the first to charge a $25 fee for a flier’s second bag. It took only a few weeks for the rest of the big airlines to follow suit. Within three months, some airlines started charging fees for all non-carry-ons. The industry made billions.

Nobody seriously thinks that Facebook or Twitter will rake in anything remotely comparable (one report suggests that Twitter has only 290,000 Blue subscribers worldwide, which comes out to roughly $2.4 million a month). It’s easy enough to conclude—and people certainly have—that Meta is just out of ideas after its lackluster pivot to a legless metaverse. But the problem seems deeper: Meta doesn’t even know what kind of company it is anymore.

Meta may very well think that it provides an essential service, just like an airline. Facebook and Instagram certainly offer convenience via sheer scale—massive numbers of people exist there, even if in some zombified-account form. Indeed, an increased focus on verification and identity confirmation makes sense, especially if we are hurtling toward a future where machines will convincingly sound like humans. But customer service and protection from impersonation ought to be universal; perhaps such digital courtesies are going extinct, just like the complimentary in-flight meal on a cross-country trip.

But Meta is obviously not an airline; the services it provides aren’t essential and, despite its ubiquity, its users are not captive. If anything, its flagship platform is hemorrhaging cultural relevance. Facebook itself feels like a place strewn with recycled memes, where a common sight is once-popular fan pages inexplicably turning into multilevel-marketing-scheme accounts for CBD products. Who beyond those scammers would pay for an algorithmic boost?

Nor is Meta behaving like its tech forefathers, who gradually got us to pay for digital items. In 2013, I spoke with Paul Vidich—a former Warner Music Group executive who was involved in negotiations with Steve Jobs to start selling songs on iTunes in the early 2000s for 99 cents each. Vidich told me then that he’d agonized over the correct price point but figured that the combination of a huge music library, a one-click interface (with a credit card already on file), and a cheap price might wean the Napster generation off its freeloading. “It’s something you don’t have to think twice about before buying,” he said.

Vidich was right, and people purchased tens of billions of songs in the pre-streaming era. Apple got people to shell out because it brought the record store into our home. And, after a period of piracy, it allowed guilty consciences to compensate artists, however slightly, at a price that was hard to turn down. But Meta Verified isn’t really offering ease or … much of anything, really. Instead, it’s asking users to pay for services that keep them safer on its own platforms—a bit like the Mafia tactic of paying for “protection.”

Meta is a company in crisis. For the past decade, its core business has been defined by companies it purchased—namely Instagram and WhatsApp—and a string of desperate pivots, many of which led nowhere. The running theme behind each of these attempts at innovation is a false confidence born of the company’s immense scale. It has always struggled to see itself the way outsiders do, which is perhaps why leaders like Mark Zuckerberg thought Facebook could revolutionize mobile phones or become a leader in workplace-communication software. The company believed that, after years of terrible publicity and privacy scandals, what people wanted was for Facebook to reimagine the internet in its own image through the metaverse. It did not seem to realize that one of the biggest problems with the metaverse is Meta itself.

But Meta can take some solace in knowing that it’s not alone. The end of Big Tech’s free-trial period marks the waning days of a specific internet era. Perhaps, as my colleague Ian Bogost has argued, it’s the end of the social-media era. Maybe it’s merely the end of social-media companies as culturally ascendant institutions, and the beginning of our thinking of them as failed states or corrupt utilities—the new cable companies.

Either way, it’s hard to look at the hype and energy around the commercial-AI boom and compare it with the stagnant air that surrounds platforms like Twitter and Facebook. There’s an odd juxtaposition between our excitement and fear over sentient AI and the arrival of almost infinite synthetic media and the desperation of the internet’s old guard asking us to pay to confirm our identity. This feels like a year when an unsettling and unpredictable future may arrive—whether we want it to or not. I just wouldn’t bet on it coming from Meta.

Why the Tesla Recall Matters

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 02 › tesla-recall-elon-musk-missy-cummings › 673124

Tesla is recalling more than 350,000 vehicles after the National Highway Traffic Safety Administration raised concerns about their self-driving-assistance software—but this isn’t your typical recall. The fix will be shipped “over the air” (meaning the software will be updated remotely, and the hardware does not need to be addressed).

Missy Cummings sees the voluntary nature of the recall as a positive sign that Tesla is willing to cooperate with regulators. Cummings, a professor in the computer-science department at George Mason University and a former NHTSA regulator herself, has at times argued that the United States should proceed more cautiously on autonomous vehicles, drawing the ire of Elon Musk, who has accused her of being biased against his company.

[Andrew Moseman: The inconvenient truth about electric vehicles]

Cummings also sees this recall as a software story: NHTSA is entering an interesting—perhaps uncharted—regulatory space. “If you release a software update—that’s what’s about to happen with Tesla—how do you guarantee that that software update is not going to cause worse problems? And that it will fix the problems that it was supposed to fix?” she asked me. “If Boeing never had to show how they fixed the 737 Max, would you have gotten into their plane?”

Cummings and I discussed that and more over the phone.

Our conversations have been condensed and edited for clarity.

Caroline Mimbs Nyce: What was your reaction to this news?

Missy Cummings: I think it’s good. I think it’s the right move.

Nyce: Were you surprised at all?

Cummings: No. It’s a really good sign—not just because of the specific news that they’re trying to get self-driving to be safer. It also is a very important signal that Tesla is starting to grow up and realize that it’s better to work with the regulatory agency than against them.

Nyce: So you’re seeing the fact that the recall was voluntary as a positive sign from Elon Musk and crew?

Cummings: Yes. Really positive. Tesla is realizing that, just because something goes wrong, it’s not the end of the world. You work with the regulatory agency to fix the problems. Which is really important, because that kind of positive interaction with the regulatory agency is going to set them up for a much better path for dealing with problems that are inevitably going to come up.

That being said, I do think that there are still a couple of sticky issues. The list of problems and corrections that NHTSA asked for was quite long and detailed, which is good—except I just don’t see how anybody can actually get that done in two months. That time frame is a little optimistic.

It’s kind of the Wild West for regulatory agencies in the world of self-certification. If Tesla comes back and says, “Okay, we fixed everything with an over-the-air update,” how do we know that it’s been fixed? Because we let companies self-certify right now, there’s not a clear mechanism to ensure that indeed that fix has happened. Every time that you try to make software to fix one problem, it’s very easy to create other problems.

Nyce: I know there’s a philosophical question that’s come up before, which is, How much should we be having this technology out in the wild, knowing that there are going to be bugs? Do you have a stance?

Cummings: I mean, you can have bugs. Every type of software—even software in safety-critical systems in cars, planes, nuclear reactors—is going to have bugs. I think the real question is, How robust can you make that software to be resilient against inevitable human error inside the code? So I’m okay with bugs being in software that’s in the wild, as long as the software architecture is robust and allows room for graceful degradation.

Nyce: What does that mean?

Cummings: It means that if something goes wrong—for example, if you’re on a highway and you’re going 80 miles an hour and the car commands a right turn—there’s backup code that says, “No, that’s impossible. That’s unsafe, because if we were to take a right turn at this speed … ” So you basically have to create layers of safety within the system to make sure that that can’t happen.
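To make that concrete, here is a toy sketch of such a guard layer. It is illustrative pseudo-logic, not production automotive code: the 4 m/s² envelope, the curvature representation, and the clamping behavior are all invented for the example.

```python
from dataclasses import dataclass

# Invented safety envelope: reject any steering command whose implied
# lateral acceleration exceeds what the tires can plausibly deliver.
MAX_LATERAL_ACCEL = 4.0  # m/s^2, hypothetical threshold

@dataclass
class SteeringCommand:
    curvature: float  # 1/meters; 0.0 means straight ahead

def validate(cmd: SteeringCommand, speed_mps: float) -> SteeringCommand:
    """Guard layer between planner and actuators: lateral acceleration on
    a curved path is v^2 * curvature, so clamp any command that exceeds
    the envelope instead of executing it blindly."""
    lateral_accel = speed_mps ** 2 * abs(cmd.curvature)
    if lateral_accel <= MAX_LATERAL_ACCEL:
        return cmd
    # Graceful degradation: turn as sharply as is safe, not as commanded.
    safe = MAX_LATERAL_ACCEL / speed_mps ** 2
    return SteeringCommand(curvature=safe if cmd.curvature > 0 else -safe)

# At roughly 80 mph (36 m/s), a commanded hard right degrades to a gentle arc.
print(validate(SteeringCommand(curvature=0.05), speed_mps=36.0))
```

The point is architectural: because the check sits between the planning code and the actuators, a buggy command degrades into a safe one rather than into a crash.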

[Emma Marris: Bring on the boring EVs]

This isn’t just a Tesla problem. These are pretty mature coding techniques, and they take a lot of time and a lot of money. And I worry that the autonomous-vehicle manufacturers are in a race to get the technology out. And anytime you’re racing to get something out, testing and quality assurance always gets thrown out the window.   

Nyce: Do you think we’ve gone too fast in green-lighting the stuff that’s on the road?

Cummings: Well, I’m a pretty conservative person. It’s hard to say what green-lighting even means. In a world of self-certification, companies were allowed to green-light themselves. The Europeans have a preapproval process, where your technology is preapproved before it is let loose in the real world.

In a perfect world—if Missy Cummings were the king of the world—I would have set up a preapproval process. But that’s not the system we have. So I think the question is, Given the system in place, how are we going to ensure that, when manufacturers do over-the-air updates to safety-critical systems, it fixes the problems that it was supposed to fix and doesn’t introduce new safety-related issues? We don’t know how to do that. We’re not there yet.

In a way, NHTSA is wading into new regulatory waters. This is going to be a good test case for: How do we know when a company has successfully fixed recall problems through software? How can we ensure that that’s safe enough?

Nyce: That’s interesting, especially as we put more software into the things around us.

Cummings: That’s right. It’s not just cars.

Nyce: What did you make of the problem areas that were flagged by NHTSA in the self-driving software? Do you have any sense of why these things would be particularly challenging from a software perspective?

Cummings: Not all, but a lot are clearly perception-based.

The car needs to be able to detect objects in the world correctly so that it can execute, for example, the right rule for taking action. This all hinges on correct perception. If you’re going to correctly identify signs in the world—I think there was an issue with the cars that they sometimes recognized speed-limit signs incorrectly—that’s clearly a perception problem.

What you have to do is a lot of under-the-hood retraining of the computer vision algorithm. That’s the big one. And I have to tell you, that’s why I was like, “Oh snap, that is going to take longer than two months.” I know that theoretically they have some great computational abilities, but in the end, some things just take time. I have to tell you, I’m just so grateful I’m not under the gun there.
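For a rough sense of what that retraining involves mechanically, here is a compressed PyTorch sketch of fine-tuning a pretrained classifier on relabeled sign images. The data path is invented, and a real perception stack would be vastly larger and need far more validation, which is exactly Cummings’s point about the two-month timeline.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Hypothetical folder of relabeled speed-limit-sign crops, one subfolder per class.
data = datasets.ImageFolder(
    "relabeled_signs/",  # invented path for illustration
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from a pretrained backbone and retrain only the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```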

Nyce: I wanted to go back a bit—if it were Missy’s world, how would you run the regulatory rollout on something like that?

Cummings: I think in my world we would do a preapproval process for anything with artificial intelligence in it. I think the system we have right now is fine if you take AI out of the equation. AI is a nondeterministic technology. That means it never performs the same way twice. And it’s based on software code that can just be rife with human error. So anytime that you’ve got this code that touches vehicles that move in the world and can kill people, it just needs more rigorous testing and a lot more care and feeding than if you’re just developing a basic algorithm to control the heat in the car.

[Read: The simplest way to sell more electric cars in America]

I’m kind of excited about what just happened today with this news, because it’s going to make people start to discuss how we deal with over-the-air updates when it touches safety-critical systems. This has been something that nobody really wants to tackle, because it’s really hard. If you release a software update—that’s what’s about to happen with Tesla—how do you guarantee that that software update is not going to cause worse problems? And that it will fix the problems that it was supposed to fix?

What should a company have to prove? So, for example, if Boeing never had to show how they fixed the 737 Max, would you have gotten into their plane? If they just said, “Yeah, I know we crashed a couple and a lot of people died, but we fixed it, trust us,” would you get on that plane?

Nyce: I know you’ve experienced some harassment over the years from the Musk fandom, but you’re still on the phone talking to me about this stuff. Why do you keep going?

Cummings: Because it’s really that important. We have never been in a more dangerous place in automotive-safety history, except for maybe right when cars were invented and we hadn’t figured out brake lights and headlights yet. I really do not think people understand just how dangerous a world of partial autonomy with distraction-prone humans is.

I tell people all the time, “Look, I teach these students. I will never get in a car that any of my students have coded because I know just what kinds of mistakes they introduce into the system.” And these aren’t exceptional mistakes. They’re just humans. And I think the thing that people forget is that humans create the software.

The Real Elitists Are at Fox News

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 02 › fox-news-tucker-carlson-dominion › 673128


This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Right-wing political and media figures regularly level the accusation of “elitism” at other Americans. But new revelations from Dominion Voting Systems’ defamation lawsuit against Fox News and the Fox Corporation over claims of election fraud are reminders that the most cynical elites in America are the Republicans and their media valets.

But first, here are three new stories from The Atlantic.

An anti-racist professor faces “toxicity on the left today.”

I watched Elon Musk kill Twitter’s culture from the inside.

Bird flu leaves the world with an existential choice.

Patronizing for Profit

Elected Republicans and their courtiers in the right-wing-media ecosystem deploy the word elite as an accusation, a calumny, almost a crime. To be one of the elite is to be a snooty, educated city dweller, a highbrow pretend-patriot who looks down upon the Real Americans who hunt and fish and drive pickup trucks to church. (It does not mean “rich people”; Donald Trump has gleefully referred to himself and his supporters as the “super-elite.”) The elites also support the production of “fake news” by liars who intend to hoodwink ordinary people into doing the bidding of wealthy globalists. They buy books and listen to National Public Radio and they probably read things like The Atlantic.

This shtick has been a remarkable success. Republicans have used it to convince millions of working people that super-educated gasbags such as Ted Cruz, Josh Hawley, and Ron DeSantis are just ordinary folks who care deeply about kitchen-table issues that matter to their family and a secure future for their children, such as Hunter Biden’s sex life and whether public schools are letting kids pee in litter boxes.

In the entertainment hothouse, Fox News is the most prominent offender. The Fox all-star lineup, especially in prime time with Tucker Carlson, Sean Hannity, and Laura Ingraham, is a parade of millionaires who work for Rupert Murdoch, one of the richest and most powerful men in this corner of the Milky Way galaxy. Every day they warn their viewers that democracy is in peril because of people who majored in gender studies. All of this nuttery is delivered with a straight face—or in Carlson’s case, the weird mien of a dog watching a magic trick.

It’s one thing, however, to suspect that Fox personalities see their viewers as mere rubes who must be riled up in the name of corporate profit. It’s another entirely to have it all documented in black and white. Dominion might not win its lawsuit against Fox, but for the rest of America, the process has produced something more important than money: an admission, by Fox’s on-air personalities, of how much they disrespect and disdain their own viewers.

According to documents from Dominion’s legal filing, Fox News hosts repeatedly exchanged private doubts about Republicans’ 2020 election-fraud claims. Hannity, in the weeks after the 2020 election, said that the regular Fox guest and top conspiracy-pusher, former New York City Mayor Rudy Giuliani, was “acting like an insane person.” Ingraham had a similar evaluation: “Such an idiot.” And it’s not like Murdoch didn’t share that sentiment: In one message, he said Giuliani and the Trump lawyer Sidney Powell were pushing “really crazy stuff” and he told Fox News CEO Suzanne Scott that their behavior was “damaging everybody.” (Fox reportedly banned Giuliani in 2021, putting up with him for weeks after January 6 and then shutting him down as the Dominion lawsuit gained momentum.)

There are few hours on Fox that manage to pack in more gibberish and nonsense than Carlson’s show, and yet—to give him one zeptosecond of credit—he took Powell apart in one segment. In later months, of course, Carlson would continue to inject the information stream with various strains of conspiratorial pathogens, but when even Tucker Carlson is worried, perhaps it’s a sign that things are out of hand.

Of course, Carlson wasn’t worried about the truth; he was worried about the profitability of the Fox brand. When the Fox reporter Jacqui Heinrich did a real-time fact-check on Twitter of a Trump tweet about voter fraud, Carlson tried to ruin her career. “Please get her fired,” he wrote in a text chain that included Hannity and Ingraham. He continued:

Seriously…What the fuck? I’m actually shocked…It needs to stop immediately, like tonight. It’s measurably hurting the company. The stock price is down. Not a joke.

After the election, Carlson warned that angering Trump could have catastrophic consequences: “He could easily destroy us if we play it wrong.” Murdoch, too, said that he did not want to “antagonize Trump further.”

Meanwhile, the Fox producer Abby Grossberg was more worried about the torch-and-pitchfork Fox demographic. After the election, she reminded Fox Business anchor Maria Bartiromo that Fox’s faithful should be served the toxic gunk they craved: “To be honest, our audience doesn’t want to hear about a peaceful transition,” Grossberg texted. “Yes, agree,” Bartiromo answered in a heroic display of high-minded journalistic principle.

In other words: Our audience of American citizens wants to be encouraged in its desire to thwart the peaceful transfer of power for the first time in our history as a nation. And Bartiromo answered: Yes, let’s keep doing that.

As Vox’s Sean Illing tweeted today, Bartiromo’s thirsty pursuit of ratings is a reminder that “no one has a lower opinion of conservative voters than conservative media.” More important, Fox’s cynical fleecing of its viewers is an expression of titanic elitism, the sort that destroys reality in the minds of ordinary people for the sake of fame and money. Not only does such behavior reveal contempt for Fox’s viewers; it encourages the destruction of our system of government purely for ratings and a limo to and from the Fox mothership in Times Square. (New York City might be full of coastal “elitists,” but that’s where the Fox crew lives and works; we’ll know the real populist millennium has arrived when Fox packs off Hannity and Greg Gutfeld and Jeanine Pirro to its new offices in Kansas or Oklahoma.)

Although it’s amusing to bash the Fox celebrities who have been caught in this kind of grubby hypocrisy, the elitism of the American right is a much bigger problem because it drives so much of the unhinged populism that threatens our democracy. Fox News and the highly educated Republican officeholders who use its support to stay in office know exactly what they’re doing. But they are all now riding a tiger of their own creation: As the conservative writer George Will has noted, for the first time in American history, a major political party is terrified by its own voters.

Fox, of course, has said that the Dominion filing “mischaracterized the record,” and “cherry-picked quotes stripped of key context,” and the network insisted in a legal brief it was merely observing its “commitment to inform fully and comment fairly.” Sadly, Fox will likely survive this disaster whether it wins or loses in court. Like the GOP base it serves, the network and its viewers have immense reserves of denial and rationalization they can bring to bear against the incursions of reality. “We can fix this,” Scott, the Fox CEO, wrote in the midst of this mess, “but we cannot smirk at our viewers any longer.”

But why not? It’s been working like a charm so far.

Related:

Brian Stelter: I never truly understood Fox News until now.

Fox hosts knew—and lied anyway. (from 2021)

Today’s News

Six people have been killed in a series of shootings in Tate County, Mississippi.

The five former Memphis police officers accused of killing Tyre Nichols pleaded not guilty to second-degree murder charges.

The U.S. has finished recovering debris from the balloon shot down off the coast of South Carolina, and so far, analysis of the remnants reinforces the conclusion that it was a Chinese spy balloon, officials said.

Dispatches

The Books Briefing: What truly elevates poetry is not just what we write but also what inspires us to write it, Emma Sarappo argues.

Explore all of our newsletters here.

Evening Read


Buttons Are Bougie Now

By Drew Millard

The 2022 Ford Bronco Raptor, among the most expensive offerings in the car manufacturer’s line of tough-guy throwback SUVs, features 418 horsepower, a 10-speed transmission, axles borrowed from off-road-racing vehicles, and 37-inch tires meant for driving off sand dunes at unnecessarily high speeds. But when the automotive site Jalopnik got its hands on a Bronco Raptor for testing, the writer José Rodríguez Jr. singled out something else entirely to praise about the $70,000 SUV: its buttons. The Bronco Raptor features an array of buttons, switches, and knobs controlling everything from its off-road lights to its four-wheel-drive mode to whatever a “sway bar disconnect” is. So much can be done by actually pressing or turning an object that Rodríguez Jr. found the vehicle’s in-dash touch screen—the do-it-all “infotainment system” that has become ubiquitous in new vehicles—nearly vestigial.

Then again, the ability to manipulate a physical thing, a button, has become a premium feature not just in vehicles, but on gadgets of all stripes.

Read the full article.

More From The Atlantic

An ICU doctor on how this COVID wave is different

John Fetterman and the performance of wellness

Photos of the week: the world’s oldest dog, the Opera Ball in Austria, and more

Culture Break


Read. Keep Valentine’s Day going with these books to read with someone you love.

Or read a new short story by Ben Okri.

Watch. Magic Mike’s Last Dance, in theaters, is as sexy as it is romantic. And Emily, also in theaters, is a sensitive, provocative look at Emily Brontë’s life.

Play our daily crossword.

P.S.

To get away from politics and this entire decade, I’ve been binge-watching old episodes of 30 Rock, Tina Fey’s inspired send-up of life as a comedy writer at NBC. And I have come to realize that Alec Baldwin’s portrayal of Jack Donaghy—on the show, the vice president of East Coast television and microwave-oven programming for General Electric—produced one of television’s greatest characters. In lesser hands, he could have been just another corporate buffoon, a foil for the clever creatives, but 30 Rock never let Jack become a red-faced Theodore J. Mooney or Milburn Drysdale; he was vicious, funny, sentimental, cynical, both a backstabber and a good friend.

Of course, the reason he’s also a candidate for becoming my spirit animal is that he is from Massachusetts (as I am), worked his way through a good school (as I did), and now is happily and self-indulgently aware of his own obnoxiousness. (I’m working on it.) When Fey’s Liz Lemon finds Jack in his office in a tuxedo, he says: “It’s after six. What am I, a farmer?” When his flinty harridan of a mom reproaches him for not appreciating her, he doesn’t miss a beat: “Mother, there are terrorist cells that are more nurturing than you are.” I’m not sure any actor but Baldwin and his hoarse whisper could pull off those lines. But even years later, I find myself laughing out loud. Now if you’ll excuse me, I need to dress for dinner.

— Tom

Isabel Fattal contributed to this newsletter.

I Watched Elon Musk Kill Twitter’s Culture From the Inside

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 02 › elon-musk-twitter-ethics-algorithm-biases › 673110

Everyone has an opinion about Elon Musk’s takeover of Twitter. I lived it. I saw firsthand the harms that can flow from unchecked power in tech. But it’s not too late to turn things around.

I joined Twitter in 2021 from Parity AI, a company I founded to identify and fix biases in algorithms used in a range of industries, including banking, education, and pharmaceuticals. It was hard to leave my company behind, but I believed in the mission: Twitter offered an opportunity to improve how millions of people around the world are seen and heard. I would lead the company’s efforts to develop more ethical and transparent approaches to artificial intelligence as the engineering director of the Machine Learning Ethics, Transparency, and Accountability (META) team.

In retrospect, it’s notable that the team existed at all. It was focused on community, public engagement, and accountability. We pushed the company to be better, providing ways for our leaders to prioritize more than revenue. Unsurprisingly, we were wiped out when Musk arrived.

He might not have seen the value in the type of work that META did. Take our investigation into Twitter’s automated image-crop feature. The tool was designed to automatically identify the most relevant subjects in an image when only a portion is visible in a user’s feed. If you posted a group photograph of your friends at the lake, it would zero in on faces rather than feet or shrubbery. It was a simple premise, but flawed: Users noticed that the tool seemed to favor white people over people of color in its crops. We decided to conduct a full audit, and there was indeed a small but statistically significant bias. When Twitter used AI to determine which portion of a large image to show on a user’s feed, it had a slight tendency to favor white people (and, additionally, to favor women). Our solution was straightforward: Image cropping wasn’t a function that needed to be automated, so Twitter disabled the algorithm.
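Twitter’s actual audit code isn’t reproduced here, but the statistical idea behind “small but statistically significant” can be sketched. Assume a matched-pair design: each test image contains two comparable faces, and an unbiased cropper should keep either one about half the time. The counts below are invented.

```python
import math

def crop_bias_sign_test(keep_lighter: int, n_pairs: int) -> float:
    """Two-sided sign test: under an unbiased cropper, which face of a
    matched pair survives the crop is a fair coin flip (p = 0.5).
    Returns an approximate p-value via the normal approximation."""
    mean = n_pairs * 0.5
    sd = math.sqrt(n_pairs * 0.25)
    z = (keep_lighter - mean) / sd
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Invented numbers: across 10,000 matched pairs, the crop kept the
# lighter-skinned face 5,250 times. A 52.5% rate sounds small, but:
print(crop_bias_sign_test(5250, 10000))  # ~5.7e-07: wildly significant
```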

I felt good about joining Twitter to help protect users, particularly people who already face broader discrimination, from algorithmic harms. But months into Musk’s takeover—a new era defined by feverish cost-cutting, lax content moderation, the abandonment of important features such as block lists, and a proliferation of technical problems that have meant the site couldn’t even stay online for the entire Super Bowl—it seems no one is keeping watch. A year and a half after our audit, Musk laid off employees dedicated to protecting users. (Many employees, including me, are pursuing arbitration in response.) He has installed a new head of trust and safety, Ella Irwin, who has a reputation for appeasing him. I worry that by ignoring the nuanced issue of algorithmic oversight—to such an extent that Musk reportedly demanded an overhaul of Twitter’s systems to display his tweets above all others—Twitter will perpetuate and augment issues of real-world biases, misinformation, and disinformation, and contribute to a volatile global political and social climate.

Irwin did not respond to a series of questions about layoffs, algorithmic oversight, and content moderation. A request to the company’s press email also went unanswered.

[Read: Twitter’s slow and painful end]

Granted, Twitter has never been perfect. Jack Dorsey’s distracted leadership across multiple companies kept him from defining a clear strategic direction for the platform. His short-tenured successor, Parag Agrawal, was well intentioned but ineffectual. Constant chaos and endless structuring and restructuring were ongoing internal jokes. Competing imperatives sometimes manifested in disagreements between those of us charged with protecting users and the team leading algorithmic personalization. Our mandate was to seek outcomes that kept people safe. Theirs was to drive up engagement and therefore revenue. The big takeaway: Ethics don’t always scale with short-term engagement.

A mentor once told me that my role was to be a truth teller. Sometimes that meant confronting leadership with uncomfortable realities. At Twitter, it meant pointing to revenue-enhancing methods (such as increased personalization) that would lead to ideological filter bubbles, open up methods of algorithmic bot manipulation, or inadvertently popularize misinformation. We worked on ways to improve our toxic-speech-identification algorithms so they would not discriminate against African-American Vernacular English or against forms of reclaimed speech. All of this depended on rank-and-file employees. Messy as it was, Twitter sometimes seemed to function mostly on goodwill and the dedication of its staff. But it functioned.

Those days are over. From the announcement of Musk’s bid to the day he walked into the office holding a sink, I watched, horrified, as he slowly killed Twitter’s culture. Debate and constructive dissent were stifled on Slack, leaders accepted their fate or quietly resigned, and Twitter slowly shifted from being a company that cared about the people on the platform to one that cares about people only as monetizable units. The few days I spent at Musk’s Twitter could best be described as a Lord of the Flies–like test of character as existing leadership crumbled, Musk’s cronies moved in, and his haphazard management—if it could be called that—instilled a sense of fear and confusion.

Unfortunately, Musk cannot simply be ignored. He has purchased a globally influential and politically powerful seat. We certainly don’t need to speculate on his thoughts about algorithmic ethics. He reportedly fired a top engineer earlier this month for suggesting that his engagement was waning because people were losing interest in him, rather than because of some kind of algorithmic interference. (Musk initially responded to the reporting about how his tweets are prioritized by posting an off-color meme, and today called the coverage “false.”) And his track record is far from inclusive: He has embraced far-right talking points, complained about the “woke mind virus,” and explicitly thrown in his lot with Donald Trump and Ye (formerly Kanye West).

[Read: An unholy alliance between Ye, Musk, and Trump]

Devaluing work on algorithmic biases could have disastrous consequences, especially because of how perniciously invisible yet pervasive these biases can become. As the arbiters of the so-called digital town square, algorithmic systems play a significant role in democratic discourse. In 2021, my team published a study showing that Twitter’s content-recommendation system amplified right-leaning posts in Canada, France, Japan, Spain, the United Kingdom, and the United States. Our data covered the period right before the 2020 U.S. presidential election, identifying a moment in which social media was a crucial touch point of political information for millions. Currently, right-wing hate speech flows on Twitter in places such as India and Brazil; in Brazil, radicalized Jair Bolsonaro supporters staged a January 6–style coup attempt.
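The amplification measure in studies of this kind can be paraphrased in a few lines; this is my gloss, not the team’s published code. The idea is to compare a group’s reach under algorithmic ranking against its reach in a reverse-chronological control group, and report the ratio.

```python
def amplification_ratio(algo_impressions: int, chrono_impressions: int) -> float:
    """> 1.0 means the ranking algorithm surfaces this content more often
    than a plain chronological feed would."""
    return algo_impressions / chrono_impressions

# Invented impression counts for two hypothetical party accounts:
print(amplification_ratio(1_800_000, 1_000_000))  # 1.8x: amplified
print(amplification_ratio(1_050_000, 1_000_000))  # 1.05x: near-neutral
```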

Musk’s Twitter is simply a further manifestation of how self-regulation by tech companies will never work, and it highlights the need for genuine oversight. We must equip a broad range of people with the tools to pressure companies into acknowledging and addressing uncomfortable truths about the AI they’re building. Things have to change.

My experience at Twitter left me with a clear sense of what can help. AI is often thought of as a black box or some otherworldly force, but it is code, like much else in tech. People can review it and change it. My team did it at Twitter for systems that we didn’t create; others could too, if they were allowed. The Algorithmic Accountability Act, the Platform Accountability and Transparency Act, and New York City’s Local Law 144—as well as the European Union’s Digital Services and AI Acts—all demonstrate how legislation could create a pathway for external parties to access source code and data to ensure compliance with antibias requirements. Companies would have to statistically prove that their algorithms are not harmful, in some cases allowing individuals from outside their companies an unprecedented level of access to conduct source-code audits, similar to the work my team was doing at Twitter.
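What “statistically prove” might look like in practice can be sketched with a standard screen: an impact-ratio test, in the spirit of the audits New York’s Local Law 144 requires and the EEOC’s “four-fifths” convention. The sample data and the 0.8 threshold here are illustrative, not any statute’s official formula.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs drawn from an
    audited model's decisions. Returns each group's positive rate."""
    pos, tot = Counter(), Counter()
    for group, selected in outcomes:
        tot[group] += 1
        pos[group] += int(selected)
    return {g: pos[g] / tot[g] for g in tot}

def passes_four_fifths(rates, threshold=0.8):
    """Flag disparate impact: every group's selection rate should be at
    least 80% of the most-favored group's rate."""
    top = max(rates.values())
    return all(r / top >= threshold for r in rates.values())

# Invented audit sample: group "b" is selected half as often as group "a".
sample = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
rates = selection_rates(sample)
print(rates, passes_four_fifths(rates))  # fails: b's rate is half of a's
```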

After my team’s audit of the image-crop feature was published, Twitter recognized the need for constructive public feedback, so we hosted our first algorithmic-bias bounty. We made our code available and let outside data scientists dig in—they could earn cash for identifying biases that we’d missed. We had unique and creative responses from around the world and inspired similar programs at other organizations, including Stanford University.

Public bias bounties could be a standard part of algorithmic risk-assessment programs in companies. The National Institute of Standards and Technology, the U.S.-government entity that develops algorithmic-risk standards, has included validation exercises, such as bounties, as a part of its recommended algorithmic-ethics program in its latest AI Risk Management Framework. Bounty programs can be an informative way to incorporate structured public feedback into real-time algorithmic monitoring.

To meet the imperatives of addressing radicalization at the speed of technology, our approaches need to evolve as well. We need well-staffed and well-resourced teams working inside tech companies to ensure that algorithmic harms do not occur, but we also need legal protections and investment in external auditing methods. Tech companies will not police themselves, especially not with people like Musk in charge. We cannot assume—nor should we ever have assumed—that those in power aren’t also part of the problem.