Itemoids

Twitter

The debate over Daylight Saving Time, March Madness money, and Elon Musk's FTC problem

CNN

www.cnn.com/videos/business/2023/03/09/nightcap-daylight-saving-march-madness-full-orig-jg.cnn

CNN's Harry Enten tells "Nightcap's" Jon Sarlin why Americans switch the clocks back and forth twice a year, even though the time change is pretty universally hated. Plus, Los Angeles Times columnist LZ Granderson on how legal sports betting has changed March Madness. And CNN's Clare Duffy explains why the FTC's investigation of Twitter could be a real problem for Elon Musk. To get the day's business headlines sent directly to your inbox, sign up for the Nightcap newsletter.

Elon Musk Is Spiraling

The Atlantic

www.theatlantic.com/technology/archive/2023/03/elon-musk-twitter-disability-worker-tweets/673339

In recent memory, a conversation about Elon Musk might have had two fairly balanced sides. There were the partisans of Visionary Elon, head of Tesla and SpaceX, a selfless billionaire who was putting his money toward what he believed would save the world. And there were critics of Egregious Elon, the unrepentant troll who spent a substantial amount of his time goading online hordes. These personas existed in a strange harmony, displays of brilliance balancing out bursts of terribleness. But since Musk’s acquisition of Twitter, Egregious Elon has been ascendant, so much so that the argument for Visionary Elon is harder to make every day.

Take, just this week, a back-and-forth on Twitter, which, as is usually the case, escalated quickly. A Twitter employee named Haraldur Thorleifsson tweeted at Musk to ask whether he was still employed, given that his computer access had been cut off. Musk—who has overseen a forced exodus of Twitter employees—asked Thorleifsson what he’d been doing at Twitter. Thorleifsson replied with a list of bullet points. Musk then accused him of lying and, in a reply to another user, snarked that Thorleifsson “did no actual work, claimed as his excuse that he had a disability that prevented him from typing, yet was simultaneously tweeting up a storm.” Musk added: “Can’t say I have a lot of respect for that.” Egregious Elon was in full control.

By the end of the day, Musk had backtracked. He’d spoken with Thorleifsson, he said, and apologized “for my misunderstanding of his situation.” Thorleifsson isn’t fired after all, and, Musk said, is considering staying on at Twitter. (Twitter did not respond to a request for comment, nor did Thorleifsson, who has not indicated whether he would indeed stay on.)

The exchange was surreal in several ways. Yes, Musk has accrued a list of offensive tweets the length of a CVS receipt, and we could have a very depressing conversation about which cruel insult or hateful shitpost has been the most egregious. Still, this—mocking a worker with a disability—felt like a new low, a very public demonstration of Musk’s capacity to keep finding ways to get worse. The apology was itself surprising; Musk rarely shows remorse for being rude online. But perhaps the most surreal part was Musk’s personal conclusion about the whole situation: “Better to talk to people than communicate via tweet.”

[Read: Twitter’s slow and painful end]

This is quite the takeaway from the owner of Twitter, the man who paid $44 billion to become CEO, an executive who is rabidly focused on how much other people are tweeting on his social platform, and who was reportedly so irked that his own tweets weren’t garnering the engagement numbers he wanted that he made engineers change the algorithm in his favor. (Musk has disputed this.) The conclusion of the Thorleifsson affair seems to betray a lack of conviction, a slip in the confidence that made Visionary Elon so compelling. It is difficult to imagine such an equivocation elsewhere in the Musk Cinematic Universe, where Musk seems more at ease, more in control, with the particularities of his grand visions. In leading an electric-car company and a space company, Musk has expressed, and stuck with, clear goals and purposes for his projects: make an electric car people actually want to drive; become a multiplanetary species. When he acquired Twitter, he articulated a vision for making the social network a platform for free speech. But in practice, the self-described Chief Twit has gotten dragged into—and has now articulated—the thing that many people understand to be true about Twitter, and social media at large: that, far from providing a space for full human expression, it can make you a worse version of yourself, bringing out your most dreadful impulses.

We can’t blame all of Musk’s behavior on social media: Visionary Elon has always relied on his darker self to achieve his largest goals. Musk isn’t known for being the most understanding boss, at any of his companies. He’s called in SpaceX workers on Thanksgiving to work on rocket engines. He’s said that Tesla employees who want to work remotely should “pretend to work somewhere else.” At Twitter, Musk expects employees to be “extremely hardcore” and work “long hours at high intensity,” a directive that former employees have claimed, in a class-action lawsuit, has resulted in workers with disabilities being fired or forced to resign. (Twitter quickly sought to dismiss the claim.) Musk’s interpretation of worker accommodation is converting conference rooms into bedrooms so that employees can sleep at the office.

In the past, though, the two aspects of Elon aligned enough to produce genuinely admirable results. He has led the development of a hugely popular electric car and produced the only launch system capable of transporting astronauts into orbit from U.S. soil. Even as SpaceX tried to force out residents from the small Texas town where it develops its most ambitious rockets, it converted some locals into Elon fans. SpaceX hopes to attempt the first launch of its newest, biggest rocket there “sometime in the next month or so,” Musk said this week. That launch vehicle, known as Starship, is meant for missions to the moon and Mars, and it is a key part of NASA’s own plans to return American astronauts to the lunar surface for the first time in more than 50 years.

[Read: Elon Musk, baloney king]

Through all this, he tweeted. Only now, though, is his online persona alienating people so much that more of his fans and employees are starting to object. Last summer, a group of SpaceX employees wrote an open letter to company leadership about Musk’s Twitter presence, writing that “Elon’s behavior in the public sphere is a frequent source of distraction and embarrassment for us”; SpaceX responded by firing several of the letter’s organizers. By being so focused on Twitter—a place with many digital incentives, very few of which involve being thoughtful and generous—Musk seems to be ceding ground to the part of his persona that glories in trollish behavior. On Twitter, Egregious Elon is rewarded with engagement, “impressions.” Being reactionary comes with its rewards. The idea that someone is “getting worse” on Twitter is a common one, and Musk has shown us a master class in that downward trajectory in the past year. (SpaceX, it’s worth noting, prides itself on having a “no-asshole policy.”)

Does Visionary Elon have a chance of regaining the upper hand? Sure. An apology helps, along with the admission that maybe tweeting in a contextless void is not the most effective way to interact with another person. Another idea: Stop tweeting. Plenty of people have, after realizing—with the clarity of the protagonist of The Good Place, a TV show about being in hell—that this is the bad place, or at least a bad place for them. For Musk, though, to disengage from Twitter would now come at a very high cost. It’s also unlikely, given how frequently he tweets. And so, he stays. He engages and, sometimes, rappels down, exploring ever-darker corners of the hole he’s dug for himself.

On Tuesday, Musk spoke at a conference held by Morgan Stanley about his vision for Twitter. “Fundamentally it’s a place you go to to learn what’s going on and get the real story,” he said. This was in the hours before Musk retracted his accusations against Thorleifsson, and presumably learned “the real story”—off Twitter. His original offending tweet now bears a community note, the Twitter feature that allows users to add context to posts that may be false or misleading. The social platform should be “the truth, the whole truth—and I’d like to say nothing but the truth,” Musk said. “But that’s hard. It’s gonna be a lot of BS.” Indeed.

Duck Off, Autocorrect

The Atlantic

www.theatlantic.com/technology/archive/2023/03/ai-chatgpt-autocorrect-limitations/673338

By most accounts, I’m a reasonable, levelheaded individual. But some days, my phone makes me want to hurl it across the room. The problem is autocorrect, or rather autocorrect gone wrong—that habit of taking what I am typing and mangling it into something I didn’t intend. I promise you, dear iPhone, I know the difference between its and it’s, and if you could stop changing well to we’ll, that’d be just super. And I can’t believe I have to say this, but I have no desire to call my fiancé a “baboon.”

It’s true, perhaps, that I am just clumsy, mistyping words so badly that my phone can’t properly decipher them. But autocorrect is a nuisance for so many of us. Do I even need to go through the litany of mistakes, involuntary corrections, and everyday frustrations that can make the feature so incredibly ducking annoying? “Autocorrect fails” are so common that they have spawned endless internet jokes. Dear husband getting autocorrected to dead husband is hilarious, at least until you’ve seen a million Facebook posts about it.

Even as virtually every aspect of smartphones has gotten at least incrementally better over the years, autocorrect seems stuck. An iPhone 6, released nearly a decade ago, lacks features such as Face ID and Portrait Mode, but its basic virtual keyboard is not clearly different from the one you use today. This doesn’t seem to be an Apple-specific problem, either: Third-party keyboards that claim to be better at autocorrect can be installed on both iOS and Android. Disabling the function altogether is possible, though it rarely makes for a better experience. Autocorrect’s lingering woes are especially strange now that we have chatbots that are eerily good at predicting what we want or need. ChatGPT can spit out a passable high-school essay while autocorrect still can’t seem to consistently figure out when it’s messing up my words. If everything in tech gets disrupted sooner or later, why not autocorrect?

[Read: The end of high-school English]

At first, autocorrect as we now know it was a major disruptor itself. Although text correction existed on flip phones, the arrival of devices without a physical keyboard required a new approach. In 2007, when the first iPhone was released, people weren’t used to messaging on touchscreens, let alone on a 3.5-inch screen where your fingers covered the very letters you were trying to press. The engineer Ken Kocienda’s job was to make software to help iPhone owners deal with inevitable typing errors; in the quite literal sense, he is the inventor of Apple’s autocorrect. (He retired from the company in 2017, though, so if you’re still mad at autocorrect, you can only partly blame him.)

Kocienda created a system that would do its best to guess what you meant by thinking about words not as units of meaning but as patterns. Autocorrect essentially re-creates each word as both a shape and a sequence, so that the word hello is registered as five letters but also as the actual layout and flow of those letters when you type them one by one. “We took each word in the dictionary and gave it a little representative constellation,” he told me, “and autocorrect did this little geometry that said, ‘Here’s the pattern you created; what’s the closest-looking [word] to that?’”

That’s how it corrects: It guesses which word you meant by judging when you hit letters close to that physical pattern on the keyboard. This is why, at least ideally, a phone will correct teh or thr to the. It’s all about probabilities. When people brand ChatGPT as a “super-powerful autocorrect,” this is what they mean: so-called large language models work in a similar way, guessing what word or phrase comes after the one before.
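To make the “constellation” idea concrete, here is a minimal, hypothetical sketch in Python of geometry-based correction: it assigns each letter a rough position on a QWERTY grid and picks the dictionary word whose key pattern lies closest to what was typed. The layout coordinates, toy dictionary, and distance scoring are illustrative assumptions, not Apple’s actual implementation.

```python
# Illustrative sketch of geometry-based autocorrect (not Apple's real code).
# Lowercase letters only; each key gets an approximate (x, y) position on a
# simplified QWERTY grid, and a typed word is scored against dictionary words
# of the same length by summing the distances between corresponding keys.

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

# Map each letter to rough keyboard coordinates (rows are offset slightly).
KEY_POSITIONS = {
    letter: (col + 0.5 * row, row)
    for row, keys in enumerate(QWERTY_ROWS)
    for col, letter in enumerate(keys)
}

def key_distance(a: str, b: str) -> float:
    """Euclidean distance between two keys on the toy layout."""
    (ax, ay), (bx, by) = KEY_POSITIONS[a], KEY_POSITIONS[b]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def pattern_distance(typed: str, candidate: str) -> float:
    """Compare the 'constellation' of the typed word with a candidate word."""
    return sum(key_distance(t, c) for t, c in zip(typed, candidate))

def autocorrect(typed: str, dictionary: list[str]) -> str:
    """Return the dictionary word whose keyboard pattern is closest to the input."""
    same_length = [w for w in dictionary if len(w) == len(typed)]
    if typed in dictionary or not same_length:
        return typed  # already a word, or nothing comparable to suggest
    return min(same_length, key=lambda w: pattern_distance(typed, w))

if __name__ == "__main__":
    toy_dictionary = ["the", "thy", "tie", "hello", "jello"]
    print(autocorrect("thr", toy_dictionary))    # prints "the" (r sits next to e)
    print(autocorrect("hrllo", toy_dictionary))  # prints "hello"
```

A production keyboard presumably does far more than this, handling transposed letters, word frequency, and sentence context; the sketch captures only the geometric piece Kocienda describes.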

When early Android smartphones from Samsung, Google, and other companies were released, they also included autocorrect features that work much like Apple’s system: using context and geometry to guess what you meant to type. And that does work. If you were to pick up your phone right now and type in any old nonsense, you would almost certainly end up with real words. When you think about it, that’s sort of incredible. Autocorrect is so eager to decipher letters that out of nonsense you still get something like meaning.

Apple’s technology has also changed quite a bit since 2007, even if it doesn’t always feel that way. As language processing has evolved and chips have become more powerful, tech has gotten better at not just correcting typing errors but doing so based on the sentence it thinks we’re trying to write. In an email, a spokesperson for Apple said the basic mix of syntax and geometry still factors into autocorrect, but the system now also takes into account context and user habit.

And yet for all the tweaking and evolution, autocorrect is still far, far from perfect. Peruse Reddit or Twitter and frustrations with the system abound. Maybe your keyboard now recognizes some of the quirks of your typing—thankfully, mine finally gets Navneet right—but the advances in autocorrect are also partly why the tech remains so annoying. The reliance on context and user habit is genuinely helpful most of the time, but it also is the reason our phones will sometimes do that maddening thing where they change not only the word you meant to type but the one you’d typed before it too.

In some cases, autocorrect struggles because it tries to match our uniqueness to dictionaries or patterns it has picked out in the past. In attempting to learn and remember patterns, it can also learn from our mistakes. If you accidentally type thr a few too many times, the system might just leave it as is, precisely because it’s trying to learn. But what also seems to rile people up is that autocorrect still trips over the basics: It can be helpful when Id changes to I’d or Its to It’s at the beginning of a sentence, but infuriating when autocorrect does that when you neither want nor need it to.
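The “learning from our mistakes” behavior is easy to picture as a hypothetical extension of the sketch above (again, not how Apple or Google actually implement it): once the keyboard has recorded thr as something you apparently type on purpose, thr becomes a zero-distance candidate and stops being corrected, and a per-user frequency bonus lets habitual words outrank geometrically closer ones.

```python
# Hypothetical extension of the earlier sketch: blend the geometric score with
# a per-user frequency bonus, so words (or "mistakes") the user keeps typing
# eventually stop being corrected. Reuses pattern_distance() from above.
from collections import Counter

class PersonalizedAutocorrect:
    def __init__(self, dictionary, habit_weight=0.5):
        self.dictionary = list(dictionary)
        self.habit_weight = habit_weight   # how strongly habit offsets geometry
        self.user_counts = Counter()       # words this user has typed and kept

    def record(self, word):
        """Remember a word the user accepted, including their 'mistakes'."""
        self.user_counts[word] += 1

    def score(self, typed, candidate):
        geometry = pattern_distance(typed, candidate)
        habit_bonus = self.habit_weight * self.user_counts[candidate]
        return geometry - habit_bonus      # lower is better

    def correct(self, typed):
        candidates = [w for w in self.dictionary + list(self.user_counts)
                      if len(w) == len(typed)]
        if not candidates:
            return typed
        return min(candidates, key=lambda w: self.score(typed, w))

# keyboard = PersonalizedAutocorrect(["the", "thy", "tie"])
# keyboard.correct("thr")   -> "the"
# keyboard.record("thr"); keyboard.record("thr")
# keyboard.correct("thr")   -> "thr"  (the learned "mistake" is now left alone)
```

Real systems blend many more signals, such as sentence context and general word frequency, on top of this kind of per-user adaptation.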

That’s the thing with autocorrect: anticipating what you meant to say is tricky, because the way we use language is unpredictable and idiosyncratic. The quirks of idiom, the slang, the deliberate misspellings—all of the massive diversity of language is tough for these systems to understand. How we text our families or partners can be different from how we write notes or type things into Google. In a serious work email, autocorrect may be doing us a favor by changing np to no, but it’s just a pain when we meant “no problem” in a group chat with friends.

[Read: The difference between speaking and thinking]

Autocorrect is limited by the reality that human language sits in this strange place where it is both universal and incredibly specific, says Allison Parrish, an expert on language and computation at NYU. Even as autocorrect learns a bit about the words we use, it must, out of necessity, default to what is most common and popular: The dictionaries and geometric patterns accumulated by Apple and Google over years reflect a mean, an aggregate norm. “In the case of autocorrect, it does have a normative force,” Parrish told me, “because it’s built as a system for telling you what language should be.”

She pointed me to the example of twerk. The word used to get autocorrected because it wasn’t a recognized term. My iPhone now doesn’t mess with I love to twerk, but it doesn’t recognize many other examples of common Black slang, such as simp or finna. Keyboards are trying their best to adhere to how “most people” speak, but that concept is something of a fiction, an abstract idea rather than an actual thing. It makes for a fiendishly difficult technical problem. I’ve had to turn off autocorrect on my parents’ phones because their very ordinary habit of switching between English, Punjabi, and Hindi on the fly is something autocorrect simply cannot handle.

That doesn’t mean that autocorrect is doomed to be like this forever. Right now, you can ask ChatGPT to write a poem about cars in the style of Shakespeare and get something that is precisely that: “Oh, fair machines that speed upon the road, / With wheels that spin and engines that doth explode.” Other tools have used the text messages of a deceased loved one to create a chatbot that can feel unnervingly real. Yes, we are unique and irreducible, but there are patterns to how we text, and learning patterns is precisely what machines are good at. In a sense, the sudden chatbot explosion means that autocorrect has won: It is moving from our phones to all the text and ideas of the internet.

But how we write is a forever-unfinished process in a way that Shakespeare’s works are not. No level of autocorrect can figure out how we write before we’ve fully decided upon it ourselves, even if fulfilling that desire would end our constant frustration. The future of autocorrect will be a reflection of who or what is doing the improving. Perhaps it could get better by somehow learning to treat us as unique. Or it could continue down the path that makes it fail so often now: treating us as just like everybody else.