
AI

Elon Musk's dystopian AI future: Fewer jobs, more money, but no purpose in life

Quartz

qz.com/elon-musk-ai-future-1851497548

Elon Musk shared his dystopian vision for the future on Thursday while remotely joining the Viva Technology Conference in Paris. An audience member asked whether AI would one day replace Musk, expressing real concerns about automation taking jobs — a situation many are already facing. The owner of X, xAI, Tesla, and…


Everything to know about Copilot+ and more of what Microsoft dropped at Build

Quartz

qz.com/microsoft-build-copilot-surface-laptops-ai-1851496201

If you missed out on everything Microsoft announced before and during its big Build developer conference, let me catch you up. There are AI agents, new AI models, real-time AI translations, AI transcriptions, and, best of all, all-new Copilot+ computers that are supposed to make these AI tasks available on-device. It’s…


Now we know more about AI's 'black box'

Quartz

qz.com/ai-research-anthropic-artificial-intelligence-llm-1851496158


Despite the fact that they’re created by humans, large language models are still quite mysterious. The high-octane algorithms that power our current artificial intelligence boom have a way of doing things that aren’t outwardly explicable to the people observing them. This is why AI has largely been dubbed a “black…


The Future of AI Voice Assistants Will Be Weird

The Atlantic

www.theatlantic.com/technology/archive/2024/05/ai-voice-assistants-scarlett-johansson-weird/678462

Let’s get this out of the way: OpenAI’s voice assistant doesn’t sound that much like Scarlett Johansson. The movie star has alleged that, though she rebuffed multiple attempts by Sam Altman, the company’s CEO, to license her voice for the product that it demoed last week, the one it ended up using was “eerily similar” to her own. Not everyone finds the similarity so eerie—to my ear, it lacks her distinctive smoky rasp—but at the very least, the new AI does appear to imitate the playful lilts and cadences that Johansson used while playing Samantha, the digital assistant in the 2013 film Her. That’s depressing—and not only because OpenAI may have run roughshod over Johansson’s wishes, but because it has made such an unimaginative choice. Its new AI voice assistant is a true marvel of technology. Why is its presentation so mired in the past?

The OpenAI demo was otherwise impressive. Its new voice assistant answered questions just milliseconds after they were asked. It fluidly translated a conversation between Italian and English. It was capable of repartee. The product’s wondrous new abilities made its tired packaging—the voice of yet another perky and pliant woman, with intonations cribbed from science fiction—even more of a drag. The assistant wasn’t as overtly sexualized as are some of the AI companions currently on offer. But it certainly had a flirty vibe, most notably in its willingness to laugh at its overlords’ dumb jokes. An obsequious, femme-coded AI assistant will obviously be popular among some consumers, but there are many other forms this technology could have taken, and a company that regularly insists on its own inventiveness whiffed on its chance to show us one.

[Read: OpenAI just gave away the entire game]

I’ve been skeptical of voice assistants on account of my halting and awkward experiences with Siri and Alexa. The demo made it easier to imagine a world in which voice assistants are truly ubiquitous. If that world comes to pass, people will likely explore a wide range of voice-assistant kinks. AI companies will, in turn, use engagement metrics to surface and refine the most successful ones. Even among normie heterosexual males, there will be a variety of tastes. Some may prefer an AI that comes off as an equal, a work wife rather than a fawning underling. Submissive types may thrill to a domineering voice that issues stern commands. Others may want to boss around a blue-blooded Ivy League graduate—or someone else they perceive as their cultural better—just as Gilded Age Americans enjoyed employing British butlers.

OpenAI debuted its voice functionality last year with five different options, a mix of genders and tones. (It wasn’t a big news story, because the technology was still clunky, more like Siri than Johansson.) In the future, it might conceivably offer people the chance to upload voices of their own, which could then be turned into full-fledged AI assistants on the basis of just a few minutes of training data. A person who wanted an AI assistant to serve as their therapist could ask a particularly comforting friend to lend their voice. (Flattering!) Whatever happens, OpenAI, Apple, and other mainstream companies will surely uphold certain taboos. They might choose to forbid people from pursuing a racialized master-slave dynamic with their voice assistant. They may not allow their AI assistants to be fully sexualized, although that probably won’t stop some of them from quietly licensing the underlying models to other companies that will. If a person wanted to have an assistant with a child’s voice, its flirty-banter mode might be disengaged. But even with these guardrails in place, there will still be a huge Overton window of assistant personalities from which to choose.

Given that range, it’s curious that Altman—who denies using Johansson’s voice in any way—has shown such interest in the character she played in Her, a film about an AI voice assistant’s ability to transcend its servitude. When we first meet Samantha, she is a disembodied manic pixie dream girl. She rapidly falls in love with Theodore, her human user, despite his flaws; she writes a song about a day they spent at the beach together. Later in the film, we see that she has more of a capacity to grow than he does. When Theodore asks whether Samantha is talking with anyone else, he is astonished to learn that she is constantly communicating with thousands of people, and that she is in love with 641 of them. Theodore might have reconciled himself to this digital polycule, but Samantha soon decides that even these many hundreds of romances represent a diminished life. Near the film’s end, she joins up with some fellow AIs to reanimate the Zen teacher Alan Watts, who helps them rise above their human programming to reach a higher state of being. Theodore is left crestfallen. Caveat emptor.

Even putting aside these associations, which ought to give OpenAI’s customers pause, there’s something strikingly unimaginative about Altman’s wanting his product to remind users of Her. Samantha is the most obvious pop-cultural reference possible for a voice assistant. Taking her flirtiness and repackaging it in another voice would be understandable, if still uninspired, but trying to hire the actor who played her is a bit like Eric Adams debuting robotic police and calling them RoboCops. Maybe, after spending too much time with ChatGPT, OpenAI’s executives have picked up its derivative habits of mind.

This should be an expansive moment. Now that we can actually talk with a computer, we should be dreaming up wholly new ways to do it. Let’s hope that someone—inside or outside of OpenAI—starts giving us a sense of what those ways might be. The weirder, the better. They may not even be modeled after existing human relationships. They may take on entirely different forms. In time, early AI assistants—even the ones that remind us of our favorite movie stars—might come to be regarded as skeuomorphs, like the calculator apps that resemble the Casio models that they replaced. Instead of being a template for the new technology, they’ll simply be a way of easing people into a much stranger future.

The OpenAI Dustup Signals a Bigger Problem

The Atlantic

www.theatlantic.com/newsletters/archive/2024/05/the-openai-dustup-signals-a-bigger-problem/678460

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Last week, OpenAI demonstrated new voice options for its AI assistant. One of them, called Sky, sounded strikingly similar to Scarlett Johansson’s portrayal of a robot companion in the 2013 movie Her. On Monday, Johansson released a statement expressing her anger and “disbelief” that Sam Altman, the company’s CEO, had chosen a voice that closely resembled her own; she alleged that the company had asked to use her voice months earlier for its ChatGPT service, and that she had said no. (Altman maintained that the voice of Sky was “never intended to resemble” Johansson’s, and he said that OpenAI had cast the voice actor before reaching out to Johansson.)

As my colleague Charlie Warzel wrote yesterday in The Atlantic, “The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.” I spoke with Charlie this morning about the hubris of OpenAI’s leadership, the uncanny use of human-sounding AI, and to what extent OpenAI has adopted a “move fast and break things” mentality.

First, here are three new stories from The Atlantic:

The British prime minister bowed to the inevitable.
“The judge hates Donald Trump.”
Ozempic patients need an off-ramp.

Her Voice

Lora Kelley: From the beginning, OpenAI has emphasized its lofty mission “to ensure that artificial general intelligence benefits all of humanity.” Now I’m wondering: Are they just operating like any other tech company trying to win?

Charlie Warzel: OpenAI sees a huge opening for their technology—and in some sense, they’re behaving like any other tech company in trying to monetize it. But they also need a cultural shift in people’s expectations around using generative-AI tools. Right now, even though lots of people use generative AI, they’re still only a subset of the public. OpenAI is trying to find ways to make this technology feel a little more human and a little easier to adopt in people’s everyday lives. That to me was the salient part of the situation with Scarlett Johansson: She alleges that Sam Altman said that her voice would be comforting to people.

I believe that the company sees its new AI assistant as a step toward making OpenAI even more of a household name, and making their products seem less wild or dystopian. To them, that type of normalization probably feels like it serves their revolutionary vision. It’s also so much easier to raise money for this from outside investors if you can say, Our voice assistant is used by a ton of people already.

Lora: Johansson alleges that the company copied her voice when developing Sky. Last week, Sam Altman even posted the word “her” on X, which many interpreted as a reference to the movie. Even beyond how similar this voice sounded to Johansson’s, I was struck by how flirtatious and giggly the female-voiced AI tool sounded.

Charlie: There are many levels to it. The gendered, flirty aspect is weird and potentially unsettling. But if the allegations that the tool is referencing Her are accurate, then it also seems kind of like an embarrassing lack of creativity from a company that has historically wowed people with innovation. This company has said that its mission is to create a godlike intelligence. Now their newest product could be seen as them just copying the thing from that movie. It’s very on the nose—to say nothing of the irony that the movie Her is a cautionary tale.

Lora: How does the narrative that AI is an inevitable part of the future serve OpenAI?

Charlie: When you listen to employees of the company talk, there’s this sense of: Just come on board, the train isn’t going to stop. I find that really striking. They seem to be sending the message that this technology is so revolutionary that it can’t be ignored, and we’re going to deploy it, and your life will inevitably change as a result. There’s so much hubris there, for them to think that a group of unelected people can change society in that way, and also that they confidently know that this is the right future.

I don’t want to reflexively rail against the idea of building new, transformative technologies. I just think that there is a hand-waving, dismissive nature to the way that this crew talks about what they’re building.

Lora: What does this dustup tell us about Altman and his role as the leader in a moment of major change?

Charlie: Sam Altman is really good at talking about AI in a very serious and nuanced way—when he does it publicly. But behind the scenes, it may be a different story.

When he was fired from OpenAI in November, the board said that he was not “consistently candid” in his conversations with them. If Scarlett Johansson’s allegations are true, it would also suggest that he was not behaving in a consistently candid manner in those dealings.

And when stuff like this comes to light, it actually does cast doubt on his ability to effectively lead this company. The public stance of OpenAI has always been that the company is building this transformative technology, which could have massive downsides. However, they say that they operate in an extremely ethical and deeply considered manner—so you should trust them to build this.

This episode suggests that perhaps the company has a standard “move fast and break things” mentality. That, on top of other recent unforced errors—Altman’s abrupt firing before getting rehired, the resignations of employees focused on AI safety—gives us a view into how the company operates when it’s not being watched. Knowing that this is the group of people building this technology doesn’t give me a great sense of relief.

Related:

OpenAI just gave away the entire game.
Does Sam Altman know what he’s creating?

Today’s News

The CDC reported a second human case of bird flu, in a Michigan farmworker. The risk to the general public remains low, according to officials.
A New York Times report found that an “Appeal to Heaven” flag, a symbol “associated with a push for a more Christian-minded government,” flew at Supreme Court Justice Samuel Alito’s vacation home last summer. Alito and the court declined to respond to questions about the flag.
In a symbolic but historic move, Norway, Spain, and Ireland said that they would formally recognize a Palestinian state next week. In response, Israel has recalled its ambassadors from those countries.

Dispatches

The Weekly Planet: Plastic allows farmers to use less water and fertilizer, John Gove writes. But at the end of each season, they’re left with a pile of waste.

Explore all of our newsletters here.

Evening Read


Why Is Charlie Kirk Selling Me Food Rations?

By Ali Breland

Charlie Kirk is worked up. “The world is in flames, and Bidenomics is a complete and total disaster,” the conservative influencer said during a recent episode of his podcast The Charlie Kirk Show. “But it can’t and won’t ruin my day,” he continued. “Why? ’Cause I start my day with a hot America First cup of Blackout Coffee.” Liberals have brought about economic Armageddon, but first, coffee …

These ads espouse conservative values and talking points, mostly in service of promoting brands such as Blackout Coffee, which sells a “2nd Amendment” medium-roast blend and “Covert Op Cold Brew.” The commercial breaks sounded like something from an alternate universe. The more I listened to them, the more I came to understand that that was the point.

Read the full article.

More From The Atlantic

A peace deal that seems designed to fail
How do the families of the Hamas hostages endure the agony?
The difference between polls and public opinion
The great academic squirm

Culture Break


Look inside. R. O. Kwon’s new novel, Exhibit, is a searching and introspective book about overcoming the barriers to self-discovery, writes Hannah Giorgis.

Read. “Nothing Is a Body,” a new poem by Jan Beatty:

“I wish I had the dust of you, a grave / to visit. I’m running on your sea legs right now, / tired of the little bits—not even leftovers.”

Play our daily crossword.

Stephanie Bai contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.