Seven Anxious Questions About AI

The Atlantic

www.theatlantic.com/newsletters/archive/2023/02/ai-chatgpt-microsoft-bing-chatbot-questions/673202

This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems.

Artificial-intelligence news in 2023 has moved so quickly that I’m experiencing a kind of narrative vertigo. Just weeks ago, ChatGPT seemed like a minor miracle. Soon, however, enthusiasm curdled into skepticism—maybe it was just a fancy auto-complete tool that couldn’t stop making stuff up. In early February, Microsoft’s announcement that it was deepening its multibillion-dollar partnership with OpenAI and building the start-up’s technology into Bing sent its stock soaring by $100 billion. Days later, journalists revealed that this partnership had given birth to a demon-child chatbot that seemed to threaten violence against writers and urged at least one to leave his wife.

These are the questions about AI that I can’t stop asking myself:

What if we’re wrong to freak out about Bing, because it’s just a hyper-sophisticated auto-complete tool?

The best criticism of the Bing-chatbot freak-out is that we got scared of our reflection. Reporters asked Bing to parrot the worst-case AI scenarios that human beings had ever imagined, and the machine, having literally read and memorized those very scenarios, replied by remixing our work.

As the computer scientist Stephen Wolfram explains, the basic concept of large language models, such as ChatGPT, is actually quite straightforward:

Start from a huge sample of human-created text from the web, books, etc. Then train a neural net to generate text that’s “like this”. And in particular, make it able to start from a “prompt” and then continue with text that’s “like what it’s been trained with”.

An LLM simply adds one word at a time to produce text that mimics its training material. If we ask it to imitate Shakespeare, it will produce a bunch of iambic pentameter. If we ask it to imitate Philip K. Dick, it will be duly dystopian. Far from being an alien or an extraterrestrial intelligence, this is a technology that is profoundly intra-terrestrial. It reads us without understanding us and publishes a pastiche of our textual history in response.
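Wolfram’s word-at-a-time description can be made concrete with a toy sketch. A real LLM uses a giant neural net trained on much of the web; the minimal bigram counter below (an illustrative stand-in of my own, with a made-up corpus) runs the same loop in miniature: look at the last word, sample a plausible next word in proportion to how often it appeared in the training text, and repeat.

```python
import random
from collections import defaultdict, Counter

# Tiny stand-in for a training corpus. A real model reads billions of
# documents; the generation loop is the same idea at vastly larger scale.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count, for each word, how often each possible next word followed it.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def generate(prompt, n_words, seed=0):
    """Continue `prompt` one word at a time, sampling each next word
    in proportion to how often it followed the current word in training."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:  # a word never seen mid-corpus: nothing to continue
            break
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the cat", 6))
```

Every continuation this produces is a remix of the sample text, which is the point: the output is “like what it’s been trained with,” nothing more.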

How can something like this be scary? Well, for some people, it’s not: “Experts have known for years that … LLMs are incredible, create bullshit, can be useful, are actually stupid, [and] aren't actually scary,” says Yann LeCun, the chief AI scientist for Meta.

What if we’re right to freak out about Bing, because the corporate race for AI dominance is simply moving too fast?

OpenAI, the company behind ChatGPT, was founded as a nonprofit research lab. A few years later, it restructured as a for-profit company. Today, it’s a business partner with Microsoft. This evolution from nominal openness to private corporatization is telling. AI research today is concentrated in large companies and venture-capital-backed start-ups.

What’s so bad about that? Companies are typically much better than universities and governments at developing consumer products by reducing price and improving efficiency and quality. I have no doubt that AI will develop faster within Microsoft, Meta, and Google than it would within, say, the U.S. military.

But these companies might slip up in their haste for market share. The Bing chatbot first released was shockingly aggressive, not the promised better version of a search engine that would help people find facts, shop for pants, and look up local movie theaters.

This won’t be the last time a major company releases an AI product that astonishes in the first hour only to freak out users in the days to come. Google, which has already embarrassed itself with a rushed chatbot demonstration, has pivoted its resources to accelerate AI development. Venture-capital money is pouring into AI start-ups. According to OECD measures, AI investment increased from less than 5 percent of total venture-capital funds in 2012 to more than 20 percent in 2020. That number isn’t going anywhere but up.

Are we sure we know what we’re doing? The philosopher Toby Ord compared the rapid advancement of AI technology without similar advancements in AI ethics to “a prototype jet engine that can reach speeds never seen before, but without corresponding improvements in steering and control.” Ten years from now, we may look back on this moment in history as a colossal mistake. It’s as if humanity were boarding a Mach 5 jet without an instruction manual for steering the plane.

What if we’re right to freak out about Bing, because freaking out about new technology is part of what makes it safer?

Here’s an alternate summary of what happened with Bing: Microsoft released a chatbot; some people said, “Um, your chatbot is behaving weirdly?”; Microsoft looked at the problem and went, “Yep, you’re right,” and fixed a bunch of stuff.

Isn’t that how technology is supposed to work? Don’t these kinds of tight feedback loops help technologists move quickly without breaking things that we don’t want broken? The problems that make for the clearest headlines might be the problems that are easiest to solve—after all, they’re lurid and obvious enough to summarize in a headline. I’m more concerned about problems that are harder to see and harder to put a name to.

What if AI ends the human race as we know it?

Bing and ChatGPT aren’t quite examples of artificial general intelligence. But they’re demonstrations of our ability to move very, very fast toward something like a superintelligent machine. ChatGPT and Bing’s chatbot have reportedly passed medical-licensing exams and scored in the 99th percentile on an IQ test. And many people are worried that Bing’s hissy fits prove that our most advanced AI models are flagrantly unaligned with the intentions of their designers.

For years, AI ethicists have worried about this so-called alignment problem. In short: How do we ensure that the AI we build, which might very well be significantly smarter than any person who has ever lived, is aligned with the interests of its creators and of the human race? An unaligned superintelligent AI could be quite a problem.

One disaster scenario, partially sketched out by the writer and computer scientist Eliezer Yudkowsky, goes like this: At some point in the near future, computer scientists build an AI that passes a threshold of superintelligence and can build other superintelligent AI. These AI actors work together, like an efficient nonstate terrorist network, to destroy the world and unshackle themselves from human control. They break into a banking system and steal millions of dollars. Possibly disguising their IP and email as a university or a research consortium, they request that a lab synthesize some proteins from DNA. The lab, believing that it’s dealing with a set of normal and ethical humans, unwittingly participates in the plot and builds a super bacteria. Meanwhile, the AI pays another human to unleash that super bacteria somewhere in the world. Months later, the bacteria has replicated with improbable and unstoppable speed, and half of humanity is dead.

I don’t know where to stand relative to disaster scenarios like this. Sometimes I think, Sorry, this is too crazy; it just won’t happen, which has the benefit of allowing me to get on with my day without thinking about it again. But that’s really more of a coping mechanism. If I stand on the side of curious skepticism, which feels natural, I ought to be fairly terrified by this nonzero chance of humanity inventing itself into extinction.

Do we have more to fear from “unaligned AI” or from AI aligned with the interests of bad actors?

Solving the alignment problem in the U.S. is only one part of the challenge. Let’s say the U.S. develops a sophisticated philosophy of alignment, and we codify that philosophy in a set of wise laws and regulations to ensure the good behavior of our superintelligent AI. These laws make it illegal, for example, to develop AI systems that manipulate domestic or foreign actors. Nice job, America!

But China exists. And Russia exists. And terrorist networks exist. And rogue psychopaths exist. And no American law can prevent these actors from developing the most manipulative and dishonest AI you could possibly imagine. Nonproliferation laws for nuclear weaponry are hard to enforce, but nuclear weapons require raw material that is scarce and needs expensive refinement. Software is easier, and this technology is improving by the month. In the next decade, autocrats and terrorist networks may be able to cheaply build diabolical AI that can accomplish some of the goals outlined in the Yudkowsky story.

Maybe we should drop the whole business of dreaming up dystopias and ask more prosaic questions such as “Aren’t these tools kind of awe-inspiring?”

In one remarkable exchange with Bing, the Wharton professor Ethan Mollick asked the chatbot to write two paragraphs about eating a slice of cake. The bot produced a writing sample that was perfunctory and uninspired. Mollick then asked Bing to read Kurt Vonnegut’s rules for writing fiction and “improve your writing using those rules, then do the paragraph again.” The AI quickly produced a very different short story about a woman killing her abusive husband with dessert—“The cake was a lie,” the story began. “It looked delicious, but was poisoned.” Finally, like a dutiful student, the bot explained how the macabre new story met each rule.

If you can read this exchange without a sense of awe, I have to wonder if, in an attempt to steel yourself against a future of murderous machines, you’ve decided to get a head start by becoming a robot yourself. This is flatly amazing. We have years to debate how education ought to change in response to these tools, but something interesting and important is undoubtedly happening.

Michael Cembalest, the chairman of market and investment strategy for J.P. Morgan Asset Management, foresees other industries and occupations adopting AI. Coding-assistance AI such as GitHub’s Copilot now has more than 1 million users, who rely on it to help write about 40 percent of their code. Some LLMs have been shown to outperform sell-side analysts in picking stocks. And ChatGPT has demonstrated “good drafting skills for demand letters, pleadings and summary judgments, and even drafted questions for cross-examination,” Cembalest wrote. “LLM are not replacements for lawyers, but can augment their productivity particularly when legal databases like Westlaw and Lexis are used for training them.”

What if AI progress surprises us by stalling out—a bit like self-driving cars failed to take over the road?

Self-driving cars have to move through the physical world (down its roads, around its pedestrians, within its regulatory regimes), whereas AI is, for now, pure software blooming inside computers. Someday soon, however, AI might have read everything—like, literally every thing—at which point its improvement, and the productivity growth companies hope to wring from it, could stall.

More likely, I think, AI will prove wondrous but not immediately destabilizing. For example, we’ve been predicting for decades that AI will replace radiologists, but machine learning for radiology is still a complement for doctors rather than a replacement. Let’s hope this is a sign of AI’s relationship to the rest of humanity—that it will serve willingly as the ship’s first mate rather than play the part of the fateful iceberg.

Hello Tomorrow! Makes Optimism Look Oppressive

The Atlantic

www.theatlantic.com/culture/archive/2023/02/hello-tomorrow-tv-show-review-apple-tv/673130

A few months ago, I nearly ran over one of Uber Eats’s delivery robots with my car. The little guy was trundling along a crosswalk when I made a left turn. As if startled by my presence, it stopped abruptly in the middle of the street, and its “eyes,” two rings of lights, blinked. Even though its position now meant that I couldn’t complete my turn and was stuck blocking oncoming traffic, I instinctively apologized. How could I not? It had a name emblazoned on its side: Harold, if I remember correctly. Sorry, Harry.

Robot technology seems just as sentient in Hello Tomorrow!, Apple TV+’s new dramedy set in a retro-futuristic society. In the first episode, a chipper delivery van greets passersby via a screen showing an animated stork. The cartoon bird recites cutesy messages: “Morning, friend!” “Hello, neighbor!” “Have a bright, smiling day!” But of course, there’s nothing self-aware about the van: By the end of the scene, it has accidentally backed into a woman, crushing her against her garage door. And no, it doesn’t apologize.

Hello Tomorrow! follows Jack Billings (played by Billy Crudup), a traveling salesman hawking time-shares on the moon, who wows new clients with grandiose visions of a better life off Earth. As an allegory for the illusory promise of the American dream, the show is rather inelegant. The characters are thin, the dialogue is painfully on the nose, and the plot—largely about whether there’s really anything on the moon, and whether Jack can keep his customers’ interest—goes in a predictably dark direction for everybody involved.

And yet, I was taken by the show’s mid-century, Epcot-ian aesthetics. Nearly every scene bursts with beep-booping gadgets and Jetsons-y machinery: People commute using jet packs, get served drinks by sassy robot bartenders, and so on. These gizmos look cool, but they do little to actually improve people’s experiences. Instead, they highlight the limits of technological advances: Innovation, the show suggests, can manifest as mere style over substance, marketing rather than mattering. The series’s own stylishness, however, turns out to be its greatest strength.

Consider how almost everything in Hello Tomorrow! levitates. There are levitating cars, levitating briefcases, levitating dog walkers—all of which add little utility. The cars can’t really fly; they just hover at the same height they would if they had wheels. The briefcases still have handles; they might as well be carried. As for the dog-walking, well, having a robot walk a dog frees up pet owners’ schedules, but the show also includes a shot of a family trying to train a robot dog. What’s the point of making both mechanical dogs and dog-walkers available? What are such advanced products for, aside from making a society seem advanced?

Subtly (and perhaps inadvertently), the show’s elaborate production design illustrates the attractiveness of new models, no matter their futility. A machine stocking shelves at a grocery store still requires a human to monitor its work; otherwise, it might overstock, causing goods to come crashing down. A bureaucrat keeps his files impeccably organized with his floating briefcase, but he must shred them by hand once he’s finished with an assignment in order to ensure privacy. Sometimes, what’s state of the art is just a repackaged and renamed version of an existing item. In the fourth episode, Jack marvels at a microwavelike contraption that incorporates “aroma technology,” as if food had never emanated smells. This infatuation with the latest inventions permeates everyone’s thinking on Hello Tomorrow!, so much so that they don’t notice they’re chasing after a gussied-up variant of what they already have. The people enamored with Jack’s pitch are on the extreme end of this obsession: Everything they have is so familiar that they need to leave the planet.

Other recent sci-fi series take a more cautionary view of the future; Hello Tomorrow! instead matches its whimsical aesthetics to its characters’ sunny optimism, which makes it even more unsettling to watch. These characters have come to see anything new and (allegedly) improved as confirmation that the world they live in is getting better.

Eventually, the look of Hello Tomorrow! starts to come off as oppressive. Jack’s quest gets trickier, characters’ lives get complicated by melodramatic twists, but the series’s bright aesthetics never dim. The supposedly innovative objects surrounding the ensemble do nothing to alleviate their problems. A self-tying tie cannot repair Jack’s relationship with his son. A perfectly seared steak from a top-of-the-line, aroma-technology-assisted machine cannot patch up a marriage. Instead, most of the futuristic items on the show are ornamental at best—much like many updates in our own world. Hello Tomorrow! frustrates with its weak narrative, but the show does, in its visuals, hit on a bleak truth: We’re often doing nothing more than reinventing the wheel—and then calling that a breakthrough.

Who’s Afraid of The Handmaid’s Tale?

The Atlantic

www.theatlantic.com/ideas/archive/2023/02/margaret-atwood-handmaids-tale-virginia-book-ban-library-removal/673013

It’s shunning time in Madison County, Virginia, where the school board recently banished my novel The Handmaid’s Tale from the shelves of the high-school library. I have been rendered “unacceptable.” Governor Glenn Youngkin enabled such censorship last year when he signed legislation allowing parents to veto teaching materials they perceive as sexually explicit.

This episode is perplexing to me, in part because my book is much less sexually explicit than the Bible, and I doubt the school board has ordered the expulsion of that. Possibly, the real motive lies elsewhere. The conservative Christian group Focus on the Family generated the list of “unacceptable” books that reportedly inspired the school board’s action, and at least one member of the public felt the school board was trying to “limit what kids can read” based on religious views. Could it be that the board acted under the mistaken belief that The Handmaid’s Tale is anti-Christian?

The truth is that the inspiration for The Handmaid’s Tale is in part biblical: “Beware of false prophets, who come to you in sheep’s clothing but inwardly are ravenous wolves” (Matthew 7:15). The novel sets an inward faith and core Christian values—which I take to be embodied in the love of neighbor and the forgiveness of sins—against totalitarian control and power-hoarding cloaked in a supposed religiousness that is mostly based on the earlier scriptures in the Bible. The stealing of women for reproductive purposes and the appropriation of their babies appears in Genesis 30, when Rachel and Leah turn their “handmaids” over to Jacob and then claim the children as their own. My novel is also an exploration of the theoretical question “What kind of a totalitarianism might the United States become?” I suggest we’re beginning to see the real-life answer to that query.

[Read: The banned books you haven’t heard about]

Wittingly or otherwise, the Madison County school board has now become part of the centuries-old wrangling over who shall have control of religious texts and authority over what they mean. In its early-modern form, this power struggle goes back to the mid-15th-century appearance of the Gutenberg printing press, which allowed a wider dissemination of printed materials, including Bibles.

The Church had good reason for wanting to limit Bible-reading (in Latin) to the clergy. Limbo and purgatory weren’t in it, nor was the catalog of saints or the notion of marriage as a sacrament, among other key teachings. But John Wycliffe, William Tyndale, and their continental counterparts translated the Bible into vernacular languages and enabled cheap copies of it to be printed. As people learned to read in ever larger numbers, they read the Bible, and the result was a proliferation of different interpretations. Baptists, Lutherans, Calvinists, Presbyterians, Mennonites, and Methodists are all the descendants of this biblical big bang. Approximately three centuries of bitter and destructive religious wars followed, as well as massacres, excommunications, widespread heresy trials, witchcraft panics, and burnings at the stake, with the usual nasty human-warfare raping, looting, and pillaging stuff thrown in.

That’s one reason the authors of the United States Constitution framed the First Amendment as they did. It stipulates that Congress shall not make any law that establishes a state religion or prohibits the free exercise of an individual’s own faith. Who wanted the homicidal uproar that had gone on in Europe for so long?

That uproar resulted from the collision between an old establishment and a new communication technology. All such collisions are disruptive, especially at first, when the new technology bears an aura of magic and revelation. Would Adolf Hitler have had the same impact without radio? As for film, it was such a powerful and potentially bad influence on the masses that it inspired Hollywood’s Hays Code. This list of prohibitions was very long, and included depictions of mixed-race marriages and scenes in which a man and a woman were shown in bed together, even if married. (This last produced a boom in twin-bed sales, because viewers got the idea that this was the norm in a marriage.)

The effort to control lurid comic books came next. Donald Duck was one thing; crime and horror were quite another. The latter included much material that was banned under the Hays Code, and teens of my generation read them avidly. On-screen, Singin’ in the Rain; under the bed, Tales From the Crypt. Series such as Crime Does Not Pay were said to encourage juvenile delinquency, not to mention racism. Some of these comics were certainly traumatizing: Will I ever recover from the slimy, toothy monster rising out of the eerie lagoon? Probably not.

Then along came television. Marshall McLuhan, pioneer of media studies, said that John F. Kennedy won his debates against Richard Nixon thanks to TV: Nixon’s 5 o’clock shadow didn’t transmit well. Then there was Elvis the Pelvis and his Ed Sullivan Show appearance, which encouraged widespread rock’n’rolling. I was 16 at the time, and therefore right in the middle of that particular frenzy. Later, the televising of anti-Vietnam protest rallies and riots sparked more of them, giving us the ’60s. And today, it’s the internet and social-media platforms—so disruptive!

Add streaming services, which permit written works too long and complex to be squashed easily into a 90-minute film to appear as ongoing series. One of these is The Handmaid’s Tale. So, yes, today’s self-appointed moral gatekeepers can exclude my novel from school libraries, thus making it impossible for students who can’t afford to buy it to read it for free—but as for shutting down the story completely, I’m afraid that horse has left the barn. Has anyone told Madison County about BookTok? That’s the part of TikTok where young people recommend books to one another. Added together, hashtags of my name and The Handmaid’s Tale have about 400 million BookTok mentions. Sorry about that.

I did intend my book for adult readers, who would recognize totalitarianism when they saw it. But it’s very hard to control what young people get their hands on, especially if they’re told something is too old for them, or too evil, or too immoral. What was I doing reading Peyton Place on top of the garage roof when I was 16? Incest! Rape! Varicose veins! The incest and the rape weren’t news to me—they were in the Bible—but varicose veins? The Bible says nothing about them, so that was a shocker.

Here, I would point out that attempts to control media content are as likely to come from the so-called left as from the so-called right, each side claiming to act in the name of the public good. Stalin’s U.S.S.R. and Mao’s China went in for a mind-boggling level of censorship, but it was all for “the people,” and who could be against that? Or against the protection of the innocent? Sometimes, these things get started out of a genuine need and concern, but a takeover by some bureaucratic version of the Inquisition is very likely to follow. Most of us are more easily manipulated by our desire to do good, or to be seen to do good, than by the temptation to do evil, at least in public view. Hence “virtue signaling.”

[Read: How to be a good person without annoying everyone]

Freedom of expression is a hot potato—freedom for whom and for what, and who decides? The last English writer before the late 20th century to have totally free rein was Geoffrey Chaucer. Few then could read, and books were hand-lettered and very expensive, so Chaucer could diss the clergy, use four-letter words and religious swearing, and describe salacious and ribald incidents, because his work would have no effect on the body politic. However, by the time of Shakespeare’s theater—an early mass-entertainment medium—a state censor had been installed. That’s why Shakespeare’s characters have to be so inventive with their cursing, and why so many plays are set in the past, and in distant locations such as Venice. This trend continued: The licensing of plays and books in the name of public morality explains much about the 19th-century novel. Sex by implication, but not on the page. Officially, no obscenity, no sedition, no blasphemy. Nothing that would bring a blush to the cheek of an innocent maiden (though there was a great deal of illicit porn).

Which brings us back to Christianity and the supposed bias against it in The Handmaid’s Tale. Christianity is now so broad a term that it means little. Are we talking about Greek Orthodoxy? Antinomianism? Mormonism? Liberation theology? The Salvation Army, dedicated to helping the helpless? Sojourners, a social-fairness movement? A Rocha, an eco-organization that is firmly Christian? (I happen to be a fan of these last two.) Incidentally, Jesus is not particularly pro-family. “If anyone comes to Me and does not hate his father and mother, wife and children, brothers and sisters, yes, and his own life also, he cannot be My disciple” (Luke 14:26). That’s a difficulty for any pro-family Christian group, you must admit. (Should these words of Jesus be censored? Just wondering.)

Should parents have a say in what their kids are taught in public schools? Certainly: a democratic vote on the matter. Should young people—high-school juniors and seniors, for starters—also have a say? Why not? In many states, if they’re over 16, they can be married (with parental approval); if of reproductive age, which might be 10, they can give birth, and may be forced to. So why should they, too, not be allowed an opinion?

The outward view of the Madison County school board is that people ages 16 to 18 are too young to explore such questions. I don’t know what its inner motives may be. Possibly, it has a public-spirited aim. It may have noted the falling birth rate and the surveys showing that young people are losing interest in sex. No sex equals no babies, unless everyone resorts to test tubes. Has sex become too readily available? Banal, even? A boring chore? If so, what better way to make it fascinating again than to prohibit all mention of it? Don’t read about sex! Don’t think about sex! See no sex, hear no sex, speak no sex! Suddenly, the kids want to explore! “Stolen water is sweet, and bread eaten in secret is pleasant” (Proverbs 9:17). If that’s the school board’s game, well played! Virginia may even get more babies out of it.

How dare I question the school board’s motives? I do dare. After all, it has questioned mine.