A Court Ruling That Targets Trump’s Persona

The Atlantic

https://www.theatlantic.com/newsletters/archive/2023/09/new-york-ruling-trump-organization/675475/

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Donald Trump is a deals guy. He rode his image as a real-estate mogul and a maestro of transactions first to pop-culture stardom, then to the White House. Now a judge has ruled that much of that dealmaking was fraudulent: New York Judge Arthur Engoron found yesterday that Trump and his associates, including his sons Eric and Donald Jr., committed persistent fraud by toggling estimates of property values in order to get insurance and favorable terms on loans. The judge ordered that some of the Trump Organization’s “certificates,” or corporate charters, be canceled, and that a receiver be appointed by the court to dissolve some of its New York companies. This latest blow for Trump puts on record that his mythos of business acumen was largely built on lies.

This ruling on its own hinders some of the Trump Organization’s operations in New York State by cutting off Trump’s control of assets. But really, it is just a first step toward the broader business restrictions on Trump that New York Attorney General Letitia James is seeking, Celia Bigoness, a clinical professor of law at Cornell, told me. And to the extent that this ruling shows how the judge feels about James’s suit, first brought against Trump last year, things are not looking great for him. In the trial set to start next week, the judge will determine penalties for the fraud committed: James has requested that those include a $250 million fine and restrictions that prevent the former president and some of his children from running a company in New York ever again. “Trump is synonymous with New York,” Bigoness said. Losing control of his New York businesses and properties would amount to “his home and the place that he has tied himself to shutting him out entirely.” It could also be hugely costly.

This week’s summary judgment is unusual, legal experts told me: The judge essentially determined that it was so clear that Trump had committed fraud that it wasn’t worth wasting time at a trial figuring that part out. Instead, the trial will be used to determine whether Trump’s New York businesses should be further limited as punishment for the fraud—and whether the other demands of James’s suit will be met. It’s somewhat rare for a summary judgment to get to the core of a case like this, and the judge’s decision was distinctly zingy and personal. Responding to Trump’s team’s claims that the suit wasn’t valid, Judge Engoron said that he had already rejected their arguments, and that he was reminded of the “time-loop in the film ‘Groundhog Day.’” In a footnote to his ruling, he quoted a Chico Marx line from Duck Soup: “Well, who ya gonna believe, me or your own eyes?”

In another unusual move, the judge also included individual fines against Trump’s lawyers as part of the ruling, charging each $7,500 for bringing arguments so “frivolous” that they wasted the court’s time. Separately, Trump’s lawyers are trying to sue the judge (a long-shot attempt). Trump, for his part, posted on Truth Social that he had “done business perfectly”; he also called the judge “deranged.” Reached for comment, the Trump attorney Christopher Kise called the decision “outrageous” and “completely disconnected from the facts and governing law.” “President Trump and his family will seek all available appellate remedies to rectify this miscarriage of justice,” he said in an emailed statement. An appeals process from Trump’s camp could extend into the next presidential-election cycle. His team might also attempt to get an emergency stay to prevent the trial from starting next week.

This ruling, and the rest of James’s suit, are circumscribed to New York. Technically, Trump would still be free to spin up new businesses as he sees fit in another state, and he has holdings beyond New York. But even if he could legally incorporate a new business in, say, Florida or Illinois, it might not make financial or brand sense for him. The fallout from this case could wind up being very costly for Trump, so setting up shop elsewhere, although not impossible, could be a major financial hurdle. Plus, “New York is the place Trump wants to do business and has been doing business for forever,” Caroline Polisi, a white-collar defense attorney and lecturer at Columbia Law School, told me.

Yesterday’s ruling may do little to dampen Trump’s appeal among his die-hard fans, who have stuck with him through all manner of scandals, including a running list of criminal indictments. But it could puncture Trump’s persona. My colleague David A. Graham wrote today that the fact that Trump and his co-defendants, including his sons, committed fraud is not surprising. What is surprising, he argued, is that they are facing harsh consequences. “Trump’s political career is based on the myth that he was a great businessman,” David told me. “This ruling cuts straight to the root of that, showing that his business success was built on years of lies.” Indeed, when Letitia James filed suit against Trump last year, she dubbed his behavior the “art of the steal.”

Related:

The end of Trump Inc.

It’s just fraud all the way down.

Today’s News

The U.S. soldier Pvt. Travis King, who sprinted across the border into North Korea two months ago, has been released into American custody.

The second Republican presidential primary debate will be held in California tonight.

A federal judge struck down a Texas law that drag performers worried would ban shows in the state.

Dispatches

Up for Debate: Driverless cars are a tough sell. Conor Friedersdorf compiles reader perspectives on the future of the technology.

Explore all of our newsletters here.

Evening Read


Revealed: The Authors Whose Pirated Books Are Powering Generative AI

By Alex Reisner

One of the most troubling issues around generative AI is simple: It’s being made in secret. To produce humanlike answers to questions, systems such as ChatGPT process huge quantities of written material. But few people outside of companies such as Meta and OpenAI know the full extent of the texts these programs have been trained on.

Some training text comes from Wikipedia and other online writing, but high-quality generative AI requires higher-quality input than is usually found on the internet—that is, it requires the kind found in books. In a lawsuit filed in California last month, the writers Sarah Silverman, Richard Kadrey, and Christopher Golden allege that Meta violated copyright laws by using their books to train LLaMA, a large language model similar to OpenAI’s GPT-4—an algorithm that can generate text by mimicking the word patterns it finds in sample texts. But neither the lawsuit itself nor the commentary surrounding it has offered a look under the hood: We have not previously known for certain whether LLaMA was trained on Silverman’s, Kadrey’s, or Golden’s books, or any others, for that matter.

In fact, it was. I recently obtained and analyzed a dataset used by Meta to train LLaMA. Its contents more than justify a fundamental aspect of the authors’ allegations: Pirated books are being used as inputs for computer programs that are changing how we read, learn, and communicate. The future promised by AI is written with stolen words.

Read the full article.

More From The Atlantic

Alabama strikes out.

The banality of bad-faith science

“My books were used to train Meta’s generative AI. Good.”

Culture Break


Read. Libra, a fictionalization of the Kennedy assassination, is a paranoid American fable that reads so realistically that it could almost be nonfiction.

Watch. Gareth Edwards’s new movie, The Creator (in theaters September 29), is set in a future where AI has already failed to save the world.

Play our daily crossword.

Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.

A New Coca-Cola Flavor at the End of the World

The Atlantic

https://www.theatlantic.com/technology/archive/2023/09/coca-cola-y3000-ai-flavor/675459/

Coca-Cola often experiments with new flavors, and they’re usually flavors you can imagine, having tasted them before: vanilla, cherry, lemon. But the latest is called Y3000, a reference to the far-off year 3000, and one that Coca-Cola says was concocted, in some way, with the help of artificial intelligence. It smells like circus-peanut candies and tastes mostly like Coke.

The company says this soda was made to evoke a “positive future,” with a label that has “a futuristic feel,” due to its color palette of silver, violet, magenta, and cyan. The Coca-Cola logo on the Y3000 bottle is made of “fluid dot clusters that merge to represent the human connections of our future planet.” Customers can scan a QR code on the bottle to open a website that uses the AI model Stable Diffusion to turn photos of their surroundings into images with a similar color scheme and sci-fi aesthetics. In these images, the future looks sleek and very pink.
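The article doesn’t say how Coca-Cola’s site is built, but a minimal sketch of the general technique it describes, image-to-image generation with Stable Diffusion via Hugging Face’s diffusers library, might look like the following. The checkpoint, prompt, and parameters here are illustrative assumptions, not details of the actual site.

```python
# Hypothetical sketch of restyling a photo with Stable Diffusion's
# image-to-image mode. The model ID, prompt, and settings are assumptions
# for illustration, not Coca-Cola's actual implementation.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a widely used public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A photo of the customer's surroundings, scaled to a size the model handles well.
init_image = Image.open("surroundings.jpg").convert("RGB").resize((768, 512))

result = pipe(
    prompt="sleek futuristic cityscape, silver, violet, magenta, and cyan palette, sci-fi aesthetic",
    image=init_image,
    strength=0.6,        # how far the output may drift from the original photo (0 to 1)
    guidance_scale=7.5,  # how strongly the text prompt steers the result
).images[0]

result.save("y3000_style.png")
```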

Y3000 is one of many recent Coke offerings promising a “flavor” that does not make a reference to anything like a known terrestrial taste. They have names such as “Ultimate” (Coca-Cola with “the electrifying taste of +XP,” which is a type of point you can accrue in video games) and “Soul Blast” (Coca-Cola that tastes like the Japanese anime Bleach). “Starlight” is “space flavored,” “Byte” tastes like “pixels,” “Move” tastes like “transformation.” “Dreamworld,” which is decorated with an M. C. Escher–like illustration, “taps into Gen Z’s passion for the infinite potential of the mind by exploring what a dream tastes like.” Coca-Cola did not respond to my requests for comment, but its senior director of global strategy, Oana Vlad, does recognize that some people might wonder what these flavors actually taste like. “We’re never really going to answer that question” in a “straightforward” way, she told CNN in June. But “the flavor profile is always, we say, 85 to 90 percent Coke.”

[Read: AI-generated junk is flooding Etsy]

Coke is already an abstraction, some complicated combination of cinnamon and nutmeg and vanilla and citrus and secret things. Further abstracting it with “pixel” and “dream” flavors is a brilliant way to get a lot of attention. So is referencing AI—a logical next step after the company dabbled with NFTs. Since the introduction of ChatGPT 10 months ago, the world has become captivated by the technology and the maybe apocalyptic, maybe wonderful future that it promises. AI is suddenly everywhere, even in our cola. It makes no sense! Which is why we have to try it. “Their shenanigans are something that’s always interesting to us,” Sean O’Keefe, a professor of food science and technology at Virginia Tech, told me.

O’Keefe doesn’t drink soda, which he refers to as “flavored, colored sugar water.” But if the soda was designed by AI to taste like the future, what choice does he have? “I don’t buy Coke, but if I see Y3000, I’m gonna try it,” he said. Of course—that’s what I did too. There are a ton of foods and drinks that exist more to be sampled once and photographed for the internet than to be habitually consumed—see the Grimace Shake, which was all over TikTok this summer. Around the same time, my colleague Megan Garber wrote about mustard-flavored Skittles, describing the product as a “pseudo-snack—produced not to be eaten but to be talked about.” These limited-edition Skittles were, she explained from the site of a terrifying-sounding marketing event held in Washington, D.C., “nearly impossible for the average consumer to obtain.”

[Read: The candy you (probably) won’t get to try]

These kinds of products are really spectacles, the artist Allie Wist argues. Wist has a master’s degree in food studies, and much of her art has to do with food. In the description for last year’s Extinct Aromatorium, a plexiglass box filled with the smell of banana, dirt, and fungus, she wrote about the history of artificial banana flavoring, which is based on “the sweeter taste” of the Gros Michel banana, a cultivar that was wiped out in the 1950s by a fungus (although this origin is sometimes contested). Artificial banana is now more real than the banana it’s based on, she suggests, because the real banana doesn’t exist anymore. Wist cited Jean Baudrillard’s 1981 essay “The Precession of Simulacra,” and told me that “the real world is now actually produced through the simulation world of images, videos, and, I’d argue, artificial flavoring and processed foods.” Rainbow bagels, chips with fake smoke flavoring, future-flavored cola—all “represent a lifestyle or an aesthetic fantasy” more than they do eating, she said.

I smelled the AI Coke about 10 times before I tasted it, and felt a creeping sense of recognition. At first it reminded me of bubblegum, although that isn’t a real flavor either. It was a bit more like Juicy Fruit gum, a flavor that O’Keefe described as a combination of pineapple, banana, and citrus—familiar enough to avoid alienating consumers, which is key. “We have to consider capitalism’s role in this,” Wist said. “Capitalism removes any real value of exchange and contains no inherent interest in morality or purpose.” This is why a company that already sells billions of dollars of products a year might continue coming up with “ever more provocative flavors,” as she put it, including one that alludes to a point in the future after which many cities may no longer be habitable.

A few years ago, I went to a postapocalyptic dinner party hosted by the chef Jen Monroe. I had a bunch of nice, jellyfish-forward food and then a rectangle of gelatin. One-half of the gelatin rectangle was pink and strawberry-flavored and delicious. The other half was blue and disgusting. Many people spit it out. “I decided it’s okay to serve food you hate to make a point,” Monroe told me after. “That would be the most sci-fi avenue, where we’ve abandoned food as food altogether.” The dinner party was supposed to take place in 2047. It was sad, but it was also kind of fun. It made me think, At least we can sample something strange at the end of the world.

So Much for ‘Learn to Code’

The Atlantic

https://www.theatlantic.com/technology/archive/2023/09/computer-science-degree-value-generative-ai-age/675452/

The quickest way to second-guess a decision to major in English is this: have an extended family full of Salvadoran immigrants and pragmatic midwesterners. The ability to recite Chaucer in the original Middle English was unlikely to land me a job that would pay off my student loans and help me save for retirement, they suggested when I was a college freshman still figuring out my future. I stuck with English, but when my B.A. eventually spat me out into the thick of the Great Recession, I worried that they’d been right.

After all, computer-science degrees, and certainly not English ones, have long been sold to college students as among the safest paths toward 21st-century job security. Coding jobs are plentiful across industries, and the pay is good—even after the tech layoffs of the past year. The average starting salary for someone with a computer-science degree is significantly higher than that of a mid-career English graduate, according to the Federal Reserve; at Google, an entry-level software engineer reportedly makes $184,000, and that doesn’t include the free meals, massages, and other perks. Perhaps nothing has defined higher education over the past two decades more than the rise of computer science and STEM. Since 2016, enrollment in undergraduate computer-science programs has increased nearly 49 percent. Meanwhile, humanities enrollments across the United States have withered at a clip—in some cases, shrinking entire departments to nonexistence.

But that was before the age of generative AI. ChatGPT and other chatbots can do more than compose full essays in an instant; they can also write lines of code in any number of programming languages. You can’t just type make me a video game into ChatGPT and get something that’s playable on the other end, but many programmers have now developed rudimentary smartphone apps coded by AI. In the ultimate irony, software engineers helped create AI, and now they are the American workers who think it will have the biggest impact on their livelihoods, according to a new survey from Pew Research Center. So much for learning to code.

ChatGPT cannot yet write a better essay than a human author can, nor can it code better than a garden-variety developer, but something has changed even in the 10 months since its introduction. Coders are now using AI as a sort of souped-up Clippy to accelerate the more routine parts of their job, such as debugging lines of code. In one study, software developers with access to GitHub’s Copilot chatbot were able to finish a coding task 56 percent faster than those who did it solo. In 10 years, or maybe five, coding bots may be able to do so much more.
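To make that routine work concrete, here is a contrived example of the kind of small defect, an off-by-one loop bound, that developers now commonly hand to an AI assistant to find; the function names and data are hypothetical.

```python
# A contrived example of the routine debugging the article describes.
# The buggy version silently skips the last element; the fix is the sort
# of one-line correction a coding assistant typically suggests instantly.

def buggy_sum(values):
    total = 0
    for i in range(len(values) - 1):  # bug: stops one element early
        total += values[i]
    return total

def fixed_sum(values):
    total = 0
    for i in range(len(values)):  # fixed: visits every element
        total += values[i]
    return total

if __name__ == "__main__":
    data = [184, 56, 10]
    assert buggy_sum(data) != sum(data)  # the off-by-one shows up here
    assert fixed_sum(data) == sum(data)
    print("off-by-one found and fixed")
```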

People will still get jobs, though they may not be as lucrative, says Matt Welsh, a former Harvard computer-science professor and entrepreneur. He hypothesizes that automation will lower the barrier to entry into the field: More people might get more jobs in software, guiding the machines toward ever-faster production. This development could make highly skilled developers even more essential in the tech ecosystem. But Welsh also says that an expanded talent pool “may change the economics of the situation,” possibly leading to lower pay and diminished job security.

If mid-career developers have to fret about what automation might soon do to their job, students are in the especially tough spot of anticipating the long-term implications before they even start their career. “The question of what it will look like for a student to go through an undergraduate program in computer science, graduate with that degree, and go on into the industry … That is something I do worry about,” Timothy Richards, a computer-science professor at the University of Massachusetts at Amherst, told me. Not only do teachers like Richards have to wrestle with just how worthwhile learning to code is anymore, but even teaching students to code has become a tougher task. ChatGPT and other chatbots can handle some of the basic tasks in any introductory class, such as finding problems with blocks of code. Some students might habitually use ChatGPT to cheat on their assignments, eventually collecting their diploma without having learned how to do the work themselves.

Richards has already started to tweak his approach. He now tells his introductory-programming students to use AI the way a math student would use a calculator, asking that they disclose the exact prompts they fed into the machine, and explain their reasoning. Instead of taking assignments home, Richards’s students now do the bulk of their work in the classroom, under his supervision. “I don’t think we can really teach students in the way that we’ve been teaching them for a long time, at least not in computer science,” he said.

Fiddling with the computer-science curriculum still might not be enough to maintain coding’s spot at the top of the higher-education hierarchy. “Prompt engineering,” which entails feeding phrases to large language models to make their responses more human-sounding, has already surfaced as a lucrative job option—and one perhaps better suited to English majors than computer-science grads. “Machines can’t be creative; at best, they’re very elaborate derivatives,” says Ben Royce, an AI lecturer at Columbia University. Chatbots don’t know what to do with a novel coding problem. They sputter and choke. They make stuff up. As AI becomes more sophisticated and better able to code, programmers may be tasked with leaning into the parts of their job that draw on conceptual ingenuity as opposed to sheer technical know-how. Those able to think more entrepreneurially—the tinkerers and the question-askers—will tend to be the ones almost immune to automation in the workforce.
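The article doesn’t spell out what prompt engineering looks like in practice, but a minimal sketch using OpenAI’s Python client might resemble the following; the model name and prompt wording are assumptions for illustration.

```python
# A hypothetical sketch of prompt engineering: the same question asked
# plainly, then with an engineered system prompt that shapes tone and
# structure. Model choice and wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Explain what a large language model is."

plain = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
)

engineered = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Much of the craft lives in the system message.
        {
            "role": "system",
            "content": (
                "You are a patient teacher. Answer in plain, warm language, "
                "in exactly three short paragraphs, ending with one concrete analogy."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(plain.choices[0].message.content)
print(engineered.choices[0].message.content)
```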

The potential decline of “learn to code” doesn’t mean that the technologists are doomed to become the authors of their own obsolescence, nor that the English majors were right all along (I wish). Rather, the turmoil presented by AI could signal that exactly what students decide to major in is less important than an ability to think conceptually about the various problems that technology could help us solve. The next great Silicon Valley juggernaut might be seeded by a humanities grad with no coding expertise or a computer-science grad with lots of it. After all, the discipline has always been about more than just learning the ropes of Python and C++. Identifying patterns and piecing them together is its essence.

In that way, the answer to the question of what happens next in higher education may lie in what the machines can’t do. Royce pointed me toward Moravec’s paradox, the observation that AI shines at high-level reasoning and the kinds of skills that are generally considered to reflect cognitive aptitude (think: playing chess), but fumbles with the basic ones. The curiosity-driven instincts that have always been at the root of how humans create things are not just sticking around in an AI world; they are now more important than ever. Thankfully, students have plenty of ways to get there.

What I Found in a Database Meta Uses to Train Generative AI

The Atlantic

https://www.theatlantic.com/technology/archive/2023/09/books3-ai-training-meta-copyright-infringement-lawsuit/675411/

Editor’s note: This article is part of The Atlantic’s series on Books3. You can search the database for yourself here, and read about its origins here.

This summer, I reported on a data set of more than 191,000 books that were used without permission to train generative-AI systems by Meta, Bloomberg, and others. “Books3,” as it’s called, was based on a collection of pirated ebooks that includes travel guides, self-published erotic fiction, novels by Stephen King and Margaret Atwood, and a lot more. It is now at the center of several lawsuits brought against Meta by writers who claim that its use amounts to copyright infringement.

Books play a crucial role in the training of generative-AI systems. Their long, thematically consistent paragraphs provide information about how to construct long, thematically consistent paragraphs—something that’s essential to creating the illusion of intelligence. Consequently, tech companies use huge data sets of books, typically without permission, purchase, or licensing. (Lawyers for Meta argued in a recent court filing that neither outputs from the company’s generative AI nor the model itself are “substantially similar” to existing books.)

In its training process, a generative-AI system essentially builds a giant map of English words—the distance between two words correlates with how often they appear near each other in the training text. The final system, known as a large language model, will produce more plausible responses for subjects that appear more often in its training text. (For further details on this process, you can read about transformer architecture, the innovation that precipitated the boom in large language models such as LLaMA and ChatGPT.) A system trained primarily on the Western canon, for example, will produce poor answers to questions about Eastern literature. This is just one reason it’s important to understand the training data used by these models, and why it’s troubling that there is generally so little transparency.
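That “giant map of words” is a simplification of the dense vector embeddings a transformer actually learns, but the underlying intuition can be shown with a toy co-occurrence counter; this is a minimal sketch with made-up sample text, not a real training pipeline.

```python
# A toy illustration (not a real LLM) of the "map of words" idea: count
# how often pairs of words appear near each other. Real systems learn
# this signal as dense vector embeddings, but the principle is the same:
# nearby in the training text becomes nearby on the map.
from collections import Counter

text = (
    "books teach models to write long consistent paragraphs "
    "because books contain long consistent paragraphs"
).split()

WINDOW = 2  # how many following words count as "near each other"
pairs = Counter()
for i, word in enumerate(text):
    for neighbor in text[i + 1 : i + 1 + WINDOW]:
        pairs[tuple(sorted((word, neighbor)))] += 1

# The most frequent pairs would sit closest together on the "map."
for (a, b), count in pairs.most_common(5):
    print(f"{a!r} and {b!r} appear near each other {count} time(s)")
```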

With that in mind, here are some of the most represented authors in Books3, with the approximate number of entries contributed:

Although 24 of the 25 authors listed here are fiction writers (the lone exception is Betty Crocker), the data set is two-thirds nonfiction overall. It includes several thousand technical manuals; more than 1,500 books from Christian publishers (including at least 175 Bibles and Bible commentaries); more than 400 Dungeons & Dragons– and Magic: The Gathering–themed books; and 46 titles by Charles Bukowski. Nearly every subject imaginable is covered (including How to Housebreak Your Dog in 7 Days), but the collection skews heavily toward the interests and perspectives of the English-speaking Western world.

Many people have written about bias in AI systems. An AI-based face-recognition program, for example, that’s trained disproportionately on images of light-skinned people might work less well on images of people with darker skin—with potentially disastrous outcomes. Books3 helps us see the problem from another angle: What combination of books would be unbiased? What would be an equitable distribution of Christian, Muslim, Buddhist, and Jewish subjects? Are extremist views balanced by moderate ones? What’s the proper ratio of American history to Chinese history, and what perspectives should be represented within each? When knowledge is organized and filtered by algorithm rather than by human judgment, the problem of perspective becomes both crucial and intractable.

Books3 is a gigantic data set. Here are just a few different ways to consider the authors, books, and publishers contained within. Note that the samples presented here are not comprehensive; they are chosen to give a quick sense of the many different types of writing used to train generative AI. As above, book counts may include multiple editions.

As AI chatbots begin to replace traditional search engines, the tech industry’s power to constrain our access to information and manipulate our perspective increases exponentially. If the internet democratized access to information by eliminating the need to go to a library or consult an expert, the AI chatbot is a return to the old gatekeeping model, but with a gatekeeper that’s opaque and unaccountable—a gatekeeper, moreover, that is prone to “hallucinations” and might or might not cite sources.

In its recent court filing—a motion to dismiss the lawsuit brought by the authors Richard Kadrey, Sarah Silverman, and Christopher Golden—Meta observed that “Books3 comprises an astonishingly small portion of the total text used to train LLaMA.” This is technically true (I estimate that Books3 is about 3 percent of LLaMA’s total training text) but sidesteps a core concern: If LLaMA can summarize Silverman’s book, then it likely relies heavily on the text of her book to do so. In general, it’s hard to know how much any given source contributes to a generative-AI system’s output, given the impenetrability of current algorithms.
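Reisner doesn’t show the arithmetic behind that 3 percent figure, but a back-of-the-envelope version can be reconstructed under stated assumptions: the LLaMA paper reports roughly 1.4 trillion training tokens, with a “Books” slice of about 4.5 percent combining Project Gutenberg and Books3; the Books3 share of that slice is an assumption here, not a published number.

```python
# Back-of-the-envelope check of the "about 3 percent" estimate.
# Assumptions are flagged inline; the first two figures come from the
# LLaMA paper, and the third is a guess for illustration.
total_tokens = 1.4e12          # LLaMA's reported training-token count
books_fraction = 0.045         # reported share of the "Books" slice
books3_share_of_books = 2 / 3  # assumption: Books3's share within that slice

books3_tokens = total_tokens * books_fraction * books3_share_of_books
print(f"Books3 tokens: {books3_tokens:.2e}")                          # ~4.2e10
print(f"Share of training text: {books3_tokens / total_tokens:.1%}")  # ~3.0%
```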

Still, our only clue to the kinds of information and opinions AI chatbots will dispense is their training data. A look at Books3 is a good start, but it’s just one corner of the training-data universe, most of which remains behind closed doors.

When Netanyahu Met Musk

The Atlantic

https://www.theatlantic.com/ideas/archive/2023/09/benjamin-netanyahu-elon-musk-ai-pessimism/675406/

On Sunday, just before heading to the United Nations, Israeli Prime Minister Benjamin Netanyahu visited Elon Musk in San Francisco. Their livestreamed rendezvous held obvious appeal for both men. The embattled Netanyahu would get to show his voters that he could command the attention of the world’s richest man. Musk would get to show the world that he had a Jewish friend, days after getting caught up in an anti-Semitism scandal on his social-media platform. The meeting was, essentially, a glorified photo op.

That’s how it started, at least.

At the outset, Netanyahu called Musk the “Edison of our time.” Musk returned the favor by not challenging Netanyahu’s insistence that his proposed judicial reforms—which have provoked the largest protest movement in Israel’s history—would make the country a “stronger democracy.” (“Sounds good,” the mogul replied.) The two men discussed their shared love of books and then, after about 40 minutes, wrapped up their exchange, at which point most people tuned out. But that’s precisely when things got interesting.

[Yair Rosenberg: Elon Musk among the anti-Semites]

Musk and Netanyahu returned to the broadcast for a panel discussion about artificial intelligence with the MIT scientist Max Tegmark and Greg Brockman, the president of OpenAI, the company behind ChatGPT and the image generator DALL-E. What happened next received scant media coverage, because reporters were there to see a right-wing magnate hobnob with a right-wing world leader, not to listen to the two discuss AI with some nerds. Which is why many missed the moment when Netanyahu went off-script and challenged the utopian dreams of Musk and his fellow technologists.

Their conversation wasn’t just about AI. It was a confrontation of worldviews—a clash between American entrepreneurs who believe in the promise of transformational change for humanity and a deeply cynical Israeli politician who does not. And it was a glimpse into the profoundly pessimistic mind of one of the world’s most polarizing and influential leaders, revealing not just his philosophy of technology, but his understanding of people and power, and why he has led his country the way he has.

It began with a simple question from Netanyahu: “How do we inject a measure of responsibility and ethics into this exponentially changing development?” Musk, who previously signed a letter calling for a pause in AI development to ensure its safety, is not unaware of these concerns, and conceded their merit. “Just as Einstein didn’t expect his work in physics to lead to nuclear weapons, we need to be cautious that even with the best of intentions … we could create something bad,” he replied. “That is one of the possible outcomes.”

But as Netanyahu soon made clear, when it comes to AI, he believes that bad outcomes are the likely outcomes. The Israeli leader interrogated OpenAI’s Brockman about the impact of his company’s creations on the job market. By replacing more and more workers, Netanyahu argued, AI threatens to “cannibalize a lot more jobs than you create,” leaving many people adrift and unable to contribute to the economy. When Brockman suggested that AI could usher in a world where people would not have to work, Netanyahu countered that the benefits of the technology were unlikely to accrue to most people, because the data, computational power, and engineering talent required for AI are concentrated in a few countries.

“You have these trillion-dollar [AI] companies that are produced overnight, and they concentrate enormous wealth and power with a smaller and smaller number of people,” the Israeli leader said, noting that even a free-market evangelist like himself was unsettled by such monopolization. “That will create a bigger and bigger distance between the haves and the have-nots, and that’s another thing that causes tremendous instability in our world. And I don’t know if you have an idea of how you overcome that?”

The other panelists did not. Brockman briefly pivoted to talk about OpenAI’s Israeli employees before saying, “The world we should shoot for is one where all the boats are rising.” But other than mentioning the possibility of a universal basic income for people living in an AI-saturated society, Brockman agreed that “creative solutions” to this problem were needed—without providing any.

The conversation continued in this vein for some time: The AI boosters emphasized the incredible potential of their innovation, and Netanyahu raised practical objections to their enthusiasm. They cited futurists such as Ray Kurzweil to paint a bright picture of a post-AI world; Netanyahu cited the Bible and the medieval Jewish philosopher Maimonides to caution against upending human institutions and subordinating our existence to machines. Musk matter-of-factly explained that the “very positive scenario of AI” is “actually in a lot of ways a description of heaven,” where “you can have whatever you want, you don’t need to work, you have no obligations, any illness you have can be cured,” and death is “a choice.” Netanyahu incredulously retorted, “You want this world?”

By the time the panel began to wind down, the Israeli leader had seemingly made up his mind. “This is like having nuclear technology in the Stone Age,” he said. “The pace of development [is] outpacing what solutions we need to put in place to maximize the benefits and limit the risks.”

It might seem strange that Netanyahu so publicly challenged the ambitions of Musk and his colleagues, especially at what was meant to be a softball sit-down. But Netanyahu’s resistance to optimistic assurances about future progress is core to his worldview—a worldview that has long shaped his approach to the politics of Israel and the world around it.

In December 2010, a street vendor in Tunisia set himself on fire to protest state corruption, triggering protests across the Middle East, as part of what became known as the Arab Spring. At the time, Netanyahu was unimpressed, arguing that the region was going “not forward, but backward.” Israeli officials likened the demonstrations to those that ushered in Iran’s theocracy in 1979. But many Western leaders, including President Barack Obama, hailed the upheavals as the dawn of a new liberal era for that part of the world. “The events of the past six months show us that strategies of repression and strategies of diversion will not work anymore,” Obama said in a State Department speech in May 2011. “A new generation has emerged. And their voices tell us that change cannot be denied.” He continued:

In Cairo, we heard the voice of the young mother who said, “It’s like I can finally breathe fresh air for the first time.”

In Sanaa, we heard the students who chanted, “The night must come to an end.”

In Benghazi, we heard the engineer who said, “Our words are free now. It’s a feeling you can’t explain.”

In Damascus, we heard the young man who said, “After the first yelling, the first shout, you feel dignity.”

Today, Cairo is once again under military dictatorship. Sanaa is in ruins, a casualty of Yemen’s ongoing civil war. Benghazi is where an American ambassador was murdered in a failed Libyan state. Last May, Damascus’s Bashar al-Assad was welcomed back into the Arab League, after he brutally quelled the rebellion against his Syrian regime, including by using chemical weapons. And this week, Tunisia’s authoritarian president bizarrely connected “Zionist” influence to a storm that ravaged the area.

Netanyahu was a naysayer about the Arab Spring, unwilling to join the rapturous ranks of hopeful politicians, activists, and democracy advocates. But he was also right. This was less because he is a prophet and more because he is a pessimist. When it comes to grandiose predictions about a better tomorrow—whether through peace with the Palestinians, a nuclear deal with Iran, or the advent of artificial intelligence—Netanyahu always bets against. Informed by a dark reading of Jewish history, he is a cynic about human nature and a skeptic of human progress. After all, no matter how far civilization has advanced, it has always found ways to persecute the powerless, most notably, in his mind, the Jews. For Netanyahu, the arc of history is long, and it bends toward whoever is bending it.

This is why the Israeli leader puts little stock in utopian promises, whether they are made by progressive internationalists or Silicon Valley futurists, and places his trust in hard power instead. As he put it in a controversial 2018 speech, “The weak crumble, are slaughtered and are erased from history while the strong, for good or for ill, survive. The strong are respected, and alliances are made with the strong, and in the end peace is made with the strong.” To his many critics, myself included, Netanyahu’s refusal to envision a different future makes him a “creature of the bunker,” perpetually governed by fear. Although his pessimism may sometimes be vindicated, it also holds his country hostage. But the Israeli leader sees himself as a realist who does whatever it takes to preserve the Jewish people in an inherently hostile world. (Likewise, he also does whatever it takes to preserve his own power, because he believes that no one else can be trusted to do what he does.) This is why Netanyahu has gradually aligned his country with strongmen across Europe, the Middle East, and the Americas. And it’s why he resists any concessions to Israel’s Palestinian neighbors, seeing the conflict as a zero-sum game.

In other words, the same cynicism that drives Netanyahu’s reactionary politics is the thing that makes him an astute interrogator of AI and its promoters. Just as he doesn’t trust others not to use their power to endanger Jews, he doesn’t trust AI companies or AI itself to police its rapidly growing capabilities.

[Matti Friedman: After 30 years in Israel, I see my country differently]

“Life is a struggle,” he told the technologists in San Francisco. “It’s defined as a struggle, where you’re competing with forces of nature or with other human beings or with animals, and you constantly better your position. This is how the human race has defined itself, and our self-definition is based on that—both as individuals, as nations, as humanity as a whole.”

Ever the optimist, Musk has staked his electric cars, his rockets to Mars, and his AI algorithms on the assumption that humanity can transform its situation and build its way to a better tomorrow. But Netanyahu believes that all of these technological advances are only as good as the humans who operate them—and humans, he knows, don’t have the best track record.

Microsoft, Google, and OpenAI are getting questioned about their AI "data labelers"

Quartz

https://qz.com/tech-companies-ai-data-labelers-congress-1850834407

As tech executives flock to Capitol Hill to speak with lawmakers about potential AI regulations this week, they are also being probed on the working conditions of the workers who make ChatGPT, Bard, and Bing possible.
