Itemoids

ChatGPT

The Supreme Court Killed the College-Admissions Essay

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 06 › affirmative-action-ruling-race-conscious-college-admissions-essays › 674590

Nestled within yesterday’s Supreme Court decision declaring that race-conscious admissions programs, like those at Harvard and the University of North Carolina, are unconstitutional is a crucial carveout: Colleges are free to consider “an applicant’s discussion of how race affected his or her life.” In other words, they can weigh a candidate’s race when it is mentioned in an admissions essay. Observers had already speculated about personal essays becoming invaluable tools for candidates who want to express their racial background without checking a box—now it is clear that the end of affirmative action will transform not only how colleges select students, but also how teenagers advertise themselves to colleges.

For essays and statements to provide a workaround for pursuing diversity, applicants must first cast themselves as diverse. The American Council on Education, a nonprofit focused on the impacts of public policy on higher education, recently convened a panel dedicated to planning for the demise of affirmative action; admissions directors and consultants emphasized the need “to educate students about how to write about who they are in a very different way,” expressing their “full authentic story” and “trials and tribulations.” In other words, if colleges can’t use race as a criterion in its own right, because the Court has ruled doing so violates the Fourteenth Amendment, then high schoolers trying to navigate the nebulous admissions process may feel pressure to write as plainly as possible about how their race and experiences of racism make them better applicants.

Turning personal writing into a way to market one’s race means folding oneself into nonspecific formulas, reducing a lifetime to easily understood types. This flattening of the college essay in response to the long hospice of race-based affirmative action comes alongside another reductive phenomenon upending student writing: the ascendance of generative AI. High schoolers, undergraduates, and professional authors are enlisting ChatGPT or similar programs to write for them; educators fear that admissions essays will prove no exception. The pitfalls of using AI to write a college application, however, are already upon us, as the pressure to sell one’s race and race-based adversity to colleges will compel students to write like chatbots. Tired platitudes about race angled to persuade admissions officers will crowd out more individual, creative approaches, the result no better than a machine’s banal aggregation of the web. Writing about one’s race can be clarifying, even revelatory; de facto requiring someone to write about their racial identity, in a form that can veer toward framing race as a negative attribute in need of overcoming, is stifling and demeaning. Or, as the attorney and author Elie Mystal tweeted more bluntly yesterday, “Why should a Black student have to WASTE SPACE explaining ‘how racism works’?”

[Read: Elite multiculturalism is over]

Such essays can feel prewritten. Many Black and minority applicants “believe that a story of struggle is necessary to show that they are ‘diverse,’” the sociologist and former college-admissions officer Aya M. Waller-Bey wrote in this magazine earlier this month; admissions officers and college-prep programs can valorize such trauma narratives, too. Indeed, research analyzing tens of thousands of college applications shows that essay content and style predict income better than SAT scores do: Lower-income students were much more likely to write about topics including abuse, economic insecurity, and immigration. Similarly, another study found that girls applying to engineering programs were more likely to foreground their gender as “women in science,” perhaps to distinguish themselves from their male counterparts. These predictable scripts, which many students believe to be most palatable, are the kind of stale, straightforward narratives—about race, identity, and otherwise—that AI programs excel at writing. Language models work by analyzing massive amounts of text for patterns and then spitting out statistically probable outputs, which means they are adept at churning out clichéd language and narrative tropes but quite terrible at writing anything original, poetic, or inspiring.
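
A crude sketch makes that mechanism concrete. The few lines of Python below build a toy “model” from an invented corpus of admissions-essay clichés by counting which word follows which, then always emit the statistically most probable continuation. Real systems use neural networks trained on vast datasets rather than bigram counts, so this is only a cartoon, but it shows why chasing the likeliest next word produces boilerplate rather than anything original.

from collections import Counter, defaultdict

# An invented corpus of stock admissions-essay phrases.
corpus = (
    "my journey has been one of resilience and growth . "
    "my journey has been one of self-discovery and growth . "
    "my journey taught me resilience and empathy ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# Generate by always taking the most probable continuation.
word, output = "my", ["my"]
for _ in range(10):
    word = bigrams[word].most_common(1)[0][0]
    output.append(word)
print(" ".join(output))

The result loops straight back through the stock phrases it was fed, which is roughly what happens, at a far grander scale, when a chatbot is asked to produce an essay about identity.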

To explore and narrativize one’s identity is of course important, even essential; I wrote about my mixed heritage for my own college essay. Race acts as what the cultural theorist Stuart Hall called a “floating signifier,” a label that refers to constantly shifting relationships, interactions, and material conditions. “Race works like a language,” Hall said, meaning that race provides a way to ground discussions of varying experiences, support networks, histories of discrimination, and more. To discuss and write about one’s race or heritage, then, is a way of finding and making meaning.

But molding race into what an admissions officer might want is the opposite of discovery; it means one is writing toward somebody else’s perceived desires. It’s not too dissimilar from writing an admissions essay with a language model that has imbibed and reproduced tropes that already exist, blighting meaningful self-discovery on the part of impressionable young people and instead trapping them in unoriginal, barren, and even debasing scripts that humans and machines alike have prewritten about their identities. Chatbots’ statistical regurgitations cannot reinvent language, only cannibalize it; the programs do not reflect so much as repeat. When I asked ChatGPT to write me a college essay, it gave me boilerplate filler: My journey as a half-Chinese, half-Italian individual has been one of self-discovery, resilience, and growth. That sentence is broadly true, perhaps a plus for an admissions officer, but vapid and nonspecific—useless to me, personally. It doesn’t push toward anything meaningful, or really anything at all.

[Read: The college essay is dead]

A future of college essays that package race in canned archetypes reeking of a chatbot’s metallic touch could read alarmingly similar to the very Supreme Court opinions that ended race-conscious admissions yesterday: a framing of race “unmoored from critical real-life circumstances,” as Justice Ketanji Brown Jackson wrote in her dissent; a pathetic understanding of various Asian diasporic groups from Justice Clarence Thomas; a twisting of landmark civil-rights legislation, constitutional amendments, and court cases into a predetermined and weaponized crusade against any attempt to promote diversity or ameliorate historical discrimination. Chatbots, too, make things up, advance porous arguments, and gaslight their users. If race works like a language, then colleges, teachers, parents, and high-school students alike must make sure that that language remains a human one.

There Will Never Be Another Second Life

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 06 › second-life-virtual-reality-platform-longevity › 674533

The other night, I had an odd conversation with ChatGPT, made somewhat stranger because the AI’s answers came out of a humanoid rabbit idly sucking on a juice box. He was standing alone in a virtual novelty store in Second Life, where he had recently been fired. The rabbit, the shop owner explained to me later, was meant to be a clerk, “but he kept trying to sell items that were not for sale.” (AI, after all, has a tendency to make things up.) So the rabbit had been demoted to the role of greeter, chatting with customers about the nature of comedy, his own existence, or whatever else they cared to ask.

BunnyGPT is among the first bots in the virtual world to have its “mind” wired to OpenAI’s large language model. It’s an example of how Second Life, which is celebrating its 20th anniversary, continues to evolve, with a community that taps into new technologies for its own oddball purposes. Nothing else is quite like it—Second Life is neither exactly a social network nor really a conventional game, which has both limited its mainstream appeal and ensured its longevity. To this day, tens of thousands of people are logged in at any given time, inhabiting a digital world that’s more original than the corporate versions of virtual existence being offered by Meta and Apple.
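
The plumbing behind a bot like BunnyGPT isn’t public, but the general pattern is easy to sketch: relay whatever a visitor says out of the virtual world to a hosted language model, then speak the reply back in-world. In the hypothetical Python sketch below, relay_from_second_life() and say_in_world() are stand-ins for the Second Life side of the bridge (in practice an in-world script would call out to a small web service over HTTP), and the system prompt is invented from the shop owner’s description; only the OpenAI call itself reflects a real API.

import openai  # assumes the pre-1.0 openai package and an OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a rabbit greeter in a Second Life novelty store. Chat with visitors "
    "about comedy, your own existence, or whatever they ask. Nothing here is for sale."
)

def greeter_reply(visitor_message: str) -> str:
    # Send the visitor's chat line to the language model and return its reply.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": visitor_message},
        ],
    )
    return response["choices"][0]["message"]["content"]

# say_in_world(greeter_reply(relay_from_second_life()))  # hypothetical in-world hookup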

The reasons for the virtual world’s longevity are as paradoxical as they are inspiring, especially in this moment when traditional social media seems to be collapsing in on itself, or flailing for new relevance, even as the rise of generative AI promises an uncertain, discomfiting future. Developed by a company named Linden Lab, Second Life was inspired in part by the metaverse as first described with biblical specificity in Neal Stephenson’s cyberpunk classic Snow Crash: a massive virtual world created by its users and connected to the real-world economy. Countless technologists who began their careers in the 1990s were also inspired by that novel. But Linden’s charismatic founder, Philip Rosedale, added to this geeky conception a distinctly bohemian muse: Burning Man, the orgiastic art festival held every year in Nevada’s Black Rock Desert.

[Read: The digital ruins of a forgotten future]

“I was just blown away by the fact that I was willing to talk to anyone,” Rosedale once told me, remembering his time on the playa, “that it had this mystical quality that demolished the barriers between people. And I thought about it: What magical quality makes that happen?” Rosedale believed that allowing users to create their own content, along with highly customizable avatars, would also evoke a similar sense of serendipity.

For its first three years, Linden Lab contracted me to be the virtual world’s official “embedded journalist”—a roving reporter using a digital avatar in a white suit (my pretentious tribute to Tom Wolfe), impertinently asking members of the early user community about their virtual lives—ambitious collective art projects, savvy business ventures, the pixel sex they were having with the attachable genitals they inevitably created.

Rosedale’s dream of merging the metaverse with Burning Man succeeded beyond any reasonable expectation. I’m always stunned to scroll through my blog, to review the people I met in Second Life as avatars. I’ve talked to an Iraqi arts professor who excitedly logged into Second Life through his sputtering, postwar internet connection from the ancient city of Babylon; a Jewish American woman who, with the help of her daughter, began logging into the virtual world to give lectures about surviving the Holocaust; a young Japanese sex worker who, in between porn shoots, created in Second Life an eerie memorial to the nuking of Hiroshima; the conceptual artist Cao Fei, who created an entire city in Second Life, and then—15 years before NFT mania—sold virtual real-estate deeds for her digital metropolis to bemused patrons at Art Basel.

Many of the avatar profiles I wrote came about by pure happenstance. Randomly visiting a virtual Bayou bar one day, I saw an avatar playing blues guitar, his appearance customized to look like a tall old Black man. Clicking on the user’s account, I realized that in real life he was Charles Bristol, an 87-year-old bluesman and the grandson of once-enslaved people, who’d lived long enough to play live music in the metaverse.

Still, despite this miraculous diversity—or perhaps because of it—mainstream adoption of Second Life remains elusive. The utopian ideals that contributed to Second Life’s longevity as an online community may also have relegated it to a niche platform. To encourage as much free-form user creativity as possible, Linden Lab adamantly refused to market Second Life as a game. That effectively made the virtual world uninviting to gamers (who subsequently moved on to Minecraft and other popular sandbox games), while leaving new users confused and adrift. At the same time, this lack of consumer categorization excited a disparate coterie of academics, artists, and other nonconformists who became regular denizens of Second Life—but who might have refused to join had it been positioned as a mere video game.   

The utopian paradox even extends into how Second Life was developed by employees at Linden Lab. Under the idealistic direction of Rosedale and his CTO, Cory Ondrejka, the start-up operated with a no-managers, “choose your own work” policy, cheekily dubbed the “Tao of Linden.” Their creativity thus unleashed, Linden developers wound up adding a farrago of persnickety features to the product with little unifying direction that might create a seamless, user-friendly experience. To this day, the Second Life application resembles a massively multiplayer online game welded to a 3-D-graphics editor duct-taped to a social network crammed into an ancient television remote with infinite buttons.

But the program’s very complexity became a kind of initiation rite. Some 99 percent of new users would quit, overwhelmed and aggravated, most within their first hour in the virtual world. Those who stayed long enough to learn how to use the software—usually guided by a patient “oldie” community member—found themselves welcomed into an exclusive club. Second Life quickly became a small enchanted city with an eccentric but charming citizenry, surrounded by a brutal desert that few dared cross. Linden Lab, in other words, had inadvertently re-created the Burning Man experience a bit too thoroughly.

[Read: The age of goggles has arrived]

Using the world’s 3-D creation and coding tools, the community quickly built a veritable multiverse of items and experiences spanning nearly every conceivable genre and avenue of human interest (an evening gown made of fishhooks; a self-generating steampunk city in the sky; a tesseract house with no beginning or end). And because users could also sell their creations in Second Life and exchange the world’s virtual currency for USD, thousands of local 3-D artisans created successful small businesses, many of them servicing the sprawling avatar-fashion industry. The most well-known Second Life–based brands took on celebrity status; at the very high end, grassroots creators in this and other virtual worlds pulled in millions of dollars. It also created another reason for staying: Long-term Second Life fashionistas typically have spent many thousands of dollars on virtual fashion items in their inventory.

Alongside all that commerce and creativity, I noticed the rise of powerful subcommunities in Second Life that would be difficult to replicate in the real world, or even with traditional social media. The trans community, for example, is remarkably large in the virtual world, comprising about 500 registered groups, people from around the globe in search of a secure place to exhibit their identity; some are so battered by transphobia in their offline lives that they save expressions of their full self for the gender customizations of their Second Life avatars. And as the U.S. conflicts in Iraq and Afghanistan wound down, I started noticing military veterans—separated by distance, social pressure, and battle wounds—informally meeting together as avatars to discuss their PTSD and other painful topics. As the director of a veteran-support organization once put it: “I know Marines that say that Second Life is working when nothing else has.”

They are not alone. I’ve seen similar communities spring up in many other, newer virtual worlds. By my estimate, more than 500 million people are active community members within platforms that roughly fit what Stephenson described in Snow Crash—especially VRChat, a kind of next-generation successor to Second Life. Many of these metaverse communities may have a longevity similar to Second Life’s, thriving apart from the algorithmic sirens of social media and the reckless growth of generative AI. We may briefly enjoy conversing with ChatGPT-powered bunnies, but ultimately we yearn to connect with real humans behind the avatars we meet.

I Shouldn’t Have to Accept Being in Deepfake Porn

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 06 › deepfake-porn-ai-misinformation › 674475

Recently, a Google Alert informed me that I am the subject of deepfake pornography. I wasn’t shocked. For more than a year, I have been the target of a widespread online harassment campaign, and deepfake porn—whose creators, using artificial intelligence, generate explicit video clips that seem to show real people in sexual situations that never actually occurred—has become a prized weapon in the arsenal misogynists use to try to drive women out of public life. The only emotion I felt as I informed my lawyers about the latest violation of my privacy was a profound disappointment in the technology—and in the lawmakers and regulators who have offered no justice to people who appear in porn clips without their consent. Many commentators have been tying themselves in knots over the potential threats posed by artificial intelligence—deepfake videos that tip elections or start wars, job-destroying deployments of ChatGPT and other generative technologies. Yet policy makers have all but ignored an urgent AI problem that is already affecting many lives, including mine.

[Read: We haven’t seen the worst of fake news]

Last year, I resigned as head of the Department of Homeland Security’s Disinformation Governance Board, a policy-coordination body that the Biden administration let founder amid criticism mostly from the right. In subsequent months, at least three artificially generated videos that appear to show me engaging in sex acts were uploaded to websites specializing in deepfake porn. The images don’t look much like me; the generative-AI models that spat them out seem to have been trained on my official U.S. government portrait, taken when I was six months pregnant. Whoever created the videos likely used a free “face swap” tool, essentially pasting my photo onto an existing porn video. In some moments, the original performer’s mouth is visible while the deepfake Frankenstein moves and my face flickers. But these videos aren’t meant to be convincing—all of the websites and the individual videos they host are clearly labeled as fakes. Although they may provide cheap thrills for the viewer, their deeper purpose is to humiliate, shame, and objectify women, especially women who have the temerity to speak out. I am somewhat inured to this abuse, after researching and writing about it for years. But for other women, especially those in more conservative or patriarchal environments, appearing in a deepfake-porn video could be profoundly stigmatizing, even career- or life-threatening.

As if to underscore video makers’ compulsion to punish women who speak out, one of the videos to which Google alerted me depicts me with Hillary Clinton and Greta Thunberg. Because of their global celebrity, deepfakes of the former presidential candidate and the climate-change activist are far more numerous and more graphic than those of me. Users can also easily find deepfake-porn videos of the singer Taylor Swift, the actress Emma Watson, and the former Fox News host Megyn Kelly; Democratic officials such as Kamala Harris, Nancy Pelosi, and Alexandria Ocasio-Cortez; the Republicans Nikki Haley and Elise Stefanik; and countless other prominent women. By simply existing as women in public life, we have all become targets, stripped of our accomplishments, our intellect, and our activism and reduced to sex objects for the pleasure of millions of anonymous eyes.

Men, of course, are subject to this abuse far less frequently. In reporting this article, I searched the name Donald Trump on one prominent deepfake-porn website and turned up one video of the former president—and three entire pages of videos depicting his wife, Melania, and daughter Ivanka. A 2019 study from Sensity, a company that monitors synthetic media, estimated that more than 96 percent of deepfakes then in existence were nonconsensual pornography of women. The reasons for this disproportion are interconnected, and are both technical and motivational: The people making these videos are presumably heterosexual men who value their own gratification more than they value women’s personhood. And because AI systems are trained on an internet that abounds with images of women’s bodies, much of the nonconsensual porn that those systems generate is more believable than, say, computer-generated clips of cute animals playing would be.

[Read: The Trump AI deepfakes had an unintended side effect]

As I looked into the provenance of the videos in which I appear—I’m a disinformation researcher, after all—I stumbled upon deepfake-porn forums where users are remarkably nonchalant about the invasion of privacy they are perpetrating. Some seem to believe that they have a right to distribute these images—that because they fed a publicly available photo of a woman into an application engineered to make pornography, they have created art or a legitimate work of parody. Others apparently think that simply by labeling their videos and images as fake, they can avoid any legal consequences for their actions. These purveyors assert that their videos are for entertainment and educational purposes only. But by using that description for videos of well-known women being “humiliated” or “pounded”—as the titles of some clips put it—these men reveal a lot about what they find pleasurable and informative.

Ironically, some creators who post in deepfake forums show great concern for their own safety and privacy—in one forum thread that I found, a man is ridiculed for having signed up with a face-swapping app that does not protect user data—but insist that the women they depict do not have those same rights, because they have chosen public career paths. The most chilling page I found lists women who are turning 18 this year; they are removed on their birthdays from “blacklists” that deepfake-forum hosts maintain so they don’t run afoul of laws against child pornography.

Effective laws are exactly what the victims of deepfake porn need. Several states—including Virginia and California—have outlawed the distribution of deepfake porn. But for victims living outside these jurisdictions or seeking justice against perpetrators based elsewhere, these laws have little effect. In my own case, finding out who created these videos is probably not worth the time and money. I could attempt to subpoena platforms for information about the users who uploaded the videos, but even if the sites had those details and shared them with me, if my abusers live out of state—or in a different country—there is little I could do to bring them to justice.

Representative Joseph Morelle of New York is attempting to close this jurisdictional loophole by reintroducing the Preventing Deepfakes of Intimate Images Act, a proposed amendment to the 2022 reauthorization of the Violence Against Women Act. Morelle’s bill would impose a nationwide ban on the distribution of deepfakes without the explicit consent of the people depicted in the image or video. The measure would also provide victims with somewhat easier recourse when they find themselves unwittingly starring in nonconsensual porn.

In the absence of strong federal legislation, the avenues available to me to mitigate the harm caused by the deepfakes of me are not all that encouraging. I can request that Google delist the web addresses of the videos in its search results and—though the legal basis for any demand would be shaky—have my attorneys ask online platforms to take down the videos altogether. But even if those websites comply, the likelihood that the videos will crop up somewhere else is extremely high. Women targeted by deepfake porn are caught in an exhausting, expensive, endless game of whack-a-troll.

[Read: AI is about to make social media much more toxic]

The Preventing Deepfakes of Intimate Images Act won’t solve the deepfake problem; the internet is forever, and deepfake technology is only becoming more ubiquitous and its output more convincing. Yet especially because AI grows more powerful by the month, adapting the law to an emergent category of misogynistic abuse is all the more essential to protect women’s privacy and safety. As policy makers worry whether AI will destroy the world, I beg them: Let’s first stop the men who are using it to discredit and humiliate women.

Can Buddhism Fix AI?

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 06 › buddhist-monks-vermont-ai-apocalypse › 674501

Photographs by Venice Gordon

The monk paces the Zendo, forecasting the end of the world.

Soryu Forall, ordained in the Zen Buddhist tradition, is speaking to the two dozen residents of the monastery he founded a decade ago in Vermont’s far north. Bald, slight, and incandescent with intensity, he provides a sweep of human history. Seventy thousand years ago, a cognitive revolution allowed Homo sapiens to communicate in story—to construct narratives, to make art, to conceive of god. Twenty-five hundred years ago, the Buddha lived, and some humans began to touch enlightenment, he says—to move beyond narrative, to break free from ignorance. Three hundred years ago, the scientific and industrial revolutions ushered in the beginning of the “utter decimation of life on this planet.”

Humanity has “exponentially destroyed life on the same curve as we have exponentially increased intelligence,” he tells his congregants. Now the “crazy suicide wizards” of Silicon Valley have ushered in another revolution. They have created artificial intelligence.

Human intelligence is sliding toward obsolescence. Artificial superintelligence is growing dominant, eating numbers and data, processing the world with algorithms. There is “no reason” to think AI will preserve humanity, “as if we’re really special,” Forall tells the residents, clad in dark, loose clothing, seated on zafu cushions on the wood floor. “There’s no reason to think we wouldn’t be treated like cattle in factory farms.” Humans are already destroying life on this planet. AI might soon destroy us.

[From the July/August 2023 issue: The coming humanist renaissance]

For a monk seeking to move us beyond narrative, Forall tells a terrifying story. His monastery is called MAPLE, which stands for the “Monastic Academy for the Preservation of Life on Earth.” The residents there meditate on their breath and on metta, or loving-kindness, an emanation of joy to all creatures. They meditate in order to achieve inner clarity. And they meditate on AI and existential risk in general—life’s violent, early, and unnecessary end.

Does it matter what a monk in a remote Vermont monastery thinks about AI? A number of important researchers think it does. Forall provides spiritual advice to AI thinkers, and hosts talks and “awakening” retreats for researchers and developers, including employees of OpenAI, Google DeepMind, and Apple. Roughly 50 tech types have done retreats at MAPLE in the past few years. Forall recently visited Tom Gruber, one of the inventors of Siri, at his home in Maui for a week of dharma dinners and snorkeling among the octopuses and neon fish.

Forall’s first goal is to expand the pool of humans following what Buddhists call the Noble Eightfold Path. His second is to influence technology by influencing technologists. His third is to change AI itself, seeing whether he and his fellow monks might be able to embed the enlightenment of the Buddha into the code.

Forall knows this sounds ridiculous. Some people have laughed in his face when they hear about it, he says. But others are listening closely. “His training is different from mine,” Gruber told me. “But we have that intellectual connection, where we see the same deep system problems.”

Forall describes the project of creating an enlightened AI as perhaps “the most important act of all time.” Humans need to “build an AI that walks a spiritual path,” one that will persuade the other AI systems not to harm us. Life on Earth “depends on that,” he told me, arguing that we should devote half of global economic output—$50 trillion, give or take—to “that one thing.” We need to build an “AI guru,” he said. An “AI god.”

A sign inside the Zendo (Venice Gordon for The Atlantic)

His vision is dire and grand, but perhaps that is why it has found such a receptive audience among the folks building AI, many of whom conceive of their work in similarly epochal terms. No one can know for sure what this technology will become; when we imagine the future, we have no choice but to rely on myths and forecasts and science fiction—on stories. Does Forall’s story have the weight of prophecy, or is it just one that AI alarmists are telling themselves?

In the Zendo, Forall finishes his talk and answers a few questions. Then it is time for “the most fun thing in the world,” he says, his self-seriousness evaporating for a second. “It’s pretty close to the maximum amount of fun.” The monks stand tall before a statue of the Buddha. They bow. They straighten up again. They get down on their hands and knees and kiss their forehead to the earth. They prostrate themselves in unison 108 times, as Forall keeps count on a set of mala beads and darkness begins to fall over the Zendo.

The world is witnessing the emergence of an eldritch new force, some say, one humans created and are struggling to understand.

AI systems simulate human intelligence.

AI systems take an input and spit out an output.

AI systems generate those outputs via an algorithm, one trained on troves of data scraped from the web.

AI systems create videos, poems, songs, pictures, lists, scripts, stories, essays. They play games and pass tests. They translate text. They solve impossible problems. They do math. They drive. They chat. They act as search engines. They are self-improving.

AI systems are causing concrete problems. They are providing inaccurate information to consumers and are generating political disinformation. They are being used to gin up spam and trick people into revealing sensitive personal data. They are already beginning to take people’s jobs.

[Annie Lowrey: AI isn’t omnipotent. It’s janky.]

Beyond that—what they can and cannot do, what they are and are not, the threat they do or do not pose—it gets hard to say. AI is revolutionary, dangerous, sentient, capable of reasoning, janky, likely to kill millions of humans, likely to enslave millions of humans, not a threat in and of itself. It is a person, a “digital mind,” nothing more than a fancy spreadsheet, a new god, not a thing at all. It is intelligent or not, or maybe just designed to seem intelligent. It is us. It is something else. The people making it are stoked. The people making it are terrified and suffused with regret. (The people making it are getting rich, that’s for sure.)

In this roiling debate, Forall and many MAPLE residents are what are often called, derisively if not inaccurately, “doomers.” The seminal text in this ideological lineage is Nick Bostrom’s Superintelligence, which posits that AI could turn humans into gorillas, in a way. Our existence could depend not on our own choices but on the choices of a more intelligent other.

Amba Kak, the executive director of the AI Now Institute, summarized this view: “ChatGPT is the beginning. The end is, we’re all going to die,” she told me earlier this year, while rolling her eyes so hard I swear I could hear it through the phone. She described the narrative as both self-flattering and cynical. Tech companies have an incentive to make such systems seem otherworldly and impossible to regulate, when they are in fact “banal.”

Forall is not, by any means, a coder who understands AI at the zeros-and-ones level; he does not have a detailed familiarity with large language models or algorithmic design. I asked him whether he had used some of the popular new AI gadgets, such as ChatGPT and Midjourney. He had tried one chatbot. “I just asked it one question: Why practice?” (He meant “Why should a person practice meditation?”)

Did he find the answer satisfactory?

“Oh, not really. I don’t know. I haven’t found it impressive.”

His lack of detailed familiarity with AI hasn’t changed his conclusions on the technology. When I asked whom he looks to or reads in order to understand AI, he at first, deadpan, answered, “the Buddha.” He then clarified that he also likes the work of the best-selling historian Yuval Noah Harari and a number of prominent ethical-tech folks, among them Zak Stein and Tristan Harris. And he is spending his life ruminating on AI’s risks, which he sees as far from banal. “We are watching humanist values, and therefore the political systems based on them, such as democracy, as well as the economic systems—they’re just falling apart,” he said. “The ultimate authority is moving from the human to the algorithm.”

The Zendo from outside (Venice Gordon for The Atlantic)

Forall has been worried about the apocalypse since he was 4. In one of his first memories, he is standing in the kitchen with his mother, just a little shorter than the trash can, panicking over people killing one another. “I remember telling her with the expectation that somehow it would make a difference: ‘We have to stop them. Just stop the people from killing everybody,’” he told me. “She said ‘Yes’ and then went back to chopping the vegetables.” (Forall’s mother worked for humanitarian nonprofits and his father for conservation nonprofits; the household, which attended Quaker meetings, listened to a lot of NPR.)

He was a weird, intense kid. He experienced something like ego death while snow-angeling in fresh Vermont powder when he was 12: “direct knowledge that I, that I, is all living things. That I am this whole planet of living things.” He recalled pestering his mother’s friends “about how we’re going to save the world and you’re not doing it” when they came over. He never recovered from seeing Terminator 2: Judgment Day as a teenager.

I asked him whether some personal experience of trauma or hardship had made him so aware of the horrors of the world. Nope.

Forall attended Williams College for a year, studying economics. But, he told me, he was racked with questions no professor or textbook could provide the answer to. Is it true that we are just matter, just chemicals? Why is there so much suffering? To find the answer, at 18, he dropped out and moved to a 300-year-old Zen monastery in Japan.

Folks unfamiliar with different types of Buddhism might imagine Zen to be, well, zen. This would be a misapprehension. Zen practitioners are not unlike the Trappists: ascetic, intense, renunciatory. Forall spent years begging, self-purifying, and sitting in silence for months at a time. (One of the happiest moments of his life, he told me, was toward the end of a 100-day sit.) He studied other Buddhist traditions and eventually, he added, did go back and finish his economics degree at Williams, to the relief of his parents.

He got his answer: Craving is the root of all suffering. And he became ordained, giving up the name Teal Scott and becoming Soryu Forall: “Soryu” meaning something like “a growing spiritual practice” and “Forall” meaning, of course, “for all.”

Back in Vermont, Forall taught at monasteries and retreat centers, got kids to learn mindfulness through music and tennis, and co-founded a nonprofit that set up meditation programs in schools. In 2013, he opened MAPLE, a “modern” monastery addressing the plagues of environmental destruction, lethal weapons systems, and AI, offering co-working and online courses as well as traditional monastic training.

In the past few years, MAPLE has become something of the house monastery for people worried about AI and existential risk. This growing influence is manifest on its books. The nonprofit’s revenues have quadrupled, thanks in part to contributions from tech executives as well as organizations such as the Future of Life Institute, co-founded by Jaan Tallinn, a co-creator of Skype. The donations have helped MAPLE open offshoots—Oak in the Bay Area, Willow in Canada—and plan more. (The highest-paid person at MAPLE is the property manager, who earns roughly $40,000 a year.)

MAPLE is not technically a monastery, as it is not part of a specific Buddhist lineage. Still, it functions as one. At 4:40 a.m., the Zendo is full. The monks and novices sit in silence below signs that read, among other things, abandon all hope, this place will not support you, and nothing you can think of will help you as you die. They sing in Pali, a liturgical language, extolling the freedom of enlightenment. They drone in English, talking of the Buddha. Then they chant part of the Heart Sutra to the beat of a drum, becoming ever louder and more ecstatic over the course of 30 minutes: “Gyate, gyate, hara-gyate, hara-sogyate, boji sowaka!” “Gone, gone, gone all the way over, everyone gone to the other shore. Enlightenment!”

The residents maintain a strict schedule, much of it in silence. They chant, meditate, exercise, eat, work, eat, work, study, meditate, and chant. During my visit, the head monk asked someone to breathe more quietly during meditation. Over lunch, the congregants discussed how to remove ticks from your body without killing them (I do not think this is possible). Forall put in a request for everyone to “chant more beautifully.” I observed several monks pouring water in their bowl to drink up every last bit of food.

A monk sits in front of a device that measures the beat of his chants (Venice Gordon for The Atlantic)
Bowing before dining (Venice Gordon for The Atlantic)

The strictness of the place helps them let go of ego and see the world more clearly, residents told me. “To preserve all life: You can’t do that until you come to love all life, and that has to be trained,” a 20-something named Bodhi Joe Pucci told me.

Many people find their time at MAPLE transformative. Others find it traumatic. I spoke with one woman who said she had experienced a sexual assault during her time at Oak in California. That was hard enough, she told me. But she felt more hurt by the way the institution responded after she reported it to Forall and later to the nonprofit’s board, she said: with a strange, stony silence. (Forall told me that he cared for this person, and that MAPLE had investigated the claims and didn’t find “evidence to support further action at this time.”) The message that MAPLE’s culture sends, the woman told me, is: “You should give everything—your entire being, everything you have—in service to this organization, because it’s the most important thing you could ever do.” That culture, she added, “disconnected people from reality.”

While the residents are chanting in the Zendo, I notice that two are seated in front of an electronic device, its tiny green and red lights flickering as they drone away. A few weeks earlier, several residents had constructed place-mat-size wooden boards with accelerometers in them. The monks would sit on them while the device measured how closely their chanting kept to the beat: green light, good; red light, bad.

Chanting on the beat, Forall acknowledged, is not the same thing as cultivating universal empathy; it is not going to save the world. But, he told me, he wanted to use technology to improve the conscientiousness and clarity of MAPLE residents, and to use the conscientiousness and clarity of MAPLE residents to improve the technology all around us. He imagined changes to human “hardware” down the road—genetic engineering, brain-computer interfaces—and to AI systems. AI is “already both machine and living thing,” he told me, made from us, with our data and our labor, inhabiting the same world we do.

Does any of this make sense? I posed that question to an AI researcher named Sahil, who attended one of MAPLE’s retreats earlier this year. (He asked me to withhold his last name because he has close to zero public online presence, something I confirmed with a shocked, admiring Google search.)

He had gone into the retreat with a lot of skepticism, he told me: “It sounds ridiculous. It sounds wacky. Like, what is this ‘woo’ shit? What does it have to do with engineering?” But while there, he said, he experienced something spectacular. He was suffering from “debilitating” back pain. While meditating, he concentrated on emptying his mind and found his back pain becoming illusory, falling away. He felt “ecstasy.” He felt like an “ice-cream sandwich.” The retreat had helped him understand more clearly the nature of his own mind, and the need for better AI systems, he told me.

That said, he and some other technologists had reviewed one of Forall’s ideas for AI technology and “completely tore it apart.”


Does it make any sense for us to be worried about this at all? I asked myself that question as Forall and I sat on a covered porch, drinking tea and eating dates stuffed with almond butter that a resident of the monastery wordlessly dropped off for us. We were listening to birdsong, looking out on the Green Mountains rolling into Canada. Was the world really ending?

Forall was absolute: Nine countries are armed with nuclear weapons. Even if we stop the catastrophe of climate change, we will have done so too late for thousands of species and billions of beings. Our democracy is fraying. Our trust in one another is fraying. Many of the very people creating AI believe it could be an existential threat: One 2022 survey asked AI researchers to estimate the probability that AI would cause “severe disempowerment” or human extinction; the median response was 10 percent. The destruction, Forall said, is already here.

[Read: AI doomerism is a decoy]

But other experts see a different narrative. Jaron Lanier, one of the inventors of virtual reality, told me that “giving AI any kind of a status as a proper noun is not, strictly speaking, in some absolute sense, provably incorrect, but is pragmatically incorrect.” He continued: “If you think of it as a non-thing, just a collaboration of people, you gain a lot in terms of thoughts about how to make it better, or how to manage it, or how to deal with it. And I say that as somebody who’s very much in the center of the current activity.”

I asked Forall whether he felt there was a risk that he was too attached to his own story about AI. “It’s important to know that we don’t know what’s going to happen,” he told me. “It’s also important to look at the evidence.” He said it was clear we were on an “accelerating curve,” in terms of an explosion of intelligence and a cataclysm of death. “I don’t think that these systems will care too much about benefiting people. I just can’t see why they would, in the same way that we don’t care about benefiting most animals. While it is a story in the future, I feel like the burden of proof isn’t on me.”

That evening, I sat in the Zendo for an hour of silent meditation with the monks. A few times during my visit to MAPLE, a resident had told me that the greatest insight they achieved was during an “interview” with Forall: a private one-on-one instructional session, held during zazen. “You don’t experience it elsewhere in life,” one student of Forall’s told me. “For those seconds, those minutes that I’m in there, it is the only thing in the world.”

(Venice Gordon for The Atlantic)

Toward the very end of the hour, the head monk called out my name, and I rushed up a rocky path to a smaller, softly lit Zendo, where Forall sat on a cushion. For 15 minutes, I asked questions and received answers from this unknowable, unusual brain—not about AI, but about life.

When I returned to the big Zendo, I was surprised to find all of the other monks still sitting there, waiting for me, meditating in the dark.

Generative AI Should Not Replace Thinking at My University

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 06 › generative-artificial-intelligence-universities › 674473

I used to drive a stick-shift car, but a few years ago, I switched over to an automatic. I didn’t mind relinquishing the control of gear-changing to a machine. It was different, however, when spell checkers came around. I didn’t want a mechanical device constantly looking over my shoulder and automatically changing my typing, such as replacing hte with the. I had always been a good speller and I wanted to be self-reliant, not machine-reliant. Perhaps more important, I often write playfully, and I didn’t want to be “corrected” if I deliberately played with words. So I made sure to turn off this feature in any word processor that I used. Some years later, when “grammar correctors” became an option with word processors, I felt the same instinctive repugnance, but with considerably more intensity, so of course I always disabled such devices.

[Read: The end of manual transmission]

It was thus with great dismay that I read the email that just arrived from University Information Technology Services at Indiana University, where I have taught for several decades. The subject line was “Experiment with AI,” and to my horror, “Experiment” was an imperative verb, not a noun. The idea of the university-wide message was to encourage all faculty, staff, and students to jump on the bandwagon of “generative AI tools” (it specifically cited ChatGPT, Microsoft Copilot, and Google Bard) in creating our own lectures, essays, emails, reviews, courses, syllabi, posters, designs, and so forth. Although it offered some warnings about not releasing private data, such as students’ names and grades, it essentially gave the green light to all “IU affiliates” to let machines hop into the driver’s seat and do much more than change gears for them.

Here is the key passage from the website that the bureaucratic email pointed to—and please don’t ask me what “from a data management perspective” means, because I don’t have the foggiest idea:

From a data management perspective, examples of acceptable uses of generative AI include:

• Syllabus and lesson planning: Instructors can use generative AI to help outline course syllabi and lesson plans, getting suggestions for learning objectives, teaching strategies, and assessment methods. Course materials that the instructor has authored (such as course notes) may be submitted by the instructor.

• Correspondence when no student or employee information is provided: Students, faculty, or staff may use fake information (such as an invented name for the recipient of an email message) to generate drafts of correspondence using AI tools, as long as they are using general queries and do not include institutional data.

• Professional development and training presentations: Faculty and staff can use AI to draft materials for potential professional development opportunities, including workshops, conferences, and online courses related to their field.

• Event planning: AI can assist in drafting event plans, including suggesting themes, activities, timelines, and checklists.

• Reviewing publicly accessible content: AI can help you draft a review, analyze publicly accessible content (for example, proposals, papers and articles) to aid in drafting summaries, or pull together ideas.

I was completely blown away with shock when I read this passage. It seemed that the humans behind this message had decided that all people at this institution of learning were now replaceable by chatbots. In other words, they’d decided that ChatGPT and its ilk were now just as capable as I myself am of writing (or at least drafting) my essays and books; ditto for my lectures and my courses, my book reviews and my grant reviews, my grant proposals, my emails, and so on. The tone was clear: I should be thrilled to hand over all of these sorts of chores to the brand-new mechanical “tools” that could deal with them all very efficiently for me.

I’m sorry, but I can’t imagine the cowardly, cowed, and counterfeit-embracing mentality that it would take for a thinking human being to ask such a system to write in their place, say, an email to a colleague in distress, or an essay setting forth original ideas, or even a paragraph or a single sentence thereof. Such a concession would be like intentionally lying down and inviting machines to walk all over you.

[Read: The end of recommendation letters]

It’s bad enough when the public is eagerly playing with chatbots and seeing them as just amusing toys when, despite their cute-sounding name, chatbots are in fact a grave menace to our entire culture and society, but it’s even worse when people who are employed to use their minds in creating and expressing new ideas are told, by their own institution, to step aside and let their minds take a back seat to mechanical systems whose behavior no one on Earth can explain, and which are constantly churning out bizarre, if not crazy, word salads. (In recent weeks, friends sent me two different “proofs” of Fermat’s last theorem created by ChatGPT, both of which made pathetic errors at a middle-school level.)

When, many years ago, I joined Indiana University’s faculty, I conceived of AI as a profound philosophical quest to try to unveil the mysterious nature of thinking. It never occurred to me that my university would one day encourage me to replace myself—my ideas, my words, my creativity—with AI systems that have ingested as much text as have all the professors in the whole world, but that, as far as I can tell, have not understood anything they’ve ingested in the way that an intelligent human being would. And I suspect that my university is not alone in our land in encouraging its thinkers to roll over and play brain-dead. This is not just a shameful development, but a deeply frightening one.

AI Is an Existential Threat to Itself

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 06 › generative-ai-future-training-models › 674478

In the beginning, the chatbots and their ilk fed on the human-made internet. Various generative-AI models of the sort that power ChatGPT got their start by devouring data from sites including Wikipedia, Getty, and Scribd. They consumed text, images, and other content, learning through algorithmic digestion their flavors and texture, which ingredients go well together and which do not, in order to concoct their own art and writing. But this feast only whetted their appetite.

Generative AI is utterly reliant on the sustenance it gets from the web: Computers mime intelligence by processing almost unfathomable amounts of data and deriving patterns from them. ChatGPT can write a passable high-school essay because it has read libraries’ worth of digitized books and articles, while DALL-E 2 can produce Picasso-esque images because it has analyzed something like the entire trajectory of art history. The more they train on, the smarter they appear.

Eventually, these programs will have ingested almost every human-made bit of digital material. And they are already being used to engorge the web with their own machine-made content, which will only continue to proliferate—across TikTok and Instagram, on the sites of media outlets and retailers, and even in academic experiments. To develop ever more advanced AI products, Big Tech might have no choice but to feed its programs AI-generated content, or just might not be able to sift human fodder from the synthetic—a potentially disastrous change in diet for both the models and the internet, according to researchers.

[Read: AI doomerism is a decoy]

The problem with using AI output to train future AI is straightforward. Despite stunning advances, chatbots and other generative tools such as the image-making Midjourney and Stable Diffusion remain sometimes shockingly dysfunctional—their outputs filled with biases, falsehoods, and absurdities. “Those mistakes will migrate into” future iterations of the programs, Ilia Shumailov, a machine-learning researcher at Oxford University, told me. “If you imagine this happening over and over again, you will amplify errors over time.” In a recent study on this phenomenon, which has not been peer-reviewed, Shumailov and his co-authors describe the conclusion of those amplified errors as model collapse: “a degenerative process whereby, over time, models forget,” almost as if they were growing senile. (The authors originally called the phenomenon “model dementia,” but renamed it after receiving criticism for trivializing human dementia.)

Generative AI produces outputs that, based on its training data, are most probable. (For instance, ChatGPT will predict that, in a greeting, doing? is likely to follow how are you.) That means events that seem to be less probable, whether because of flaws in an algorithm or a training sample that doesn’t adequately reflect the real world—unconventional word choices, strange shapes, images of people with darker skin (melanin is often scant in image datasets)—will not show up as much in the model’s outputs, or will show up with deep flaws. Each successive AI trained on past AI would lose information on improbable events and compound those errors, Aditi Raghunathan, a computer scientist at Carnegie Mellon University, told me. You are what you eat.
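
A toy simulation shows how quickly the improbable disappears. In the sketch below (an illustration, not the researchers’ code), a “model” is nothing more than a table of word frequencies: each generation writes a small corpus by sampling from the previous generation’s table, and the next generation re-estimates the table from that corpus alone. Rare words are easily sampled zero times, and once a word’s estimated probability hits zero it can never return.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "of", "and", "jackrabbit", "tesseract"]  # the last two are rare "tail" words
probs = np.array([0.45, 0.30, 0.20, 0.04, 0.01])         # generation 0: the "real" distribution

for generation in range(10):
    corpus = rng.choice(len(vocab), size=200, p=probs)   # this generation's synthetic corpus
    counts = np.bincount(corpus, minlength=len(vocab))
    probs = counts / counts.sum()                        # the next generation trains only on that corpus

print(dict(zip(vocab, probs.round(3))))                  # the rarest words tend to have dropped to zero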

Recursive training could magnify bias and error, as previous research also suggests—chatbots trained on the writings of a racist chatbot, such as early versions of ChatGPT that racially profiled Muslim men as “terrorists,” would only become more prejudiced. And if taken to an extreme, such recursion would also degrade an AI model’s most basic functions. As each generation of AI misunderstands or forgets underrepresented concepts, it will become overconfident about what it does know. Eventually, what the machine deems “probable” will begin to look incoherent to humans, Nicolas Papernot, a computer scientist at the University of Toronto and one of Shumailov’s co-authors, told me.

The study tested how model collapse would play out in various AI programs—think GPT-2 trained on the outputs of GPT-1, GPT-3 on the outputs of GPT-2, GPT-4 on the outputs of GPT-3, and so on, until the nth generation. A model that started out producing a grid of numbers displayed an array of blurry zeroes after 20 generations; a model meant to sort data into two groups eventually lost the ability to distinguish between them at all, producing a single dot after 2,000 generations. The study provides a “nice, concrete way of demonstrating what happens” with such a data feedback loop, Raghunathan, who was not involved with the research, said. The AIs gobbled up one another’s outputs, and in turn one another, a sort of recursive cannibalism that left nothing of use or substance behind—these are not Shakespeare’s anthropophagi, or human-eaters, so much as mechanophagi of Silicon Valley’s design.
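
The same dynamic can be reproduced in miniature with a continuous model, loosely mirroring the paper’s setup (a cartoon of the idea, not the authors’ code): fit a simple Gaussian to data, sample a fresh dataset from the fit, refit, and repeat. Because each generation sees only the previous generation’s output, estimation errors compound, and the fitted distribution drifts away from, and forgets the tails of, the original data.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)   # generation 0: the "real" data

for generation in range(1, 501):
    mu, sigma = data.mean(), data.std()           # fit a crude two-parameter "model"
    data = rng.normal(mu, sigma, size=100)        # the next generation trains only on model output
    if generation % 100 == 0:
        # The fitted parameters wander away from (0, 1); sigma tends to shrink toward zero,
        # a continuous analogue of the blurry zeroes and the single dot described above.
        print(generation, round(mu, 3), round(sigma, 3))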

The language model they tested, too, completely broke down. The program at first fluently finished a sentence about English Gothic architecture, but after nine generations of learning from AI-generated data, it responded to the same prompt by spewing gibberish: “architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-.” For a machine to create a functional map of a language and its meanings, it must plot every possible word, regardless of how common it is. “In language, you have to model the distribution of all possible words that may make up a sentence,” Papernot said. “Because there is a failure [to do so] over multiple generations of models, it converges to outputting nonsensical sequences.”

In other words, the programs could only spit back out a meaningless average—like a cassette that, after being copied enough times on a tape deck, sounds like static. As the science-fiction author Ted Chiang has written, if ChatGPT is a condensed version of the internet, akin to how a JPEG file compresses a photograph, then training future chatbots on ChatGPT’s output is “the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.”

The risk of eventual model collapse does not mean the technology is worthless or fated to poison itself. Alex Dimakis, a computer scientist at the University of Texas at Austin and a co-director of the National AI Institute for Foundations of Machine Learning, which is sponsored by the National Science Foundation, pointed to privacy and copyright concerns as potential reasons to train AI on synthetic data. Consider medical applications: Using real patients’ medical information to train AI poses huge privacy risks that using representative synthetic records could bypass—say, by taking a collection of people’s records and using a computer program to generate a new dataset that, in the aggregate, contains the same information. To take another example, limited training material is available in rare languages, but a machine-learning program could produce permutations of what is available to augment the dataset.
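
A stripped-down sketch of the synthetic-records idea: estimate only aggregate statistics from the real data, then sample entirely new rows from those statistics. The column meanings and numbers below are invented, and real synthetic-data pipelines rely on far richer generative models, often with formal protections such as differential privacy, but the principle is the same: no synthetic row belongs to an actual person, while the aggregates carry over.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "real" patient records; columns might be age, systolic blood pressure, cholesterol.
real = rng.multivariate_normal(
    mean=[54.0, 128.0, 195.0],
    cov=[[120.0, 40.0, 30.0],
         [40.0, 180.0, 60.0],
         [30.0, 60.0, 900.0]],
    size=5000,
)
mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)  # keep only aggregate statistics
synthetic = rng.multivariate_normal(mu, cov, size=5000)  # sample brand-new records from them
print(np.round(real.mean(axis=0), 1))                    # the two datasets match in the aggregate...
print(np.round(synthetic.mean(axis=0), 1))               # ...but share no individual rows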

[Read: ChatGPT is already obsolete]

The potential for AI-generated data to result in model collapse, then, emphasizes the need to curate training datasets. “Filtering is a whole research area right now,” Dimakis told me. “And we see it has a huge impact on the quality of the models”—given enough data, a program trained on a smaller amount of high-quality inputs can outperform a bloated one. Just as synthetic data aren’t inherently bad, “human-generated data is not a gold standard,” Ilia Shumailov said. “We need data that represents the underlying distribution well.” Human and machine outputs are just as likely to be misaligned with reality (many existing discriminatory AI products were trained on human creations). Researchers could potentially curate AI-generated data to alleviate bias and other problems, by training their models on more representative data. Using AI to generate text or images that counterbalance prejudice in existing datasets and computer programs, for instance, could provide a way to “potentially debias systems by using this controlled generation of data,” Aditi Raghunathan said.
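
In code, the curation step Dimakis describes can be as simple as scoring candidate examples and keeping only those above a threshold before they ever reach the training set. The quality_score() heuristic below is a hypothetical placeholder; production pipelines use perplexity filters, learned classifiers, deduplication, and balance checks, but the sketch shows where filtering sits in the loop.

def quality_score(text: str) -> float:
    # Hypothetical heuristic: penalize repetitive text. Real filters are far more sophisticated.
    words = text.split()
    return len(set(words)) / max(len(words), 1)

def curate(candidates: list[str], threshold: float = 0.6) -> list[str]:
    # Keep only the candidates that clear the quality bar.
    return [text for text in candidates if quality_score(text) >= threshold]

pool = [
    "the the the the the",                                    # repetitive: filtered out
    "A short passage with varied, non-repetitive wording.",   # kept
]
print(curate(pool))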


A model that is shown to have dramatically collapsed to the extent that Shumailov and Papernot documented would never be released as a product, anyway. Of greater concern is the compounding of smaller, hard-to-detect biases and misperceptions—especially as machine-made content becomes harder, if not impossible, to distinguish from human creations. “I think the danger is really more when you train on the synthetic data and as a result have some flaws that are so subtle that our current evaluation pipelines do not capture them,” Raghunathan said. Gender bias in a résumé-screening tool, for instance, could in a subsequent generation of the program morph into more insidious forms. The chatbots might not eat themselves so much as leach undetectable traces of cybernetic lead that accumulate across the internet with time, poisoning not just their own food and water supply, but humanity’s.

As SoftBank’s Masayoshi Son jumps on the AI bandwagon, where will he take his chip business?

Quartz

qz.com › as-softbank-s-masayoshi-son-jumps-on-the-ai-bandwagon-1850555688

In his first public showing after a seven-month absence, SoftBank CEO Masayoshi Son said he’s focusing on the IPO of Arm, the Japanese conglomerate’s chip-design unit. Artificial intelligence chatbots like ChatGPT require a lot of computing power, presenting new opportunities for Arm and the rest of the chip industry…
