AI Doomerism Is a Decoy

On Tuesday morning, the merchants of artificial intelligence warned once again about the existential might of their products. Hundreds of AI executives, researchers, and other tech and business figures, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement written by the Center for AI Safety declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. They spoke before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their product’s harms—even as their companies continue to invest billions in the technology. Several prominent AI researchers and critics told me that they’re skeptical of the rhetoric, and that Big Tech’s proposed regulations appear defanged and self-serving.

Silicon Valley has shown little regard for years of research demonstrating that AI’s harms are not speculative but material; only now, after the launch of OpenAI’s ChatGPT and a cascade of funding, does there seem to be much interest in appearing to care about safety. “This seems like really sophisticated PR from a company that is going full speed ahead with building the very technology that their team is flagging as risks to humanity,” Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates against mass surveillance, told me.

The unstated assumption underlying the “extinction” fear is that AI is destined to become terrifyingly capable, turning these companies’ work into a kind of eschatology. “It makes the product seem more powerful,” Emily Bender, a computational linguist at the University of Washington, told me, “so powerful it might eliminate humanity.” That assumption provides a tacit advertisement: The CEOs, like demigods, are wielding a technology as transformative as fire, electricity, nuclear fission, or a pandemic-inducing virus. You’d be a fool not to invest. It’s also a posture that aims to inoculate them against criticism, copying the crisis communications of tobacco companies, oil magnates, and Facebook before them: Hey, don’t get mad at us; we begged them to regulate our product.

Yet the supposed AI apocalypse remains science fiction. “A fantastical, adrenalizing ghost story is being used to hijack attention around what is the problem that regulation needs to solve,” Meredith Whittaker, a co-founder of the AI Now Institute and the president of Signal, told me. Programs such as GPT-4 have improved on their previous iterations, but only incrementally. AI may well transform important aspects of everyday life—perhaps advancing medicine, already replacing jobs—but there’s no reason to believe that anything on offer from the likes of Microsoft and Google would lead to the end of civilization. “It’s just more data and parameters; what’s not happening is fundamental step changes in how these systems work,” Whittaker said.

Two weeks before signing the AI-extinction warning, Altman, who has compared his company to the Manhattan Project and himself to Robert Oppenheimer, delivered to Congress a toned-down version of the extinction statement’s prophecy: The kinds of AI products his company develops will improve rapidly, and thus potentially be dangerous. Testifying before a Senate panel, he said that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” Both Altman and the senators treated that increasing power as inevitable, and associated risks as yet-unrealized “potential downsides.”

But many of the experts I spoke with were skeptical of how much AI will progress from its current abilities, and they were adamant that it need not advance at all to hurt people—indeed, many applications already do. The divide, then, is not over whether AI is harmful, but which harm is most concerning—a future AI cataclysm only its architects are warning about and claim they can uniquely avert, or a more quotidian violence that governments, researchers, and the public have long been living through and fighting against—as well as who is at risk and how best to prevent that harm.

Take, for example, the reality that many existing AI products are discriminatory—racist and misgendering facial recognition, biased medical diagnoses, and sexist recruiting algorithms are among the best-known examples. Cahn says that AI should be assumed prejudiced until proven otherwise. Moreover, advanced models are regularly accused of copyright infringement when it comes to their data sets, and labor violations when it comes to their production. Synthetic media is filling the internet with financial scams and nonconsensual pornography. The “sci-fi narrative” about AI, put forward in the extinction statement and elsewhere, “distracts us from those tractable areas that we could start working on today,” Deborah Raji, a Mozilla fellow who studies algorithmic bias, told me. And whereas algorithmic harms today principally wound marginalized communities and are thus easier to ignore, a supposed civilizational collapse would hurt the privileged too. “When Sam Altman says something, even though it’s so disassociated from the real way in which these harms actually play out, people are listening,” Raji said.

Even if people listen, the words can appear empty. Only days after Altman’s Senate testimony, he told reporters in London that if the EU’s new AI regulations are too stringent, his company could “cease operating” on the continent. The apparent about-face led to a backlash, and Altman then tweeted that OpenAI had “no plans to leave” Europe. “It sounds like some of the actual, sensible regulation is threatening the business model,” the University of Washington’s Bender said. In an emailed response to a request for comment about Altman’s remarks and his company’s stance on regulation, a spokesperson for OpenAI wrote, “Achieving our mission requires that we work to mitigate both current and longer-term risks” and that the company is “collaborating with policymakers, researchers and users” to do so.

The regulatory charade is a well-established part of the Silicon Valley playbook. In 2018, after Facebook was rocked by misinformation and privacy scandals, Mark Zuckerberg told Congress that his company has “a responsibility to not just build tools, but to make sure that they’re used for good” and that he would welcome “the right regulation.” Meta’s platforms have since failed miserably to limit election and pandemic misinformation. In early 2022, Sam Bankman-Fried told Congress that the federal government needs to establish “clear and consistent regulatory guidelines” for cryptocurrencies. By the end of the year, his own crypto firm had proved to be a sham, and he was arrested for financial fraud on the scale of the Enron scandal. “We see a really savvy attempt to avoid getting lumped in with tech platforms like Facebook and Twitter, which have drawn increasingly searching scrutiny from regulators about the harms they inflict,” Cahn told me.

At least some of the extinction statement’s signatories do seem to earnestly believe that superintelligent machines could end humanity. Yoshua Bengio, who signed the statement and is sometimes called a “godfather” of AI, told me he believes that the technologies have become so capable that they risk triggering a world-ending catastrophe, whether as rogue sentient entities or in the hands of a human. “If it’s an existential risk, we may have one chance, and that’s it,” he said.

Dan Hendrycks, the director of the Center for AI Safety, told me he thinks similarly about these risks. He added that the public needs to end the current “AI arms race between these corporations, where they’re basically prioritizing the development of AI technologies over their safety.” That leaders from Google, Microsoft, OpenAI, DeepMind, Anthropic, and Stability AI signed his center’s warning, Hendrycks said, could be a sign of genuine concern. Altman wrote about this threat even before the founding of OpenAI. Yet “even under that charitable interpretation,” Bender told me, “you have to wonder: If you think this is so dangerous, why are you still building it?”

The solutions these companies have proposed for both the empirical and fantastical harms of their products are vague, filled with platitudes that stray from an established body of work on what experts told me regulating AI would actually require. In his testimony, Altman emphasized the need to create a new government agency focused on AI. Microsoft has done the same. “This is warmed-up leftovers,” Signal’s Whittaker said. “I was in conversations in 2015 where the topic was ‘Do we need a new agency?’ This is an old ship that usually high-level people in a Davos-y environment speculate on before they go to cocktails.” And a new agency, or any exploratory policy initiative, “is a very long-term objective that would take many, many decades to even get close to realizing,” Raji said. During that time, AI could not only harm countless people but also become so entrenched in various companies and institutions as to make meaningful regulation much harder.

For about a decade, experts have rigorously studied the harms done by AI and proposed more realistic ways to prevent them. Possible interventions could involve public documentation of training data and model design; clear mechanisms for holding companies accountable when their products put out medical misinformation, libel, and other harmful content; antitrust legislation; or simply enforcement of existing laws related to civil rights, intellectual property, and consumer protection. “If a store is systematically targeting Black customers through human decision making, that’s a violation of civil-rights law,” Cahn said. “And to me, it’s no different when an algorithm does it.” Similarly, if a chatbot writes a racist legal brief or gives incorrect medical advice, was trained on copyrighted writing, or scams people for money, current laws should apply.

Doomsday prognostications and calls for a new AI agency amount to “an attempt at regulatory sabotage,” Whittaker said, because the very people selling and profiting from this technology would “shape, hollow out, and effectively sabotage” the agency and its powers. Just look at Altman testifying before Congress, or the recent “responsible”-AI meeting between various CEOs and President Joe Biden: The people developing and profiting from the software are the ones telling the government how to approach it—an early glimpse of regulatory capture. “There’s decades worth of very specific kinds of regulations people are calling for about equity, fairness, and justice,” Safiya Noble, an internet-studies scholar at UCLA and the author of Algorithms of Oppression, told me. “And the kinds of regulations I see [AI companies] talking about are ones that are favorable to their interests.” These companies also spent many millions of dollars lobbying Congress in just the first three months of this year.

All that has really changed from the years-old conversations around regulating AI is ChatGPT—a program that, because it spits out human-esque language, has captivated consumers and investors, granting Silicon Valley a Promethean aura. Beneath that myth, though, much about AI’s harms is unchanged. The technology depends on surveillance and data collection, exploits creative work and physical labor, amplifies bias, and is not sentient. The ideas and tools needed for regulation, which would require addressing those problems and perhaps reducing corporate profits, are around for anybody who might care to look. The 22-word warning is a tweet, not scripture; a matter of faith, not evidence. That an algorithm is harming somebody right now would have been a fact if you read this sentence a decade ago, and it remains one today.