AI Doomerism Is a Decoy

The Atlantic

www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278

On Tuesday morning, the merchants of artificial intelligence warned once again about the existential might of their products. Hundreds of AI executives, researchers, and other tech and business figures, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement written by the Center for AI Safety declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. They spoke before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their products’ harms—even as their companies continue to invest billions in the technology. Several prominent AI researchers and critics told me that they’re skeptical of the rhetoric, and that Big Tech’s proposed regulations appear defanged and self-serving.

Silicon Valley has shown little regard for years of research demonstrating that AI’s harms are not speculative but material; only now, after the launch of OpenAI’s ChatGPT and a cascade of funding, does there seem to be much interest in appearing to care about safety. “This seems like really sophisticated PR from a company that is going full speed ahead with building the very technology that their team is flagging as risks to humanity,” Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates against mass surveillance, told me.

The unstated assumption underlying the “extinction” fear is that AI is destined to become terrifyingly capable, turning these companies’ work into a kind of eschatology. “It makes the product seem more powerful,” Emily Bender, a computational linguist at the University of Washington, told me, “so powerful it might eliminate humanity.” That assumption provides a tacit advertisement: The CEOs, like demigods, are wielding a technology as transformative as fire, electricity, nuclear fission, or a pandemic-inducing virus. You’d be a fool not to invest. It’s also a posture that aims to inoculate them from criticism, copying the crisis communications of tobacco companies, oil magnates, and Facebook before them: Hey, don’t get mad at us; we begged them to regulate our product.

Yet the supposed AI apocalypse remains science fiction. “A fantastical, adrenalizing ghost story is being used to hijack attention around what is the problem that regulation needs to solve,” Meredith Whittaker, a co-founder of the AI Now Institute and the president of Signal, told me. Programs such as GPT-4 have improved on their previous iterations, but only incrementally. AI may well transform important aspects of everyday life—perhaps advancing medicine, already replacing jobs—but there’s no reason to believe that anything on offer from the likes of Microsoft and Google would lead to the end of civilization. “It’s just more data and parameters; what’s not happening is fundamental step changes in how these systems work,” Whittaker said.

Two weeks before signing the AI-extinction warning, Altman, who has compared his company to the Manhattan Project and himself to Robert Oppenheimer, delivered to Congress a toned-down version of the extinction statement’s prophecy: The kinds of AI products his company develops will improve rapidly, and thus potentially be dangerous. Testifying before a Senate panel, he said that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” Both Altman and the senators treated that increasing power as inevitable, and associated risks as yet-unrealized “potential downsides.”

But many of the experts I spoke with were skeptical of how much AI will progress from its current abilities, and they were adamant that it need not advance at all to hurt people—indeed, many applications already do. The divide, then, is not over whether AI is harmful, but over which harm is most concerning—a future AI cataclysm only its architects are warning about and claim they can uniquely avert, or a more quotidian violence that governments, researchers, and the public have long been living through and fighting against—as well as who is at risk and how best to prevent that harm.

[Read: It’s a weird time to be a doomsday prepper]

Take, for example, the reality that many existing AI products are discriminatory—racist and misgendering facial recognition, biased medical diagnoses, and sexist recruiting algorithms are among the most well-known examples. Cahn says that AI should be assumed prejudiced until proven otherwise. Moreover, advanced models are regularly accused of copyright infringement when it comes to their data sets, and labor violations when it comes to their production. Synthetic media is filling the internet with financial scams and nonconsensual pornography. The “sci-fi narrative” about AI, put forward in the extinction statement and elsewhere, “distracts us from those tractable areas that we could start working on today,” Deborah Raji, a Mozilla fellow who studies algorithmic bias, told me. And whereas algorithmic harms today principally wound marginalized communities and are thus easier to ignore, a supposed civilizational collapse would hurt the privileged too. “When Sam Altman says something, even though it’s so disassociated from the real way in which these harms actually play out, people are listening,” Raji said.

Even if people listen, the words can appear empty. Only days after Altman’s Senate testimony, he told reporters in London that if the EU’s new AI regulations are too stringent, his company could “cease operating” on the continent. The apparent about-face led to a backlash, and Altman then tweeted that OpenAI had “no plans to leave” Europe. “It sounds like some of the actual, sensible regulation is threatening the business model,” the University of Washington’s Bender said. In an emailed response to a request for comment about Altman’s remarks and his company’s stance on regulation, a spokesperson for OpenAI wrote, “Achieving our mission requires that we work to mitigate both current and longer-term risks” and that the company is “collaborating with policymakers, researchers and users” to do so.

The regulatory charade is a well-established part of the Silicon Valley playbook. In 2018, after Facebook was rocked by misinformation and privacy scandals, Mark Zuckerberg told Congress that his company has “a responsibility to not just build tools, but to make sure that they’re used for good” and that he would welcome “the right regulation.” Meta’s platforms have since failed miserably to limit election and pandemic misinformation. In early 2022, Sam Bankman-Fried told Congress that the federal government needs to establish “clear and consistent regulatory guidelines” for cryptocurrencies. By the end of the year, his own crypto firm had proved to be a sham, and he was arrested for financial fraud on the scale of the Enron scandal. “We see a really savvy attempt to avoid getting lumped in with tech platforms like Facebook and Twitter, which have drawn increasingly searching scrutiny from regulators about the harms they inflict,” Cahn told me.

At least some of the extinction statement’s signatories do seem to earnestly believe that superintelligent machines could end humanity. Yoshua Bengio, who signed the statement and is sometimes called a “godfather” of AI, told me he believes that the technologies have become so capable that they risk triggering a world-ending catastrophe, whether as rogue sentient entities or in the hands of a human. “If it’s an existential risk, we may have one chance, and that’s it,” he said.

[Read: Here’s how AI will come for your job]

Dan Hendrycks, the director of the Center for AI Safety, told me he thinks similarly about these risks. He added that the public needs to end the current “AI arms race between these corporations, where they’re basically prioritizing the development of AI technologies over their safety.” That leaders from Google, Microsoft, OpenAI, DeepMind, Anthropic, and Stability AI signed his center’s warning, Hendrycks said, could be a sign of genuine concern. Altman wrote about this threat even before the founding of OpenAI. Yet “even under that charitable interpretation,” Bender told me, “you have to wonder: If you think this is so dangerous, why are you still building it?”

The solutions these companies have proposed for both the empirical and fantastical harms of their products are vague, filled with platitudes that stray from the established body of work on what, experts told me, regulating AI would actually require. In his testimony, Altman emphasized the need to create a new government agency focused on AI. Microsoft has done the same. “This is warmed-up leftovers,” Signal’s Whittaker said. “I was in conversations in 2015 where the topic was ‘Do we need a new agency?’ This is an old ship that usually high-level people in a Davos-y environment speculate on before they go to cocktails.” And a new agency, or any exploratory policy initiative, “is a very long-term objective that would take many, many decades to even get close to realizing,” Raji said. During that time, AI could not only harm countless people but also become so entrenched in various companies and institutions as to make meaningful regulation much harder.

For about a decade, experts have rigorously studied the harms done by AI and proposed more realistic ways to prevent them. Possible interventions could involve public documentation of training data and model design; clear mechanisms for holding companies accountable when their products put out medical misinformation, libel, and other harmful content; antitrust legislation; or just enforcing existing laws related to civil rights, intellectual property, and consumer protection. “If a store is systematically targeting Black customers through human decision making, that’s a violation of civil-rights law,” Cahn said. “And to me, it’s no different when an algorithm does it.” Similarly, if a chatbot writes a racist legal brief or gives incorrect medical advice, was trained on copyrighted writing, or scams people for money, current laws should apply.

Doomsday prognostications and calls for a new AI agency amount to “an attempt at regulatory sabotage,” Whittaker said, because the very people selling and profiting from this technology would “shape, hollow out, and effectively sabotage” the agency and its powers. Just look at Altman testifying before Congress, or the recent “responsible”-AI meeting between various CEOs and President Joe Biden: The people developing and profiting from the software are the ones telling the government how to approach it—an early glimpse of regulatory capture. “There’s decades’ worth of very specific kinds of regulations people are calling for about equity, fairness, and justice,” Safiya Noble, an internet-studies scholar at UCLA and the author of Algorithms of Oppression, told me. “And the kinds of regulations I see [AI companies] talking about are ones that are favorable to their interests.” These companies also spent many millions of dollars lobbying Congress in just the first three months of this year.

All that has really changed from the years-old conversations around regulating AI is ChatGPT—a program that, because it spits out human-esque language, has captivated consumers and investors, granting Silicon Valley a Promethean aura. Beneath that myth, though, much about AI’s harms is unchanged. The technology depends on surveillance and data collection, exploits creative work and physical labor, amplifies bias, and is not sentient. The ideas and tools needed for regulation, which would require addressing those problems and perhaps reducing corporate profits, are around for anybody who might care to look. The 22-word warning is a tweet, not scripture; a matter of faith, not evidence. That an algorithm is harming somebody right now would have been a fact if you read this sentence a decade ago, and it remains one today.

How Europe Won the Gas War With Russia

The Atlantic

www.theatlantic.com/ideas/archive/2023/06/russia-ukraine-natural-gas-europe/674268

The most significant defeat in Russia’s war on Ukraine was suffered not on a battlefield but in the marketplace.

The Russian aggressors had expected to use natural gas as a weapon to bend Western Europe to their will. The weapon failed. Why? And will the failure continue?

Unlike oil, which is easily transported by ocean tanker, gas moves most efficiently and economically through fixed pipelines. Pipelines are time-consuming and expensive to build. Once the pipeline is laid, over land or underwater, the buyer at one end is bound to the seller on the other end. Gas can move by tanker, too, but first it must be liquefied by chilling it to extremely low temperatures. Liquefying gas is expensive and technologically demanding. In the 2010s, European consumers preferred to rely on cheaper and supposedly reliable pipeline gas from Russia. Then, in 2021, the year before the Russian attack on Ukraine, Europeans abruptly discovered the limits of Russian-energy reliability.

The Russian pipeline network can carry only so much gas at a time. In winter, Europe consumes more than the network can convey, so Europe prepares for shortages by building big inventories of gas in the summertime, when it uses less.

Russian actions in the summer of 2021 thwarted European inventory building. A shortage loomed—and prices spiked. I wrote for The Atlantic on January 5, 2022:

In a normal year, Europe would enter the winter with something like 100 billion cubic meters of gas on hand. This December began with reserves 13 percent lower than usual. Thin inventories have triggered fearful speculation. Gas is selling on European commodity markets for 10 times the price it goes for in the United States.

These high prices have offered windfall opportunities for people with gas to sell. Yet Russia has refused those opportunities. Through August, when European utilities import surplus gas to accumulate for winter use, deliveries via the main Russian pipeline to Germany flowed at only one-quarter their normal rate. Meanwhile, Russia has been boycotting altogether the large and sophisticated pipeline that crosses Ukraine en route to more southerly parts of Europe.

I added a warning: “By design or default, the shortfalls have put a powerful weapon in [Russian President Vladimir] Putin’s hands.”

A month later, the world learned what Putin’s gas weapon was meant to do. Russian armored columns lunged toward the Ukrainian capital, Kyiv, on February 24. Putin’s gas cutoffs appear to have been intended to deter Western Europe from coming to Ukraine’s aid.

The day before the invasion, I tried to communicate the mood of fear that then gripped gas markets and European capitals:

In 2017, 2018, and 2019, Russia’s dominance over its gas customers in Western Europe was weaker, and its financial resources to endure market disruption were fewer. In 2022, Russia’s power over its gas customers is at a zenith—and its financial resources are enormous … One gas-industry insider, speaking on the condition of anonymity in order to talk candidly, predicted that if gas prices stay high, European economies will shrink—and Russia’s could grow—to the point where Putin’s economy will overtake at least Italy’s and perhaps France’s to stand second in Europe only to Germany’s.

That fear was mercifully not realized. Instead, European economies proved much more resilient—and Russia’s gas weapon much less formidable—than feared. The lights did not go out.

The story of this success is one of much ingenuity, solidarity, sacrifice, and some luck. If Putin’s war continues into its second winter and into Europe’s third winter of gas shortages, Western countries will need even more ingenuity, solidarity, sacrifice, and luck.

Over 12 months, European countries achieved a remarkable energy pivot. First, they reduced their demand for gas. European natural-gas consumption in 2022 was estimated to be 12 percent lower than the average for the years 2019–21. More consumption cuts are forecast for 2023.

Weather helped. Europe’s winter of 2022–23 was, for the most part, a mild one. Energy substitution made a difference too. Germany produced 12 percent more coal-generated electricity in 2022 than in 2021. The slow recovery from the coronavirus pandemic in China helped as well. Chinese purchases of liquefied natural gas on world markets actually dropped by nearly 20 percent in 2022 from their 2021 level.

[David Frum: Putin’s big chill in Europe]

Second, European countries looked out for consumers, and for one another. European Union governments spent close to 800 billion euros ($860 billion) to subsidize fuel bills in 2022. The United Kingdom distributed an emergency grant of £400 ($500) a household to help with fuel costs. Germany normally reexports almost half of the gas it imports, and despite shortfalls at home through the crisis, it continued to reexport a similar proportion to EU partners.

Third, as European countries cut their consumption, they also switched their sources of supply. The star of this part of the story is Norway, which replaced Russia as Europe’s single largest gas supplier. During a recent visit to Oslo, I learned from energy experts that Norway rejiggered its offshore fields to produce less oil and more gas.

Norwegians also made sacrifices for their neighbors. Norway has an abundance of cheap hydroelectricity, and exports much of that power. During the 2022 energy crisis, those export commitments pushed up Norwegian households’ power bills and helped push down the approval rating of Norway’s governing Labor Party by more than a quarter from its level at the beginning of that year. Nevertheless, the government steadfastly honored its electricity-export commitments (although it has now moved to place some restrictions on future exports).

The redirection of shipments of liquefied natural gas from the United States, the Persian Gulf, and West Africa away from Asia and toward Europe also contributed to European energy security. In December 2022, Germany opened a new gas-receiving terminal in Wilhelmshaven, near Bremen, which was completed at record speed, in fewer than 200 days. Two more terminals will begin operating in 2023.

The net result is that Russian gas exports fell by 25 percent in 2022. And since the painful record prices set in the months before the February 2022 invasion, the cost of gas in Europe has steeply declined.

Russian leaders had assumed that their pipelines to Europe would make the continent dependent on Russia. They apparently did not consider that the same pipelines also made Russia dependent on Europe. By contrast, only a single pipeline connects Russia to the whole of China, and it is less valuable to Putin—according to a study conducted by the Carnegie Endowment for International Peace, the gas it carries commands prices much lower than the gas Russia pipes to Europe.

To reach world markets, Russia will have to undertake the costly business of liquefying its gas for shipment by tanker. A regasification plant like the one swiftly constructed in Wilhelmshaven costs about $500 million. Germany’s three newly built terminals to receive liquefied natural gas will cost more than $3 billion. But the outbound terminals that liquefy the gas cost even more: $10.5 billion is the latest estimate for the next big project on the U.S. Gulf Coast. Russia depended on foreign investment and technology to compete in the liquefied-natural-gas market. Under Western sanctions, the flow of both investment and technology to Russia has been cut.

[Eliot A. Cohen: It’s not enough for Ukraine to win. Russia has to lose.]

Russia lacks the economic and technological oomph to keep pace with the big competitors in the liquefied-gas market, such as the U.S. and Qatar. In April, CNBC reported on a study by gas-industry consultants that projected growth of 50 percent for the liquefied-natural-gas market by 2030. The Russian share of that market will, according to the same study, shrink to 5 percent (from about 7 percent), even as the American share rises to 25 percent (from about 20 percent).
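Run the study’s round numbers and the asymmetry becomes stark. A rough back-of-the-envelope calculation (with V standing for today’s total market volume, a symbol introduced here for illustration) suggests:

\[ \text{Russia: } 0.07\,V \;\longrightarrow\; 0.05 \times 1.5\,V = 0.075\,V \quad \text{(roughly flat)} \]
\[ \text{U.S.: } 0.20\,V \;\longrightarrow\; 0.25 \times 1.5\,V = 0.375\,V \quad \text{(nearly doubled)} \]

In other words, even as the overall market grows by half, Russia’s absolute volume would stagnate while America’s would almost double.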

If the war in Ukraine continues through the next winter, Europe will have to overcome renewed difficulties. For example, Germany’s nuclear-power plants, which eased the shock last year, went offline forever in April. And this time, the winter might be colder. But gas production by non-Russian producers keeps rising, outpacing demand in the rest of the world. The Chinese economy continues its slow recovery from COVID; India lags as a gas buyer.

Risks are everywhere—but so are possibilities. When this war comes to an end, the lesson will be clear: We have to hasten the planet’s transition to a post-fossil-fuel future—not only to preserve our environment but to protect world peace from aggressors who use oil and gas as weapons. Yet perhaps the most enduring lesson is political. Through the energy shock, Europe discovered a new resource: the power of wisely led cooperation to meet and overcome a common danger.