The New AI Panic

The Atlantic

www.theatlantic.com/technology/archive/2023/10/technology-exports-ai-programs-regulations-china/675605/

For decades, the Department of Commerce has maintained a little-known list of technologies that, on grounds of national security, are prohibited from being sold freely to foreign countries. Any company that wants to sell such a technology overseas must apply for permission, giving the department oversight and control over what is being exported and to whom.

These export controls are now inflaming tensions between the United States and China. They have become the primary way for the U.S. to throttle China’s development of artificial intelligence: The department last year limited China’s access to the computer chips needed to power AI, and it is now in discussions to expand those restrictions. A semiconductor analyst told The New York Times that the strategy amounts to a kind of economic warfare.

The battle lines may soon extend beyond chips. Commerce is considering a new blockade on a broad category of general-purpose AI programs, not just physical parts, according to people familiar with the matter. (I am granting them anonymity because they are not authorized to speak to the press.) Although much remains to be seen about how the controls would roll out—and, indeed, whether they will ultimately roll out at all—experts described alarming stakes. If enacted, the limits could generate more friction with China while weakening the foundations of AI innovation in the U.S.

Of particular concern to Commerce are so-called frontier models. The phrase, popularized in the Washington lexicon by some of the very companies that seek to build these models—Microsoft, Google, OpenAI, Anthropic—describes a kind of “advanced” artificial intelligence with flexible and wide-ranging uses that could also develop unexpected and dangerous capabilities. By these companies’ own assessment, frontier models do not exist yet. But an influential white paper published in July and co-authored by a consortium of researchers, including representatives from most of those tech firms, suggests that these models could result from the further development of large language models—the technology underpinning ChatGPT. The same prediction capabilities that allow ChatGPT to write sentences might, in their next generation, be advanced enough to produce individualized disinformation, create recipes for novel biochemical weapons, or enable other unforeseen abuses that could threaten public safety.

This is a distinctly different concern from the use of AI to develop autonomous military systems, which has been part of the motivation for limiting the export of computer chips. The threats of frontier models are nebulous, tied to speculation about how new skill sets could suddenly “emerge” in AI programs. The paper authors argue that now is the time to consider them regardless. Once frontier models are invented and deployed, they could cause harm quickly and at scale. Among the proposals the authors offer, in their 51-page document, to get ahead of this problem: creating some kind of licensing process that requires companies to gain approval before they can release, or perhaps even develop, frontier AI. “We think that it is important to begin taking practical steps to regulate frontier AI today,” the authors write.

The white paper arrived just as policy makers were contemplating the same dread that many have felt since the release of ChatGPT: an inability to parse what it all means for the future. Shortly after the paper’s publication, the White House used some of the language and framing in its voluntary AI commitments, a set of guidelines for leading AI firms that are intended to ensure the safe deployment of the technology without sacrificing its supposed benefits. Microsoft, Google, OpenAI, and Anthropic subsequently launched the Frontier Model Forum, an industry group for producing research and recommendations on “safe and responsible” frontier-model development.

Markus Anderljung, one of the white paper’s lead authors and a researcher at the Centre for the Governance of AI and the Center for a New American Security, told me that the point of the document was simply to encourage timely regulatory thinking on an issue that had become top of mind for him and his collaborators. AI models advance rapidly, he reasoned, which necessitates forward thinking. “I don’t know what the next generation of models will be capable of, but I’m really worried about a situation where decisions about what models are put out there in the world are just up to these private companies,” he said.

For the four private companies at the center of discussions about frontier models, though, this kind of regulation could prove advantageous. Conspicuously absent from the gang is Meta, which similarly develops general-purpose AI programs but has recently touted a commitment to releasing at least some of them for free. This has posed a challenge to the other firms’ business models, which rest in part on being able to charge for the same technology. Convincing regulators to control frontier models could restrict the ability of Meta and any other firms to continue publishing and developing their best AI models through open-source communities on the internet; if the technology must be regulated, better for it to happen on terms that favor the bottom line.

Reached for comment, the tech companies at the center of this conversation were fairly tight-lipped. A Google DeepMind spokesperson told me the company believes that “a focus on safety is essential to innovating responsibly,” which is why it is working with industry peers through the forum to advance research on both near- and long-term harms. An Anthropic spokesperson told me the company believes that models should be tested prior to any kind of deployment, commercial or open-source, and that identifying the appropriate tests is the most important question for government, industry, academia, and civil society to work on. Microsoft’s president, Brad Smith, has previously emphasized the need for government to play a strong role in promoting secure, accountable, and trustworthy AI development. OpenAI did not respond to a request for comment.

The obsession with frontier models has now collided with mounting panic about China, fully intertwining ideas for the models’ regulation with national-security concerns. Over the past few months, members of Commerce have met with experts to hash out what controlling frontier models could look like and whether it would be feasible to keep them out of reach of Beijing. A spokesperson for the department told me it routinely assesses the landscape and adjusts its regulations as needed. She declined a more detailed request for comment.

That the white paper took hold in this way speaks to a precarious dynamic playing out in Washington. The tech industry has been readily asserting its power, and the AI panic has made policy makers uniquely receptive to its messaging, says Emily Weinstein, who spoke with me as a research fellow at Georgetown’s Center for Security and Emerging Technology and has since joined Commerce as a senior adviser. Combined with concerns about China and the upcoming election, that receptiveness is engendering new and confused policy thinking about how exactly to frame and address the AI-regulatory problem. “Parts of the administration are grasping onto whatever they can because they want to do something,” Weinstein told me.

The discussions at Commerce “are uniquely symbolic” of this dynamic, she added. The department’s previous chip-export controls “really set the stage for focusing on AI at the cutting edge”; now export controls on frontier models could be seen as a natural continuation. Weinstein, however, called it “a weak strategy”; other AI and tech-policy experts I spoke with sounded their own warnings as well.

The decision would represent an escalation against China, further destabilizing a fractured relationship. Since the chip-export controls were announced on October 7 last year, Beijing has engaged in a series of apparent retaliatory measures, including banning products from the U.S. chip maker Micron Technology and restricting the export of certain chipmaking metals. Many Chinese AI researchers I’ve spoken with in the past year have expressed deep frustration and sadness over having their work—on things such as drug discovery and image generation—turned into collateral in the U.S.-China tech competition. Most told me that they see themselves as global citizens contributing to global technology advancement, not as assets of the state. Many still harbor dreams of working at American companies.

AI researchers also have a long-standing tradition of regularly collaborating online. Whereas major tech firms, including those represented in the white paper, have the resources to develop their own models, smaller organizations rely on open sourcing—sharing and building on code released to the broader community. Preventing researchers from releasing code would give smaller developers fewer pathways than ever to develop AI products and services, while the AI giants currently lobbying Washington may see their power further entrenched. “If the export controls are broadly defined to include open-source, that would touch on a third-rail issue,” says Matt Sheehan, a Carnegie Endowment for International Peace fellow who studies global technology issues with a focus on China.

What’s also frequently left out of these considerations is how much this collaboration happens across borders in ways that strengthen, rather than detract from, American AI leadership. As the two countries that produce the most AI researchers and research in the world, the U.S. and China are each other’s No. 1 collaborator in the technology’s development. They have riffed off each other’s work to advance the field and a wide array of applications far faster than either one could alone. Whereas the transformer architecture that underpins generative-AI models originated in the U.S., one of the most widely used neural-network architectures, ResNet, was published by Microsoft researchers in China. This trend has continued with Meta’s open-source model, Llama 2. In one recent example, Sheehan saw a former acquaintance in China who runs a medical-diagnostics company post on social media about how much Llama 2 was helping his work. Assuming they’re even enforceable, export controls on frontier models could thus “be a pretty direct hit” to the large community of Chinese developers who build on U.S. models and in turn contribute their own research and advancements to U.S. AI development, Sheehan told me.

But the technical feasibility of such export controls is up in the air as well. Because the premise of these controls rests entirely on hypothetical threats, it’s essentially impossible to specify exactly which AI models should be restricted. Any specifications could also be circumvented easily, whether through China accelerating its own innovation or through American firms finding work-arounds, as the previous round of controls showed. Within a month of the Commerce Department announcing its blockade on powerful chips last year, the California-based chipmaker Nvidia announced a less powerful chip that fell just below the export controls’ technical specifications and was able to continue selling to China. ByteDance, Baidu, Tencent, and Alibaba have each since placed orders for about 100,000 of Nvidia’s China chips to be delivered this year, and more for future delivery—deals that are worth roughly $5 billion, according to the Financial Times.

An Nvidia spokesperson said the kinds of chips that the company sells are crucial to accelerating beneficial applications globally, and that restricting its exports to China “would have a significant, harmful impact on U.S. economic and technology leadership.” The company is, however, unsurprisingly in favor of controlling frontier-AI models as an alternative, which it called a more targeted action with fewer unintended consequences. ByteDance, Baidu, Tencent, and Alibaba did not respond to a request for comment.

In some cases, fixating on AI models would serve as a distraction from addressing the root challenge: The bottleneck for producing novel biochemical weapons, for example, is not finding a recipe, says Weinstein, but rather obtaining the materials and equipment to actually synthesize the armaments. Restricting access to AI models would do little to solve that problem.

Sarah Myers West, the managing director of the AI Now Institute, told me there could be another benefit to the four companies pushing for frontier-model regulation. Evoking the specter of future threats shifts the regulatory attention away from present-day harms of their existing models, such as privacy violations, copyright infringements, and job automation. The idea that “this is a technology that carries significant dangers, so we don’t want it to fall into the wrong hands—I think that very much plays into the fear-mongering anti-China frame that has often been used as a means to pretty explicitly stave off any efforts and regulatory intervention” of the here and now, she said.

I asked Anderljung what he thinks of this. “People overestimate how much this is in the interest of these companies,” he told me, caveating that as an external collaborator he cannot fully know what the companies are thinking. A regulator could very well tell a company after a billion-dollar investment in developing a model that it is not allowed to deploy the technology. “I don’t think it’s at all clear that that would be in the interest of companies,” he said. He added that such controls would be a “yes, and” kind of situation. They would not in any way replace the need for other types of AI regulation on existing models and their harms. “It would be sad,” he said, if the fixation on frontier models crowded out those other discussions.

But West, Weinstein, and others I spoke with said that this is exactly what’s happening. “AI safety as a domain even a few years ago was much more heterogeneous,” West told me. Now? “We’re not talking about the effects on workers and the labor impacts of these systems. We’re not talking about the environmental concerns.” It’s no wonder: When resources, expertise, and power have concentrated so heavily in a few companies, and policy makers are steeped in their own cocktail of fears, the landscape of policy ideas collapses under pressure, eroding the base of a healthy democracy.