The Order That Defines the Future of AI in America

Earlier today, President Joe Biden signed the most sweeping set of regulatory principles on artificial intelligence in America to date: a lengthy executive order that directs all types of government agencies to make sure America is leading the way in developing the technology while also addressing the many dangers that it poses. The order explicitly pushes agencies to establish rules and guidelines, write reports, and create funding and research initiatives for AI—“the most consequential technology of our time,” in the president’s own words.

The scope of the order is impressive, especially given that the generative-AI boom began just about a year ago. But the document’s many parts—and there are many—are at times in tension, revealing a broader confusion over what, exactly, America’s primary attitude toward AI should be: Is the technology a threat to national security, or to a just society? Is it a geopolitical weapon? Is it a way to help people?

The Biden administration has answered “all of the above,” demonstrating a belief that the technology will soon be everywhere. “This is a big deal,” Alondra Nelson, a professor at the Institute for Advanced Study who previously served as acting director of the White House Office of Science and Technology Policy, told us. AI will be “as ubiquitous as operating systems in our cellphones,” Nelson said, which means that regulating it will involve “the whole policy space itself.” That very scale almost necessitates ambivalence, and it is as if the Biden administration has taken into account conflicting views without deciding on one approach.

One section of the order adopts wholesale the talking points of a handful of influential AI companies such as OpenAI and Google, while others center the concerns of workers, vulnerable and underserved communities, and civil-rights groups most critical of Big Tech. The order also makes clear that the government is concerned that AI will exacerbate misinformation, privacy violations, and copyright infringement. Even as it heeds the recommendations of Big AI, the order additionally outlines approaches to support smaller AI developers and researchers. And there are plenty of nods toward the potential benefits of the technology as well: AI, the executive order notes, has the “potential to solve some of society’s most difficult challenges.” It could be a boon for small businesses and entrepreneurs, create new categories of employment, develop new medicines, improve health care, and much more.  

If the document reads like a smashing-together of papers written by completely different groups, that’s because it likely is. The president and vice president have held meetings with AI-company executives, civil-rights leaders, and consumer advocates to discuss regulating the technology, and the Biden administration published a Blueprint for an AI Bill of Rights before the launch of ChatGPT last November. That document called for advancing civil rights, racial justice, and privacy protections, among other things. Today’s executive order cites and expands that earlier proposal—it directly addresses AI’s demonstrated ability to contribute to discrimination in contexts such as health care and hiring, the risks of using AI in sentencing and policing, and more. These issues existed long before the arrival of generative AI, a subcategory of artificial intelligence that creates new—or at least compellingly remixed—material based on training data, but those older AI programs stir the collective imagination less than ChatGPT, with its alarmingly humanlike language.

[Read: The future of AI is GOMA]

The executive order, then, naturally focuses to a great extent on the kind of ultrapowerful and computationally intensive software that underpins that newer technology. At particular issue are so-called dual-use foundation models, which have also been called “frontier AI” models—a term for future generations of the technology with supposedly devastating potential. The phrase was popularized by many of the companies that intend to build these models, and chunks of the executive order match the regulatory framing that these companies have recommended. One influential policy paper from this summer, co-authored in part by staff at OpenAI and Google DeepMind, suggested defining frontier-AI models as those that would make designing biological or chemical weapons easier, those that would be able to evade human control “through means of deception and obfuscation,” and those that are trained above a threshold of computational power. The executive order uses almost exactly the same language and the same threshold.

A senior administration official speaking to reporters framed the sprawling nature of the document as a feature, not a bug. “AI policy is like running a decathlon,” the official said. “We don’t have the luxury of just picking, of saying, ‘We’re just going to do safety,’ or ‘We’re just going to do equity,’ or ‘We’re just going to do privacy.’ We have to do all of these things.” After all, the order has huge “signaling power,” Suresh Venkatasubramanian, a computer-science professor at Brown University who helped co-author the earlier AI Bill of Rights, told us. “I can tell you Congress is going to look at this, states are going to look at this, governors are going to look at this.”

Anyone looking at the order for guidance will come away with a mixed impression of the technology—which has about as many possible uses as a book has possible subjects—and likely also confusion about what the president decided to focus on or omit. The order devotes quite a lot of words to detailing how different agencies should prepare to address the theoretical impact of AI on chemical, biological, radiological, and nuclear threats, a framing drawn directly from the policy paper supported by OpenAI and Google. In contrast, the administration spends far fewer words on the use of AI in education, a massive application for the technology that is already under way. The document acknowledges the role that AI can play in boosting resilience against climate change—such as by enhancing grid reliability and enabling clean-energy deployment, a common industry talking point—but it doesn’t once mention the enormous energy and water resources required to develop and deploy large AI models, or the carbon emissions they produce. And it discusses the possibility of using federal resources to support workers whose jobs may be disrupted by AI but does not mention workers who are arguably exploited by the AI economy: for example, people who are paid very little to manually give feedback to chatbots.

[Read: America already has an AI underclass]

International concerns are also a major presence in the order. Among the most aggressive actions the order takes is directing the secretary of commerce to propose new regulations that would require U.S. cloud-service providers, such as Microsoft and Google, to notify the government if foreign individuals or entities who use their services start training large AI models that could be used for malicious purposes. The order also directs the secretary of state and the secretary of homeland security to streamline visa approval for AI talent, and urges several other agencies, including the Department of Defense, to prepare recommendations for streamlining the approval process for noncitizens with AI expertise seeking to work within national labs and access classified information.

The surveillance of foreign entities is an implicit nod to the U.S.’s fierce competition with, and concerns about, China in AI development, yet China is also the No. 1 source of foreign AI talent in the U.S. In 2019, 27 percent of top-tier U.S.-based AI researchers received their undergraduate education in China, compared with 31 percent who were educated in the U.S., according to a study from Macro Polo, a Chicago-based think tank that studies China’s economy. The document, in other words, suggests actions against foreign agents developing AI while underscoring the importance of international workers to the development of AI in the U.S.

[Read: The new AI panic]

The order’s international focus is no accident; it arrives right before a major U.K. AI Safety Summit this week, where Vice President Kamala Harris will deliver a speech on the administration’s vision for AI. Unlike the U.S., with its broad approach, or the EU, with its AI Act, the U.K. has been almost entirely focused on those frontier models—“a fairly narrow lane,” Nelson told us. The U.S. executive order, in contrast, considers a full range of AI and automated decision-making technologies, and seeks to balance national security, equity, and innovation. The U.S. is trying to model a different approach for the world, she said.

The Biden administration is likely also using the order to make a final push on its AI-policy positions before the 2024 election consumes Washington and a new administration potentially comes in, Paul Triolo, an associate partner for China and technology-policy lead at the consulting firm Albright Stonebridge, told us. The document expects most agencies to complete their tasks before the end of this term. The resulting reports and regulatory positions could shape any AI legislation brewing in Congress, which will likely take much longer to pass, and preempt a potential Trump administration that, if the past is any indication, may focus its AI policy almost exclusively on America’s global competitiveness.

Still, given that only 11 months have passed since the release of ChatGPT, and its upgrade to GPT-4 came less than five months after that, many of those tasks and timelines appear somewhat vague and distant. The order gives 180 days for the secretaries of defense and homeland security to complete a cybersecurity pilot project, 270 days for the secretary of commerce to launch an initiative to create guidance in another area, 365 days for the attorney general to submit a report on something else. The senior administration official told reporters that a newly formed AI Council among the agency heads, chaired by Bruce Reed, a White House deputy chief of staff, would ensure that each agency makes progress at a steady clip. Once the final deadline passes, perhaps the federal government’s position on AI will have crystallized.

But perhaps its stance and policies cannot, or even should not, settle. Like the internet itself, artificial intelligence is a capacious technology that could be developed, and deployed, in a dizzying combination of ways; Congress is still trying to figure out how copyright and privacy laws, as well as the First Amendment, apply to the decades-old web, and every few years the terms of those regulatory conversations seem to shift again.

A year ago, few people could have imagined how chatbots and image generators would change the basic way we think about the internet’s effects on elections, education, and labor; only months ago, the deployment of AI in search engines seemed like a fever dream. All of that, and much more in the nascent AI revolution, has begun in earnest. The executive order’s internal conflict over, and openness to, different values and approaches to AI may have been inevitable, then—the result of an attempt to chart a path for a technology when nobody has a reliable map of where it’s going.