The New AI Panic

The Atlantic

www.theatlantic.com/technology/archive/2023/10/technology-exports-ai-programs-regulations-china/675605

For decades, the Department of Commerce has maintained a little-known list of technologies that, on grounds of national security, are prohibited from being sold freely to foreign countries. Any company that wants to sell such a technology overseas must apply for permission, giving the department oversight and control over what is being exported and to whom.

These export controls are now inflaming tensions between the United States and China. They have become the primary way for the U.S. to throttle China’s development of artificial intelligence: The department last year limited China’s access to the computer chips needed to power AI and is now in discussions to expand those restrictions. A semiconductor analyst told The New York Times that the strategy amounts to a kind of economic warfare.

The battle lines may soon extend beyond chips. Commerce is considering a new blockade on a broad category of general-purpose AI programs, not just physical parts, according to people familiar with the matter. (I am granting them anonymity because they are not authorized to speak to the press.) Although much remains to be seen about how the controls would roll out—and, indeed, whether they will ultimately roll out at all—experts described alarming stakes. If enacted, the limits could generate more friction with China while weakening the foundations of AI innovation in the U.S.

Of particular concern to Commerce are so-called frontier models. The phrase, popularized in the Washington lexicon by some of the very companies that seek to build these models—Microsoft, Google, OpenAI, Anthropic—describes a kind of “advanced” artificial intelligence with flexible and wide-ranging uses that could also develop unexpected and dangerous capabilities. By the companies’ own assessment, frontier models do not exist yet. But an influential white paper published in July and co-authored by a consortium of researchers, including representatives from most of those tech firms, suggests that these models could result from the further development of large language models—the technology underpinning ChatGPT. The same prediction capabilities that allow ChatGPT to write sentences might, in their next generation, be advanced enough to produce individualized disinformation, create recipes for novel biochemical weapons, or enable other unforeseen abuses that could threaten public safety.

This is a distinctly different concern from the use of AI to develop autonomous military systems, which has been part of the motivation for limiting the export of computer chips. The threats of frontier models are nebulous, tied to speculation about how new skill sets could suddenly “emerge” in AI programs. The paper authors argue that now is the time to consider them regardless. Once frontier models are invented and deployed, they could cause harm quickly and at scale. Among the proposals the authors offer, in their 51-page document, to get ahead of this problem: creating some kind of licensing process that requires companies to gain approval before they can release, or perhaps even develop, frontier AI. “We think that it is important to begin taking practical steps to regulate frontier AI today,” the authors write.

The white paper arrived just as policy makers were contemplating the same dread that many have felt since the release of ChatGPT: an inability to parse what it all means for the future. Shortly after the paper’s publication, the White House used some of the language and framing in its voluntary AI commitments, a set of guidelines for leading AI firms that are intended to ensure the safe deployment of the technology without sacrificing its supposed benefits. Microsoft, Google, OpenAI, and Anthropic subsequently launched the Frontier Model Forum, an industry group for producing research and recommendations on “safe and responsible” frontier-model development.

[Read: AI’s present matters more than its imagined future]

Markus Anderljung, one of the white paper’s lead authors and a researcher at the Centre for the Governance of AI and the Center for a New American Security, told me that the point of the document was simply to encourage timely regulatory thinking on an issue that had become top of mind for him and his collaborators. AI models advance rapidly, he reasoned, which necessitates forward thinking. “I don’t know what the next generation of models will be capable of, but I’m really worried about a situation where decisions about what models are put out there in the world are just up to these private companies,” he said.

For the four private companies at the center of discussions about frontier models, though, this kind of regulation could prove advantageous. Conspicuously absent from the gang is Meta, which similarly develops general-purpose AI programs but has recently touted a commitment to releasing at least some of them for free. This has posed a challenge to the other firms’ business models, which rest in part on being able to charge for the same technology. Convincing regulators to control frontier models could restrict the ability of Meta and any other firms to continue publishing and developing their best AI models through open-source communities on the internet; if the technology must be regulated, better for it to happen on terms that favor the bottom line.

Reached for comment, the tech companies at the center of this conversation were fairly tight-lipped. A Google DeepMind spokesperson told me the company believes that “a focus on safety is essential to innovating responsibly,” which is why it is working with industry peers through the forum to advance research on both near- and long-term harms. An Anthropic spokesperson told me the company believes that models should be tested prior to any kind of deployment, commercial or open-source, and that identifying the appropriate tests is the most important question for government, industry, academia, and civil society to work on. Microsoft’s president, Brad Smith, has previously emphasized the need for government to play a strong role in promoting secure, accountable, and trustworthy AI development. OpenAI did not respond to a request for comment.

The obsession with frontier models has now collided with mounting panic about China, fully intertwining ideas for the models’ regulation with national-security concerns. Over the past few months, members of Commerce have met with experts to hash out what controlling frontier models could look like and whether it would be feasible to keep them out of reach of Beijing. A spokesperson for the department told me it routinely assesses the landscape and adjusts its regulations as needed. She declined a more detailed request for comment.

That the white paper took hold in this way speaks to a precarious dynamic playing out in Washington. The tech industry has been readily asserting its power, and the AI panic has made policy makers uniquely receptive to their messaging, says Emily Weinstein, who spoke with me as a research fellow at Georgetown’s Center for Security and Emerging Technology and has since joined Commerce as a senior adviser. Combined with concerns about China and the upcoming election, it’s engendering new and confused policy thinking about how exactly to frame and address the AI-regulatory problem. “Parts of the administration are grasping onto whatever they can because they want to do something,” Weinstein told me.

[Read: The AI crackdown is coming]

The discussions at Commerce “are uniquely symbolic” of this dynamic, she added. The department’s previous chip-export controls “really set the stage for focusing on AI at the cutting edge”; now export controls on frontier models could be seen as a natural continuation. Weinstein, however, called it “a weak strategy”; other AI and tech-policy experts I spoke with sounded their own warnings as well.

The decision would represent an escalation against China, further destabilizing a fractured relationship. Since the chip-export controls were announced on October 7 last year, Beijing has engaged in different apparent retaliatory measures, including banning products from the U.S. chip maker Micron Technology and restricting the export of certain chipmaking metals. Many Chinese AI researchers I’ve spoken with in the past year have expressed deep frustration and sadness over having their work—on things such as drug discovery and image generation—turned into collateral in the U.S.-China tech competition. Most told me that they see themselves as global citizens contributing to global technology advancement, not as assets of the state. Many still harbor dreams of working at American companies.

AI researchers also have a long-standing tradition of regularly collaborating online. Whereas major tech firms, including those represented in the white paper, have the resources to develop their own models, smaller organizations rely on open sourcing—sharing and building on code released to the broader community. Preventing researchers from releasing code would give smaller developers fewer pathways than ever to develop AI products and services, while the AI giants currently lobbying Washington may see their power further entrenched. “If the export controls are broadly defined to include open-source, that would touch on a third-rail issue,” says Matt Sheehan, a Carnegie Endowment for International Peace fellow who studies global technology issues with a focus on China.

What’s also frequently left out of consideration is how much this collaboration happens across borders in ways that strengthen, rather than detract from, American AI leadership. As the two countries that produce the most AI researchers and research in the world, the U.S. and China are each other’s No. 1 collaborator in the technology’s development. They have riffed off each other’s work to advance the field and a wide array of applications far faster than either one would alone. Whereas the transformer architecture that underpins generative-AI models originated in the U.S., one of the most widely used algorithms, ResNet, was published by Microsoft researchers in China. This trend has continued with Meta’s open-source model, Llama 2. In one recent example, Sheehan saw a former acquaintance in China who runs a medical-diagnostics company post on social media about how much Llama 2 was helping his work. Assuming they’re even enforceable, export controls on frontier models could thus “be a pretty direct hit” to the large community of Chinese developers who build on U.S. models and in turn contribute their own research and advancements to U.S. AI development, Sheehan told me.

[Read: Tech companies’ friendly new strategy to destroy one another]

But the technical feasibility of such export controls is up in the air as well. Because the premise of these controls rests entirely on hypothetical threats, it’s essentially impossible to specify exactly which AI models should be restricted. Any specifications could also be circumvented easily, whether through China accelerating its own innovation or through American firms finding work-arounds, as the previous round of controls showed. Within a month of the Commerce Department announcing its blockade on powerful chips last year, the California-based chipmaker Nvidia announced a less powerful chip that fell right below the export controls’ technical specifications, and was able to continue selling to China. ByteDance, Baidu, Tencent, and Alibaba have each since placed orders for about 100,000 of Nvidia’s China chips to be delivered this year, and more for future delivery—deals that are worth roughly $5 billion, according to the Financial Times.

An Nvidia spokesperson said the kinds of chips that the company sells are crucial to accelerating beneficial applications globally, and that restricting its exports to China “would have a significant, harmful impact on U.S. economic and technology leadership.” The company is, however, unsurprisingly in favor of controlling frontier-AI models as an alternative, which it called a more targeted action with fewer unintended consequences. ByteDance, Baidu, Tencent, and Alibaba did not respond to a request for comment.

In some cases, fixating on AI models would serve as a distraction from addressing the root challenge: The bottleneck for producing novel biochemical weapons, for example, is not finding a recipe, says Weinstein, but rather obtaining the materials and equipment to actually synthesize the armaments. Restricting access to AI models would do little to solve that problem.

Sarah Myers West, the managing director of the AI Now Institute, told me there could be another benefit to the four companies pushing for frontier-model regulation. Evoking the specter of future threats shifts the regulatory attention away from present-day harms of their existing models, such as privacy violations, copyright infringements, and job automation. The idea that “this is a technology that carries significant dangers, so we don’t want it to fall into the wrong hands—I think that very much plays into the fear-mongering anti-China frame that has often been used as a means to pretty explicitly stave off any efforts and regulatory intervention” of the here and now, she said.

I asked Anderljung what he thinks of this. “People overestimate how much this is in the interest of these companies,” he told me, caveating that as an external collaborator he cannot fully know what the companies are thinking. A regulator could very well tell a company after a billion-dollar investment in developing a model that it is not allowed to deploy the technology. “I don’t think it’s at all clear that that would be in the interest of companies,” he said. He added that such controls would be a “yes, and” kind of situation. They would not in any way replace the need for other types of AI regulation on existing models and their harms. “It would be sad,” he said, if the fixation on frontier models crowded out those other discussions.

But West, Weinstein, and others I spoke with said that this is exactly what’s happening. “AI safety as a domain even a few years ago was much more heterogeneous,” West told me. Now? “We’re not talking about the effects on workers and the labor impacts of these systems. We’re not talking about the environmental concerns.” It’s no wonder: When resources, expertise, and power have concentrated so heavily in a few companies, and policy makers are steeped in their own cocktail of fears, the landscape of policy ideas collapses under pressure, eroding the base of a healthy democracy.

The World Needs a Unified and Resolute America

The Atlantic

www.theatlantic.com/newsletters/archive/2023/10/israel-ukraine-wars-america-gop/675604

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

The rest of the planet does not pause while Washington sorts out its internal food fights. Republicans—and other Americans—need to put aside their childish squabbles.

First, here are three new stories from The Atlantic:

The Kamala Harris problem
Israel’s two reckonings
“We’re going to die here.”

Childish Squabbles

Two years ago, I wrote my first newsletter for The Atlantic, in which I worried that the United States was “no longer a serious country.”

Of course, we’re still a powerful country … But when it comes to seriousness—the invaluable discipline and maturity that allows us to discern matters that should transcend self-interest, to set aside churlish ego and emotionalism, and to act with prudence and self-restraint—we’re a weak, impoverished backwater.

When I wrote those words, the world was emerging from a pandemic, but many Americans were still refusing vaccines; Congress was bickering over infrastructure; Russia was occupying Crimea. Joe Biden had been elected president, but as I said at the time, “one president can’t sober up an entire nation.” I was, to say the least, pessimistic about the American future.

Today, the situation is even more dire. The Russians continue an all-out war of conquest in the middle of Europe, a conflict that could engulf the planet if the cowards in the Kremlin remain mired in their imperial delusions. Thousands are dying in Armenia and Sudan. And now Israel is at war, after suffering its worst surprise attack since the Yom Kippur War 50 years ago and with more Israeli citizens killed in a single day than ever in its history.

And yet much of America, and especially the remnants of the Republican Party (a party whose leaders during the Cold War defined themselves as the responsible stewards of U.S. foreign policy), remains in the grip of childish, even inane, politics. The international community in this difficult time needs a United States that is sane, tough, and principled; worthy of the title of leader of the free world; and determined, in the words of President John F. Kennedy, to “pay any price, bear any burden, meet any hardship, support any friend, oppose any foe, in order to assure the survival and the success of liberty.”

Instead of Kennedy’s inspiring vision, America has the ignorant and incoherent Donald Trump as an apparent lock to capture the eventual GOP presidential nomination, the House of Representatives without a speaker, and a public that cannot find Ukraine or Iran on a map.

“I look at the world and all the threats that are out there,” Representative Michael McCaul of Texas, chair of the House Foreign Affairs Committee, said on Sunday. “And what kind of message are we sending to our adversaries when we can’t govern? When we’re dysfunctional? When we don’t even have a speaker of the House?”

An excellent question, especially when the People’s House lacks a speaker because of a motion from Representative Matt Gaetz of Florida—an utterly unserious man who is despised even by other House Republicans. Backed by seven right-wing GOP extremists, Gaetz and this “chaos caucus” (to use Mike Pence’s description) notched a historic first by enabling the vote that tossed Representative Kevin McCarthy of California out of the job. (For some reason, the CBS show Face the Nation felt the need to interview one of the anti-McCarthy group, Nancy Mace, the day after war erupted in Israel—thus providing Mace with exactly the sort of attention she was likely hoping to garner.) At this point, the two main contenders for the post are Representatives Steve Scalise of Louisiana and Jim Jordan of Ohio.

The idea that someone as ridiculous as Jim Jordan could be in contention to lead the House should make every American pause and wonder how the United States has come to such a moment. Jordan is among Donald Trump’s most loyal supporters—Trump has already endorsed him for the speaker’s job—and one of the most cynical and huckstering members of Congress from either party. Jordan, on many issues (and especially when backing Trump’s preposterous claims about presidential power), is merely an annoying, gish-galloping gadfly.

But on the central issue of American democracy, he is much more dangerous.

Jordan was a consistent and vocal supporter of Trump’s claims of a stolen election. He usually couched this support in a “just asking questions” ploy, but occasionally the mask would slip and he would charge the Democrats with attempting to steal the election. As Thomas Joscelyn, one of the authors of the House’s January 6–committee report, told CNN: “Jim Jordan was deeply involved in Donald Trump’s antidemocratic efforts to overturn the 2020 presidential election.” Jordan refused to cooperate with the committee and defied its subpoena.

Jordan yesterday threatened another attempt to shut down the government (this time over immigration policies). But more to the point, how can the United States respond as one nation to the various crises around the world when the speaker of the House is an election denier spewing conspiracy theories about the current president? This is the man who could be wielding the speaker’s gavel when Congress receives the electoral votes in the 2024 election?

Scalise, the current majority leader, is as close to a “normal” candidate as the Republicans can produce, and he is likely in the lead for the job. That’s the good news. The bad news is that “normal” in this context means that Scalise is just another mainstream GOP figure calling for defunding “87,000 new IRS agents,” establishing “a committee on the weaponization of the federal government against citizens,” and holding “woke prosecutors accountable.”

The situation is no better over in the usually more staid and thoughtful U.S. Senate. As conflicts erupt around the world, hundreds of military promotions, including the chief of naval operations and many other senior appointments, remain frozen. They are being held up by Tommy Tuberville, a former Alabama college football coach who thinks U.S. servicepeople should be denied access to abortion and decries what he thinks is too much “wokeness” in the military. (Woke is now Republican speak for anyone who isn’t an obvious bigot.)

Meanwhile, the United States has been unable to send ambassadors to several nations, in part because of irresponsible holds placed by irresponsible senators. Senator J. D. Vance of Ohio, like Tuberville, appears to have held up posts over “wokeness,” while Senator Rand Paul of Kentucky has blocked appointments over his unhinged insistence on seeing what he thinks are nefarious U.S. government documents regarding the coronavirus’s origins.

And in a juvenile attempt to turn the war in Israel into yet another GOP weapon against U.S. support for Ukraine, Senator Josh Hawley of Missouri yesterday tweeted: “Israel is facing existential threat. Any funding for Ukraine should be redirected to Israel immediately.” Hawley, like Vance and others, is a smart man who disrespects his voters by pretending to be stupid. He almost certainly knows—one would hope, anyway—that pitting Israel against Ukraine is a false choice. (It also fudges the two situations: Israel has regained some control of the situation, for now, while Ukraine remains mired in a huge conventional war against a giant, nuclear-armed enemy.)

The old saw about partisanship ending at the water’s edge was never completely true. The right and the left in the United States have argued plenty about foreign policy, but they once did so with a seriousness of purpose and an understanding that millions of lives, the security of the nation—and in the final analysis, the survival of humanity—were at risk. If any adults remain in the GOP, they need to get control of their party and get to work.

President Biden’s foreign-policy leadership, especially with a Russian war so close to NATO’s borders, has been admirable and successful. But he cannot, and should not, do it alone. The world needs America—and that means all of us.

Related:

Biden will be guided by his Zionism.
This war isn’t like Israel’s earlier wars.

Today’s News

Israel responded to Hamas’s brutal weekend attack by launching fierce air strikes on the Gaza Strip.
Robert F. Kennedy Jr. will run as an independent in the 2024 presidential election, abandoning his Democratic bid.
A person who crashed a car into the Chinese consulate in San Francisco was shot and killed by police.

More From The Atlantic

Your sweaters are garbage.
Hiking needs new rules.
Lizzo was a new kind of diva. Now she’s in a new kind of scandal.

Culture Break

Read. In Madonna: A Rebel Life, the author Mary Gabriel argues that Madonna’s entire life is an exercise in reinventing female power.

Watch. The Royal Hotel (in theaters now) taps into every female traveler’s fears.

Play our daily crossword.

P.S.

I’m not the kind of guy to say “I told you so,” but … oh, who am I kidding, of course I am. Back in 2022, I wrote about the James Bond film franchise, and I said that all the talk of casting a Black or female 007 was just silly. Bond, created by Ian Fleming, is forever frozen in time as an aspiring white male elitist of the old British establishment. In the books and the better entries in the films, he’s a hero you can admire only with serious reservations.

With the exception of Skyfall, I didn’t much care for the Daniel Craig movies; they were too emotional and introspective. (I won’t ruin Spectre for you, but when the movie revealed a twist involving the iconic villain Blofeld, I nearly walked out.) And so I’m gloating a bit now: The Bond rumor mill says that Christopher Nolan is in talks with EON Productions and Amazon to direct two Bond films. But he reportedly wants them to be period pieces that stay close to Fleming’s source materials, which would be pretty daring. (If you think the 1973 Live and Let Die movie was racist and offensive, wait’ll you read the 1954 novel—if you can find one that hasn’t been bowdlerized yet.)

If the rumors are true, then good for you, Mr. Nolan. Bond doesn’t need to be drinking beer and sharing his feelings. He needs to be saving England, the Empire, and the world, probably in that order. The last few Bond films were just British-accented Bourne movies. Let Bond be Bond—including the parts we don’t like in 2023.

— Tom

Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.