How the U.S. Gamed the Law of the Sea

The Atlantic

www.theatlantic.com/international/archive/2025/01/us-continental-shelf-seafloor-mining/681451

You’d be forgiven for thinking that America’s continental shelf couldn’t get any bigger. It is, after all, mostly rock, the submerged landmass linking shore and abyss. But in late 2023, after a long and expensive mapping project, the State Department announced that the continental shelf had grown by 1 million square kilometers—more than two Californias.

The United States had ample motive to decide that the continental shelf extends farther than it had previously realized. A larger shelf means legal access to more of the ocean floor’s riches: animals, hydrocarbons, and, perhaps most important, minerals to power electric-vehicle batteries. America has no immediate plans to excavate its new seabed, which includes chunks of the Arctic Ocean, Bering Sea, and Atlantic, as well as several small pockets of the Gulf of Mexico and the Pacific. But, according to the State Department, the combined area could be worth trillions of dollars.

The announcement shows just how shrewdly the U.S. has gamed the international system. Since 1982, a United Nations agreement called the Law of the Sea has served as the cornerstone of the global maritime order. In its expansion project, the U.S. abided by the treaty’s rules dictating how nations can extend their shelves—but, notably, it never ratified the agreement, which means that unlike the 169 nations that did, it doesn’t have to pay royalties on the resources it extracts. Apparently America can have its cake and eat it, too: a brand-new shelf, acquired in seemingly good order, that it can mine for free. This gold rush in the making can be seen as the culmination of a long national bet that even though America helped create the global maritime order, it’s better off not joining.

America’s undersea enlargement would not have been possible without Larry Mayer. An oceanographer at the University of New Hampshire, Mayer began the U.S. government’s largest-ever offshore-mapping effort in 2003. Over the next 20 years, he led a team of scientists that dragged sensors across America’s neighboring oceans, scanning more than 1 million square miles of seabed. “When you do that at nine miles an hour, it takes time,” Mayer told me. The project logged more than three years afloat, “a lot of it in the Arctic, which takes even more time because we’ve got to break ice.”

[From the January/February 2020 issue: History’s largest mining operation is about to begin]

Forty voyages and more than $100 million later, Mayer returned with four terabytes of data, which State Department officials plugged into formulas laid out by the treaty. “Not all countries have the ability to hire Larry Mayer and the scientific wherewithal to go out for 20 years and spend tens of millions” to grow their shelf, says James Kraska, a law professor at the U.S. Naval War College who also teaches a course at Harvard Law School on international maritime code. “Ghana hasn’t done this.”

America first claimed jurisdiction over its continental shelf in 1945, a few weeks after Japan’s surrender in World War II. For several years, the U.S. government had been concerned about Japanese ships catching salmon off Alaska, as well as other nations drilling for oil off American shores. With the war over, President Harry Truman proclaimed that an underwater area of some 750,000 square miles—about 4.5 Californias—now belonged to America.

No internationally agreed-upon definition of continental shelves existed until 1958, when 86 countries gathered at the first UN Convention on the Law of the Sea. The group decided, somewhat unhelpfully, that a shelf could extend as far and as deep as a nation could drill. By the following decade, technology had advanced so quickly that a country could claim virtually an entire ocean. Sure enough, one member of Congress from Florida proposed that the U.S. occupy what amounted to two-thirds of the North Atlantic.

President Lyndon B. Johnson warned against such expansionism. In a 1966 speech, he denounced the “new form of colonial competition” that threatened to emerge among maritime nations. “We must ensure that the deep seas and the ocean bottoms are, and remain, the legacy of all human beings,” he said. The following year, Arvid Pardo, an ambassador from Malta, called on the UN to deem the ocean floor “the common heritage of mankind.” In 1970, the U.S. voted alongside 107 other nations to do precisely that.

The UN reconvened in 1973 to legislate a shared vision of the seas. Over the next nine years, more than 150 nations and as many as 5,000 people gathered for off-and-on negotiating sessions in New York City and Geneva. They discussed a wide range of topics—freedom of navigation, fishing, scientific research, pollution, the seabed—and ultimately produced the Law of the Sea.

The U.S. had helped pave the way. Three years before the convention, the Nixon administration had presented a draft treaty that proposed a forerunner to the International Seabed Authority: an agency established by the Law of the Sea that would collect royalties from underwater resources and distribute them to the developing world. But the nation’s posture changed after Ronald Reagan’s election in 1980. American delegates began showing up to negotiating sessions wearing ties that bore the image of Adam Smith, the father of free markets. It was an early sign of the administration’s reluctance to regulate the maritime economy.

In 1982, the U.S. voted against adopting the Law of the Sea—one of only four countries to do so—and said it would refuse to ratify the finalized treaty. Reagan’s reason: the regulations on mining, which he thought would hamper America’s ability to exploit undersea mineral resources. He seemed particularly worried about the royalty scheme that would govern the international seafloor, a vast virgin deep that lies beyond the jurisdiction of any one state and makes up about half of the world’s ocean floor.

That June, Reagan reportedly told his National Security Council, “We’re policed and patrolled on land and there is so much regulation that I kind of thought that when you go out on the high seas you can do what you want.” The president was concerned about “free oceans closing where we were getting along fine before,” minutes from the meeting show. He dispatched onetime Defense Secretary Donald Rumsfeld to persuade other nations to reject the treaty, but the mission failed.

Just 16 years earlier, the U.S. under Johnson had set out to prevent nations from making unilateral claims to the high seas. Then America made its own. Months after the Law of the Sea was finalized, Reagan said the U.S. would abide by its rules on “traditional uses of the oceans,” such as navigation, but not by the “unnecessary political and economic restraints” that the treaty imposed on mining. Instead, Reagan claimed jurisdiction over all the natural and mineral resources within 200 nautical miles of the nation’s shores (230 regular miles), an allowance that the Law of the Sea granted only signatories. That is, he cited “international law” for permission, even though he had refused to ratify that law. Reagan showed that the U.S. could take what it wanted from the treaty without submitting to the UN. Judging by the newly extended shelf, it still can.

The State Department’s Extended Continental Shelf Project works out of a National Oceanic and Atmospheric Administration building in Boulder, Colorado, some 800 miles from the nearest ocean. Its office is down the hall from the Space Weather Prediction Center. When I visited last year, maps of the Arctic adorned the walls, and a whiteboard showed an elementary red drawing of the U.S. and Canada protruding into the Atlantic. Inside sat Brian Van Pay, the director of the project, and Kevin Baumert, its lawyer.

Van Pay and Baumert are picky about words. When I asked whether America had just gotten bigger, Van Pay replied: “It depends on how you define it. If you’re talking about sovereignty”—he emphasized the last syllable—then no. “But if you’re talking about sovereign rights”—maybe. “But it’s not territory.”

[From the April 1969 issue: The deep-sea bed]

According to the Law of the Sea, a continental shelf stretches 200 nautical miles from a nation’s shores. Any country can mine this area without worrying about royalties. But the treaty lays out two formulas for tacking on “extended” shelf; calculating this is what kept Van Pay and Baumert busy. If you mine there, you need to pay royalties to the International Seabed Authority—unless you’re America and haven’t ratified the treaty.

The first formula requires finding the “foot of the continental slope,” where the seabed starts to flatten out. For the next 60 nautical miles beyond that point, you’ve got continental shelf. The second formula involves the sediment on the ocean floor. (This goes by the technical name “ooze.” It’s plankton skeletons, mainly.) Shelves extend as long as the sediment covering them is thick enough that oil and gas could plausibly be stashed underneath. A team of scientists, led by the geologist Debbie Hutchinson, scanned the ocean floor with seismic sensors to find this boundary. Two regulatory limits circumscribed Van Pay and Baumert’s calculations: No shelf can spread more than 350 nautical miles from shore, or more than 100 nautical miles beyond 2,500 meters of depth. The formulas yielded 1,279 coordinate points delineating the new shelf.
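The arithmetic described above can be sketched in a few lines. This is a toy illustration only, not the State Department's actual methodology: the function name and all inputs are hypothetical, and the real project ran point-by-point calculations along the entire margin using surveyed data.

```python
# Toy sketch of the treaty arithmetic described above, for a single
# point. All names and inputs are hypothetical; distances are in
# nautical miles from shore.

def outer_shelf_limit(foot_of_slope_nm, sediment_test_nm, isobath_2500m_nm):
    # Formula 1: the foot of the continental slope plus 60 nautical miles.
    formula_1 = foot_of_slope_nm + 60
    # Formula 2: as far out as the sediment-thickness test is satisfied.
    formula_2 = sediment_test_nm
    # A state may use whichever formula reaches farther.
    entitlement = max(formula_1, formula_2)

    # The two regulatory cutoffs: 350 nautical miles from shore, or
    # 100 nautical miles beyond the 2,500-meter depth contour (the
    # treaty lets a state invoke the more generous of the two).
    cutoff = max(350, isobath_2500m_nm + 100)

    return min(entitlement, cutoff)

# Example: slope foot at 200 nm, sediment test holds out to 280 nm,
# 2,500 m contour at 180 nm; here the sediment formula wins.
print(outer_shelf_limit(200, 280, 180))  # → 280
```

Repeating a calculation like this at many points along the coast is what produces a boundary such as the 1,279 coordinate points delineating the new American shelf.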

The rules are objective, but the results depend on other nations’ recognition. Parts of America’s new shelf overlap with those of the Bahamas, Canada, and Japan, prompting ongoing negotiations. And in March, Russia’s foreign ministry said that it wouldn’t recognize America’s shelf, because the U.S. hadn’t sent its data to the Commission on the Limits of the Continental Shelf, the agency created by the Law of the Sea to review such submissions.

Russia’s claim relates to a broader concern that the U.S. has essentially ignored unfriendly provisions in the treaty—such as oversight requirements—while exploiting advantageous ones, such as formulas for shelf expansion. Van Pay and Baumert disagree with that characterization. Baumert told me that America’s expansion is not unprecedented; more than three dozen countries have extended their shelves without ratifying the Law of the Sea. (Only four of those still haven’t ratified, though: Syria, the United Arab Emirates, Venezuela, and the United States.)

Furthermore, Van Pay and Baumert told me that they hadn’t sent in their new coordinate points because the Commission on the Limits of the Continental Shelf had never considered submissions from a nation that wasn’t a party to the Law of the Sea. I asked the commission, If America submitted its shelf boundaries, would you review them? “This question has never been raised,” Aldino Campos, the chair of the commission, told me. He said it wouldn’t discuss whether to consider such a submission unless it actually receives one. But ultimately the commission only makes recommendations; actually asserting the new limits of a continental shelf falls to the United States.

Even though America hasn’t ratified the treaty, Kraska, the law professor, told me it has an obligation to comply with it. He argued that it has taken on the force of “customary international law”—that is, a set of norms and practices that are so widely followed that they become binding to all nations, whether or not they’re signatories. All told, he said, the U.S. has made a “credible, good-faith effort” to extend its continental shelf in accordance with the Law of the Sea.

Most mainstream U.S. government officials want America to ratify the treaty. Five presidents and at least five secretaries of state have urged Congress to join, arguing that it would help bolster the international rule of law. Becoming a party to the Law of the Sea would also allow the U.S. to further legitimize its expanded shelf.

Ever since Reagan, though, Republican lawmakers have staved off ratification, which requires two-thirds of the Senate. Along with conservative groups such as the Heritage Foundation, they worry that the royalty schemes would impose an undue financial burden and that joining the treaty could result in a “dangerous loss of American sovereignty.”

Their calculus may soon change. As early as this year, the International Seabed Authority could finalize regulations that would open up mining on the international seafloor. Because America hasn’t ratified the Law of the Sea, it won’t have the right to participate. (Some conservatives argue, however, that the U.S. can simply do as it pleases on the international seafloor.) Pressure is mounting on lawmakers: In March, more than 300 former political and military leaders called on the Senate to ratify, reflecting concerns that America might not be able to keep up with China if it relies solely on its own shelf.

America may not mine its new seabed for decades anyhow. The role of the State Department, Van Pay and Baumert insist, is to set the fence posts, not referee what happens within them. In the meantime, America’s shelf could keep growing. “We always want to leave open that possibility,” Van Pay told me. More data could be collected, he said. “There are more invisible lines to draw.”

A Gaza Deal Closed, but No Closure

The Atlantic

www.theatlantic.com/international/archive/2025/01/gaza-hamas-ceasefire-war/681336

Israel and Hamas have reached a hostage-release and cease-fire agreement, offering a measure of relief and hope to the region. But the deal brings no certain closure to the catastrophic Gaza war. It does not guarantee an end to the fighting, a full release of the Israeli hostages, or a lasting political solution for Gaza.

For Israelis, joy at the return of some of the hostages is tempered by trepidation about the fate of the rest. The deal provides for a six-week cease-fire, during which 33 Israeli hostages will come home—some alive, some for burial—in exchange for the release of a much larger number of Palestinian prisoners held by Israel. A second stage of negotiations will then begin, to include the return of the remaining 65 hostages in Gaza and a lasting cease-fire. The success of those talks is just one of the questions the current deal leaves open.

Another is why the agreement wasn’t reached months ago. The framework appears to be the same one—“but for a few small nuances,” the Israeli ex–cabinet minister and former general Gadi Eisenkot said in a radio interview yesterday—that President Joe Biden presented last spring. Had both parties agreed to these terms then, thousands of Gazans might still be alive, and the recent destruction in the northern Gaza Strip could have been averted. At least eight Israeli hostages—including Hersh Goldberg-Polin, the best-known—might have survived, along with more than 100 Israeli soldiers.

So why was the agreement reached only now? The most significant development in recent days appears to be Israeli Prime Minister Benjamin Netanyahu’s new urgency. This week, unlike in May, he pressed the leaders of his coalition’s two resistant, far-right parties to accept a hostage agreement. One new element is Donald Trump. The president-elect demanded a hostage deal before his inauguration, promising that there would be “hell to pay” otherwise. He sent his own envoy, Steven Witkoff, to Qatar, where the indirect negotiations were taking place. Witkoff went from Qatar to Israel on Saturday and insisted on having a meeting with the prime minister on the afternoon of the Jewish sabbath—a violation of Israeli protocol rudely designed to remind Netanyahu who was the vassal and who was the suzerain.

Israeli government and military sources have tried to explain the timing of the deal to national media outlets by pointing to the death of Hamas’s leader Yahya Sinwar in October; the defeats suffered by its Lebanese ally, Hezbollah; and the devastation of northern Gaza. But the purpose of this account largely appears to be presenting the agreement as the fruit of Israel’s military success—rather than a sharp change of course under pressure. In reality, Hamas managed to sustain its war of attrition despite being weakened.

[Read: Sinwar’s death changes nothing]

Meanwhile, Netanyahu’s willingness to pursue a deal is a major reversal. Last summer, he reportedly stymied progress toward a cease-fire by raising new conditions, which infuriated his then–defense minister, Yoav Gallant. (The dispute was one reason Netanyahu dismissed Gallant in November.)

The Israeli right, which assumed that Trump’s bluster was aimed only at Hamas, is in shock. One clue as to what Trump may have threatened—or promised—the prime minister has come from leaks about Netanyahu’s talks with his finance minister, Bezalel Smotrich. The leader of the far-right Religious Zionist Party, Smotrich is a prominent patron of West Bank settlement. In a meeting between the two on Sunday, Netanyahu reportedly told Smotrich that “we must not harm relations with the Trump administration,” and explained that Trump would help with the government’s designs for “Judea and Samaria”—apparently referring to plans to expand West Bank settlement construction.

That promise did not satisfy Smotrich’s party. After a meeting of its Knesset members today, the party demanded a commitment from Netanyahu that he resume the war “after completion of the first stage of the deal.” This, it said, was “a condition for the party remaining in the [ruling] coalition and the government.” As of this writing, Netanyahu has not responded.

While the ultimatum is unlikely to scuttle the deal immediately, it underlines a central question: whether the first stage will lead to an agreement on the next one and a lasting cease-fire. The previous agreement, in November 2023, furnished only a pause. This one could be similar—a six-week hiatus, after which the fighting and destruction resume, while the rest of the hostages remain in Gaza.

A more lasting settlement would require political arrangements in Gaza that Netanyahu has so far studiously avoided discussing. Gaza needs a new Palestinian governing authority, with its own forces or foreign troops capable of keeping the peace. Without that, Hamas will almost certainly resume control in the shattered territory after Israeli troops pull out—and this war will have been just one particularly destructive round of fighting, but not the last. Israel should have been working with the United States, Egypt, the United Arab Emirates, and the Palestinian Authority in the West Bank to create the framework for a new government in Gaza from the very beginning of this conflict. Instead, by failing to define a policy for Gaza’s future, the Netanyahu government turned the war into a highway to nowhere.

[Yair Rosenberg: Trump made the Gaza cease-fire happen]

Netanyahu’s far-right partners have pledged to reverse the 2005 Israeli withdrawal from Gaza and resume Israeli settlement there. Netanyahu has not endorsed that goal, but he has opposed any governing role for the Palestinian Authority in Gaza, despite the fact that foreign partners consider its inclusion essential. Outgoing Secretary of State Antony Blinken emphasized as much in a speech on Tuesday.

For the second stage of the deal to succeed—for the war to end and for the remaining hostages to come home—both Hamas and the Israeli government will have to face the complex problem of Gaza’s future. Anyone who wants an end to the agony of the past 15 months must conjure up at least a quarter measure of hope. But best to hold off on any celebrations until a final deal is reached.

A ‘Holy Grail’ of Science Is Getting Closer

The Atlantic

www.theatlantic.com/technology/archive/2025/01/generative-ai-virtual-cell/681246

The human cell is a miserable thing to study. Tens of trillions of them exist in the body, forming an enormous and intricate network that governs every disease and metabolic process. Each cell in that circuit is itself the product of an equally dense and complex interplay among genes, proteins, and other bits of profoundly small biological machinery.

Our understanding of this world is hazy and constantly in flux. As recently as a few years ago, scientists thought there were only a few hundred distinct cell types, but new technologies have revealed thousands (and that’s just the start). Experimenting in this microscopic realm can be a kind of guesswork; even success is frequently confounding. Ozempic-style drugs were thought to act on the gut, for example, but might turn out to be brain drugs, and Viagra was initially developed to treat cardiovascular disease.

Speeding up cellular research could yield tremendous things for humanity—new medicines and vaccines, cancer treatments, even just a deeper understanding of the elemental processes that shape our lives. And it’s beginning to happen. Scientists are now designing computer programs that may unlock the ability to simulate human cells, giving researchers the ability to predict the effect of a drug, mutation, virus, or any other change in the body, and in turn making physical experiments more targeted and likelier to succeed. Inspired by large language models such as ChatGPT, the hope is that generative AI can “decode the language of biology and then speak the language of biology,” Eric Xing, a computer scientist at Carnegie Mellon University and the president of Mohamed bin Zayed University of Artificial Intelligence, in the United Arab Emirates, told me.

Much as a chatbot can discern style and perhaps even meaning from huge volumes of written language, which it then uses to construct humanlike prose, AI could in theory be trained on huge quantities of biological data to extract key information about cells or even entire organisms. This would allow researchers to create virtual models of the many, many cells within the body—and act upon them. “It’s the holy grail of biology,” Emma Lundberg, a cell biologist at Stanford, told me. “People have been dreaming about it for years and years and years.”

These grandiose claims—about so ambiguous and controversial a technology as generative AI, no less—may sound awfully similar to self-serving prophecies from tech executives: OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei have all declared that their AI products will soon revolutionize medicine.

If generative AI does make good on such visions, however, the result may look something like the virtual cell that Xing, Lundberg, and others have been working toward. (Last month, they published a perspective in Cell on the subject. Xing has taken the idea a step further, co-authoring several papers about the possibility that such virtual cells could be combined into an “AI-driven digital organism”—a simulation of an entire being.) Even in these early days—scientists told me that this approach, if it proves workable, may take 10 or 100 years to fully realize—it’s a demonstration that the technology’s ultimate good may come not from chatbots, but from something much more ambitious.

Efforts to create a virtual cell did not begin with the arrival of large language models. The first modern attempts, back in the 1990s, involved writing equations and code to describe every molecule and interaction. This approach yielded some success, and the first whole-cell model, of a bacterium, was eventually published in 2012. But it hasn’t worked for human cells, which are more complicated—scientists lack a deep enough understanding to imagine or write all of the necessary equations, Lundberg said.

The issue is not that there isn’t any relevant information. Over the past 20 years, new technologies have produced a trove of genetic-sequence and microscope data related to human cells. The problem is that the corpus is so large and complex that no human could possibly make total sense of it. But generative AI, which works by extracting patterns from huge amounts of data with minimal human instructions, just might. “We’re at this tipping point” for AI in biology, Eran Segal, a computational biologist at the Weizmann Institute of Science and a collaborator of Xing’s, told me. “All the stars aligned, and we have all the different components: the data, the compute, the modeling.”

Scientists have already begun using generative AI in a growing number of disciplines. For instance, by analyzing years of meteorological records or quantum-physics measurements, an AI model might reliably predict the approach of major storms or how subatomic particles behave, even if scientists can’t say why the predictions are accurate. The ability to explain is being replaced by the ability to predict, human discovery supplanted by algorithmic faith. This may seem counterintuitive (if scientists can’t explain something, do they really understand it?) and even terrifying (what if a black-box algorithm trusted to predict floods misses one?). But so far, the approach has yielded significant results.

[Read: Science is becoming less human]

“The big turning point in the space was six years ago,” Ziv Bar-Joseph, a computational biologist at Carnegie Mellon University and the head of research and development and computational sciences at Sanofi, told me. In 2018—before the generative-AI boom—Google DeepMind released AlphaFold, an AI algorithm that functionally “solved” a long-standing problem in molecular biology: how to discern the three-dimensional structure of a protein from the list of amino acids it is made of. Doing so for a single protein used to take a human years of experimenting, but in 2022, just four years after its initial release, AlphaFold predicted the structure of 200 million of them, nearly every protein known to science. The program is already advancing drug discovery and fundamental biological research, which won its creators a Nobel Prize this past fall.

The program’s success inspired researchers to design so-called foundation models for other building blocks of biology, such as DNA and RNA. Inspired by how chatbots predict the next word in a sentence, many of these foundation models are trained to predict what comes next in a biological sequence, such as the next set of As, Ts, Gs, and Cs that make up a strand of DNA, or the next amino acid in a protein. Generative AI’s value extends beyond straightforward prediction, however. As they analyze text, chatbots develop abstract mathematical maps of language based on the relationships between words. They assign words and sentences coordinates on those maps, known as “embeddings”: In one famous example, the distance between the embeddings of queen and king is the same as that between woman and man, suggesting that the program developed some internal notion of gender roles and royalty. Basic, if flawed, capacities for mathematics, logical reasoning, and persuasion seem to emerge from this word prediction.
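The queen/king example amounts to simple vector arithmetic over the embedding map. A minimal sketch with hand-picked toy vectors (real embeddings are learned, have hundreds of dimensions, and satisfy such analogies only approximately):

```python
import numpy as np

# Toy 2-D embeddings chosen by hand to illustrate the idea; a real model
# learns these coordinates from text rather than having them assigned.
emb = {
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([1.0, 0.0]),
    "man":   np.array([0.0, 1.0]),
    "woman": np.array([0.0, 0.0]),
}

# The queen-minus-king offset matches the woman-minus-man offset:
assert np.allclose(emb["queen"] - emb["king"], emb["woman"] - emb["man"])

# Equivalently, "king - man + woman" lands on "queen":
print(emb["king"] - emb["man"] + emb["woman"])  # → [1. 0.]
```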

Many AI researchers believe that the basic understanding reflected in these embeddings is what allows chatbots to effectively predict words in a sentence. This same idea could be of use in biological foundation models as well. For instance, to accurately predict a sequence of nucleotides or amino acids, an algorithm might need to develop internal, statistical approximations of how those nucleotides or amino acids interact with one another, and even how they function in a cell or an organism.

Although these biological embeddings—essentially a long list of numbers—are on their own meaningless to people, the numbers can be fed into other, simpler algorithms that extract latent “meaning” from them. The embeddings from a model designed to understand the structure of DNA, for instance, could be fed into another program that predicts DNA function, cell type, or the effect of genetic mutations. Instead of having a separate program for every DNA- or protein-related task, a foundation model can address many at once, and several such programs have been published over the past two years.
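As a concrete, entirely hypothetical illustration of that pipeline, the sketch below feeds precomputed cell "embeddings" into a much simpler downstream program, a nearest-centroid classifier, to read off cell type. The embeddings here are random stand-ins, not the output of any real foundation model, and the cell-type labels are invented for the example.

```python
import numpy as np

# Pretend a foundation model has already mapped each cell's RNA profile
# to a 4-dimensional embedding (real embeddings are far larger). The
# downstream "program" needs no retraining of the foundation model.
rng = np.random.default_rng(0)

# Fake embeddings for two known cell types, clustered around
# different centers of the embedding space.
neurons = rng.normal(loc=[1, 1, 0, 0], scale=0.1, size=(50, 4))
t_cells = rng.normal(loc=[0, 0, 1, 1], scale=0.1, size=(50, 4))

centroids = {
    "neuron": neurons.mean(axis=0),
    "T cell": t_cells.mean(axis=0),
}

def predict_cell_type(embedding):
    # Assign the label of the nearest class centroid.
    return min(centroids,
               key=lambda label: np.linalg.norm(embedding - centroids[label]))

# Classify the embedding of a new, unlabeled cell.
new_cell = rng.normal(loc=[1, 1, 0, 0], scale=0.1)
print(predict_cell_type(new_cell))  # → neuron
```

The same embeddings could feed any number of such downstream programs, one per task, which is what makes the foundation-model approach attractive.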

Take scGPT, for example. This program was designed to predict bits of RNA in a cell, but it has succeeded in predicting cell type, the effects of genetic alterations, and more. “It turns out by just predicting next gene tokens, scGPT is able to really understand the basic concept of what is a cell,” Bo Wang, one of the program’s creators and a biologist at the University of Toronto, told me. The latest version of AlphaFold, published last year, has exhibited far more general capabilities—it can predict the structure of biological molecules other than proteins as well as how they interact. Ideally, the technology will make experiments more efficient and targeted by systematically exploring hypotheses, allowing scientists to physically test only the most promising or curiosity-inducing. Wang, a co-author on the Cell perspective, hopes to build even more general foundation models for cellular biology.

The language of biology, if such a thing exists, is far more complicated than any human tongue. All the components and layers of a cell affect one another, and scientists hope that composing various foundation models creates something greater than the sum of their parts—like combining an engine, a hull, landing gear, and other parts into an airplane. “Eventually it’s going to all come together into one big model,” Stephen Quake, the head of science at the Chan Zuckerberg Initiative (CZI) and a lead author of the virtual-cell perspective, told me. (CZI—a philanthropic organization focused on scientific advancement that was co-founded by Priscilla Chan and her husband, Mark Zuckerberg—has been central in many of these recent efforts; in March, it held a workshop focused on AI in cellular biology that led to the publication of the perspective in Cell, and last month, the group announced a new set of resources dedicated to virtual-cell research, which includes several AI models focused on cell biology.)

In other words, the idea is that algorithms designed for DNA, RNA, gene expression, protein interactions, cellular organization, and so on might constitute a virtual cell if put together in the right way. “How we get there is a little unclear right now, but I’m confident it will,” Quake said. But not everyone shares his enthusiasm.

Across contexts, generative AI has a persistent problem: Researchers and enthusiasts see a lot of potential that may not always work out in practice. The LLM-inspired approach of predicting genes, amino acids, or other such biological elements in a sequence, as if human cells and bodies were sentences and libraries, is in its “very early days,” Quake said. Xing likened his and similar virtual-cell research to having a “GPT-1” moment, referencing an early proof-of-concept program that eventually led to ChatGPT.

Although using deep-learning algorithms to analyze huge amounts of data is promising, the quest for more and more universal solutions struck some researchers I spoke with as well-intentioned but unrealistic. The foundation-model approach in Xing’s AI-driven digital organisms, for instance, suggests “a little too much faith in the AI methods,” Steven Salzberg, a biomedical engineer at Johns Hopkins University, told me. He’s skeptical that such generalist programs will be more useful than bespoke AI models such as AlphaFold, which are tailored to concrete, well-defined biological problems such as protein folding. Predicting genes in a sequence didn’t strike Salzberg as an obviously useful biological goal. In other words, perhaps there is no unifying language of biology—in which case no embedding can capture every relevant bit of biological information.

[Read: We’re entering uncharted territory for math]

More important than AlphaFold’s approach, perhaps, was that it reliably and resoundingly beat other, state-of-the-art protein-folding algorithms. But for now, “the jury is still out on these cell-based models,” Bar-Joseph, the CMU biologist, said. Researchers have to prove how well their simulations work. “Experiment is the ultimate arbiter of truth,” Quake told me—if a foundation model predicts the shape of a protein, the degree of a gene’s expression, or the effects of a mutation, but actual experiments produce confounding results, the model needs reworking.

Even with working foundation models, the jump from individual programs to combining them into full-fledged cells is a big one. Scientists haven’t figured out all of the necessary models, let alone how to assemble them. “I haven’t seen a good application where all these different models come together,” Bar-Joseph said, though he is optimistic. And although there are a lot of data for researchers to begin with, they will need to collect far more moving forward. “The key challenge is still data,” Wang said. For example, many of today’s premier cellular data sets don’t capture change over time, which is a part of every biological process, and might not be applicable to specific scientific problems, such as predicting the effects of a new drug on a rare disease. Right now, the field isn’t entirely sure which data to collect next. “We have sequence data; we have image data,” Lundberg said. “But do we really know which data to generate to reach the virtual cell? I don’t really think we do.”

In the near term, the way forward might not be foundation models that “understand” DNA or cells in the abstract, but instead programs tailored to specific queries. Just as there isn’t one human language, there may not be a unified language of biology, either. “More than a universal system, the first step will be in developing a large number of AI systems that solve specific problems,” Andrea Califano, a computational biologist at Columbia and the president of the Chan Zuckerberg Biohub New York, and another co-author of the Cell perspective, told me. Even if such a language of biology exists, aiming for something so universal could also be so difficult as to waste resources when simpler, targeted programs would more immediately advance research and improve patients’ lives.

Scientists are trying anyway. Every level of ambition in the quest to bring the AI revolution to cell biology—whether modeling of entire organisms, single cells, or single processes within a cell—emerges from the same hope: to let virtual simulations, rather than physical experiments, lead the way. Experiments may always be the arbiters of truth, but computer programs will determine which experiments to carry out, and inform how to set them up. At some point, humans may no longer be making discoveries so much as verifying the work of algorithms—constructing biological laboratories to confirm the prophecies of silicon.