Itemoids

This unofficial border crossing between Canada and the US is seeing a record number of migrants

CNN

www.cnn.com/2023/03/13/us/roxham-canada-border-migrant-increase/index.html

On a snowy March afternoon, a small convoy of taxis and hired cars rolled north along a New York country road that dead-ends at the Canadian border. Among those onboard: a Nigerian family of five, a Russian man traveling alone and a tearful South American woman named Giovanna.

The Alaska Oil Project Will Be Obsolete Before It’s Finished

The Atlantic

www.theatlantic.com/science/archive/2023/03/biden-willow-alaska-arctic-oil-drilling/673382

If the world turned off the tap of fossil fuels tomorrow, all hell would break loose. Something like 30 percent of global electricity and 9 percent of transport would still be running; billions of people would be stuck at home in the dark.

That’s why, even though world leaders now talk constantly about transitioning away from fossil fuels, they also fret about ensuring a supply of oil and gas for next week, next month, and next year. But right now they are also green-lighting new fossil-fuel projects that won’t start producing energy for years and won’t wind down operations for decades.

It is in this context that the Biden administration has just approved a highly contested proposal to drill for oil on federal land in northern Alaska. The project, called Willow, would damage the complex local tundra ecosystem and, according to an older government estimate, release the same amount of greenhouse gases annually as half a million homes. The administration hopes to soften the blow with a set of restrictions on further drilling on- and offshore in the area, as if to say that Willow will be the last major extraction project in the Alaskan Arctic—one last big score, to propel us across the energy gap.

But the oil from the three drill sites approved today won’t begin to flow for six years. It won’t address any of our next-week, next-month, or next-year supply concerns. In fact, Willow probably won’t do much of anything. By the time it’s finished, the gap may already be largely bridged. The world might not have enough renewable energy to power everything by 2029, but we’ll have more than enough to keep the lights on without additional drilling.

The Willow site is in a chunk of federally owned land called the National Petroleum Reserve in Alaska, to the west of the Arctic National Wildlife Refuge on the state’s North Slope. ConocoPhillips, which has a long-term lease on the land, originally sought to build five drill sites. Even after a lawsuit brought by environmental groups pushed the administration to withhold approval from two of them, the federal government’s environmental-impact statement for the project calculates that Willow would produce some 576 million barrels over approximately 30 years.
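The production figures above lend themselves to a quick sanity check. A back-of-the-envelope calculation (assuming, purely for illustration, that the 576 million barrels flow at a steady rate over 30 years, which real wells don't do):

```python
# Figures from the federal environmental-impact statement cited above.
total_barrels = 576_000_000  # estimated total Willow output
years = 30                   # approximate project lifetime

# Average daily production under a (simplifying) steady-flow assumption.
barrels_per_day = total_barrels / (years * 365)
print(round(barrels_per_day))  # about 52,600 barrels per day
```

For context, that average is less than 1 percent of the nearly 6 million barrels a day the United States exported in 2022.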

Activists say those barrels will come with increases in both greenhouse-gas emissions and local environmental destruction. The law firm Earthjustice, which has sued the government over elements of the plan, calls Willow a “carbon bomb.” The Willow Project has also been the target of a vigorous TikTok activism campaign. A letter from community leaders closest to the Willow site says that the proposed project threatens “our culture, traditions, and our ability to keep going out on the land and the waters.” Climate change is already warming the Arctic nearly four times faster than the rest of the planet, and threatening to melt the permafrost of the North Slope; in fact, ConocoPhillips plans to deploy cooling devices called “thermosyphons” to keep the permafrost frozen under its drill pads. (Ryan Lance, the company’s chairman, said in a statement, “Willow fits within the Biden Administration’s priorities on environmental and social justice, facilitating the energy transition and enhancing our energy security.”)

[Read: How long until Alaska’s next oil disaster?]

But in a state that has long depended on oil and gas revenues, Willow has also received vigorous support. Leaders for Voice of the Arctic Inupiat, a coalition of North Slope Inupiat leaders, said in a statement that the project means “generational economic stability” for their region. ConocoPhillips estimates the project would produce “2,500 construction jobs and 300 permanent jobs,” and generate $8 billion to $17 billion in government revenue. Alaska’s two Republican senators and one Democratic congresswoman co-wrote an op-ed in support of the Willow project. “We all recognize the need for cleaner energy, but there is a major gap between our capability to generate it and our daily needs,” the bipartisan trio wrote.

It is true that there aren’t yet enough solar panels, wind turbines, or electric vehicles to quit fossil fuels cold turkey, and that the Russian invasion of Ukraine sent shock waves through the global energy economy that are still affecting supplies and prices. But assuming that this “state of emergency” will persist is a mistake, says Jennifer Layke, the global energy director of the World Resources Institute. Besides, the United States is now a net exporter of oil. In 2022, we exported nearly 6 million barrels a day, a new record. The decision to proceed with Willow, Layke told me, is an economic one; “it’s not about the renewables transition.” If it were, she said, we would probably not be drilling in the Arctic right now.

Given how quickly renewables are ramping up, experts say the world could meet its energy needs without drilling any new wells. In May 2021, the International Energy Agency (IEA), an intergovernmental organization that tracks and analyzes the global energy system, produced a “roadmap” to achieve the goal of “net-zero emissions in 2050.” The report recommends an immediate end to new oil and gas fields, plus a ban on new coal mines and mine extensions—along with massive investments in renewable energy and energy efficiency and a tax on carbon. In this future, total energy supply drops 7 percent by the end of the decade, relative to 2020, as the mix of energy sources reshuffles, but increased energy efficiency makes up the difference.

[Read: There’s no scenario in which 2050 is “normal”]

The IEA pathway is a bit utopian, because it assumes that every nation tries its best to decarbonize all at once when the reality is likely to be far messier. Which brings us to another argument that Alaska’s political leaders have made in favor of approving Willow: “We need oil, and compared to the other countries we can source it from, we believe Willow is by far the most environmentally responsible choice,” they wrote in their op-ed. Indeed, when the Bureau of Land Management (BLM) ran a modeling exercise to estimate the emissions associated with not drilling at the Willow site, it concluded that in a world without Willow, only 11 percent of the energy the project would have produced would go unused altogether, and less than 10 percent would be replaced by natural gas or renewable sources. Most of the rest would be replaced by oil from abroad.

However, the BLM model is based on the way the energy market has looked in the past, not the way it is shaping up to look in a greener future. The report admits as much, saying, “Energy substitutes for Willow may look significantly different in a low carbon future.” Whether other oil-producing countries might also, over the course of the next several decades, eventually decide to limit or end their fossil-fuel production is not taken into account. Nor does the model include the effect of the United States keeping or losing the moral high ground it might need to help broker a substantive global cooperative agreement to enact such limits.

[Read: Fighting climate change was costly. Now it’s profitable.]

Even the BLM’s own model, which somewhat absurdly assumes that “regulations and consumption patterns will not change over the long term,” tells us that approving Willow will increase total global energy use and displace at least some energy that could have been generated cleanly—all to produce oil that experts say we simply do not need to bridge any “gap” between where we stand and the greener future ahead. Every day, the gap gets narrower. Moves like the passage of the Inflation Reduction Act are only compressing it further, as monetary incentives for building renewable energy infrastructure and buying electric cars work their magic on the collective behavior of Americans.

The IEA forecasts that the world will add as much renewable power in the next five years as it did in the past 20. If renewables keep growing at their current rate, it projects, renewable energy would account for 38 percent of global electricity by 2027—two years before Willow oil would finally start flowing. Add in some serious demand reduction through energy-efficiency improvements and electrification of transport, and our remaining fossil-fuel needs will easily be met by existing drill sites. Forget about not needing Willow at the end of its 30-year life span. It’ll be obsolete before the ribbon is cut.

The Age of Infinite Misinformation Has Arrived

The Atlantic

www.theatlantic.com/technology/archive/2023/03/ai-chatbots-large-language-model-misinformation/673376

New AI systems such as ChatGPT, the overhauled Microsoft Bing search engine, and the reportedly soon-to-arrive GPT-4 have utterly captured the public imagination. ChatGPT is the fastest-growing online application ever, and it’s no wonder. Type in some text, and instead of getting back web links, you get well-formed, conversational responses on whatever topic you select—an undeniably seductive vision.

But the public, and the tech giants, aren’t the only ones who have become enthralled with the Big Data–driven technology known as the large language model. Bad actors have taken note of the technology as well. At the extreme end, there’s Andrew Torba, the CEO of the far-right social network Gab, who said recently that his company is actively developing AI tools to “uphold a Christian worldview” and fight “the censorship tools of the Regime.” But even users who aren’t motivated by ideology will have their impact. Clarkesworld, a publisher of sci-fi short stories, temporarily stopped taking submissions last month, because it was being spammed by AI-generated stories—the result of influencers promoting ways to use the technology to “get rich quick,” the magazine’s editor told The Guardian.  

This is a moment of immense peril: Tech companies are rushing ahead to roll out buzzy new AI products, even after the problems with those products have been well documented for years. I am a cognitive scientist focused on applying what I’ve learned about the human mind to the study of artificial intelligence. Way back in 2001, I wrote a book called The Algebraic Mind in which I detailed how neural networks, a kind of vaguely brainlike technology undergirding some AI products, tended to overgeneralize, applying individual characteristics to larger groups. If I told an AI back then that my aunt Esther had won the lottery, it might have concluded that all aunts, or all Esthers, had also won the lottery.

Technology has advanced quite a bit since then, but the general problem persists. In fact, the mainstreaming of the technology, and the scale of the data it’s drawing on, have made it worse in many ways. Forget Aunt Esther: In November, Galactica, a large language model released by Meta—and quickly pulled offline—reportedly claimed that Elon Musk had died in a Tesla car crash in 2018. Once again, AI appears to have overgeneralized a concept that was true on an individual level (someone died in a Tesla car crash in 2018) and applied it erroneously to another individual who happens to share some personal attributes, such as gender, state of residence at the time, and a tie to the car manufacturer.

This kind of error, which has come to be known as a “hallucination,” is rampant. Whatever the reason that the AI made this particular error, it’s a clear demonstration of the capacity for these systems to write fluent prose that is clearly at odds with reality. You don’t have to imagine what happens when such flawed and problematic associations are drawn in real-world settings: NYU’s Meredith Broussard and UCLA’s Safiya Noble are among the researchers who have repeatedly shown how different types of AI replicate and reinforce racial biases in a range of real-world situations, including health care. Large language models like ChatGPT have been shown to exhibit similar biases in some cases.

Nevertheless, companies press on to develop and release new AI systems without much transparency, and in many cases without sufficient vetting. Researchers poking around at these newer models have discovered all kinds of disturbing things. Before Galactica was pulled, the journalist Tristan Greene discovered that it could be used to create detailed, scientific-style articles on topics such as the benefits of anti-Semitism and eating crushed glass, complete with references to fabricated studies. Others found that the program generated racist and inaccurate responses. (Yann LeCun, Meta’s chief AI scientist, has argued that Galactica wouldn’t make the online spread of misinformation easier than it already is; a Meta spokesperson told CNET in November, “Galactica is not a source of truth, it is a research experiment using [machine learning] systems to learn and summarize information.”)

More recently, the Wharton professor Ethan Mollick was able to get the new Bing to write five detailed and utterly untrue paragraphs on dinosaurs’ “advanced civilization,” filled with authoritative-sounding morsels including “For example, some researchers have claimed that the pyramids of Egypt, the Nazca lines of Peru, and the Easter Island statues of Chile were actually constructed by dinosaurs, or by their descendents or allies.” Just this weekend, Dileep George, an AI researcher at DeepMind, said he was able to get Bing to create a paragraph of bogus text stating that OpenAI and a nonexistent GPT-5 played a role in the Silicon Valley Bank collapse. Microsoft did not immediately answer questions about these responses when reached for comment; last month, a spokesperson for the company said, “Given this is an early preview, [the new Bing] can sometimes show unexpected or inaccurate answers … we are adjusting its responses to create coherent, relevant and positive answers.”

[Read: Conspiracy theories have a new best friend]

Some observers, like LeCun, say that these isolated examples are neither surprising nor concerning: Give a machine bad input and you will receive bad output. But the Elon Musk car crash example makes clear these systems can create hallucinations that appear nowhere in the training data. Moreover, the potential scale of this problem is cause for worry. We can only begin to imagine what state-sponsored troll farms with large budgets and customized large language models of their own might accomplish. Bad actors could easily use these tools, or tools like them, to generate harmful misinformation, at unprecedented and enormous scale. In 2020, Renée DiResta, the research manager of the Stanford Internet Observatory, warned that the “supply of misinformation will soon be infinite.” That moment has arrived.

Each day is bringing us a little bit closer to a kind of information-sphere disaster, in which bad actors weaponize large language models, distributing their ill-gotten gains through armies of ever more sophisticated bots. GPT-3 produces more plausible outputs than GPT-2, and GPT-4 will be more powerful than GPT-3. And none of the automated systems designed to discriminate human-generated text from machine-generated text has proved particularly effective.

[Read: ChatGPT is about to dump more work on everyone]

We already face a problem with echo chambers that polarize our minds. The mass-scale automated production of misinformation will assist in the weaponization of those echo chambers and likely drive us even further into extremes. The goal of the Russian “Firehose of Falsehood” model is to create an atmosphere of mistrust, allowing authoritarians to step in; it is along these lines that the political strategist Steve Bannon aimed, during the Trump administration, to “flood the zone with shit.” It’s urgent that we figure out how democracy can be preserved in a world in which misinformation can be created so rapidly, and at such scale.  

One suggestion, worth exploring but likely insufficient, is to “watermark” or otherwise track content that is produced by large language models. OpenAI might for example watermark anything generated by GPT-4, the next-generation version of the technology powering ChatGPT; the trouble is that bad actors could simply use alternative large language models to create whatever they want, without watermarks.
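The watermarking idea can be made concrete. One published family of schemes pseudorandomly splits the vocabulary into a “green” and a “red” half, seeded by the preceding token, and biases generation toward green words; a detector then checks whether a suspiciously large fraction of a text’s tokens fall in their green lists. The sketch below is a toy illustration of that detection statistic only—the tiny vocabulary, the hash-based split, and the function names are all assumptions for the example, not any vendor’s actual implementation:

```python
import hashlib

# Toy vocabulary; a real system would use the model's full token vocabulary.
VOCAB = ["the", "a", "dog", "cat", "ran", "sat", "on", "mat", "fast", "slow"]

def green_list(prev_token, vocab=VOCAB):
    """Pseudorandomly select the 'green' half of the vocabulary, seeded by
    the previous token, so generator and detector derive the same split."""
    seed = hashlib.sha256(prev_token.encode()).hexdigest()
    ranked = sorted(
        vocab,
        key=lambda w: hashlib.sha256((seed + w).encode()).hexdigest(),
    )
    return set(ranked[: len(vocab) // 2])

def green_fraction(tokens):
    """Fraction of tokens that fall in their green list. Unwatermarked text
    should hover near 0.5 by chance; watermarked text scores much higher."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    return hits / (len(tokens) - 1)
```

A detector would flag a long passage whose green fraction is statistically far above one half. The trouble the paragraph above identifies remains: the check only works for models that cooperate by embedding the watermark in the first place.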

A second approach is to penalize misinformation when it is produced at large scale. Currently, most people are free to lie most of the time without consequence, unless they are, for example, speaking under oath. America’s Founders simply didn’t envision a world in which someone could set up a troll farm and put out a billion mistruths in a single day, disseminated with an army of bots, across the internet. We may need new laws to address such scenarios.

A third approach would be to build a new form of AI that can detect misinformation, rather than simply generate it. Large language models are not inherently well suited to this; they lose track of the sources of information that they use, and lack ways of directly validating what they say. Even in a system like Bing’s, where information is sourced from the web, mistruths can emerge once the data are fed through the machine. Validating the output of large language models will require developing new approaches to AI that center reasoning and knowledge, ideas that were once popular but are currently out of fashion.  

It will be an uphill, ongoing move-and-countermove arms race from here; just as spammers change their tactics when anti-spammers change theirs, we can expect a constant battle between bad actors striving to use large language models to produce massive amounts of misinformation and governments and private corporations trying to fight back. If we don’t start fighting now, democracy may well be overwhelmed by misinformation and consequent polarization—and perhaps quite soon. The 2024 elections could be unlike anything we have seen before.