The Schools Without ChatGPT Plagiarism

The Atlantic

www.theatlantic.com/newsletters/archive/2024/10/the-schools-without-chatgpt-plagiarism/680407

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Among the most tangible and immediate effects of the generative-AI boom has been a total upending of English classes. ChatGPT's release, on November 30, 2022, gave students a tool that could write at least reasonably well on their behalf—and by all accounts, the plagiarism began the next day and hasn't stopped since.

But there are at least two American colleges that ChatGPT hasn't ruined, according to a new article for The Atlantic by Tyler Austin Harper: Haverford College (Harper's alma mater) and nearby Bryn Mawr. Both are small, private liberal-arts colleges governed by honor codes—students are trusted to take unproctored exams or even bring tests home. At Haverford, none of the dozens of students Harper spoke with "thought AI cheating was a substantial problem at the school," he wrote. "These interviews were so repetitive, they almost became boring."

Both Haverford and Bryn Mawr are relatively wealthy and small, meaning students have access to office hours, therapists, a writing center, and other resources when they struggle with writing—not the case for, say, students at many state universities or parents squeezing in online classes between work shifts. Even so, money can’t substitute for culture: A spike in cheating recently led Stanford to end a century of unproctored exams, for instance. “The decisive factor” for schools in the age of ChatGPT “seems to be whether a university’s honor code is deeply woven into the fabric of campus life,” Harper writes, “or is little more than a policy slapped on a website.”

Illustration by Jackie Carlise

ChatGPT Doesn’t Have to Ruin College

By Tyler Austin Harper

Two of them were sprawled out on a long concrete bench in front of the main Haverford College library, one scribbling in a battered spiral-ring notebook, the other making annotations in the white margins of a novel. Three more sat on the ground beneath them, crisscross-applesauce, chatting about classes. A little hip, a little nerdy, a little tattooed; unmistakably English majors. The scene had the trappings of a campus-movie set piece: blue skies, green greens, kids both working and not working, at once anxious and carefree.

I said I was sorry to interrupt them, and they were kind enough to pretend that I hadn’t. I explained that I’m a writer, interested in how artificial intelligence is affecting higher education, particularly the humanities. When I asked whether they felt that ChatGPT-assisted cheating was common on campus, they looked at me like I had three heads. “I’m an English major,” one told me. “I want to write.” Another added: “Chat doesn’t write well anyway. It sucks.” A third chimed in, “What’s the point of being an English major if you don’t want to write?” They all murmured in agreement.

Read the full article.

What to Read Next

AI cheating is getting worse: "At the start of the third year of AI college, the problem seems as intractable as ever," Ian Bogost wrote in August.

A chatbot is secretly doing my job: "Does it matter that I, a professional writer and editor, now secretly have a robot doing part of my job?" Ryan Bradley asks.

P.S.

With Halloween less than a week away, you may be noticing some startlingly girthy pumpkins. In fact, giant pumpkins have been getting more gargantuan for years—the largest ever, named Michael Jordan, set the world record for heaviest pumpkin in 2023, at 2,749 pounds. Nobody knows what the upper limit is, my colleague Yasmin Tayag reports in a delightful article this week.

— Matteo

The Man Who’s Sure That Harris Will Win

The Atlantic

www.theatlantic.com/ideas/archive/2024/10/allan-lichtman-election-win/680258

If you follow politics, you can hardly escape Allan Lichtman, the American University history professor known for correctly forecasting the victor of all but one presidential election since 1984. In a whimsical New York Times video published over the summer, the 77-year-old competes in a Senior Olympics qualifying race—and confidently declares that Kamala Harris will win the race (get it?) for the White House. You might also have recently seen Lichtman on cable news, heard him on the radio, or read an interview with him.

In an era of statistically complex, probabilistic election models, Lichtman is a throwback. He bases his predictions not on polls, but rather on the answers to a set of 13 true-or-false questions, which he calls “keys,” and which in 2016 signaled a Trump victory when the polls said otherwise. He has little patience for data crunchers who lack his academic credentials. “The issue with @NateSilver538 is he’s a compiler of polls, a clerk,” Lichtman posted on X in July, as part of a long-running spat with the prominent election modeler. “He has no fundamental basis in history and elections.”

Lichtman’s complaint isn’t just with polls and the nerds who love them. In his view, almost everything that the media and political establishment pay attention to—such as campaigns, candidate quality, debates, and ideological positions—is irrelevant to the outcome. An election is a referendum on the incumbent party’s track record. “The study of history,” he writes in his book Predicting the Next President, “shows that a pragmatic American electorate chooses a president according to the performance of the party holding the White House, as measured by the consequential events and episodes of a term.”

[Anne Applebaum: The danger of believing that you are powerless]

According to Lichtman, the standard account of how presidential campaigns work is a harmful fiction. “The media, the candidates, the pollsters, and the consultants,” Lichtman writes, “are complicit in the idea that elections are exercises in manipulating voters,” which stymies political reform and meaningful policy debate. That argument contains a touch of the conspiratorial, but there’s a big difference between Lichtman’s worldview and a conspiracy theory: His predictions actually come true. If Lichtman is wrong about how elections work, how can he be so good at foretelling their outcomes?

One possible answer is that, in fact, he isn’t.

Lichtman developed his method in 1981 in collaboration with Vladimir Keilis-Borok, a Russian mathematical geophysicist. Lichtman had a hunch, he told me, that "it was the performance and strength of the White House Party that turned elections." He and Keilis-Borok analyzed every election from 1860 to 1980; the hunch was borne out.

Each of the 13 keys can be defined as a true-or-false statement. If eight or more of them are true, the incumbent-party candidate will win; seven or fewer, and they will lose. Here they are, as spelled out in Predicting the Next President:

1. Incumbent-party mandate: After the midterm elections, the incumbent party holds more seats in the U.S. House of Representatives than it did after the previous midterm elections.

2. Nomination contest: There is no serious contest for the incumbent-party nomination.

3. Incumbency: The incumbent-party candidate is the sitting president.

4. Third party: There is no significant third-party or independent campaign.

5. Short-term economy: The economy is not in recession during the election campaign.

6. Long-term economy: Real annual per capita economic growth during the term equals or exceeds mean growth during the two previous terms.

7. Policy change: The incumbent administration effects major changes in national policy.

8. Social unrest: There is no sustained social unrest during the term.

9. Scandal: The incumbent administration is untainted by major scandal.

10. Foreign or military failure: The incumbent administration suffers no major failure in foreign or military affairs.

11. Foreign or military success: The incumbent administration achieves a major success in foreign or military affairs.

12. Incumbent charisma: The incumbent-party candidate is charismatic or a national hero.

13. Challenger charisma: The challenging-party candidate is not charismatic or a national hero.

Lichtman says that keys 2, 4, 5, 6, 7, 8, 9, and 13 are true this year: just enough to assure a Harris victory.
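Mechanically, the system amounts to a simple tally. Here is a minimal sketch in Python of the decision rule as stated above, applied to the 2024 assignments Lichtman cites; the key labels come from his list, but the function and structure are illustrative, not his:

    # A toy tally of Lichtman's 13 keys: each key is a true-or-false statement
    # about the incumbent party. Eight or more true keys predict an
    # incumbent-party win; seven or fewer predict a loss.
    KEYS = {
        1: "Incumbent-party mandate",
        2: "Nomination contest",
        3: "Incumbency",
        4: "Third party",
        5: "Short-term economy",
        6: "Long-term economy",
        7: "Policy change",
        8: "Social unrest",
        9: "Scandal",
        10: "Foreign or military failure",
        11: "Foreign or military success",
        12: "Incumbent charisma",
        13: "Challenger charisma",
    }

    def predict(true_keys: set) -> str:
        """Return the predicted winner given the set of key numbers judged true."""
        return "incumbent party" if len(true_keys) >= 8 else "challenging party"

    # Lichtman's 2024 call, per the article: keys 2, 4, 5, 6, 7, 8, 9, and 13 are true.
    print(predict({2, 4, 5, 6, 7, 8, 9, 13}))  # -> incumbent party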

Although some of the keys sound extremely subjective, Lichtman insists that they are not subjective at all—assessing them simply requires the kind of judgments that historians are trained to make. The charisma key, for example, doesn’t depend on your gut feeling about a candidate. “We are talking about the once-in-a-generation, across-the-board, inspirational, truly transformational candidates, like Franklin Roosevelt or Ronald Reagan,” he told me.

I can attest that applying the keys is challenging for those of us without a history Ph.D. The keys must be “turned” consistently from election to election without regard to polls, but in practice seem to be influenced by fluctuating public-opinion data. The Democratic nominee in 2008, Barack Obama, qualified as charismatic, but the 2012 nominee, who was also Barack Obama, did not, because of his diminished approval ratings. The “third-party challenger” key cuts against the incumbent if a third-party candidate is likely to get 5 percent of the vote—but this is only knowable through horse-race polling, which we’re supposed to ignore, or after the fact, in which case it’s not a prediction.

Lichtman insists that voters don’t change their minds in response to what the candidates say or do during the course of a campaign. This leads him to make some deeply counterintuitive claims. He has written that George H. W. Bush’s attacks on Michael Dukakis in 1988—which included the infamous Willie Horton ad—accomplished nothing, and actually hurt Bush’s subsequent ability to govern, because he already had enough keys to win and should have been focused on his policy agenda. He implies that JFK, who edged out Richard Nixon by less than two-tenths of a percentage point in 1960, would have won even if he had had the personality of, say, his nephew Robert, because he had eight keys in his favor in addition to charisma. And this past summer, Lichtman told anyone who would listen that Joe Biden should stay in the race, despite his difficulty completing a sentence, because replacing him on the ticket would mean the loss of the incumbency key. If Democrats persuaded Biden to drop out, he wrote in a July 3 op-ed, “they would almost surely doom their party to defeat and reelect Donald Trump.” (He changed his mind once it became clear that no one would challenge Harris for the nomination, thus handing her key 2.)

Arguments such as these are hard to accept, because they require believing that Lichtman’s “pragmatic electorate” places no stock in ideological positions or revelations about character and temperament. Lichtman is unperturbed by such objections, however. All arguments against the keys fail because they suggest that the keys are in some way wrong, which they plainly are not. Lichtman has written, for example, that the infamous “Comey letter” did not tip the 2016 election to Trump, as poll-focused analysts such as Nate Silver have “incorrectly claimed.” How does Lichtman know the claim is incorrect? Because the keys already predicted a Trump victory. The proof is in the fact that the system works. This raises the question of whether it actually does.

Going nine for 10 on presidential predictions is not as hard as it sounds. Only four of the past 10 elections were particularly close. Most campaign years, you can just look at the polls. Lichtman predicted a Biden victory in 2020, for example, but you probably did too.

To his credit, Lichtman has made many accurate calls, in some cases well before polls showed the eventual victor in the lead. Even in 2000, the election that he is generally considered to have gotten wrong, the system worked as advertised. As he explains in Predicting the Next President, the keys “predict only the national popular vote and not the vote within individual states.” (Lichtman has devoted considerable energy to proving that the election was stolen in Florida by the GOP, and that he has thus really gone 10 for 10.)

Lichtman’s most celebrated feat of foresight by far, the gutsy call that supposedly sets his keys apart from mere polls, was his 2016 prediction. Calling the race for Trump when the polls pointed the other way was reputationally risky. After Lichtman was vindicated, he was showered with praise and received a personal note of congratulations from Trump himself. “Authorities in the field recognized my nearly unique successful prediction of a Trump victory,” Lichtman told me in an email. He quoted the assessment of the political scientist Gerald M. Pomper: “In 2016, nine of eleven major studies predicted Clinton’s lead in the national popular vote. However, by neglecting the Electoral College and variations among the state votes, they generally failed to predict Trump’s victory. One scholar did continue his perfect record of election predictions, using simpler evaluations of the historical setting (Lichtman 2016).”

Oddly, no one seems to have noticed at the time what seems in hindsight like an obvious problem. By Lichtman’s own account, the keys predict the popular-vote winner, not the state-by-state results. But Trump lost the popular vote by two percentage points, eking out an Electoral College victory by fewer than 80,000 votes in three swing states.

Lichtman has subsequently addressed the apparent discrepancy. “In 2016, I made the first modification of the keys system since its inception in 1981,” he writes in the most recent edition of Predicting the Next President. In “my final forecast for 2016, I predicted the winner of the presidency, e.g., the Electoral College, rather than the popular vote winner.” He did this, he writes, because of the divergence of the Electoral College results from the popular vote: “In any close election, Democrats will win the popular vote but not necessarily the Electoral College.”

[Peter Wehner: This election is different]

But the gap that Lichtman describes did not become apparent until the results of the 2016 election were known. In 2008 and 2012, the Electoral College actually gave a slight advantage to Obama, and until 2016, the difference between the margin in the popular vote and in the Electoral College tipping state was typically small. Why would Lichtman have changed his methodology to account for a change that hadn’t happened yet?

Odder still is the fact that Lichtman waited to announce his new methodology until well after the election in which he says he deployed it. According to an investigation published this summer by the journalists Lars Emerson and Michael Lovito for their website, The Postrider, no record exists of Lichtman mentioning the modification before the fact. In their estimation, “he appears to have retroactively changed” the predictive model “as a means of preserving his dubious 10 for 10 streak.”

This is a sore subject for Lichtman. Whether he got 2016 totally right or merely sort of right might seem like a quibble; surely he was closer to the mark than most experts. But a forecaster who changes his methodology after the fact has no credibility. When I brought the matter up with Lichtman in a Zoom interview, he became angry. “Let me tell you: It steams me,” he said, his voice rising. “I dispute this, you know, When did you stop beating your wife? kind of question.”

Lichtman directed me to an interview he gave The Washington Post in September 2016. (When I tried to interject that I had read the article, he cut me off and threatened to end the interview.) There and elsewhere, Lichtman said, he clearly stated that Trump would win the election. Trump did win the election, ergo, the prediction was accurate. Nowhere did he say anything about the popular vote.

Later that evening, Lichtman sent me a follow-up email with the subject line “2016.” In it, he described Emerson and Lovito as “two unknown journalists with no qualifications in history or political science.” As for their claims, he pointed once again to the Washington Post interview, and also to an article in the October 2016 issue of the academic journal Social Education, in which he published his final prediction.

Here is what Lichtman wrote in the Social Education article: “As a national system, the Keys predict the popular vote, not the state-by-state tally of Electoral College votes. However, only once in the last 125 years has the Electoral College vote diverged from the popular vote.”

This seemed pretty cut-and-dried. I replied to Lichtman’s email asking him to explain. “Yes, I was not as clear as I could have been in that article,” he responded. “However, I could not have been clearer in my Washington Post prediction and subsequent Fox News and CBS interviews, all of which came after I wrote the article.” In those interviews, he said nothing about the popular vote or the Electoral College.

I got another email from Lichtman, with the subject line “Postriders,” later that night. “Here is more information on the two failed journalists who have tried to make a name for themselves on my back,” Lichtman wrote. Attached to the email was a Word document, a kind of opposition-research memo, laying out the case against Lovito and Emerson: “They post a blog—The Postrider—that has failed to gain any traction as documented below. They are not qualified to comment on the Keys, the polls, or any aspect of election prediction.” The document then went through some social-media numbers. Lichtman has 12,000 followers on Facebook; The Postrider has only 215, and the articles get no engagement. One hundred thousand followers for Lichtman on X; a few hundred for Emerson and Lovito.

[Gilad Edelman: The asterisk on Kamala Harris’s poll numbers]

I ran these criticisms by Emerson and Lovito, who were already familiar with Lichtman’s theory of the case. After they published their article, he emailed them, cc’ing his lawyer and American University’s general counsel, accusing them of defamation.

To the charge of being less famous than Lichtman, they pled guilty. “It’s true that a public intellectual who has been publishing books since the late 1970s and is interviewed every four years by major media outlets has a larger following than us, yes,” they wrote in an email. “But we fail to see what relevance that has to our work.” Regarding their qualifications, they pointed out that they each have a bachelor’s degree in political science from American University, where Lichtman teaches. (Emerson is a current student at American’s law school.) “As for this story on the Keys, we spent months reading and reviewing Professor Lichtman’s books, academic papers, and interviews regarding the Keys. If we are not qualified to comment at that point, he should reconsider how he publicly communicates about his work.”

In a December 2016 year-in-review article, the journalist Chris Cillizza looked back on the stories that had generated the most interest for his Washington Post politics blog, The Fix. “The answer this year? Allan Lichtman. Allan Lichtman. Allan Lichtman … Of the 10 most trafficked posts on The Fix in 2016, four involved Lichtman and his unorthodox predictions,” Cillizza wrote. “Those four posts totaled more than 10 million unique visitors alone and were four of the 37 most trafficked posts on the entire WaPo website this year.”

Americans love a prediction. We crave certainty. This makes the life of a successful predictor an attractive one, as Lichtman, who has achieved some measure of fame, can attest. But a professional forecaster is always one bad call away from irrelevance.

Give Lichtman credit for making concrete predictions to which he can be held accountable. As he always says, the probabilistic forecasts currently in vogue can’t be proved or disproved. The Nate Silvers of the world, who have unanimously labeled the upcoming election a toss-up, will be correct no matter who wins. Not so for Lichtman. A Trump restoration would not just end his winning streak. It would call into question his entire theory of politics. We are all waiting to find out how pragmatic the electorate really is.

A Nobel Prize for Artificial Intelligence

The Atlantic

www.theatlantic.com/newsletters/archive/2024/10/of-course-ai-just-got-a-nobel-prize/680197

This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

The list of Nobel laureates reads like a collection of humanity's greatest treasures: Albert Einstein, Marie Curie, Francis Crick, Toni Morrison. As of this morning, it also includes two scientists whose research, in the 1980s, laid the foundations for modern artificial intelligence.

Earlier today, the 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton for using “tools from physics to develop methods that are the foundation of today’s powerful machine learning.” Hinton is sometimes referred to as a “godfather of AI,” and today’s prize—one that is intended for those whose work has conferred “the greatest benefit to humankind”—would seem to mark the generative-AI revolution, and tech executives’ grand pronouncements about the prosperity that ChatGPT and its brethren are bringing, as a fait accompli.

Not so fast. Committee members announcing the prize, while gesturing to generative AI, did not mention ChatGPT. Instead, their focus was on the grounded ways in which Hopfield and Hinton’s research, which enabled the statistical analysis of enormous datasets, has transformed physics, chemistry, biology, and more. As I wrote in an article today, the award “should not be taken as a prediction of a science-fictional utopia or dystopia to come so much as a recognition of all the ways that AI has already changed the world.”
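For a concrete sense of what that foundational work looks like in miniature, here is a toy sketch in Python of a classic Hopfield network, the associative-memory model Hopfield introduced in 1982: a few patterns are stored in a weight matrix via a simple Hebbian rule, and a corrupted input is iteratively pulled back toward the nearest stored pattern. This is an illustrative reconstruction of the textbook idea, not the laureates' own code or models.

    # Toy Hopfield network: store +/-1 patterns, then recover one from a noisy copy.
    import numpy as np

    def train(patterns):
        """Build a symmetric weight matrix from the patterns (Hebbian rule)."""
        n = patterns.shape[1]
        weights = np.zeros((n, n))
        for p in patterns:
            weights += np.outer(p, p)
        np.fill_diagonal(weights, 0)  # no self-connections
        return weights / len(patterns)

    def recall(weights, state, steps=10):
        """Repeatedly update the state until it settles into a stored pattern."""
        for _ in range(steps):
            state = np.sign(weights @ state)
            state[state == 0] = 1
        return state

    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    W = train(patterns)
    noisy = np.array([1, -1, 1, -1, 1, -1, -1, -1])  # first pattern, one bit flipped
    print(recall(W, noisy))  # recovers the first stored pattern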

AI models will continue to change the world, but AI’s proven applications should not be confused with Big Tech’s prophecies. Machines that can “learn” from large datasets are the stuff of yesterday’s news, and superintelligent machines that replace humans remain the stuff of yesterday’s novels. Let’s not forget that.

Illustration by The Atlantic. Source: Science & Society Picture Library / Getty.

AI’s Penicillin and X-Ray Moment

By Matteo Wong

Today, John Hopfield and Geoffrey Hinton received the Nobel Prize in Physics for groundbreaking statistical methods that have advanced physics, chemistry, biology, and more. In the announcement, Ellen Moons, the chair of the Nobel Committee for Physics and a physicist at Karlstad University, celebrated the two laureates’ work, which used “fundamental concepts from statistical physics to design artificial neural networks” that can “find patterns in large data sets.” She mentioned applications of their research in astrophysics and medical diagnosis, as well as in daily technologies such as facial recognition and language translation. She even alluded to the changes and challenges that AI may bring in the future. But she did not mention ChatGPT, widespread automation and the resulting global economic upheaval or prosperity, or the possibility of eliminating all disease with AI, as tech executives are wont to do.

Read the full article.

What to Read Next

Today's Nobel Prize announcement focused largely on the use of AI for scientific research. In an article last year, I reported on how machine learning is making science faster and less human, in turn "challenging the very nature of discovery." Whether the future will be awash with superintelligent chatbots, however, is far from certain. In July, my colleague Charlie Warzel spoke with Sam Altman and Arianna Huffington about an AI-based health-care venture they recently launched, and came away with the impression that AI is becoming an "industry powered by blind faith."

P.S.

A couple weeks ago, I had the pleasure of speaking with Terence Tao, perhaps the world’s greatest living mathematician, about his perceptions of today’s generative AI and his vision for an entirely new, “industrial-scale” mathematics that AI could one day enable. I found our conversation fascinating, and hope you will as well.

— Matteo

What If Your ChatGPT Transcripts Leaked?

The Atlantic

www.theatlantic.com/newsletters/archive/2024/10/what-if-your-chatgpt-transcripts-leaked/680165

This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

Shortly after Facebook became popular, the company launched an ad network that would allow businesses to gather data on people and target them with marketing. So many issues with the web’s social-media era stemmed from this original sin. It was from this technology that Facebook, now Meta, would make its fortune and become dominant. And it was here that our perception of online privacy forever changed, as people became accustomed to various bits of their identity being mined and exploited by political campaigns, companies with something to sell, and so on.

AI may shift how we experience the web, but it is unlikely to turn back the clock on the so-called surveillance economy that defines it. In fact, as my colleague Lila Shroff explained in a recent article for The Atlantic, chatbots may only supercharge data collection.

“AI companies are quietly accumulating tremendous amounts of chat logs, and their data policies generally let them do what they want. That may mean—what else?—ads,” Lila writes. “So far, many AI start-ups, including OpenAI and Anthropic, have been reluctant to embrace advertising. But these companies are under great pressure to prove that the many billions in AI investment will pay off.”

Ad targeting may be inevitable—in fact, since Lila wrote this article, Google has begun rolling out related advertisements in some of its AI Overviews—but there are other issues to contend with here. Users have long conversations with chatbots, and frequently share sensitive information with them. AI companies have a responsibility to keep those data locked down. But, as Lila explains, there have already been glitches that have leaked information. So think twice about what you type into that text box: You never know who’s going to see it.

Illustration by The Atlantic. Source: Getty.

Shh, ChatGPT. That’s a Secret.

By Lila Shroff

This past spring, a man in Washington State worried that his marriage was on the verge of collapse. “I am depressed and going a little crazy, still love her and want to win her back,” he typed into ChatGPT. With the chatbot’s help, he wanted to write a letter protesting her decision to file for divorce and post it to their bedroom door. “Emphasize my deep guilt, shame, and remorse for not nurturing and being a better husband, father, and provider,” he wrote. In another message, he asked ChatGPT to write his wife a poem “so epic that it could make her change her mind but not cheesy or over the top.”

The man’s chat history was included in the WildChat data set, a collection of 1 million ChatGPT conversations gathered consensually by researchers to document how people are interacting with the popular chatbot. Some conversations are filled with requests for marketing copy and homework help. Others might make you feel as if you’re gazing into the living rooms of unwitting strangers.

Read the full article.

What to Read Next

It's time to stop taking Sam Altman at his word: "Understand AI for what it is, not what it might become," David Karpf writes.

We're entering uncharted territory for math: "Terence Tao, the world's greatest living mathematician, has a vision for AI," Matteo Wong writes.

P.S.

Meta and other companies are still trying to make smart glasses happen—and generative AI may be the secret ingredient that makes the technology click, my colleague Caroline Mimbs Nyce wrote in a recent article. What do you think: Would you wear them?

— Damon

The Next President Will Have to Deal With Bird Flu

The Atlantic

www.theatlantic.com/health/archive/2024/10/bird-flu-election-bird-flu/680103

Presidents always seem to have a crisis to deal with. George W. Bush had 9/11. Barack Obama had the Great Recession. Donald Trump had the coronavirus pandemic. Joe Biden had the war in the Middle East. For America’s next president, the crisis might be bird flu.

The United States is in the middle of an unprecedented bout of bird flu, also known as H5N1. Since 2022, the virus has killed millions of birds and spread to mammals, including cows. Dairy farms are struggling to contain outbreaks. A few humans have fallen sick, too—mostly farmworkers who spend a lot of time near chickens or cows—but Americans have largely remained unfazed by bird flu. No one in the U.S. has died or gotten seriously sick, and the risk to us is considered low, because humans rarely spread the virus to others.

On Friday, the fear of human-to-human spread grew ever so slightly: The CDC confirmed that four health-care workers in Missouri had fallen sick after caring for a patient who was infected with bird flu. A few weeks earlier, three other Missourians showed symptoms of bird flu after coming in contact with the same person. It’s still unclear if the workers were infected with H5N1 or some other respiratory bug; only one has been given an H5N1 test, which came back negative.

The CDC says the risk to humans has not changed, but the incident in Missouri underscores that the virus is only likely to generate more scares about human-to-human transmission. The virus is showing no signs of slowing down. In the absolute worst-case scenario—where Friday’s news is the first sign of the virus freely spreading from person to person—we are hurtling toward another pandemic. But the outbreak doesn’t have to get that dire to create headaches for the American public, and liabilities for the next president.

Either Trump or Kamala Harris will inherit an H5N1 response that has been nightmarishly complex, controversial, and at times slow. Three government agencies—the FDA, the CDC, and the U.S. Department of Agriculture—share responsibility for the bird-flu response, and it’s unclear which agency is truly in charge. The USDA, for example, primarily protects farmers, while the CDC is focused on public health, and the FDA monitors the safety of milk.

Adding to the complexity is that a lot of power also rests with the states, many of which have been loath to involve the feds in their response. States must typically invite federal investigators to assess potential bird-flu cases in person, and some have bristled at the prospect of letting federal officials onto farms. The agriculture commissioner for Texas, which has emerged as one of the bird-flu hot spots, recently said the federal government needs to “back off.” Meanwhile, wastewater samples—a common way to track the spread of a virus—indicate that bird flu is circulating through 10 of the state’s cities.

Government alone can only do so much. Though only 14 Americans are known to have come down with bird flu, we have a woefully incomplete picture of how widely it is spreading in humans. Since March, about 230 people nationwide have been tested for the virus. Although the federal government has attempted to encourage farmworkers to get tested—even offering them $75 to give blood and nasal swabs—it has struggled to make inroads. That could be because of a range of factors, such as farmworkers' distrust of the federal government owing to their immigration status, and a lack of awareness about the growing threat of bird flu. A USDA spokesperson told me the agency expects testing to increase as it "continues outreach to farmers."

You should be experiencing some serious déjà vu by now. In 2020, the U.S. was operating in the dark regarding COVID because tests were scarce, many states were not publicly reporting their COVID numbers, and the federal government and states were fighting over lockdowns. The systemic problems that dogged the pandemic response are still impediments today, and it's unclear whether either candidate has a plan to fix them. Trump and Harris both seem more intent on pretending that the worrying signs of bird flu simply don't exist. Neither has outlined a plan for containing the virus, or said much of anything publicly about it. (The Trump and Harris campaigns did not respond to requests for comment.) If America is going to avoid repeating its COVID mistakes, things need to change fast. Angela Rasmussen, a virologist at the University of Saskatchewan, highlighted the need for more widespread testing, and vaccinations for those at high risk of catching the virus. (The federal government has a stockpile of bird-flu vaccines, but has not deployed them.)

H5N1 is already showing its potential to spoil both candidates’ promises to lower grocery prices. Poultry flocks have been hit hard by bird flu, and the price of eggs has spiked by 28 percent compared with a year ago. (Inflation also played a role in increased prices, but bird flu is mostly to blame.) The next president will have to spar with America’s dairy industry if they want to get useful data on how widely the virus is spreading. Dairy farmers have been reluctant to test workers or animals for fear of financial losses. But none of this will compare with the disruption that a new president will have to deal with should this virus spread more freely to humans. For Americans, that will likely mean a return to masks, another vaccine to get, and isolation. Some experts are warning that schools could be affected if the virus begins spreading to humans more readily.

Bird flu doesn’t seem like a winning message for either candidate. Talk of preparing for any type of infectious disease triggers the fears of uncertainty, isolation, and inconvenience that Americans are still trying to shake after the pandemic. It’s hard to imagine either Trump or Harris starting their presidency by instituting the prevention measures that so many people have grown to hate. Unfortunately, the next commander in chief may not have a choice.