
Why ‘Died Suddenly’ Will Not Die

The Atlantic

www.theatlantic.com/technology/archive/2023/01/died-suddenly-documentary-covid-vaccine-conspiracy-theory/672819

Lisa Marie Presley died unexpectedly earlier this month, and within hours, lacking any evidence, Twitter users were suggesting that her death had been caused by the COVID-19 vaccine.  

The Twitter account @DiedSuddenly_, which has about 250,000 followers, also started tweeting about it immediately, using the hashtag #DiedSuddenly. Over the past several months, news stories about any kind of sudden death or grave injury—including the death of the sports journalist Grant Wahl and the sudden collapse of the Buffalo Bills safety Damar Hamlin—have been met with a similar reaction from anti-vaccine activists. Though most of the incidents had obvious explanations and almost certainly no connection to the vaccine, which has an extremely remote risk of causing heart inflammation—much smaller than the risk from COVID-19 itself—the idea that the shots are causing mass death has been boosted by right-wing media figures and a handful of well-known professional athletes.

They are supported by a recent video, Died Suddenly, that bills itself as “the documentary film of a generation.” The hour-long movie has spread unchecked on Rumble, a moderation-averse video-streaming platform, and Twitter, which abandoned its COVID-misinformation policy two days after the film premiered in November. It puts forth the familiar conspiracy theory that the vaccines were engineered as a form of population control, illustrated by stomach-turning footage of funeral directors and embalmers removing “white fibrous clots” that “look like calamari” from the corpses of people who have purportedly been vaccinated against COVID-19. (There are also some clips of Lee Harvey Oswald and the moon landing, for unclear reasons.)

Died Suddenly has been viewed nearly 20 million times and cheered on by far-right personalities such as Marjorie Taylor Greene and Candace Owens. It was released by the Stew Peters Network, whose other videos on Rumble have titles like “Obama Formed Shadow Government BEFORE Plandemic” and “AIRPORTS SHUT DOWN FOR EVERYONE BUT JEWS!” And its creators are already asking for donations to fund a sequel, Died Suddenly 2, which promises to explore “deeper rabbit holes.” (Nicholas Stumphauzer, one of the film’s directors, did not respond to questions, other than to say that the production team was motivated by a desire to “stop the globalist death cult.”)

[Read: Why is Marjorie Taylor Greene like this?]

As a meme, “died suddenly” could last a long time—possibly indefinitely. People will always be dying suddenly, so it will always be possible to redeploy it and capture further attention. What’s more, there is a thriving alt-tech ecosystem that can circulate the meme; a whole cohort of right-wing, anti-vaccine influencers and celebrities who can amplify it; and, crucially, a basically unmoderated mainstream social-media platform that can put it in front of hundreds of millions of users—some of whom will make fun of it, but others of whom will start to see something unsettling and credible in its repetitions.

What is most startling about the Died Suddenly documentary is not its argument, but the way that people are watching it. “#DiedSuddenly is the first movie to premiere on Twitter since your friendly takeover,” the official Died Suddenly account, @DiedSuddenly_, tweeted at Elon Musk. The account has a blue checkmark next to it—a symbol that used to indicate some kind of trustworthiness but now indicates a willingness to pay a monthly fee. When @DiedSuddenly_ first uploaded the movie in full on Twitter, it was labeled as misleading, in accordance with the COVID-19-misinformation policies that were then in place on the site. But this label was soon removed, on November 23, the same day that Twitter stopped enforcing rules about COVID-19 misinformation—including posts stating that the vaccines intentionally cause mass death.

Twitter, like many platforms, has spent the past decade refining its content-moderation policies. Now it is randomly throwing them out. Jing Zeng, a researcher at the University of Zurich, began her work on Twitter and conspiracy theories in 2018, and she noted a major transformation in response to the pandemic and the rise of QAnon. “Especially since the start of COVID, Twitter had been active in deplatforming conspiracy-theory-related accounts,” she told me. A lot of conspiracy theorists moved to fringe sites where they had trouble rebuilding the huge audiences they’d had on Twitter. But now their time in the desert may be over. “Twitter under Elon Musk has been giving signals to the communities of conspiracy theorists that Twitter’s door might be open to them again,” Zeng said.

The anti-vaccine movement is always poised to take advantage of such opportunities. Absent any moderation on Twitter, anti-vaxxers are once again free to experiment wildly with their messaging, according to Tamar Ginossar, a health-communication professor at the University of New Mexico who published a paper earlier in the pandemic about how vaccine-related content traveled on Twitter and YouTube. “Enough people are sharing this and enough content is being made that it’s taking off,” she told me.

In just a few months, the #DiedSuddenly meme has become a presence on most major social platforms, including Instagram and Facebook. At the end of 2022, researchers and reporters pointed to large Facebook groups dedicated to “Died Suddenly News.” Last week, I was able to join a community that was created in October and had more than 34,000 members. They referred to themselves as “pure bloods” and to vaccines as “cookies” or “cupcakes,” and alternated between mourning “sudden deaths” and gloating about them. And they had been careful to evade detection by Facebook’s automated content-moderation systems: Group administrators asked them to write about “de@ths and injury from the c0v1d sh0ts” and “disguise ALL words that have any medical meaning.” (Facebook removed the group after I inquired about it.)

But “died suddenly” thrives on Twitter. Tweets referencing news stories about unexpected deaths can be flooded with replies trumpeting the conspiracy theory, which go unmoderated. It’s a radical change from the earlier years of the pandemic, during which Twitter implemented new policies against health misinformation and updated them regularly, gradually finessing the wording and clarifying how the company assessed misleading information. These policies and the tactics used to enforce them tightened as the pandemic went on. According to a transparency report the company published in July 2022, Twitter suspended significantly more accounts and removed far more content during the vaccine rollout than during the earliest months of the pandemic, when various groups first expressed concern about dangerous misinformation spreading online.

This isn’t to say that Twitter’s policies were perfect. Journalists, politicians, and medical experts all had issues with how the site moderated content in the pandemic’s first two years. But from 2020 on, parties who were interested in the challenges of moderating health information were able to have a fairly nuanced debate about how well Twitter was doing with this super-convoluted task, and how it might improve. In 2020, a sea-change year for content moderation across the social web, major platforms were pushed by activists, politicians, and regular users to do more than they had ever done before. That year saw the proliferation of election disinformation and Donald Trump’s leadership of a violent, anti-democracy meme army, as well as nationwide protests in support of social justice whose reach extended to the practices of internet companies. And there was a backlash in response: Aggrieved right-wing influencers bemoaned the rise of censorship and the end of free speech; commentators with bad opinions about vaccines or other public-health measures got booted off Twitter and wound up on Substack, where they talked about getting booted off Twitter.

Now we’re in a reactionary moment in the history of content moderation. The alt-tech ecosystem expanded with the launch of Trump’s Truth Social and the return of Parler; the Died Suddenly filmmakers were recently interviewed for a program exclusive to Frank, the supposed free speech platform created by the MyPillow founder and conspiracy-theory promoter Mike Lindell. Some of the alt-tech platforms, including Rumble, saw significant growth by openly marketing themselves as anti-moderation. As I wrote at the end of last year, Rumble grew from 1 million monthly average users in 2020 to 36 million in the third quarter of 2021. The platform used to market itself as a “clean” alternative to YouTube, but its CEO now talks about its aversion to “cancel culture” and its goal of “restoring” the internet “to its roots” by eliminating content guidelines.

And Twitter is backsliding, led by a CEO who has delighted in sharing company documents with critics who held the old COVID-19 policies in disdain. In the “Died Suddenly” Facebook group I joined, commenters praised Musk’s version of the site. “Sign up for Twitter,” one wrote. Those questioning the vaccines used to be “censored earlier by the old Twitter nazis,” but now there is “FREE SPEECH.” “If you want TRUE information … get off Facebook and get on Twitter,” another posted before the group was shut down.

Earlier in the pandemic, researchers like Zeng were concerned about “dark platforms” such as 8kun or Gab, and how their wacky, dangerous ideas about COVID-19 could leech onto mainstream platforms. But now? The difference between alt and mainstream is getting slimmer.

To Defend Civilization, Defeat Russia

The Atlantic

www.theatlantic.com/newsletters/archive/2023/01/ukraine-russia-weapons-nato-germany/672817

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Some NATO nations are wavering about sending tanks and other advanced weapons to Ukraine. I understand fears of escalation, but if Russia wins in Ukraine, the world will lose.

But first, here are three new stories from The Atlantic.

A guide to the possible forthcoming indictments of Donald Trump
What really took America to war in Iraq
The brutal reality of life in America’s most notorious jail

No Other Choice

I don’t often find myself agreeing with Senator Lindsey Graham, the South Carolina conservative who long ago rebranded himself as Donald Trump’s faithful valet and No. 1 fan. Last week, however, Graham lashed out in frustration at the dithering in Europe and America over sending more weapons to Ukraine. “I am tired of the shit show surrounding who is going to send tanks and when they’re gonna send them,” he said during a press conference in Kyiv, flanked by Democratic Senators Richard Blumenthal of Connecticut and Sheldon Whitehouse of Rhode Island. “World order is at stake. [Vladimir] Putin is trying to rewrite the map of Europe by force of arms.”

Graham is right. Germany, for example, has been reluctant to send Leopard tanks to Ukraine; the Germans, for their part, would likely prefer to see the United States send American tanks first. But everyone in the West should be sending anything the Ukrainians can learn to use, because a lot more than mere order is at stake, and order, by itself, is not enough. As Rousseau wrote, “Tranquility is found also in dungeons,” but that does not make dungeons desirable places to live. Global civilization itself is on the line: the world built after the defeat of the Axis, in which, for all of our faults as nations and peoples, we strive to live in peace and cooperation—and, at the least, to not butcher one another. If Russia’s campaign of terror and other likely war crimes erases Ukraine, it will be a defeat of the first order for every institution of international life, be it the United Nations or the international postal union.

I suspect that many people in Europe and the United States are having a hard time getting their arms around the magnitude of this threat. We are all afflicted by normalcy bias, our inherent resistance to accepting that large changes can upend our lives. I struggled with this in the early stages of the war; I thought Ukraine would probably lose quickly, and then when the Russians were repulsed by the heroic Ukrainian defenses, I hoped (in vain) that the fighting would fizzle out, that Putin would try to conserve what was left of his shattered military, and that the world’s institutions, damaged by yet another act of Russian barbarism, would somehow continue to limp along.

We’re long past such possibilities. Putin has made clear that he will soak the ground of East-Central Europe with blood—both of Ukrainians and of his own hapless mobiks, the recently mobilized draftees he’s sending into the military meat grinder—if that’s what it takes to subjugate Kyiv and end the Kremlin’s unexpected and ongoing humiliation. At this point, the fight in Ukraine is not about borders or flags but about what kind of world we’ve built over the past century, and whether that world can sustain itself in the face of limitless brutality. As the Finnish Prime Minister Sanna Marin said in Davos last week: “We don’t know when the war ends, but Ukraine has to win. I don’t see another choice.”

Neither do I, and it’s past time to send Ukraine even more and better weapons. (Or, as my colleague David Frum tweeted last June: “If there’s anything that Ukraine can use in any NATO warehouse from Vancouver to Vilnius, that’s a scandal. Empty every inventory.”) I say all this despite my concerns about escalation to a wider European and even global war. I still oppose direct U.S. and NATO intervention in this fight, and I have taken my share of criticism for that reticence. I do not fear that such measures will instantly provoke World War III. Rather, I reject proposals that I think could increase the odds of an accident or a miscalculation that could bring the superpowers into a nuclear standoff that none of them wants. (Putin, for all his bluster, has no interest in living out his last days eating dry rations in a dark fallout shelter, but that does not mean he is competent at assessing risks.)

Americans and their allies must face how far a Russian victory would extend beyond Ukraine. In a recent discussion with my old friend Andrew Michta (a scholar of European affairs who is now dean at the George C. Marshall European Center for Security Studies, in Germany), he referred to the conflict in Ukraine as a “system-transforming” war, as Russian aggression dissolves the last illusions of a stable European order that were perhaps too quickly embraced in the immediate post–Cold War euphoria. Andrew has always been less sanguine about the post–World War II international order than old-school institutionalists like me, but he has a point: The pessimists after 1991 were right about Russia and its inability to live in peace with its neighbors. If Ukraine loses, dictators elsewhere will draw the lesson that the West has lost its will to defend its friends—and itself.

If Russia finally captures Ukraine by mass murder, torture, and nuclear threats, then everything the world has gained since the defeat of the Axis in 1945 and the end of the Cold War in 1991 will be in mortal peril. Putin will prove to himself and to every dictator on the planet that nothing has changed since Hitler, that lawless nations can achieve their aims by using force at will, by killing and raping innocent people and then literally grinding their ashes into the dirt. This is no longer about Russia’s neo-imperial dreams or Ukraine’s borders: This is a fight for the future of the international system and the safety of us all.

Related:

The brutal alternate world in which the U.S. abandoned Ukraine
The bitter truth behind Russia’s looting of Ukrainian art

Today’s News

The first victims of Saturday night’s shooting at a Monterey Park, California, dance hall have been identified. Eleven people were killed and 10 others injured, and the gunman was found dead of a self-inflicted gunshot.
President Joe Biden plans to name Jeffrey Zients, his administration’s former COVID-19-response coordinator, as the next White House chief of staff.
The FDA is considering a change to how COVID-19 vaccines are updated. The simpler process would more closely resemble annual flu-shot updates, according to documents the agency posted online.

Dispatches

Up for Debate: Readers weigh in on the pros and cons of corporate diversity training.

Explore all of our newsletters here.

Evening Read

A Grim New Low for Internet Sleuthing

By Megan Garber

On November 13, 2022, four students from the University of Idaho—Ethan Chapin, Kaylee Goncalves, Xana Kernodle, and Madison Mogen—were found dead in the house that the latter three rented near campus. Each had been stabbed, seemingly in bed. Two other students lived in the house, and were apparently in their rooms that night; they were unharmed.

From the public’s standpoint, the case had few leads at first: an unknown assailant, an unknown motive. Law-enforcement officials in the college town of Moscow, Idaho, initially offered the public little information about the evidence they were gathering in their investigation. Into that void came a frenzy of public speculation—and, soon enough, public accusation. The familiar alchemy set in: The real crime, as the weeks dragged on, became a “true crime”; the murders, as people discussed them and analyzed them and competed to solve them, became a grim form of interactive entertainment.

Read the full article.

More From The Atlantic

The culture wars look different on Wikipedia.
Aubrey Plaza gave SNL permission to get weird.

Culture Break

Harrison Ford in "The Fugitive" (Pictorial Press Ltd / Alamy)

Read. “Woman in Labor,” a poem by Daria Serenko.

“Yesterday a woman began giving birth directly on the Red Square with an assault rifle pressed to her temple.”

Watch. Return to a blockbuster that was among the last of its kind. The Fugitive, available to stream on multiple platforms, is the perfect popcorn movie.

Play our daily crossword.

P.S.

I had to do some traveling this weekend, and although I usually connect to airline Wi-Fi and annoy people with random thoughts on Twitter, flying is also a way to catch up on old movies. For some reason, this time out I put on the 1974 classic The Longest Yard, with Burt Reynolds playing a dissolute former football star who ends up in a Florida jail. He is cornered by a sadistic warden (played with genial smarm by the great Eddie Albert) who blackmails him into coaching the prison football team. Reynolds instead suggests tuning up the team of guards by having them play a pickup team composed of inmates, which goes about the way you’d expect. I seemed to recall liking it as a kid, and I wanted to see it again as an adult. (Do not confuse this one with a far-inferior 2005 remake starring Adam Sandler.)

I don’t like sports, and I’m not sure why I thought I would enjoy the movie, but I did, and the reason is that The Longest Yard isn’t really a football movie. It’s a prison movie built around the game between the inmates and guards, a kind of lighthearted Shawshank Redemption about bad men who, for one moment, get a chance to be the good guys. There’s even a murder of an innocent man, as there was in Shawshank, and a similar, if far less dramatic, moment of getting even with the creepy warden. And yes, it includes a message about sportsmanship, as the inmates earn the grudging respect of the guards at the end. Finally, long before it was a joke on The Simpsons, the movie actually gets a laugh by hitting a guy in the groin with a football. Twice.

— Tom

Isabel Fattal contributed to this newsletter.

Are Standardized Tests Racist, or Are They Anti-racist?

The Atlantic

www.theatlantic.com/science/archive/2023/01/should-college-admissions-use-standardized-test-scores/672816

They’re making their lists, checking them twice, trying to decide who’s in and who’s not. Once again, it’s admissions season, and tensions are running high as university leaders wrestle with challenging decisions that will affect the future of their schools. Chief among those tensions, in the past few years, has been the question of whether standardized tests should be central to the process.

In 2021, the University of California system ditched the use of all standardized testing for undergraduate admissions. California State University followed suit last spring, and in November, the American Bar Association voted to abandon the LSAT requirement for admission to any of the nation’s law schools beginning in 2025. Many other schools have lately reached the same conclusion. Science magazine reports that among a sample of 50 U.S. universities, only 3 percent of Ph.D. science programs currently require applicants to submit GRE scores, compared with 84 percent four years ago. And colleges that dropped their testing requirements or made them optional in response to the pandemic are now feeling torn about whether to bring that testing back.

Proponents of these changes have long argued that standardized tests are biased against low-income students and students of color, and should not be used. The system serves to perpetuate a status quo, they say, where children whose parents are in the top 1 percent of income distribution are 77 times more likely to attend an Ivy League university than children whose parents are in the bottom quintile. But those who still endorse the tests make the mirror-image claim: Schools have been able to identify talented low-income students and students of color and give them transformative educational experiences, they argue, precisely because those students are tested.

These two perspectives—that standardized tests are a driver of inequality, and that they are a great tool to ameliorate it—are often pitted against each other in contemporary discourse. But in my view, they are not oppositional positions. Both of these things can be true at the same time: Tests can be biased against marginalized students and they can be used to help those students succeed. We often forget an important lesson about standardized tests: They, or at least their outputs, take the form of data; and data can be interpreted—and acted upon—in multiple ways. That might sound like an obvious statement, but it’s crucial to resolving this debate.

I teach a Ph.D. seminar on quantitative research methods that dives into the intricacies of data generation, interpretation, and application. One of the readings I assign—Andrea Jones-Rooy’s article “I’m a Data Scientist Who Is Skeptical About Data”—contains a passage that is relevant to our thinking about standardized tests and their use in admissions:

Data can’t say anything about an issue any more than a hammer can build a house or almond meal can make a macaron. Data is a necessary ingredient in discovery, but you need a human to select it, shape it, and then turn it into an insight.

When reviewing applications, admissions officials have to turn test scores into insights about each applicant’s potential for success at the university. But their ability to generate those insights depends on what they know about the broader data-generating process that led students to get those scores, and how the officials interpret what they know about that process. In other words, what they do with test scores—and whether they end up perpetuating or reducing inequality—depends on how they think about bias in a larger system.

First, who takes these tests is not random. Obtaining a score can be so costly—in terms of both time and money—that it’s out of reach for many students. This source of bias can be addressed, at least in part, by public policy. For example, research has found that when states implement universal testing policies in high schools, and make testing part of the regular curriculum rather than an add-on that students and parents must provide for themselves, more disadvantaged students enter college and the income gap narrows. Even if we solve that problem, though, another—admittedly harder—issue would still need to be addressed.

The second issue relates to what the tests are actually measuring. Researchers have argued about this question for decades, and continue to debate it in academic journals. To understand the tension, recall what I said earlier: Universities are trying to figure out applicants’ potential for success. Students’ ability to realize their potential depends both on what they know before they arrive on campus and on being in a supportive academic environment. The tests are supposed to measure prior knowledge, but the nature of how learning works in American society means they end up measuring some other things, too.

In the United States, we have a primary and secondary education system that is unequal because of historic and contemporary laws and policies. American schools continue to be highly segregated by race, ethnicity, and social class, and that segregation affects what students have the opportunity to learn. Well-resourced schools can afford to provide more enriching educational experiences to their students than underfunded schools can. When students take standardized tests, they answer questions based on what they’ve learned, but what they’ve learned depends on the kind of schools they were lucky (or unlucky) enough to attend.

This creates a challenge for test-makers and the universities that rely on their data. They are attempting to assess student aptitude, but the unequal nature of the learning environments in which students have been raised means that tests are also capturing the underlying disparities; that is one of the reasons test scores tend to reflect larger patterns of inequality. When admissions officers see a student with low scores, they don’t know whether that person lacked potential or has instead been deprived of educational opportunity.

So how should colleges and universities use these data, given what they know about the factors that feed into them? The answer depends on how colleges and universities view their mission and broader purpose in society.

From the start, standardized tests were meant to filter students out. A congressional report on the history of testing in American schools describes how, in the late 1800s, elite colleges and universities had become disgruntled with the quality of high-school graduates, and sought a better means of screening them. Harvard’s president first proposed a system of common entrance exams in 1890; the College Entrance Examination Board was formed 10 years later. That orientation—toward exclusion—led schools down the path of using tests to find and admit only those students who seemed likely to embody and preserve an institution’s prestigious legacy. This brought them to some pretty unsavory policies. For example, a few years ago, a spokesperson for the University of Texas at Austin admitted that the school’s adoption of standardized testing in the 1950s had come out of its concerns over the effects of Brown v. Board of Education. UT looked at the distribution of test scores, found cutoff points that would eliminate the majority of Black applicants, and then used those cutoffs to guide admissions.

[Read: The college-admissions process is completely broken]

These days universities often claim to have goals of inclusion. They talk about the value of educating not just children of the elite, but a diverse cross-section of the population. Instead of searching for and admitting students who have already had tremendous advantages and specifically excluding nearly everyone else, these schools could try to recruit and educate the kinds of students who have not had remarkable educational opportunities in the past.

A careful use of testing data could support this goal. If students’ scores indicate a need for more support in particular areas, universities might invest more educational resources into those areas. They could hire more instructors or support staff to work with low-scoring students. And if schools notice alarming patterns in the data—consistent areas where students have been insufficiently prepared—they could respond not with disgruntlement, but with leadership. They could advocate for the state to provide K–12 schools with better resources.

Such investments would be in the nation’s interest, considering that one of the functions of our education system is to prepare young people for current and future challenges. These include improving equity and innovation in science and engineering, addressing climate change and climate justice, and creating technological systems that benefit a diverse public. All of these areas benefit from diverse groups of people working together—but diverse groups cannot come together if some members never learn the skills necessary for participation.

[Read: The SAT isn’t what’s unfair]

But universities—at least the elite ones—have not traditionally pursued inclusion, through the use of standardized testing or otherwise. At the moment, research on university behavior suggests that they operate as if they were largely competing for prestige. If that’s their mission—as opposed to advancing inclusive education—then it makes sense to use test scores for exclusion. Enrolling students who score the highest helps schools optimize their marketplace metrics—that is, their ranking.

Which is to say, the tests themselves are not the problem. Most components of admissions portfolios suffer from the same biases. In terms of favoring the rich, admissions essays are even worse than standardized tests; the same goes for participation in extracurricular activities and legacy admissions. Yet all of these provide universities with usable information about the kinds of students who may arrive on campus.

None of those data speak for themselves. Historically, the people who interpret and act upon this information have conferred advantages to wealthy students. But they can make different decisions today. Whether universities continue on their exclusive trajectories or become more inclusive institutions does not depend on how their students fill in bubble sheets. Instead, schools must find the answers for themselves: What kind of business are they in, and whom do they exist to serve?