What Going ‘Wild on Health’ Looks Like

Robert F. Kennedy Jr., the bear-fondling, gravel-voiced Camelot scion, is President-Elect Donald Trump’s pick to lead the Department of Health and Human Services, where presumably he will “go wild on health,” to quote Trump. His nomination has raised concerns among public-health experts because many of Kennedy’s views on health are, well, wild.

To be sure, scattered among Kennedy’s battier ideas are a few reasonable ones, such as reducing obesity and cracking down on direct-to-consumer drug commercials and conflicts of interest among researchers. But these are eclipsed by some troubling ones: that common cooking oils are poisonous, that fluoride doesn’t belong in tap water, and that childhood vaccines are questionable.

What if Kennedy did, in fact, go wild on health, get his way, and remake America in his own image? If his worst ideas come to pass, experts tell me, heart attacks might increase, dental infections might spike, and children might needlessly die of completely preventable diseases.

[Read: RFK Jr. collects his reward]

Even if he is confirmed as health secretary, Kennedy’s influence on some of these domains might be limited. Most public-health measures—including water fluoridation and vaccines—are a matter for states and localities, not the federal government. (This is why different states had such different COVID-19 responses.) But even so, a Secretary Kennedy would have a prominent perch from which to espouse his ideas, and his position would give him a veneer of credibility that he has not earned. Right-leaning states and judges might listen, and adapt local policies to suit his worldview. At the very least, parents who support Trump and Kennedy might take the administration’s views into account when making decisions for their families.

Let’s begin with seed oils, which keep popping up in Kennedy’s speeches and media clips. (He even mentioned them while suspending his presidential bid.) Kennedy has called seed oils, which include common cooking oils such as canola oil and sunflower oil, “one of the most unhealthy ingredients that we have in foods,” and says Americans are being “unknowingly poisoned” by them.

Kennedy believes that seed oils cause “body-wide inflammation” and disease. But this isn’t true, Christopher Gardner, a nutrition scientist at Stanford, told me. In fact, replacing foods high in saturated fat, such as butter, with those high in unsaturated fat, such as canola oil, has been proven again and again to lower cholesterol levels and reduce the risk of heart disease. To the extent that seed oils are bad, Gardner said, it’s because they often show up in highly processed junk and fast food.

And Kennedy’s solution to this supposed health crisis—to replace seed oils with beef tallow—is troubling. (Several of his seed-oil clips end with a promo of red Kennedy swag that reads MAKE FRYING OIL TALLOW AGAIN.) Whatever you do with seed oil, “don’t replace it with beef tallow,” Gardner said. “That’s friggin’ nuts.” Replacing all the oil you eat with beef fat can cause cholesterol to pile into plaques in your arteries, impeding the flow of blood. “That’s how you get a heart attack,” Gardner said.

Kennedy has also said he wants to remove fluoride from tap water, claiming that the compound is an “industrial waste associated with arthritis, bone fractures, bone cancer, IQ loss, neurodevelopmental disorders, and thyroid disease.”

There is some risk associated with excessive fluoride intake: Consuming fluoride above 1.5 milligrams a liter—about twice the 0.7 milligrams a liter recommended for fluoridated tap water in the U.S.—has been linked to lowered IQ in children. Fluoridated water can also cause light stains on teeth, which affect about 12 percent of people in the United States.

But researchers say these risks are generally worth it because the consequences of removing fluoride from the water are much worse. Fluoride helps strengthen tooth enamel, and it also fights off the acid that attacks our teeth any time we eat carbohydrates. If the teeth lose this battle, decay can set in—and if the decay goes untreated, it can cause excruciating pain and, in extreme cases, pus-filled abscesses. “There will certainly be an increase in dental decay if fluoride is removed from the drinking water,” Gary Slade, a dentistry professor at the University of North Carolina at Chapel Hill, told me. Slade found in a study that fluoride in drinking water reduces decay by 30 percent in baby teeth and 12 percent in permanent teeth.

Some cities and countries have removed fluoride from the water, and kids’ dental health suffered as a result. After Israel ceased water fluoridation in 2014, dental treatments in a clinic in Tel Aviv increased twofold across all ages. In Canada, after Calgary ceased water fluoridation in 2011, second graders there experienced more cavities than those in Edmonton, where water was still fluoridated. After Juneau, Alaska, ceased water fluoridation in 2007, children younger than 6 underwent more cavity-related dental procedures—at a cost of about $300 more a year per child. Some cities have even reintroduced fluoride into the water supply after noticing an uptick in tooth decay among children.

Kennedy is perhaps most infamous for his skepticism of vaccines, and this is also likely the issue where his views are most consequential and worrisome. Although Kennedy sometimes shies away from calling himself anti-vaccine, he is the founder of the anti-vaccine group Children’s Health Defense and once wrote a (now-retracted) magazine story on the (false) link between vaccines and autism. He’s called vaccines “a holocaust” and has claimed that “there’s no vaccine that is safe and effective.” A co-chair of the Trump-Vance transition team has said that Kennedy would be given access to federal health data in order to assess the safety of vaccines.

Though school vaccine requirements are determined by states, a prominent national-health figure casting doubt on vaccines’ safety can influence both state policy and individual parents’ decisions to vaccinate. If vaccination rates do drop, among the diseases that health experts worry will return is measles, the most contagious of the vaccine-preventable diseases.

A person infected with measles is most contagious right before they develop symptoms. They can infect others simply by sharing their air space; tiny droplets infected with measles can hang in the air for two hours “like a ghost,” Paul Offit, the director of the Vaccine Education Center at the Children’s Hospital of Philadelphia, told me.

Kids with measles are sick and miserable. They’re photophobic, acutely sensitive to light, and may struggle to breathe. Before the measles vaccine came along in 1963, 48,000 people were hospitalized with measles each year in America, many with pneumonia or inflammation of the brain. Another 500 died each year. When Samoa suffered a measles outbreak in 2019, 83 people died, out of a population of just 200,000.

Measles can also weaken the immune system, Matthew Ferrari, a biology professor at Penn State, told me. For two to three years after contracting measles, you’re likely to be hit harder by flu and other viruses. In rare cases, measles can cause a chronic form of brain inflammation that leads to a gradual loss of mental faculties and motor skills, and eventually, death.

[John Hendrickson: The first MAGA Democrat]

Measles is such a menace, in fact, that giving people “a choice” about whether to vaccinate their kids, as Kennedy often suggests, is not sufficient. People who have received two doses of the MMR vaccine are 97 percent protected against measles. But about 9 million people, including kids who are undergoing chemotherapy or who are on some kinds of immunosuppressants, can’t get vaccinated. These individuals rely on herd immunity from other vaccinated people, and when more than 5 percent of people choose not to be vaccinated, herd immunity suffers.
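For a rough sense of where that 5 percent figure comes from, here is a back-of-the-envelope sketch using the standard herd-immunity formula and the commonly cited range for measles transmissibility; neither the formula nor the numbers below appear in the reporting above.

\[
\text{HIT} = 1 - \frac{1}{R_0}, \qquad R_0 \approx 12\text{--}18 \;\Longrightarrow\; \text{HIT} \approx 92\text{--}94\%
\]

\[
V_{\min} = \frac{\text{HIT}}{E} \approx \frac{0.92\text{--}0.94}{0.97} \approx 95\text{--}97\%
\]

Here \(R_0\) is the number of people a single contagious case infects in a fully susceptible population, \(\text{HIT}\) is the herd-immunity threshold, and \(E\) is vaccine effectiveness. With a 97-percent-effective vaccine, coverage needs to stay in the neighborhood of 95 to 97 percent; once more than roughly 3 to 5 percent of people opt out, measles can circulate again.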

“Is it your right to catch and transmit a potentially fatal infection? No, it’s not,” Offit said. “You are part of this society, and you have to recognize that what you do affects other people.” Offit told me he’s already talked with pediatricians who say parents are hesitant to get their children vaccinated because of what they’ve heard Kennedy say.

Of course, there is a way to prevent Kennedy from having this much influence over public health: The Senate could reject his nomination. But that would require Republicans to stand up to Trump, which is a wild idea in itself.

Facebook Doesn’t Want Attention Right Now

After the 2016 elections, critics blamed Facebook for undermining American democracy. They believed that the app’s algorithmic News Feed pushed hyperpartisan content, outright fake news, and Russian-seeded disinformation to huge numbers of people. (The U.S. director of national intelligence agreed, and in January 2017 declassified a report that detailed Russia’s actions.) At first, the company’s executives dismissed these concerns—shortly after Donald Trump won the presidential election, Mark Zuckerberg said it was “pretty crazy” to think that fake news on Facebook had played a role—but they soon grew contrite. “Calling that crazy was dismissive and I regret it,” Zuckerberg would say 10 months later. Facebook had by then conceded that its own data did “not contradict” the intelligence report. Shortly thereafter, Adam Mosseri, the executive in charge of News Feed at the time, told this magazine that the company was launching a number of new initiatives “to stop the spread of misinformation, click-bait and other problematic content on Facebook.” He added: “We’ve learned things since the election, and we take our responsibility to protect the community of people who use Facebook seriously.”

Nowhere was the effort more apparent than in the launch of the company’s “war room” ahead of the 2018 midterms. Here, employees across departments would come together in front of a huge bank of computers to monitor Facebook for misinformation, fake news, threats of violence, and other crises. Numerous reporters were invited in at the time; The Verge, Wired, and The New York Times were among the outlets that ran access-driven stories about the effort. But the war room looked, to some, less like a solution and more like a mollifying stunt—a show put on for the press. And by 2020, with the rise of QAnon conspiracy theories and “Stop the Steal” groups, things did not seem generally better on Facebook.

[Read: What Facebook did to American democracy]

What is happening on Facebook now? On the eve of another chaotic election, journalists have found that highly deceptive political advertisements still run amok there, as do election-fraud conspiracy theories. The Times reported in September that the company, now called Meta, had fewer full-time employees working on election integrity and that Zuckerberg was no longer having weekly meetings with the lieutenants in charge of them. The paper also reported that Meta had replaced the war room with a less sharply defined “election operations center.”

When I reached out to Meta to ask about its plans, the company did not give many specific details. But Corey Chambliss, a Meta spokesperson focused on election preparedness, told me that the war room definitely still exists and that “election operations center” is just another of its names. He proved this with a video clip showing B-roll footage of a few dozen employees working in a conference room on Super Tuesday. The video had been shot in Meta’s Washington, D.C., office, but Chambliss impressed upon me that it could really be anywhere: The war room moves and exists in multiple places. “Wouldn’t want to over-emphasize the physical space as it’s sort of immaterial,” he wrote in an email.

It is clear that Meta wants to keep its name out of this election as much as possible. It may marshal its considerable resources and massive content-moderation apparatus to enforce its policies against election interference, and it may “break the glass,” as it did in 2021, to take additional action if something as dramatic as January 6 happens again. At the same time, it won’t draw a lot of attention to those efforts or be very specific about them. Recent conversations I’ve had with a former policy lead at the company and academics who have worked with and studied Facebook, as well as Chambliss, made it clear that, as a matter of policy, the company has done whatever it can to fly under the radar this election season—including Zuckerberg’s declining to endorse a candidate, as he has in previous presidential elections. When it comes to politics, Meta and Zuckerberg have decided that there is no winning. At this pivotal moment, the company is simply doing less.

Meta’s war room may be real, but it is also just a symbol—its meaning has been haggled over for six years now, and its name doesn’t really matter. “People got very obsessed with the naming of this room,” Katie Harbath, a former public-policy director at Facebook who left the company in March 2021, told me. She disagreed with the idea that the room was ever a publicity stunt. “I spent a lot of time in that very smelly, windowless room,” she said. I wondered whether the war room—ambiguous in terms of both its accomplishments and its very existence—was the perfect way to understand the company’s approach to election chaos. I posed to Harbath that the conversation around the war room was really about the anxiety of not knowing what, precisely, Meta is doing behind closed doors to meet the challenges of the moment.

She agreed that part of the reason the room was created was to help people imagine content moderation. Its primary purpose was practical and logistical, she said, but it was “a way to give a visual representation of what the work looks like too.” That’s why, this year, the situation is so muddy. Meta doesn’t want you to think there is no war room, but it isn’t drawing attention to the war room. There was no press junket; there were no tours. There is no longer even a visual of the war room as a specific room in one place.

This is emblematic of Meta’s in-between approach this year. Meta has explicit rules against election misinformation on its platforms; these include a policy against content that attempts to deceive people about where and how to vote. The rules do not, as written, include false claims about election results (although such claims are prohibited in paid ads). Posts about the Big Lie—the false claim that the 2020 presidential election was stolen—were initially moderated with fact-checking labels, but these were scaled back dramatically before the 2022 midterms, purportedly because users disliked them. The company also made a significant policy update this year to clarify that it would require labels on AI-generated content (a change made after its Oversight Board criticized its previous manipulated-media policy as “incoherent”). But tons of unlabeled generative-AI slop still flows without consequence on Facebook.

[Read: “History will not judge us kindly”]

In recent years, Meta has also attempted to de-prioritize political content of all kinds in its various feeds. “As we’ve said for years, people have told us they want to see less politics overall while still being able to engage with political content on our platforms if they want,” Chambliss told me. “That’s exactly what we’ve been doing.” When I emailed to ask questions about the company’s election plans, Chambliss initially responded by linking me to a short blog post that Meta put out 11 months ago, and attaching a broadly circulated fact sheet, which included such vague figures as “$20 billion invested in teams and technology in this area since 2016.” This information is next-to-impossible for a member of the public to make sense of—how is anyone supposed to know what $20 billion can buy?

In some respects, Meta’s reticence is just part of a broader cultural shift. Content moderation has become politically charged in recent years. Many high-profile misinformation and disinformation research projects born in the aftermath of the January 6 insurrection have shut down or shrunk. (When the Stanford Internet Observatory, an organization that published regular reports on election integrity and misinformation, shut down, right-wing bloggers celebrated the end of its “reign of censorship.”) The Biden administration experimented in 2022 with creating a Disinformation Governance Board, but quickly abandoned the plan after it drew a firestorm from the right—whose pundits and influencers portrayed the proposal as one for a totalitarian “Ministry of Truth.” The academic who had been tasked with leading it was targeted so intensely that she resigned.

“Meta has definitely been quieter,” Harbath said. “They’re not sticking their heads out there with public announcements.” This is partly because Zuckerberg has become personally exasperated with politics, she speculated. She added that it is also the result of the response the company got in 2020—accusations from Democrats of doing too little, accusations from Republicans of doing far too much. The far right was, for a while, fixated on the idea that Zuckerberg had personally rigged the presidential election in favor of Joe Biden and that he frequently bowed to Orwellian pressure from the Biden administration afterward. In recent months, Zuckerberg has been oddly conciliatory about this position; in August, he wrote what amounted to an apology letter to Representative Jim Jordan of Ohio, saying that Meta had overdone it with its efforts to curtail COVID-19 misinformation and that it had erred by intervening to minimize the spread of the salacious news story about Hunter Biden and his misplaced laptop.  

Zuckerberg and his wife, Priscilla Chan, used to donate large sums of money to nonpartisan election infrastructure through their philanthropic foundation. They haven’t done so this election cycle, seeking to avoid a repeat of the controversy ginned up by Republicans the last time. This has not been enough to satisfy Trump, though, and he recently threatened to put Zuckerberg in prison for the rest of his life if he makes any political missteps—which may, of course, be one of the factors Zuckerberg is considering in choosing to stay silent.

Other circumstances have changed dramatically since 2020, too. Just before that election, the sitting president was pushing conspiracy theories about the election, about various groups of his own constituents, and about a pandemic that had already killed hundreds of thousands of Americans. He was still using Facebook, as were the adherents of QAnon, the violent conspiracy theory that positioned him as a redeeming godlike figure. After the 2020 election, Meta said publicly that Facebook would no longer recommend political or civic groups for users to join—clearly in response to the criticism that the site’s own recommendations guided people into “Stop the Steal” groups. And though Facebook banned Trump himself for using the platform to incite violence on January 6, it reinstated his account once it became clear that he would again be running for president.

This election won’t be like the previous one. QAnon simply isn’t as present in the general culture, in part because of actions that Meta and other platforms took in 2020 and 2021. More will happen on other platforms this year, in more private spaces, such as Telegram groups. And this year’s “Stop the Steal” movement will likely need less help from Facebook to build momentum: YouTube and Trump’s own social platform, Truth Social, are highly effective for this purpose. Election denial has also been galvanized from the top by right-wing influencers and media personalities including Elon Musk, who has turned X into the perfect platform for spreading conspiracy theories about voter fraud. He pushes them himself all the time.

In many ways, understanding Facebook’s relevance is harder than ever. A recent survey from the Pew Research Center found that 33 percent of U.S. adults say they “regularly” get news from the platform. But over the past two years, Meta has curtailed both journalists’ and academics’ access to its data. After the 2020 election, the company partnered with academics on a huge research project to sort out what happened and to examine Facebook’s broader role in American politics. Zuckerberg cited it when he was pressed to answer for Facebook’s role in the organization of the “Stop the Steal” movement and January 6: “We believe that independent researchers and our democratically elected officials are best positioned to complete an objective review of these events,” he said at the time. That project is coming to an end, some of the researchers involved told me, and Chambliss confirmed.

The first big release of research papers produced through the partnership, which gave researchers an unprecedented degree of access to platform data, came last summer. More papers will be published as they pass peer review and are accepted by scientific journals—one, in its final stages, deals with the diffusion of misinformation—but all of these studies were conducted using data from 2020 and 2021. No new data have been, or will be, provided to these researchers.

When I asked Chambliss about the end of the partnership, he emphasized that no other platform had bothered to do as robust a research project. However, he wouldn’t say exactly why it was coming to an end. “It’s a little frustrating that such a massive and unprecedented undertaking that literally no other platform has done is put to us as a question of ‘why not repeat this?’ vs asking peer companies why they haven't come close to making similar commitments for past or current elections,” he wrote in an email.

The company also shut down the data-analysis tool CrowdTangle—used widely by researchers and by journalists—earlier this year. It touts new tools that have been made available to researchers, but academics scoff at the claim that they approximate anything like real access to live and robust information. Without Meta’s cooperation, it becomes much harder for academics to effectively monitor what happens on its platforms.

I recently spoke with Kathleen Carley, a professor at Carnegie Mellon’s School of Computer Science, about research she conducted from 2020 to 2022 on the rise of “pink slime,” a type of mass-produced misinformation designed to look like the product of local newspapers and to be shared on social media. Repeating that type of study for the 2024 election would cost half a million dollars, she estimated, because researchers now have to pay if they want broad data access. From her observations and the more targeted, “surgical” data pulls her team has been able to do this year, pink-slime sites are far more concentrated in swing states than they were before, and conspiracy theories are spreading just as easily as ever. But these are observations, not a real monitoring effort, which would be too costly.

“Monitoring implies that we’re doing consistent data crawls and have wide-open access to data,” she told me, “which we do not.” This time around, nobody will.