
X Is a White-Supremacist Site

The Atlantic

www.theatlantic.com › technology › archive › 2024 › 11 › x-white-supremacist-site › 680538

X has always had a Nazi problem. I’ve covered the site, formerly known as Twitter, for more than a decade and reported extensively on its harassment problems, its verification (and then de-verification) of a white nationalist, and the glut of anti-Semitic hatred that roiled the platform in 2016.

But something is different today. Heaps of unfiltered posts that plainly celebrate racism, anti-Semitism, and outright Nazism are easily accessible and possibly even promoted by the site’s algorithms. All the while, Elon Musk—a far-right activist and the site’s owner, who is campaigning for and giving away millions to help elect Donald Trump—amplifies horrendous conspiracy theories about voter fraud, migrants run amok, and the idea that Jewish people hate white people. Twitter was always bad if you knew where to look, but because of Musk, X is far worse. (X and Musk did not respond to requests for comment for this article.)

It takes little effort to find neo-Nazi accounts that have built up substantial audiences on X. “Thank you all for 7K,” one white-nationalist meme account posted on October 17, complete with a heil-Hitler emoji reference. One week later, the account, which mostly posts old clips of Hitler speeches and content about how “Hitler was right,” celebrated 14,000 followers. One post, a black-and-white video of Nazis goose-stepping, has more than 187,000 views. Another racist and anti-Semitic video about Jewish women and Black men—clearly AI-generated—has more than 306,000 views. It was also posted in late October.

Many who remain on the platform have noticed X decaying even more than usual in recent months. “I’ve seen SO many seemingly unironic posts like this on Twitter recently this is getting insane,” one X user posted in response to a meme that the far-right influencer Stew Peters recently shared. It showed an image of Adolf Hitler holding a telephone with overlaid text reading, “Hello … 2024? Are you guys starting to get it yet?” Peters appended the commentary, “Yes. We’ve noticed.” The idea is simply that Hitler was right, and X users ate it up: As of this writing, the post has received about 67,000 likes, 10,000 reposts, and 11.4 million views.

When Musk took over, in 2022, there were initial reports that hate speech (anti-Black and anti-Semitic slurs) was surging on the platform. By December of that year, one research group described the increase in hate speech as “unprecedented.” And it seems to only have gotten worse. There are far more blatant examples of racism now, even compared with a year ago. In September, the World Bank halted advertising on X after its promoted ads were showing up in the replies to pro-Nazi and white-nationalist content from accounts with hundreds of thousands of followers. Search queries such as Hitler was right return posts with tens of thousands of views—they’re indistinguishable from the poison once relegated to the worst sites on the internet, including 4chan, Gab, and Stormfront.

The hatred isn’t just coming from anonymous fringe posters either. Late last month, Clay Higgins, a Republican congressman from Louisiana, published a racist, threatening post about the Haitians in Springfield, Ohio, saying they’re from the “nastiest country in the western hemisphere.” Then he issued an ultimatum: “All these thugs better get their mind right and their ass out of our country before January 20th,” he wrote in the post, referencing Inauguration Day. Higgins eventually deleted the post at the request of his House colleagues on both sides of the aisle but refused to apologize. “I can put up another controversial post tomorrow if you want me to. I mean, we do have freedom of speech. I’ll say what I want,” he told CNN later that day.

And although Higgins did eventually try to walk his initial post back, clarifying that he was really referring to Haitian gangs, the sentiment he shared with CNN is right. The lawmaker can put up another vile post maligning an entire country whenever he desires. Not because of his right to free speech—which exists to protect against government interference—but because of how Musk chooses to operate his platform. Despite the social network’s policy that prohibits “incitement of harassment,” X seemingly took no issue with Higgins’s racist post or its potential to cause real-world harm for Springfield residents. (The town has already closed and evacuated its schools twice because of bomb threats.) And why would X care? The platform, which reinstated thousands of banned accounts following Musk’s takeover, in 2022—accounts that belong to QAnon supporters, political hucksters, conspiracy theorists, and at least one bona fide neo-Nazi—is so inundated with bigoted memes, racist AI slop, and unspeakable slurs that Higgins’s post seemed almost measured by comparison. In the past, when Twitter seemed more interested in enforcing content-moderation standards, the lawmaker’s comments might have resulted in a ban or some other disciplinary response. On X, he found an eager, sympathetic audience willing to amplify his hateful message.

His deleted post is instructive, though, as a way to measure the degradation of X under Musk. The site is a political project run by a politically radicalized centibillionaire. The worthwhile parts of Twitter (real-time news, sports, culture, silly memes, spontaneous encounters with celebrity accounts) have been drowned out by hateful garbage. X is no longer a social-media site with a white-supremacy problem, but a white-supremacist site with a social-media problem.

Musk has certainly bent the social network to support his politics, which has recently involved joking on Tucker Carlson’s show (which streams on X) that “nobody is even bothering to try to kill Kamala” and repurposing the @america handle from an inactive user to turn it into a megaphone for his pro-Trump super PAC. Musk has also quite clearly reengineered the site so that users see him, and his tweets, whether or not they follow him.

When Musk announced his intent to purchase Twitter, in April 2022, the New York Times columnist Ezra Klein aptly noted that “Musk reveals what he wants Twitter to be by how he acts on it.” By this logic, it would seem that X is vying to be the official propaganda outlet not just for Trump generally but also for the “Great Replacement” theory, which states that there is a global plot to eradicate the white race and its culture through immigration. In just the past year, Musk has endorsed multiple posts about the conspiracy theory. In November 2023, in response to a user named @breakingbaht who accused Jews of supporting bringing “hordes of minorities” into the United States, Musk replied, “You have said the actual truth.” Musk’s post was viewed more than 8 million times.

[Read: Musk’s Twitter is the blueprint for a MAGA government]

Though Musk has publicly claimed that he doesn’t “subscribe” to the “Great Replacement” theory, he appears obsessed with the idea that Republican voters in America are under attack from immigrants. Last December, he posted a misleading graph suggesting that the number of immigrants arriving illegally was overtaking domestic birth rates. He has repeatedly referenced a supposed Democratic plot to “legalize vast numbers of illegals” and put an end to fair elections. He has falsely suggested that the Biden administration was “flying ‘asylum seekers’, who are fast-tracked to citizenship, directly into swing states like Pennsylvania, Ohio, Wisconsin and Arizona” and argued that, soon, “everywhere in America will be like the nightmare that is downtown San Francisco.” According to a recent Bloomberg analysis of 53,000 of Musk’s posts, the billionaire has posted more about immigration and voter fraud than any other topic (more than 1,300 posts in total), garnering roughly 10 billion views.

But Musk’s interests extend beyond the United States. This summer, during a period of unrest and rioting in the United Kingdom over a mass stabbing that killed three children, the centibillionaire used his account to suggest that a civil war there was “inevitable.” He also shared (and subsequently deleted) a conspiracy theory that the U.K. government was building detainment camps for people rioting against Muslims. Additionally, X was instrumental in spreading misinformation and fueling outrage among far-right, anti-immigration protesters.

In Springfield, Ohio, X played a similar role as a conduit for white supremacists and far-right extremists to fuel real-world harm. One of the groups taking credit for singling out Springfield’s Haitian community was Blood Tribe, a neo-Nazi group known for marching through city streets waving swastikas. Blood Tribe had been focused on the town for months, but not until prominent X accounts (including Musk’s, J. D. Vance’s, and Trump’s) seized on a Facebook post from the region did Springfield become a national target. “It is no coincidence that there was an online rumor mill ready to amplify any social media posts about Springfield because Blood Tribe has been targeting the town in an effort to stoke racial resentment against ‘subhuman’ Haitians,” the journalist Robert Tracinski wrote recently. Tracinski argues that social-media channels (like X) have been instrumental in transferring neo-Nazi propaganda into the public consciousness—all the way to the presidential-debate stage. He is right. Musk’s platform has become a political tool for stoking racial hatred online and translating it into harassment in the physical world.

The ability to drag fringe ideas and theories into mainstream political discourse has long been a hallmark of X, even back when it was known as Twitter. That has always been the trade-off of the platform’s ability to narrow the distance between activists and people in positions of power. Social-justice movements such as the Arab Spring and Black Lives Matter owe some of the success of their early organizing efforts to the platform.

Yet the website has also been one of the most reliable mainstream destinations on the internet to see Photoshopped images of public figures (or their family members) in gas chambers, or crude, racist cartoons of Jewish men. Now, under Musk’s stewardship, X seems to run in only one direction. The platform eschews healthy conversation. It abhors nuance, instead favoring constant escalation and engagement-baiting behavior. And it empowers movements that seek to enrage and divide. In April, an NBC News investigation found that “at least 150 paid ‘Premium’ subscriber X accounts and thousands of unpaid accounts have posted or amplified pro-Nazi content on X in recent months.” According to research from the extremism expert Colin Henry, since Musk’s purchase, there’s been a decline in anti-Semitic posts on 4chan’s infamous “anything goes” forum, and a simultaneous rise in posts targeting Jewish people on X.

X’s own transparency reports show that the social network has allowed hateful content to flourish on its site. In its final report before Musk’s acquisition, which covered the second half of 2021, Twitter suspended about 105,000 of the more than 5 million accounts reported for hateful conduct. In the first half of 2024, according to X, the social network received more than 66 million hateful-conduct reports but suspended just 2,361 accounts. It’s not a perfect comparison, because the way X reports and analyzes data has changed under Musk, but the company is clearly taking action far less frequently.
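A rough back-of-the-envelope comparison makes the drop concrete. Taking the published figures at face value (and noting, as above, that the 2021 figure counts reported accounts while the 2024 figure counts reports, so the units are not identical):

\[
\frac{105{,}000\ \text{suspensions}}{5{,}000{,}000\ \text{reported accounts}} \approx 2.1\% \qquad\qquad \frac{2{,}361\ \text{suspensions}}{66{,}000{,}000\ \text{reports}} \approx 0.0036\%
\]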

[Read: I’m running out of ways to explain how bad this is]

Because X has made it more difficult for researchers to access data by switching to a paid plan that prices out many academics, it is now difficult to get a quantitative understanding of the platform’s degradation. The statistics that do exist are alarming. Research from the Center for Countering Digital Hate found that in just the first month of Musk’s ownership, anti–Black American slurs used on the platform increased by 202 percent. The Anti-Defamation League found that anti-Semitic tweets on the platform increased by 61 percent in just two weeks after Musk’s takeover. But much of the evidence is anecdotal. The Washington Post summed up a recent report from the Institute for Strategic Dialogue, noting that pro-Hitler content “reached the largest audiences on X [relative to other social-media platforms], where it was also most likely to be recommended via the site’s algorithm.” Since Musk took over, X has done the following:

- Seemingly failed to block a misleading advertisement post purchased by Jason Köhne, a white nationalist with the handle @NoWhiteGuiltNWG.
- Seemingly failed to block an advertisement calling to reinstate the death penalty for gay people.
- Reportedly run ads on 20 racist and anti-Semitic hashtags, including #whitepower, despite Musk pledging that he would demonetize posts that included hate speech. (After NBC asked about these, X removed the ability for users to search for some of these hashtags.)
- Granted blue-check verification to an account with the N-word in its handle. (The account has since been suspended.)
- Allowed an account that praised Hitler to purchase a gold-check badge, which denotes an “official organization” and is typically used by brands such as Doritos and BlackRock. (This account has since been suspended.)
- Seemingly failed to take immediate action on 63 of 66 accounts flagged for disseminating AI-generated Nazi memes from 4chan. More than half of the posts were made by paid accounts with verified badges, according to research by the nonprofit Center for Countering Digital Hate.

None of this is accidental. The output of a platform tells you what it is designed to do: In X’s case, all of this is proof of a system engineered to give voice to hateful ideas and reward those who espouse them. If one is to judge X by its main exports, then X, as it exists now under Musk, is a white-supremacist website.

You might scoff at this notion, especially if you, like me, have spent nearly two decades willingly logged on to the site, or if you, like me, have had your professional life influenced in surprising, occasionally delightful ways by the platform. Even now, I can scroll through the site’s algorithmic pond scum and find things worth saving—interesting commentary, breaking news, posts and observations that make me laugh. But these exceptional morsels are what make the platform so insidious, in part because they give cover to the true political project that X now represents and empowers.

As I was preparing to write this story, I visited some of the most vile corners of the internet. I’ve monitored these spaces for years, and yet this time, I was struck by how little distance there was between them and what X has become. It is impossible to ignore: The difference between X and a known hateful site such as Gab is people like me. The majority of users are no doubt creators, businesses, journalists, celebrities, political junkies, sports fans, and other perfectly normal people who hold their nose and cling to the site. We are the human shield of respectability that keeps Musk’s disastrous $44 billion investment from being little more than an algorithmically powered Stormfront.

The justifications—the lure of the community, the (now-limited) ability to bear witness to news in real time, and the reach of one’s audience of followers—feel particularly weak today. X’s cultural impact is still real, but its promotional use is nonexistent. (A recent post linking to a story of mine generated 289,000 impressions and 12,900 interactions, but only 948 link clicks—a click-through rate of roughly 0.33 percent.) NPR, which left the platform in April 2023, reported almost negligible declines in traffic referrals after abandoning the site.
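For reference, the arithmetic behind that parenthetical, using the post’s own numbers:

\[
\frac{948\ \text{clicks}}{289{,}000\ \text{impressions}} \approx 0.0033 \approx 0.33\ \text{percent}
\]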

Continuing to post on X has been indefensible for some time. But now, more than ever, there is no good justification for adding one’s name to X’s list of active users. To leave the platform, some have argued, is to cede an important ideological battleground to the right. I’ve been sympathetic to this line of thinking, but the battle, on this particular platform, is lost. As long as Musk owns the site, its architecture will favor his political allies. If you see posting to X as a fight, then know it is not a fair one. For example: In October, Musk shared a fake screenshot of an Atlantic article, manipulated to show a fake headline—his post, which he never deleted, garnered more than 18 million views. The Atlantic’s X post debunking Musk’s claim received just 28,000 views. Musk is unfathomably rich. He’s used that money to purchase a platform, take it private, and effectively turn it into a megaphone for the world’s loudest racists. Now he’s attempting to use it to elect a corrupt, election-denying felon to the presidency.

To stay on X is not an explicit endorsement of this behavior, but it does help enable it. I’m not at all suggesting—as Musk has previously alleged—that the site be shut down or that Musk should be silenced. But there’s no need to stick around and listen. Why allow Musk to appear even slightly more credible by lending our names, our brands, and our movements to a platform that makes the world more dangerous for real people? To my dismay, I’ve hidden from these questions for too long. Now that I’ve confronted them, I have no good answers.

Facebook Doesn’t Want Attention Right Now

The Atlantic

www.theatlantic.com › technology › archive › 2024 › 11 › meta-election-policy-2024 › 680532

After the 2016 elections, critics blamed Facebook for undermining American democracy. They believed that the app’s algorithmic News Feed pushed hyperpartisan content, outright fake news, and Russian-seeded disinformation to huge numbers of people. (The U.S. director of national intelligence agreed, and in January 2017 declassified a report that detailed Russia’s actions.) At first, the company’s executives dismissed these concerns—shortly after Donald Trump won the presidential election, Mark Zuckerberg said it was “pretty crazy” to think that fake news on Facebook had played a role—but they soon grew contrite. “Calling that crazy was dismissive and I regret it,” Zuckerberg would say 10 months later. Facebook had by then conceded that its own data did “not contradict” the intelligence report. Shortly thereafter, Adam Mosseri, the executive in charge of News Feed at the time, told this magazine that the company was launching a number of new initiatives “to stop the spread of misinformation, click-bait and other problematic content on Facebook.” He added: “We’ve learned things since the election, and we take our responsibility to protect the community of people who use Facebook seriously.”

Nowhere was the effort more apparent than in the launch of the company’s “war room” ahead of the 2018 midterms. Here, employees across departments would come together in front of a huge bank of computers to monitor Facebook for misinformation, fake news, threats of violence, and other crises. Numerous reporters were invited in at the time; The Verge, Wired, and The New York Times were among the outlets that ran access-driven stories about the effort. But the war room looked, to some, less like a solution and more like a mollifying stunt—a show put on for the press. And by 2020, with the rise of QAnon conspiracy theories and “Stop the Steal” groups, things did not seem generally better on Facebook.

[Read: What Facebook did to American democracy]

What is happening on Facebook now? On the eve of another chaotic election, journalists have found that highly deceptive political advertisements still run amok there, as do election-fraud conspiracy theories. The Times reported in September that the company, now called Meta, had fewer full-time employees working on election integrity and that Zuckerberg was no longer having weekly meetings with the lieutenants in charge of them. The paper also reported that Meta had replaced the war room with a less sharply defined “election operations center.”

When I reached out to Meta to ask about its plans, the company did not give many specific details. But Corey Chambliss, a Meta spokesperson focused on election preparedness, told me that the war room definitely still exists and that “election operations center” is just another of its names. He proved this with a video clip showing B-roll footage of a few dozen employees working in a conference room on Super Tuesday. The video had been shot in Meta’s Washington, D.C., office, but Chambliss impressed upon me that it could really be anywhere: The war room moves and exists in multiple places. “Wouldn’t want to over-emphasize the physical space as it’s sort of immaterial,” he wrote in an email.

It is clear that Meta wants to keep its name out of this election as much as possible. It may marshal its considerable resources and massive content-moderation apparatus to enforce its policies against election interference, and it may “break the glass,” as it did in 2021, to take additional action if something as dramatic as January 6 happens again. At the same time, it won’t draw a lot of attention to those efforts or be very specific about them. Recent conversations I’ve had with a former policy lead at the company and academics who have worked with and studied Facebook, as well as Chambliss, made it clear that as a matter of policy, the company has done whatever it can to fly under the radar this election season—including Zuckerberg’s declining to endorse a candidate, as he has in previous presidential elections. When it comes to politics, Meta and Zuckerberg have decided that there is no winning. At this pivotal moment, the company is simply doing less.

Meta’s war room may be real, but it is also just a symbol—its meaning has been haggled over for six years now, and its name doesn’t really matter. “People got very obsessed with the naming of this room,” Katie Harbath, a former public-policy director at Facebook who left the company in March 2021, told me. She disagreed with the idea that the room was ever a publicity stunt. “I spent a lot of time in that very smelly, windowless room,” she said. I wondered whether the war room—ambiguous in terms of both its accomplishments and its very existence—was the perfect way to understand the company’s approach to election chaos. I posed to Harbath that the conversation around the war room was really about the anxiety of not knowing what, precisely, Meta is doing behind closed doors to meet the challenges of the moment.

She agreed that part of the reason the room was created was to help people imagine content moderation. Its primary purpose was practical and logistical, she said, but it was “a way to give a visual representation of what the work looks like too.” That’s why, this year, the situation is so muddy. Meta doesn’t want you to think there is no war room, but it isn’t drawing attention to the war room. There was no press junket; there were no tours. There is no longer even a visual of the war room as a specific room in one place.

This is emblematic of Meta’s in-between approach this year. Meta has explicit rules against election misinformation on its platforms; these include a policy against content that attempts to deceive people about where and how to vote. The rules do not, as written, include false claims about election results (although such claims are prohibited in paid ads). Posts about the Big Lie—the false claim that the 2020 presidential election was stolen—were initially moderated with fact-checking labels, but these were scaled back dramatically before the 2022 midterms, purportedly because users disliked them. The company also made a significant policy update this year to clarify that it would require labels on AI-generated content (a change made after its Oversight Board criticized its previous manipulated-media policy as “incoherent”). But tons of unlabeled generative-AI slop still flows without consequence on Facebook.

[Read: “History will not judge us kindly”]

In recent years, Meta has also attempted to de-prioritize political content of all kinds in its various feeds. “As we’ve said for years, people have told us they want to see less politics overall while still being able to engage with political content on our platforms if they want,” Chambliss told me. “That’s exactly what we’ve been doing.” When I emailed to ask questions about the company’s election plans, Chambliss initially responded by linking me to a short blog post that Meta put out 11 months ago, and attaching a broadly circulated fact sheet, which included such vague figures as “$20 billion invested in teams and technology in this area since 2016.” This information is next-to-impossible for a member of the public to make sense of—how is anyone supposed to know what $20 billion can buy?

In some respects, Meta’s reticence is just part of a broader cultural shift. Content moderation has become politically charged in recent years. Many high-profile misinformation and disinformation research projects born in the aftermath of the January 6 insurrection have shut down or shrunk. (When the Stanford Internet Observatory, an organization that published regular reports on election integrity and misinformation, shut down, right-wing bloggers celebrated the end of its “reign of censorship.”) The Biden administration experimented in 2022 with creating a Disinformation Governance Board, but quickly abandoned the plan after it drew a firestorm from the right—whose pundits and influencers portrayed the proposal as one for a totalitarian “Ministry of Truth.” The academic who had been tasked with leading it was targeted so intensely that she resigned.

“Meta has definitely been quieter,” Harbath said. “They’re not sticking their heads out there with public announcements.” This is partly because Zuckerberg has become personally exasperated with politics, she speculated. She added that it is also the result of the response the company got in 2020—accusations from Democrats of doing too little, accusations from Republicans of doing far too much. The far right was, for a while, fixated on the idea that Zuckerberg had personally rigged the presidential election in favor of Joe Biden and that he frequently bowed to Orwellian pressure from the Biden administration afterward. In recent months, Zuckerberg has been oddly conciliatory about this position; in August, he wrote what amounted to an apology letter to Representative Jim Jordan of Ohio, saying that Meta had overdone it with its efforts to curtail COVID-19 misinformation and that it had erred by intervening to minimize the spread of the salacious news story about Hunter Biden and his misplaced laptop.  

Zuckerberg and his wife, Priscilla Chan, used to donate large sums of money to nonpartisan election infrastructure through their philanthropic foundation. They haven’t done so this election cycle, seeking to avoid a repeat of the controversy ginned up by Republicans the last time. This had not been enough to satisfy Trump, though, and he recently threatened to put Zuckerberg in prison for the rest of his life if he makes any political missteps—which may, of course, be one of the factors Zuckerberg is considering in choosing to stay silent.

Other circumstances have changed dramatically since 2020, too. Just before that election, the sitting president was pushing conspiracy theories about the election, about various groups of his own constituents, and about a pandemic that had already killed hundreds of thousands of Americans. He was still using Facebook, as were the adherents of QAnon, the violent conspiracy theory that positioned him as a redeeming godlike figure. After the 2020 election, Meta said publicly that Facebook would no longer recommend political or civic groups for users to join—clearly in response to the criticism that the site’s own recommendations guided people into “Stop the Steal” groups. And though Facebook banned Trump himself for using the platform to incite violence on January 6, the platform reinstated his account once it became clear that he would again be running for president.

This election won’t be like the previous one. QAnon simply isn’t as present in the general culture, in part because of actions that Meta and other platforms took in 2020 and 2021. More will happen on other platforms this year, in more private spaces, such as Telegram groups. And this year’s “Stop the Steal” movement will likely need less help from Facebook to build momentum: YouTube and Trump’s own social platform, Truth Social, are highly effective for this purpose. Election denial has also been galvanized from the top by right-wing influencers and media personalities including Elon Musk, who has turned X into the perfect platform for spreading conspiracy theories about voter fraud. He pushes them himself all the time.

In many ways, understanding Facebook’s relevance is harder than ever. A recent survey from the Pew Research Center found that 33 percent of U.S. adults say they “regularly” get news from the platform. But Meta has restricted journalists’ and academics’ access to its data over the past two years. After the 2020 election, the company partnered with academics for a huge research project to sort out what happened and to examine Facebook’s broader role in American politics. The project was cited when Zuckerberg was pressed to answer for Facebook’s role in the organization of the “Stop the Steal” movement and January 6: “We believe that independent researchers and our democratically elected officials are best positioned to complete an objective review of these events,” he said at the time. That project is coming to an end, some of the researchers involved told me, and Chambliss confirmed it.

The first big release of research papers produced through the partnership, which gave researchers an unprecedented degree of access to platform data, came last summer. Still more papers will continue to be published as they pass peer review and are accepted to scientific journals—one paper in its final stages will deal with the diffusion of misinformation—but all of these studies were conducted using data from 2020 and 2021. No new data have or will be provided to these researchers.

When I asked Chambliss about the end of the partnership, he emphasized that no other platform had bothered to do as robust a research project. However, he wouldn’t say exactly why it was coming to an end. “It’s a little frustrating that such a massive and unprecedented undertaking that literally no other platform has done is put to us as a question of ‘why not repeat this?’ vs asking peer companies why they haven’t come close to making similar commitments for past or current elections,” he wrote in an email.

The company also shut down the data-analysis tool CrowdTangle—used widely by researchers and by journalists—earlier this year. It touts new tools that have been made available to researchers, but academics scoff at the claim that they approximate anything like real access to live and robust information. Without Meta’s cooperation, it becomes much harder for academics to effectively monitor what happens on its platforms.

I recently spoke with Kathleen Carley, a professor at Carnegie Mellon’s School of Computer Science, about research she conducted from 2020 to 2022 on the rise of “pink slime,” a type of mass-produced misinformation designed to look like the product of local newspapers and to be shared on social media. Repeating that type of study for the 2024 election would cost half a million dollars, she estimated, because researchers now have to pay if they want broad data access. From her observations and the more targeted, “surgical” data pulls that her team has been able to do this year, pink-slime sites are far more concentrated in swing states than they were previously, while conspiracy theories are spreading just as easily as ever. But these are observations; they’re not a real monitoring effort, which would be too costly.

“Monitoring implies that we’re doing consistent data crawls and have wide-open access to data,” she told me, “which we do not.” This time around, nobody will.