The Business-School Scandal That Just Keeps Getting Bigger

The Atlantic


For anyone who teaches at a business school, the blog post was bad news. For Juliana Schroeder, it was catastrophic. She saw the allegations when they first went up, on a Saturday in early summer 2023. Schroeder teaches management and psychology at UC Berkeley’s Haas School of Business. One of her colleagues—a star professor at Harvard Business School named Francesca Gino—had just been accused of academic fraud. The authors of the blog post, a small team of business-school researchers, had found discrepancies in four of Gino’s published papers, and they suggested that the scandal was much larger. “We believe that many more Gino-authored papers contain fake data,” the blog post said. “Perhaps dozens.”

The story was soon picked up by the mainstream press. Reporters reveled in the irony that Gino, who had made her name as an expert on the psychology of breaking rules, may herself have broken them. (“Harvard Scholar Who Studies Honesty Is Accused of Fabricating Findings,” a New York Times headline read.) Harvard Business School had quietly placed Gino on administrative leave just before the blog post appeared. The school had conducted its own investigation; its nearly 1,300-page internal report, which was made public only in the course of related legal proceedings, concluded that Gino “committed research misconduct intentionally, knowingly, or recklessly” in the four papers. (Gino has steadfastly denied any wrongdoing.)

Schroeder’s interest in the scandal was more personal. Gino was one of her most consistent and important research partners. Their names appear together on seven peer-reviewed articles, as well as 26 conference talks. If Gino were indeed a serial cheat, then all of that shared work—and a large swath of Schroeder’s CV—was now at risk. When a senior academic is accused of fraud, the reputations of her honest, less established colleagues may get dragged down too. “Just think how horrible it is,” Katy Milkman, another of Gino’s research partners and a tenured professor at the University of Pennsylvania’s Wharton School, told me. “It could ruin your life.”

Juliana Schroeder (LinkedIn)

To head that off, Schroeder began her own audit of all the research papers that she’d ever done with Gino, seeking out raw data from each experiment and attempting to rerun the analyses. As that summer progressed, her efforts grew more ambitious. With the help of several colleagues, Schroeder pursued a plan to verify not just her own work with Gino, but a major portion of Gino’s scientific résumé. The group started reaching out to every other researcher who had put their name on one of Gino’s 138 co-authored studies. The Many Co-Authors Project, as the self-audit would be called, aimed to flag any additional work that might be tainted by allegations of misconduct and, more important, to absolve the rest—and Gino’s colleagues, by extension—of the wariness that now afflicted the entire field.

That field was not tucked away in some sleepy corner of academia, but was instead a highly influential one devoted to the science of success. Perhaps you’ve heard that procrastination makes you more creative, or that you’re better off having fewer choices, or that you can buy happiness by giving things away. All of that is research done by Schroeder’s peers—business-school professors who apply the methods of behavioral research to such subjects as marketing, management, and decision making. In viral TED Talks and airport best sellers, on morning shows and late-night television, these business-school psychologists hold tremendous sway. They also have a presence in this magazine and many others: Nearly every business academic who is named in this story has been either quoted or cited by The Atlantic on multiple occasions. A few, including Gino, have written articles for The Atlantic themselves.

Francesca Gino (LinkedIn)

Business-school psychologists are scholars, but they aren’t shooting for a Nobel Prize. Their research doesn’t typically aim to solve a social problem; it won’t be curing anyone’s disease. It doesn’t even seem to have much influence on business practices, and it certainly hasn’t shaped the nation’s commerce. Still, its flashy findings come with clear rewards: consulting gigs and speakers’ fees, not to mention lavish academic incomes. Starting salaries at business schools can be $240,000 a year—double what they are at campus psychology departments, academics told me.

The research scandal that has engulfed this field goes far beyond the replication crisis that has plagued psychology and other disciplines in recent years. Long-standing flaws in how scientific work is done—including insufficient sample sizes and the sloppy application of statistics—have left large segments of the research literature in doubt. Many avenues of study once deemed promising turned out to be dead ends. But it’s one thing to understand that scientists have been cutting corners. It’s quite another to suspect that they’ve been creating their results from scratch.

[Read: Psychology’s replication crisis has a silver lining]

Schroeder has long been interested in trust. She’s given lectures on “building trust-based relationships”; she’s run experiments measuring trust in colleagues. Now she was working to rebuild the sense of trust within her field. A lot of scholars were involved in the Many Co-Authors Project, but Schroeder’s dedication was singular. In October 2023, a former graduate student who had helped tip off the team of bloggers to Gino’s possible fraud wrote her own “post mortem” on the case. It paints Schroeder as exceptional among her peers: a professor who “sent a clear signal to the scientific community that she is taking this scandal seriously.” Several others echoed this assessment, saying that ever since the news broke, Schroeder has been relentless—heroic, even—in her efforts to correct the record.

But if Schroeder planned to extinguish any doubts that remained, she may have aimed too high. More than a year since all of this began, the evidence of fraud has only multiplied. The rot in business schools runs much deeper than almost anyone had guessed, and the blame is unnervingly widespread. In the end, even Schroeder would become a suspect.

Gino was accused of faking numbers in four published papers. Just days into her digging, Schroeder uncovered another paper that appeared to be affected—and it was one that she herself had helped write.

The work, titled “Don’t Stop Believing: Rituals Improve Performance by Decreasing Anxiety,” was published in 2016, with Schroeder’s name listed second out of seven authors. Gino’s name was fourth. (The first few names on an academic paper are typically arranged in order of their contributions to the finished work.) The research it described was pretty standard for the field: a set of clever studies demonstrating the value of a life hack—one simple trick to nail your next presentation. The authors had tested the idea that simply following a routine—even one as arbitrary as drawing something on a piece of paper, sprinkling salt over it, and crumpling it up—could help calm a person’s nerves. “Although some may dismiss rituals as irrational,” the authors wrote, “those who enact rituals may well outperform the skeptics who forgo them.”

In truth, the skeptics have never had much purchase in business-school psychology. For the better part of a decade, this finding had been garnering citations—about 200, per Google Scholar. But when Schroeder looked more closely at the work, she realized it was questionable. In October 2023, she sketched out some of her concerns on the Many Co-Authors Project website.

The paper’s first two key experiments, marked in the text as Studies 1a and 1b, looked at how the salt-and-paper ritual might help students sing a karaoke version of Journey’s “Don’t Stop Believin’ ” in a lab setting. According to the paper, Study 1a found that people who did the ritual before they sang reported feeling much less anxious than people who did not; Study 1b confirmed that they had lower heart rates, as measured with a pulse oximeter, than students who did not.

As Schroeder noted in her October post, the original records of these studies could not be found. But Schroeder did have some data spreadsheets for Studies 1a and 1b—she’d posted them shortly after the paper had been published, along with versions of the studies’ research questionnaires—and she now wrote that “unexplained issues were identified” in both, and that there was “uncertainty regarding the data provenance” for the latter. Schroeder’s post did not elaborate, but anyone can look at the spreadsheets, and it doesn’t take a forensic expert to see that the numbers they report are seriously amiss.

The “unexplained issues” with Studies 1a and 1b are legion. For one thing, the figures as reported don’t appear to match the research as described in other public documents. (For example, where the posted research questionnaire instructs the students to assess their level of anxiety on a five-point scale, the results seem to run from 2 to 8.) But the single most suspicious pattern shows up in the heart-rate data. According to the paper, each student had their pulse measured three times: once at the very start, again after they were told they’d have to sing the karaoke song, and then a third time, right before the song began. I created three graphs to illustrate the data’s peculiarities. They depict the measured heart rates for each of the 167 students who are said to have participated in the experiment, presented from left to right in their numbered order on the spreadsheet. The blue and green lines, which depict the first and second heart-rate measurements, show those values fluctuating more or less as one might expect for a noisy signal, measured from lots of individuals. But the red line doesn’t look like this at all: Rather, the measured heart rates form a series going up, across a run of more than 100 consecutive students.

DATA FROM “DON’T STOP BELIEVING: RITUALS IMPROVE PERFORMANCE BY DECREASING ANXIETY” (2016), STUDY 1B (Charts by The Atlantic. Based on data posted to OSF.io.)
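A run like the one in the red line is easy to test for mechanically. Below is a minimal, illustrative sketch (not the analysis performed by The Atlantic or the researchers quoted here; the sample readings are invented) of how one might measure the longest streak of non-decreasing values in a column of heart-rate data:

```python
def longest_nondecreasing_run(values):
    """Length of the longest run of consecutive non-decreasing values."""
    best = current = 1
    for prev, nxt in zip(values, values[1:]):
        current = current + 1 if nxt >= prev else 1
        best = max(best, current)
    return best

# A noisy physiological signal, measured from many different people,
# should yield only short runs (hypothetical readings):
noisy = [72, 68, 75, 71, 80, 69, 74]
print(longest_nondecreasing_run(noisy))   # 2

# A streak spanning 100-plus consecutive participants, as in the
# third measurement, is the kind of pattern that raises red flags:
streak = list(range(60, 170))
print(longest_nondecreasing_run(streak))  # 110
```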

I’ve reviewed the case with several researchers who suggested that this tidy run of values is indicative of fraud. “I see absolutely no reason” the sequence in No. 3 “should have the order that it does,” James Heathers, a scientific-integrity investigator and an occasional Atlantic contributor, told me. The exact meaning of the pattern is unclear; if you were fabricating data, you certainly wouldn’t strive for them to look like this. Nick Brown, a scientific-integrity researcher affiliated with Linnaeus University Sweden, guessed that the ordered values in the spreadsheet may have been cooked up after the fact. In that case, it might have been less important that they formed a natural-looking plot than that, when analyzed together, they matched fake statistics that had already been reported. “Someone sat down and burned quite a bit of midnight oil,” he proposed. I asked how sure he was that this pattern of results was the product of deliberate tampering; “100 percent, 100 percent,” he told me. “In my view, there is no innocent explanation in a universe where fairies don’t exist.”

Schroeder herself would come to a similar conclusion. Months later, I asked her whether the data were manipulated. “I think it’s very likely that they were,” she said. In the summer of 2023, when she reported the findings of her audit to her fellow authors, they all agreed that, whatever really happened, the work was compromised and ought to be retracted. But they could not reach consensus on who had been at fault. Gino did not appear to be responsible for either of the paper’s karaoke studies. Then who was?

This would not seem to be a tricky question. The published version of the paper has two lead authors who are listed as having “contributed equally” to the work. One of them was Schroeder. All of the co-authors agree that she handled two experiments—labeled in the text as Studies 3 and 4—in which participants solved a set of math problems. The other main contributor was Alison Wood Brooks, a young professor and colleague of Gino’s at Harvard Business School.

From the start, there was every reason to assume that Brooks had run the studies that produced the fishy data. Certainly they are similar to Brooks’s prior work. The same quirky experimental setup—in which students were asked to wear a pulse oximeter and sing a karaoke version of “Don’t Stop Believin’ ”—appears in her dissertation from the Wharton School in 2013, and she published a portion of that work in a sole-authored paper the following year. (Brooks herself is musically inclined, performing around Boston in a rock band.)

Yet despite all of this, Brooks told the Many Co-Authors Project that she simply wasn’t sure whether she’d had access to the raw data for Study 1b, the one with the “no innocent explanation” pattern of results. She also said she didn’t know whether Gino played a role in collecting them. On the latter point, Brooks’s former Ph.D. adviser, Maurice Schweitzer, expressed the same uncertainty to the Many Co-Authors Project.

Plenty of evidence now suggests that this mystery was manufactured. The posted materials for Study 1b, along with administrative records from the lab, indicate that the work was carried out at Wharton, where Brooks was in grad school at the time, studying under Schweitzer and running another, very similar experiment. Also, the metadata for the oldest public version of the data spreadsheet lists “Alison Wood Brooks” as the last person who saved the file.

Alison Wood Brooks (LinkedIn)

Brooks, who has published research on the value of apologies, and whose first book—Talk: The Science of Conversation and the Art of Being Ourselves—is due out from Crown in January, did not respond to multiple requests for interviews or to a detailed list of written questions. Gino said that she “neither collected nor analyzed the data for Study 1a or Study 1b nor was I involved in the data audit.”

If Brooks did conduct this work and oversee its data, then Schroeder’s audit had produced a dire twist. The Many Co-Authors Project was meant to suss out Gino’s suspect work, and quarantine it from the rest. “The goal was to protect the innocent victims, and to find out what’s true about the science that had been done,” Milkman told me. But now, to all appearances, Schroeder had uncovered crooked data that apparently weren’t linked to Gino. That would mean Schroeder had another colleague who had contaminated her research. It would mean that her reputation—and the credibility of her entire field—was under threat from multiple directions at once.

Among the four research papers in which Gino was accused of cheating is one about the human tendency to misreport facts and figures for personal gain. Which is to say: She was accused of faking data for a study of when and how people might fake data. Amazingly, a different set of data from the same paper had already been flagged as the product of potential fraud, two years before the Gino scandal came to light. That data set was contributed by Dan Ariely of Duke University—a frequent co-author of Gino’s and, like her, a celebrated expert on the psychology of telling lies. (Ariely has said that a Duke investigation—which the school has not acknowledged—discovered no evidence that he “falsified data or knowingly used falsified data.” He has also said that the investigation “determined that I should have done more to prevent faulty data from being published in the 2012 paper.”)

The existence of two apparently corrupted data sets was shocking: a keystone paper on the science of deception wasn’t just invalid, but possibly a scam twice over. But even in the face of this ignominy, few in business academia were ready to acknowledge, in the summer of 2023, that the problem might be larger still—and that their research literature might well be overrun with fantastical results.

Some scholars had tried to raise alarms before. In 2019, Dennis Tourish, a professor at the University of Sussex Business School, published a book titled Management Studies in Crisis: Fraud, Deception and Meaningless Research. He cites a study finding that more than a third of surveyed editors at management journals say they’ve encountered fabricated or falsified data. Even that alarming rate may undersell the problem, Tourish told me, given all of the misbehavior in his discipline that gets overlooked or covered up.

Anonymous surveys of various fields find that roughly 2 percent of scholars will admit to having fabricated, falsified, or modified data at least once in their career. But business-school psychology may be especially prone to misbehavior. For one thing, the field’s research standards are weaker than those for other psychologists. In response to the replication crisis, campus psychology departments have lately taken up a raft of methodological reforms. Statistically suspect practices that were de rigueur a dozen years ago are now uncommon; sample sizes have gotten bigger; a study’s planned analyses are now commonly written down before the work is carried out. But this great awakening has been slower to develop in business-school psychology, several academics told me. “No one wants to kill the golden goose,” one early-career researcher in business academia said. If management and marketing professors embraced all of psychology’s reforms, he said, then many of their most memorable, most TED Talk–able findings would go away. “To use marketing lingo, we’d lose our unique value proposition.”

It’s easy to imagine how cheating might lead to more cheating. If business-school psychology is beset with suspect research, then the bar for getting published in its flagship journals ratchets up: A study must be even flashier than all the other flashy findings if its authors want to stand out. Such incentives move in only one direction: Eventually, the standard tools for torturing your data will no longer be enough. Now you have to go a little further; now you have to cut your data up, and carve them into sham results. Having one or two prolific frauds around would push the bar for publishing still higher, inviting yet more corruption. (And because the work is not exactly brain surgery, no one dies as a result.) In this way, a single discipline might come to look like Major League Baseball did 20 years ago: defined by juiced-up stats.

In the face of its own cheating scandal, MLB started screening every single player for anabolic steroids. There is no equivalent in science, and certainly not in business academia. Uri Simonsohn, a professor at the Esade Business School in Barcelona, is a member of the blogging team, called Data Colada, that caught the problems in both Gino’s and Ariely’s work. (He was also a motivating force behind the Many Co-Authors Project.) Data Colada has called out other instances of sketchy work and apparent fakery within the field, but its efforts at detection are highly targeted. They’re also quite unusual. Crying foul on someone else’s bad research makes you out to be a troublemaker, or a member of the notional “data police.” It can also bring a claim of defamation. Gino filed a $25 million defamation lawsuit against Harvard and the Data Colada team not long after the bloggers attacked her work. (This past September, a judge dismissed the portion of her claims that involved the bloggers and the defamation claim against Harvard. She still has pending claims against the university for gender discrimination and breach of contract.) The risks are even greater for those who don’t have tenure. A junior academic who accuses someone else of fraud may antagonize the senior colleagues who serve on the boards and committees that make publishing decisions and determine funding and job appointments.

[Read: Francesca Gino, the Harvard expert on dishonesty who is accused of lying]

These risks for would-be critics reinforce an atmosphere of complacency. “It’s embarrassing how few protections we have against fraud and how easy it has been to fool us,” Simonsohn said in a 2023 webinar. He added, “We have done nothing to prevent it. Nothing.”

Like so many other scientific scandals, the one Schroeder had identified quickly sank into a swamp of closed-door reviews and taciturn committees. Schroeder says that Harvard Business School declined to investigate her evidence of data-tampering, citing a policy of not responding to allegations made more than six years after the misconduct is said to have occurred. (Harvard Business School’s head of communications, Mark Cautela, declined to comment.) Her efforts to address the issue through the University of Pennsylvania’s Office of Research Integrity likewise seemed fruitless. (A spokesperson for the Wharton School would not comment on “the existence or status of” any investigations.)

Retractions have a way of dragging out in science publishing. This one was no exception. Maryam Kouchaki, an expert on workplace ethics at Northwestern University’s Kellogg School of Management and co–editor in chief of the journal that published the “Don’t Stop Believing” paper, had first received the authors’ call to pull their work in August 2023. As the anniversary of that request drew near, Schroeder still had no idea how the suspect data would be handled, and whether Brooks—or anyone else—would be held responsible.

Finally, on October 1, the “Don’t Stop Believing” paper was removed from the scientific literature. The journal’s published notice laid out some basic conclusions from Schroeder’s audit: Studies 1a and 1b had indeed been run by Brooks, the raw data were not available, and the posted data for 1b showed “streaks of heart rate ratings that were unlikely to have occurred naturally.” Schroeder’s own contributions to the paper were also found to have some flaws: Data points had been dropped from her analysis without any explanation in the published text. (Although this practice wasn’t fully out-of-bounds given research standards at the time, the same behavior would today be understood as a form of “p-hacking”—a pernicious source of false-positive results.) But the notice did not say whether the fishy numbers from Study 1b had been fabricated, let alone by whom. Someone other than Brooks may have handled those data before publication, it suggested. “The journal could not investigate this study any further.”

Two days later, Schroeder posted to X a link to her full and final audit of the paper. “It took *hundreds* of hours of work to complete this retraction,” she wrote, in a thread that described the flaws in her own experiments and Studies 1a and 1b. “I am ashamed of helping publish this paper & how long it took to identify its issues,” the thread concluded. “I am not the same scientist I was 10 years ago. I hold myself accountable for correcting any inaccurate prior research findings and for updating my research practices to do better.” Her peers responded by lavishing her with public praise. One colleague called the self-audit “exemplary” and an “act of courage.” A prominent professor at Columbia Business School congratulated Schroeder for being “a cultural heroine, a role model for the rising generation.”

But amid this celebration of her unusual transparency, an important and related story had somehow gone unnoticed. In the course of scouting out the edges of the cheating scandal in her field, Schroeder had uncovered yet another case of seeming science fraud. And this time, she’d blown the whistle on herself.

That stunning revelation, unaccompanied by any posts on social media, had arrived in a muffled update to the Many Co-Authors Project website. Schroeder announced that she’d found “an issue” with one more paper that she’d produced with Gino. This one, “Enacting Rituals to Improve Self-Control,” came out in 2018 in the Journal of Personality and Social Psychology; its author list overlaps substantially with that of the earlier “Don’t Stop Believing” paper (though Brooks was not involved). Like the first, it describes a set of studies that purport to show the power of the ritual effect. Like the first, it includes at least one study for which data appear to have been altered. And like the first, its data anomalies have no apparent link to Gino.

The basic facts are laid out in a document that Schroeder put into an online repository, describing an internal audit that she conducted with the help of the lead author, Allen Ding Tian. (Tian did not respond to requests for comment.) The paper opens with a field experiment on women who were trying to lose weight. Schroeder, then in grad school at the University of Chicago, oversaw the work; participants were recruited at a campus gym.

Half of the women were instructed to perform a ritual before each meal for the next five days: They were to put their food into a pattern on their plate. The other half were not. Then Schroeder used a diet-tracking app to tally all the food that each woman reported eating, and found that the ones in the ritual group took in about 200 fewer calories a day, on average, than the others. But in 2023, when she started digging back into this research, she uncovered some discrepancies. According to her study’s raw materials, nine of the women who reported that they’d done the food-arranging ritual were listed on the data spreadsheet as being in the control group; six others were mislabeled in the opposite direction. When Schroeder fixed these errors for her audit, the ritual effect completely vanished. Now it looked as though the women who’d done the food-arranging had consumed a few more calories, on average, than the women who had not.

Mistakes happen in research; sometimes data get mixed up. These errors, though, appear to be intentional. The women whose data had been swapped fit a suspicious pattern: The ones whose numbers might have undermined the paper’s hypothesis were disproportionately affected. This is not a subtle thing; among the 43 women who reported that they’d done the ritual, the six most prolific eaters all got switched into the control group. Nick Brown and James Heathers, the scientific-integrity researchers, have each tried to figure out the odds that anything like the study’s published result could have been attained if the data had been switched at random. Brown’s analysis pegged the answer at one in 1 million. “Data manipulation makes sense as an explanation,” he told me. “No other explanation is immediately obvious to me.” Heathers said he felt “quite comfortable” in concluding that whatever went wrong with the experiment “was a directed process, not a random process.”
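Brown’s and Heathers’s actual analyses were more involved, but a back-of-the-envelope version of the question is simple to state: if six of the 43 ritual-group labels had been flipped uniformly at random, what is the chance that the flipped six would be exactly the six biggest eaters? (The counts come from the study as described above; the calculation itself is only an illustration, not either researcher’s method.)

```python
from math import comb

n_ritual = 43   # women who reported performing the ritual
n_flipped = 6   # labels switched into the control group

# Number of equally likely ways to choose which 6 of the 43 labels
# get flipped; exactly one of those choices picks out the six most
# prolific eaters.
ways = comb(n_ritual, n_flipped)
print(f"1 in {ways:,}")  # 1 in 6,096,454
```

The result is in the same ballpark as, though not identical to, the one-in-1-million figure Brown’s analysis produced for the broader question of whether anything like the published result could arise from random switching.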

Whether or not the data alterations were intentional, their specific form—flipped conditions for a handful of participants, in a way that favored the hypothesis—matches up with data issues raised by Harvard Business School’s investigation into Gino’s work. Schroeder rejected that comparison when I brought it up, but she was willing to accept some blame. “I couldn’t feel worse about that paper and that study,” she told me. “I’m deeply ashamed of it.”

Still, she said that the source of the error wasn’t her. Her research assistants on the project may have caused the problem; Schroeder wonders if they got confused. She said that two RAs, both undergraduates, had recruited the women at the gym, and that the scene there was chaotic: Sometimes multiple people came up to them at once, and the undergrads may have had to make some changes on the fly, adjusting which participants were being put into which group for the study. Maybe things went wrong from there, Schroeder said. One or both RAs might have gotten ruffled as they tried to paper over inconsistencies in their record-keeping. They both knew what the experiment was meant to show, and how the data ought to look—so it’s possible that they peeked a little at the data and reassigned the numbers in the way that seemed correct. (Schroeder’s audit lays out other possibilities, but describes this one as the most likely.)

Schroeder’s account is certainly plausible, but it’s not a perfect fit with all of the facts. For one thing, the posted data indicate that during most days on which the study ran, the RAs had to deal with only a handful of participants—sometimes just two. How could they have gotten so bewildered?

Any further details seem unlikely to emerge. The paper was formally retracted in the February issue of the journal. Schroeder has chosen not to name the RAs who helped her with the study, and she told me that she hasn’t tried to contact them. “I just didn’t think it was appropriate,” she said. “It doesn’t seem like it would help matters at all.” By her account, neither one is currently in academia, and she did not discover any additional issues when she reviewed their other work. (I reached out to more than a dozen former RAs and lab managers who were thanked in Schroeder’s published papers from around this time. Five responded to my queries; all of them denied having helped with this experiment.) In the end, Schroeder said, she took the data at the assistants’ word. “I did not go in and change labels,” she told me. But she also said repeatedly that she doesn’t think her RAs should take the blame. “The responsibility rests with me, right? And so it was appropriate that I’m the one named in the retraction notice,” she said. Later in our conversation, she summed up her response: “I’ve tried to trace back as best I can what happened, and just be honest.”

Across the many months I spent reporting this story, I’d come to think of Schroeder as a paragon of scientific rigor. She has led a seminar on “Experimental Design and Research Methods” in a business program with a sterling reputation for its research standards. She’d helped set up the Many Co-Authors Project, and then pursued it as aggressively as anyone. (Simonsohn even told me that Schroeder’s look-at-everything approach was a little “overboard.”) I also knew that she was devoted to the dreary but important task of reproducing other people’s published work.

As for the dieting research, Schroeder had owned the awkward optics. “It looks weird,” she told me when we spoke in June. “It’s a weird error, and it looks consistent with changing things in the direction to get a result.” But weirder still was how that error came to light, through a detailed data audit that she’d undertaken of her own accord. Apparently, she’d gone to great effort to call attention to a damning set of facts. That alone could be taken as a sign of her commitment to transparency.

But in the months that followed, I couldn’t shake the feeling that another theory also fit the facts. Schroeder’s leading explanation for the issues in her work—An RA must have bungled the data—sounded distressingly familiar. Francesca Gino had offered up the same defense to Harvard’s investigators. The mere repetition of this story doesn’t mean that it’s invalid: Lab techs and assistants really do mishandle data on occasion, and they may of course engage in science fraud. But still.

As for Schroeder’s all-out focus on integrity, and her public efforts to police the scientific record, I came to understand that most of these had been adopted, all at once, in mid-2023, shortly after the Gino scandal broke. (The version of Schroeder’s résumé that was available on her webpage in the spring of 2023 does not describe any replication projects whatsoever.) That makes sense if the accusations changed the way she thought about her field—and she did describe them to me as “a wake-up call.” But here’s another explanation: Maybe Schroeder saw the Gino scandal as a warning that the data sleuths were on the march. Perhaps she figured that her own work might end up being scrutinized, and then, having gamed this out, she decided to be a data sleuth herself. She’d publicly commit to reexamining her colleagues’ work, doing audits of her own, and asking for corrections. This would be her play for amnesty during a crisis.

I spoke with Schroeder for the last time on the day before Halloween. She was notably composed when I confronted her with the possibility that she’d engaged in data-tampering herself. She repeated what she’d told me months before, that she definitely did not go in and change the numbers in her study. And she rejected the idea that her self-audits had been strategic, that she’d used them to divert attention from her own wrongdoing. “Honestly, it’s disturbing to hear you even lay it out,” she said. “Because I think if you were to look at my body of work and try to replicate it, I think my hit rate would be good.” She continued: “So to imply that I’ve actually been, I don’t know, doing a lot of fraudulent stuff myself for a long time, and this was a moment to come clean with it? I just don’t think the evidence bears that out.”

That wasn’t really what I’d meant to imply. The story I had in mind was more mundane—and in a sense more tragic. I went through it: Perhaps she’d fudged the results for a study just once or twice early in her career, and never again. Perhaps she’d been committed, ever since, to proper scientific methods. And perhaps she really did intend to fix some problems in her field.

Schroeder allowed that she’d been susceptible to certain research practices—excluding data, for example—that are now considered improper. So were many of her colleagues. In that sense, she’d been guilty of letting her judgment be distorted by the pressure to succeed. But I understood what she was saying: This was not the same as fraud.

Throughout our conversations, Schroeder had avoided stating outright that anyone in particular had committed fraud. But not all of her colleagues had been so cautious. Just a few days earlier, I’d received an unexpected message from Maurice Schweitzer, the senior Wharton business-school professor who oversaw Alison Wood Brooks’s “Don’t Stop Believing” research. Up to this point, he had not responded to my request for an interview, and I figured he’d chosen not to comment for this story. But he finally responded to a list of written questions. It was important for me to know, his email said, that Schroeder had “been involved in data tampering.” He included a link to the retraction notice for her paper on rituals and eating. When I asked Schweitzer to elaborate, he did not respond. (Schweitzer’s most recent academic work is focused on the damaging effects of gossip; one of his papers from 2024 is titled “The Interpersonal Costs of Revealing Others’ Secrets.”)

I laid this out for Schroeder on the phone. “Wow,” she said. “That’s unfortunate that he would say that.” She went silent for a long time. “Yeah, I’m sad he’s saying that.”

Another long silence followed. “I think that the narrative that you laid out, Dan, is going to have to be a possibility,” she said. “I don’t think there’s a way I can refute it, but I know what the truth is, and I think I did the right thing, with trying to clean the literature as much as I could.”

This is all too often where these stories end: A researcher will say that whatever really happened must forever be obscure. Dan Ariely told Business Insider in February 2024: “I’ve spent a big part of the last two years trying to find out what happened. I haven’t been able to … I decided I have to move on with my life.” Schweitzer told me that the most relevant files for the “Don’t Stop Believing” paper are “long gone,” and that the chain of custody for its data simply can’t be tracked. (The Wharton School agreed, telling me that it “does not possess the requested data” for Study 1b, “as it falls outside its current data retention period.”) And now Schroeder had landed on a similar position.

It’s uncomfortable for a scientist to claim that the truth might be unknowable, just as it would be for a journalist, or any other truth-seeker by vocation. I daresay the facts regarding all of these cases may yet be amenable to further inquiry. The raw data from Study 1b may still exist, somewhere; if so, one might compare them with the posted spreadsheet to confirm that certain numbers had been altered. And Schroeder says she has the names of the RAs who worked on her dieting experiment; in theory, she could ask those people for their recollections of what happened. If figures aren’t checked, or questions aren’t asked, it’s by choice.

What feels out of reach is not so much the truth of any set of allegations, but their consequences. Gino has been placed on administrative leave, but in many other instances of suspected fraud, nothing happens. Both Brooks and Schroeder appear to be untouched. “The problem is that journal editors and institutions can be more concerned with their own prestige and reputation than finding out the truth,” Dennis Tourish, at the University of Sussex Business School, told me. “It can be easier to hope that this all just goes away and blows over and that somebody else will deal with it.”


Some degree of disillusionment was common among the academics I spoke with for this story. The early-career researcher in business academia told me that he has an “unhealthy hobby” of finding manipulated data. But now, he said, he’s giving up the fight. “At least for the time being, I’m done,” he told me. “Feeling like Sisyphus isn’t the most fulfilling experience.” A management professor who has followed all of these cases very closely gave this assessment: “I would say that distrust characterizes many people in the field—­it’s all very depressing and demotivating.”

It’s possible that no one is more depressed and demotivated, at this point, than Juliana Schroeder. “To be honest with you, I’ve had some very low moments where I’m like, ‘Well, maybe this is not the right field for me, and I shouldn’t be in it,’ ” she said. “And to even have any errors in any of my papers is incredibly embarrassing, let alone one that looks like data-tampering.”

I asked her if there was anything more she wanted to say.

“I guess I just want to advocate for empathy and transparency—­maybe even in that order. Scientists are imperfect people, and we need to do better, and we can do better.” Even the Many Co-Authors Project, she said, has been a huge missed opportunity. “It was sort of like a moment where everyone could have done self-reflection. Everyone could have looked at their papers and done the exercise I did. And people didn’t.”

Maybe the situation in her field would eventually improve, she said. “The optimistic point is, in the long arc of things, we’ll self-correct, even if we have no incentive to retract or take responsibility.”

“Do you believe that?” I asked.

“On my optimistic days, I believe it.”

“Is today an optimistic day?”

“Not really.”

This article appears in the January 2025 print edition with the headline “The Fraudulent Science of Success.”

SNL Isn’t Bothering With Civility Anymore

The Atlantic

www.theatlantic.com › culture › archive › 2024 › 11 › saturday-night-live-bill-burr-post-election › 680614

Voters gave America’s rudest man permission to return to the White House; what else have they given permission to? Michael Che has one idea. “So y’all gonna let a man with 34 felonies lead the free world and be the president of the United States?” he asked during last night’s “Weekend Update.” “That’s it. I’m listening to R. Kelly again.”

The joke captured a feeling that’s been circulating in America ever since last Tuesday’s election: silver-lining nihilism, a relief that we can stop trying to be good. Kamala Harris lost probably because of the economy, but the Republican campaign did effectively leverage widespread exhaustion with identity politics, inclusive speech, and perhaps even civility itself. Some of Trump’s supporters have celebrated by crowing vileness such as “Your body, my choice.” Some of Harris’s fans have openly denigrated the minorities who voted for Trump.

Eesh. But if this is, as my colleague Thomas Chatterton Williams posted on X, the “post-woke era,” then perhaps at least comedy—the entertainment form that’s grouched the most about progressive piety—will be funnier now. Maybe someone will channel the spirit of Joan Rivers in her prime, turning nastiness into a high art. But judging from last night’s SNL, we will not be so lucky.

The episode’s host, the comedian Bill Burr, seemed well positioned to interpret Trump’s win. With his Boston accent and stubbled beard, he has long drawn upon his white-working-class bona fides to critique both sides of the partisan divide. When he hosted SNL shortly before the 2020 presidential election, he mocked wokeness in a somewhat sneaky way: By accusing white women and gay people of hijacking the posture of oppression from people of color, he in effect co-opted the logic of intersectionality to call out its own excesses. Whether you were offended or amused by his monologue, it at least had a point.

Last night, however, Burr just seemed ornery. He opened with a promise to avoid talking about the election, and then said he’d just gotten over the flu. When you’re sick, he observed, you lie awake “just going through this Rolodex of people that coughed on you. Sniffled near ya. Walked by an Asian or something.” Smattered chuckles. “You try to fight it. You’re like, ‘They say on the internet that’s where all the disease comes from.’” Almost no laughs.

Eventually he got to the election. “All right, ladies, you’re oh-and-two against this guy,” he said, referring to Harris’s and Hillary Clinton’s losses to Trump. “Ladies, enough with the pantsuit, okay? It’s not working. Stop trying to have respect for yourselves. You don’t win the office, like, on policy, you know? You gotta whore it up a little.” He added, “I know a lot of ugly women—feminists, I mean—don’t want to hear this message.”

Maybe in those oh-so-woke times a week ago, I’d feel compelled to spell out how repeating stereotypes about Asian people and reducing women to their looks effectively makes life harder for Asian people and women. Other pundits would have then defended Burr on the grounds that he’s mocking his own racism and America’s sexism. Let’s skip all that and agree that Burr’s attempt to push the line of acceptability led him to bomb in a way that was horrible to watch. He created the same sucking feeling that Tony Hinchcliffe did when he made an arena of MAGAs groan at the idea that Puerto Rico is floating garbage. There’s no wit, no passion, no aha to this kind of comedy. It’s just guys flailing about for a reaction.

To be fair, Burr might have just been tired. This election cycle “took forever,” even though most voters made up their minds long ago, he complained. Their choices were two “polar opposite” candidates: “It’s like, ‘Let’s see. What does the orange bigot have to say? How about the real-estate agent that speaks through her nose?’” (“Orange bigot”—is this The View in 2015?)

The rest of the episode was a bit better than the monologue. Burr’s presence pushed the writers to focus on sketches about masculinity, an apt subject given the role that male voters played in the election. A segment in which young guys tried to get their dads to open up about their feelings by talking about sports and cars was oddly touching. A bit featuring a self-pitying bro at group therapy was amusingly deranged. In the edgiest sketch, Burr played a firefighter with a fetish involving children’s cartoons, leading SNL to air an image of the dad from Bluey in a ball gag. Was this post-woke Hollywood vulgarity or what comedy’s always been—the search for surprise?

The truth that SNL and the culture at large must now wrestle with is this: Trump may be back in office after four years away, but the world only turns forward. Wokeness has not been some fad; it hasn’t even been a movement that can be defeated. It’s been, as the term itself implies, an awakening—reshaping how people think about the relationship between the words they use and the society they live in. The case it made was so persuasive that it likely altered the English language forever. It also spread shame and overreached in a way that created backlash—but that backlash will cause cultural changes that build off what we just lived through, not reverse it entirely. The only way to fully get back to a pre-woke time would be through actual Orwellian fascism.

SNL isn’t counting that possibility out. Last night opened with the cast members speaking to the camera, telling Trump that they’d supported him all along, that they shouldn’t be on an enemies list, and that they’ll help him hunt down any colleagues who voted for Harris. Their tone was light but the satire was dark, highlighting the way that leaders—in politics, media, and business—who were once critical of Trump have taken to flattering him out of fear of retribution. The sketch anticipated a future that would make recent speech wars look quaint. But for now, as for long before, we can say what we want to say, not only what we think we should say.

The Exhibit That Will Change How You See Impressionism

The Atlantic

www.theatlantic.com › magazine › archive › 2024 › 12 › national-gallery-exhibit-paris-1874-impressionist-movement › 680401


For museums and their public, Impressionism is the Goldilocks movement: not too old or too new, not too challenging or too sappy; just right. Renaissance art may baffle with arcane religious symbolism, contemporary art may baffle on purpose, but put people in a gallery with Claude Monet, Edgar Degas, and Camille Pissarro, and explanatory wall texts feel superfluous. Eyes roam contentedly over canvases suffused with light, vibrant with gesture, and alive with affable people doing pleasant things. What’s not to love?

Famously, of course, Impressionism was not greeted with love at the outset. In 1874, the first Impressionist exhibition was derided in the press as a “vexatious mystification for the public, or the result of mental derangement.” A reviewer called Paul Cézanne “a sort of madman, painting in a state of delirium tremens,” while Berthe Morisot was privately advised by her former teacher to “go to the Louvre twice a week, stand before Correggio for three hours, and ask his forgiveness.” The very term Impressionism was born as a diss, a mocking allusion to Monet’s shaggy, atmospheric painting of the Le Havre waterfront, Impression, Sunrise (1872). Few people saw affability: In 1874, the term commonly applied to Monet and his ilk was “intransigent.”

Impressionism’s rom-com arc from spirited rejection to public rapture informs our fondness for the pictures (plucky little underdogs), and has also provided a lasting model for avant-gardism as a mechanism of cultural change. We now take it for granted that young mavericks should team up to foment new ways of seeing that offend the establishment before being vindicated by soaring auction prices and long museum queues. For most of history, however, that wasn’t the way things worked. Thus the 1874 exhibition has acquired legendary status as the origin point of self-consciously modern art.

Its 150th anniversary this year has been celebrated with numerous exhibitions, most notably “Paris 1874: The Impressionist Moment,” organized by the Musée d’Orsay, in Paris, and the National Gallery of Art, in Washington, D.C. (where it is on view until January 19, 2025). Given the masterpieces that these museums could choose from, this might have been an easygoing lovefest, but the curators—Sylvie Patry and Anne Robbins in Paris, and Mary Morton and Kimberly A. Jones in Washington—have delivered something far more intriguing and valuable: a chance to see what these artists were being intransigent about, and to survey the unexpected turns that art and politics may take in a polarized, traumatized time and place.

Nineteenth-century French history was messy—all those republics, empires, and monarchies tumbling one after the other—but it contains a crucial backstory to Impressionism, often overlooked. In the 1860s, France was the preeminent military and cultural power on the continent. Paris was feted as the most sophisticated, most modern, most beautiful of cities, and the Paris Salon was the most important art exhibition on the planet. Then, in 1870, some fatuous chest bumping between Emperor Napoleon III (nephew of the original) and Otto von Bismarck set off an unimagined catastrophe: By the spring of 1871, mighty France had been vanquished by upstart Prussia, its emperor deposed, its sublime capital bombed and besieged for months. When France sued for peace, Paris rebelled and established its own new socialist-anarchist government, the Commune. In May 1871, the French army moved in to crush the Commune, and the ensuing week of urban warfare killed tens of thousands. In the nine months between the start of the siege in September and the destruction of the Commune in May, perhaps as many as 90,000 Parisians died of starvation and violence.

These events and their impact on French painters are detailed in the art critic Sebastian Smee’s absorbing new book, Paris in Ruins: Love, War, and the Birth of Impressionism. His main focus is on the star-crossed not-quite-lovers Morisot and Édouard Manet, but nobody in this tale escaped unscathed. Morisot was in the city through the bombardment, the famine, and the street fighting; Manet and Degas volunteered for the National Guard; Pierre-Auguste Renoir served in the cavalry. Some of their most promising peers were killed. Everyone saw ghastly things.

[From the April 1892 issue: Some notes on French Impressionism]

And yet nothing about Degas’ ballerinas practicing their tendus or Renoir’s frothy scene of sophisticates out on the town suggests recent experience with terror, starvation, or climbing over dead bodies in the street, though they were painted when those events were still fresh. The Boulevard des Capucines, where the first Impressionist show took place, had been the site of “atrocious violence” in 1871, Smee tells us, but in 1874, Monet’s painting of the street is limpid with light and bustling with top hats and hansom cabs. If most fans of Impressionism remain unaware of its intimacy with the horrors of what Victor Hugo dubbed “l’année terrible,” it’s because the Impressionists did not picture them.

Like Sir Arthur Conan Doyle’s unbarking dog, this suggests an absence in search of a story, and indeed, “Paris 1874” ultimately leaves one with a sense of why they chose to turn away, and how that choice helped set a new course for art. The standard version of Impressionism—the one most people will come through the door with—has, however, always emphasized a different conflict: the David-versus-Goliath contest between the young Impressionists and the illustrious Salon.

With more than 3,000 works displayed cheek by jowl, the 1874 Salon was nearly 20 times the size of the first Impressionist show, and attracted an audience of about half a million—aristocrats, members of the bourgeoisie, workers with families in tow. (Of the latter, one journalist sniffed: “If he could, he would even bring his dog or his cat.”) Presided over by the nation’s Académie des Beaux-Arts, an institution whose pedigree went back to Louis XIV, the Salon was allied with the state and had a vested interest in preserving the status quo. The Impressionists, wanting to preside over themselves, had founded their own organization—the Société Anonyme des Artistes Peintres, Sculpteurs, Graveurs, etc.—with a charter they adapted from the bakers’ union in Pissarro’s hometown.

“Paris 1874” is built from these two shows. With a handful of exceptions (mainly documentary photographs of the shattered city), the art on the walls in Washington now was on the walls in Paris then. (Identifying the relevant works to select from was no small achievement, given the 19th-century catalogs’ lack of images or measurements, and their penchant for unhelpful titles like Portrait.) Labels indicate which exhibition each artwork appeared in, beginning with the Salon’s medal-of-honor winner, Jean-Léon Gérôme’s L’Éminence Grise (1873), alongside Monet’s celebrated and pilloried Impression, Sunrise.

L’Éminence Grise (1873), Jean-Léon Gérôme (© 2024 Museum of Fine Arts, Boston)

The two paintings might be mascots for the opposing teams. Impeccably executed, the Gérôme is an umbrous scene in which Cardinal Richelieu’s right-hand monk, François Leclerc du Tremblay, descends a staircase as the high and mighty doff their caps. The fall of light is dramatic and convincing, the dispatch of color deft, the actors choreographed and costumed to carry you through the action. Every satin ribbon, every curl of Baroque metalwork seems palpable.

Beside it, the Monet looks loose and a bit jangly. The muted gray harbor flits between solidity and dissolution. The orange blob of a sun and its shredded reflection are called into being with an almost militant economy of means. And somehow, the painting glows as if light were passing through the canvas to land at our feet. The Gérôme is a perfect portal into another world. But the Monet is a world. More than just displaying different styles, the pictures embody divergent notions of what art could and should do.

Impression, Sunrise (1872), Claude Monet (© Musée Marmottan Monet, Paris / Studio Christian Baraja SLB)

For 200 years, the Académie had defined and defended visual art—both its manual skill set (perspective, anatomy, composition) and its intellectual status as a branch of rhetoric, conveying moral ideals and building better citizens. (L’Éminence Grise is, among other things, an engaging lesson in French history: When Cardinal Richelieu was the flashy power behind the throne of Louis XIII, the somber Capuchin friar was the “gray eminence” behind the cardinal.) Such content is what made “fine art” fine and separated painters and sculptors from decorators and cabinetmakers.

This value system had stylistic consequences. Narrative clarity demanded visual clarity. Figuration ranked higher than landscapes and still lifes in part because human figures instruct more lucidly than trees and grapes. Space was theatrical and coherent, bodies idealized, actions easily identified. Surfaces were smooth, brushstrokes self-effacing. This is still what we mean by “academic art.”

Most visitors confronting the opening wall at the National Gallery will know which painting they’re supposed to like—and it’s not the one with the fawning courtiers. Impressionism is universally admired, while academic art is sometimes treated as the butt of a joke. Admittedly, Jean Jules Antoine Lecomte Du Nouÿ’s huge, body-waxed Eros with surly cupids is easier to laugh at than to love, but most of the academic art on view strives, like the Gérôme, for gripping plausibility. You can see the assiduous archaeological research that went into the Egyptian bric-a-brac pictured in Lawrence Alma-Tadema’s pietà The Death of the Pharaoh’s First-Born Son (1872), or the armor of the sneaky Greeks descending from their giant gift horse in Henri-Paul Motte’s starlit scene of Troy.

[From the July 1900 issue: Impressionism and appreciation]

Today these pictures look like film stills. It’s easy to imagine Errol Flynn dashing up Gérôme’s stairs, or Timothée Chalamet brooding in the Alma-Tadema gloom. Perhaps the reason such paintings no longer move audiences the way they once did is that we have actual movies to provide that immersive storytelling kick. What we want from painting is something different—something personal, handmade, “authentic” (even when we aren’t quite clear what that means).

It’s a mistake, though, to assume that this impulse was new with Impressionism. Beginning in the 1840s, concurrent with the literary “Realism” of Stendhal and Honoré de Balzac, Realist painters turned away from the studio confections of the Académie and began schlepping their easels out into the weather to paint en plein air—peasants toiling in fields, or fields just being fields. Visible brushstrokes and rough finish were the price (or certificate of authenticity) of a real-time response to a real world. These were aesthetic choices, and in turn they suggested political viewpoints. In place of explicit narratives valorizing order, sacrifice, and loyalty, Realist art carried implicit arguments for social equality (“These plain folk are worthy of being seen”) and individual liberty (“My personal experience counts”).

The Salon was the Académie’s enforcement mechanism: In the absence of anything like today’s gallery system, it represented the only practical path for a French artist to establish a reputation. Yet for decades it flip-flopped—sometimes rejecting Realist art, sometimes accepting it and even rewarding it with prizes. Manet, considered a Realist because of his contemporary subjects and ambiguous messaging, had a famously volatile history with the Salon. In 1874, Degas explained the rationale behind the Société Anonyme in these terms: “The Realist movement no longer has to fight with others. It is, it exists, it needs to show itself on its own.”

But nothing in 1874 was quite that simple. A room at the National Gallery is given over to art about the Franco-Prussian War, both academic and Realist. All of it appeared in the Salon. The contrast is instructive: The elegant bronze by Antonin Mercié, conceived (prematurely) as a monument to victory, was altered in the face of actual events and titled Glory to the Vanquished. Although the naked soldier in the clasp of Victory has breathed his last, arms and wings still zoom ecstatically skyward and draperies flutter. He is beautiful even in death. The corpses laid out on the dirt in Auguste Lançon’s Dead in Line! (1873), dressed in the uniforms they were wearing when they fell, are neither naked nor beautiful. Their skin is gray, and their fists are clenched in cadaveric spasm. In the background, troops march by, officers chat, and a village burns. There is no glory, just the banality of slaughter. Unlike Mercié, Lançon had been at the front.

Dead in Line! (1873), Auguste Lançon (© Département de la Moselle, MdG1870&A, Rebourg)

Here also is Manet’s quiet etching of women queuing at a butcher shop in Paris as food supplies dwindled. Black lines, swift and short, capture a sea of shining umbrellas above a snaking mass of black dresses, at the back of which you can just make out the faint lightning-bolt outline of an upthrust bayonet. It’s a picture with no argument, just a set of observations: patience, desperation, rain.

In “Paris 1874,” a model of curatorial discretion, the art is allowed to speak for itself. Visitors are encouraged to look and guess whether a given work appeared in the Salon or the Société before checking the answer on the label. One quickly finds that applying the standard checklist of Impressionist attributes—“urban life,” “French countryside,” “leisure,” “dappled brushwork”—is remarkably unhelpful. The dog-walking ladies in Giuseppe De Nittis’s Avenue du Bois de Boulogne (1874, Salon) sport the same complicated hats, fashionable bustles, and acres of ruched fabric as Renoir’s The Parisian Girl (1874, Société). Charles-François Daubigny’s The Fields in June (1874, Salon) and Pissarro’s June Morning in Pontoise (1873, Société) are both sunny summer landscapes laid out with on-the-fly brushwork. Both sides did flowers.

As for the celebration of leisure, the Salon seems to have been full of moony girls lounging around and people entertaining fluffy white lapdogs, while the artists we now call Impressionists were paying much more attention to the working world. The glinting light of Pissarro’s Hoarfrost (1873, Société) falls on an old man trudging down a road with a large bundle of wood on his back. The backlit fug of Impression, Sunrise was probably smog—the admirably informative exhibition catalog alerts readers to Stendhal’s description of the same vista, “permeated by the sooty brown smoke of the steamboats.” Pictured at labor, not at play, Degas’ dancers stand around splayfooted, bored and tired, adjusting their shoe ribbons, scratching an itch. Even the bourgeois family outing in Degas’ transcendently odd At the Races in the Countryside (1869, Société) is focused on work: Together in a carriage, husband, wife, and dog are all transfixed by the baby’s wet nurse, doing her job. As for the scenes of mothers and children, it is possible that later observers have overestimated the leisure involved.

Hoarfrost (1873), Camille Pissarro (© Musée d’Orsay, Dist. RMN-Grand Palais / Patrice Schmidt)

Jules-Émile Saintin’s Washerwoman (1874, Salon) is assertively a picture of urban working life, but in an entirely academic mode. The scene is “modern” in the same way that Alma-Tadema’s pharaoh was ancient, time-stamped by an array of meticulously rendered accessories. But the Alma-Tadema at least had the gravitas of tragedy. Saintin is content with smarm: He arranges his working girl awkwardly in the street, grinning coquettishly at the viewer while twirling a pole of white linens and hoisting her skirt to give a peek of ankle—the eternal trope of the trollop.

[Read: Why absolutely everyone hates Renoir]

Then there is art so wonderful and so peculiarly modern, it seems unfair that it went to the Salon. In contrast to Saintin’s washerwoman, Manet’s The Railway (1873) is reticent to the point of truculence. Against the backdrop of an iron railing, a little girl stands with her back to us, watching the steam of a train below, while next to her, a poker-faced young woman glances up from the book and sleeping puppy in her lap to meet our gaze. A bunch of grapes sits on the stone footing of the fence. The emotional tenor is ambiguous, the relationships between woman, child, dog, grapes, and train unclear. Everything is perfectly still and completely unsettled. Why was this at the Salon? Manet believed that appearing there was a necessary career move and declined to join in the Société event.

The Railway (1873), Édouard Manet (Courtesy of the National Gallery of Art)

He had a point. The Société chose, in its egalitarian zeal, to have no jury and to give space to anyone who paid the modest membership fee. The exhibit ended up even more of a grab bag than the Salon, so alongside some of the most adventurous and lasting art of the 1870s, you got Antoine Ferdinand Attendu’s conventional still-life pile of dead birds, and Auguste Louis Marie Ottin’s marble head of Jean-Auguste-Dominique Ingres, the great master of hard-edged Neoclassicism, made more than 30 years earlier.

One function of “Paris 1874” is to debunk the tale of the little exhibition that could. The “first Impressionist exhibition,” it turns out, wasn’t all that Impressionist (only seven of its 31 participants are commonly categorized as such). Many artists took part in both shows simultaneously, prioritizing career opportunities over stylistic allegiance. (Not only was organized avant-gardism not a thing before 1874; it appears not to have been a thing in 1874.) As for those famously annoyed reviews, the catalog explains that they came from a handful of critics who specialized in being annoyed, and that most of the modest attention the Société show received was neutral or even friendly. Impression, Sunrise was “barely noticed.” Just four works sold. Goliath wandered off without a scratch, and David went broke.

But debunking is a short-lived thrill. The real rewards of “Paris 1874” lie in the rising awareness one gets walking through the galleries of a new signal in the noise, a set of affinities beyond either the certainties of the Académie or the earthy truths of Realism, and even a hint of how the unpictured traumas of 1870–71 left their mark. We know about the highlights to come (Monet’s water lilies at Giverny are hanging just down the hall), but there is something much more riveting about the moment before everything shifts into focus. By contrast, later Impressionist shows (there were eight in all) knew what they were about. The standard checklist works there. In 1874, it wasn’t yet clear, but you can begin to see a kind of opening up, a sideways slip into letting light be light and paint be paint.

As the Salon-tagged items demonstrate, the battle over subject matter had abated by 1874. Myths and modernity were both admissible. The shift that followed had less to do with what was being painted than how. The most frequent complaint about Impressionist art concerned style—it was too “sketchy.” The preference for loose brushwork, the disregard for clean edges and smooth gradients, was seen as slapdash and lazy, as if the artists were handing in early drafts in place of a finished thesis. More than one painting in the Société show was compared to “palette scrapings.”

Now we like the slap and the dash. We tend to see those independent-minded brushstrokes as evidence not of diminished attention, but of attention homing in on a new target—a fresh fascination with the transitory fall of light, at the expense, perhaps, of the stable object it falls on. Like a shape seen in the distance, sketchiness has the power to suggest multiple realities at once. Monet’s dark-gray squiggle in the Le Havre water might be a rock or a boat; certainly it is a squiggle of paint. Emphasizing the physicality of the image—the gloppiness of the paint, the visible canvas below—calls attention to the instability of the illusion. Step back and it’s a harbor; step forward and it’s bits of colorful dried goo.

At the Races in the Countryside (1869), Edgar Degas (© 2024 Museum of Fine Arts, Boston)

Sketchiness wasn’t the only means of undermining pictorial certainty. Degas never went in for fluttering brushstrokes or elusive edges, but his Ballet Rehearsal (1874) is scattered with pentimenti—the ghosts of a former foot, the trace of an altered elbow, the shadow of a male observer removed from the scene. He had sketched the dancers from life, but then used and reused those drawings for years, reconfiguring them like paper dolls, exactly the way an academic artist might go about peopling a crowd scene. The all-important difference is that Degas shows how the trick is played. In At the Races in the Countryside, the carriage and family are placed so far down and to the right that the nose and shoulder of one of the horses fall off the canvas, as if the painting were a snapshot whose taker was jostled just as the shutter clicked. It’s a way of calling attention to the bucket of artifice and conventions on which painterly illusion depends. This is art being disarmingly honest about being dishonest.

What this fledgling Impressionism puts on offer, distinct from the works around it, is a kind of gentle disruption or incompleteness—a willingness to leave things half-said, an admission of ambiguity, not as a problem to be solved but as a truth to be treasured. Nowhere is this more compelling than in Morisot’s The Cradle (1872). A portrait of the artist’s sister Edma watching her sleeping daughter, it takes a soft subject—mother and child, linen and lace—and girds it with a tensile framework of planes, taut lines, and swooping catenaries. Look beyond the “femininity” and you can see the first steps of the dance with abstraction that would dominate 20th-century painting from Henri Matisse to Richard Diebenkorn. At least as astonishing, though, is the neutrality and distance of the expression on Edma’s face. It might be exhaustion, or reverie, or (because before her marriage, she too had been a gifted professional painter) dispassionate study. Think what you will.

The Cradle is not harrowing or angst-ridden. It doesn’t picture unpleasantness. But when Smee writes of Morisot’s pursuit of “a new language of lightness and evanescence—a language based in close observation, devoid of rhetoric or hysteria,” he’s talking about a response to 1870–71. Both the right-wing empire and the left-wing Commune had ended in pointless, bloody, self-inflicted tragedies. The survivors, at least some of them, had learned to mistrust big ideas. An art about nothing might seem a strange defense, but the act of paying attention to what is rather than what should be—to the particular and ephemeral rather than the abstract and eternal—could be a bulwark against the seductions of ideology.

Resistance, of necessity, adapts to circumstance. In China during the Cultural Revolution, when message-laden art was an instrument of the state, artists belonging to the No Name Group took to clandestine plein air painting in the French mode precisely because it “supported no revolutionary goals—it was hand-made, unique, intimate and personal,” the scholar and artist Chang Yuchen has written. “In this context nature was less a retreat than a chosen battlefield.”

I used to think that Impressionism’s just-rightness was simply a function of time’s passage—that its inventions had seeped so deeply into our culture that they felt comfy. But although familiarity might explain our ease, it doesn’t fully explain Impressionism’s continued hold: the sense that beyond being nice to look at, it still has something to say. The more time I spent in “Paris 1874,” the more I cooled on the soft-edged moniker “impressionist” and warmed to the bristlier “intransigent.” It was a term often applied to unrepentant Communards, but the most intransigent thing of all might just be refusing to tell people what to think.

The contemporary art world, like the world at large, has reentered a period of high moral righteousness. Major institutions and scrappy start-ups share the conviction that the job (or at least a job) of art is to instruct the public in values. Educators, publicists, and artists work hard to ensure that nobody gets left behind and nobody misses the point. But what if leaving the point unfixed is the point?

Whether all of this would have developed in the same way without the violence and disillusionment of the Franco-Prussian War and the Commune is impossible to know. But there are worse lessons to derive from trauma than these: Take pleasure in your senses, question authority, look around you. Look again.

This article appears in the December 2024 print edition with the headline “The Dark Origins of Impressionism.”

The Animal-Cruelty Election

The Atlantic

www.theatlantic.com › ideas › archive › 2024 › 11 › animal-abuse-stories-election-season › 680457

Why has this election season featured so many stories about animal cruelty? The 2024 campaign has contained many remarkable moments—the Democrats’ sudden switch from Joe Biden to Kamala Harris; the two assassination attempts on Donald Trump; the emergence of Elon Musk as the MAGA minister for propaganda; the grimly racist “America First” rally at Madison Square Garden. But the bizarre run of stories about animal abuse has been one of the least discussed.

In late October, the National Rifle Association was supposed to hold a “Defend the 2nd” event with a keynote address by Trump, but it was canceled at the last minute, because of what the NRA described as “campaign scheduling changes.” Here’s another possible reason: Earlier in October, the NRA’s new chief executive, Doug Hamlin, was outed as an accessory to cat murder.

In 1980, according to contemporary news accounts unearthed by The Guardian, Hamlin and four buddies at the University of Michigan pleaded no contest to animal cruelty following the death of their fraternity’s cat, BK. The cat’s paws had been cut off before it was set on fire and strung up, allegedly for not using the litter box. “I took responsibility for this regrettable incident as chapter president although I wasn’t directly involved,” Hamlin wrote in a statement to media outlets after the Guardian report appeared.

In April, Kristi Noem, South Dakota’s Republican governor, scuttled her chances of becoming Trump’s running mate when her memoir revealed that two decades ago, she shot her wirehaired pointer, Cricket, in a gravel pit after the puppy had attacked some chickens and then bit her. (“I hated that dog,” Noem wrote, adding that she later killed an unruly goat in the same spot.) More recently, during his only debate with Harris, Trump painted immigrants as murderers of American cats and dogs, repeating unsubstantiated internet rumors that Haitians in Springfield, Ohio, were eating “the pets of the people that live there.”

[Read: The link between animal abuse and murder]

American political figures have long showcased their pets to humanize themselves—remember Barack Obama’s Portuguese water dogs, Bo and Sunny, and Socks, Bill Clinton’s cat? But the relationship between animals and humans keeps growing in salience as our lifestyles change. Domestic animals have gone from ratcatchers, guards, and hunting companions to pampered lap dogs that get dressed up as pumpkins on Halloween. Half of American pet owners say that their animals are as much part of the family as any human, and many of us mainline cute videos of cats and dogs for hours every week. These shifting attitudes have made accusations of animal abuse a potent attack on political adversaries—and social media allows such claims to be amplified even when they are embellished or made up entirely.

At the same time, we make arbitrary distinctions between species on emotional grounds, treating some as friends, some as food, and some as sporting targets. Three-quarters of Americans support hunting and fishing, and the Democratic nominee for vice president, Tim Walz, was so keen to burnish his rural credentials that he took part in a pheasant shoot on the campaign trail. Similarly, only 3 percent of Americans are vegetarian, and 1 percent are vegan, but killing a pet—a member of the family—violates a deep taboo.

Noem, who seemed to view Cricket purely as a working dog, was clearly caught off guard by the reaction to her memoir. “The governor that killed the family pet was the one thing that united the extreme right and the extreme left,” Hal Herzog, a Western Carolina University psychology professor who studies human attitudes toward animals, told me. “There was this moral outrage. She was just oblivious.”

Herzog, the author of Some We Love, Some We Hate, Some We Eat: Why It’s So Hard to Think Straight About Animals, has been interested in how people think about animal cruelty since he researched illegal cockfighting rings for his doctorate several decades ago. He told me that the people who ran the fights, who made money by inflicting great pain on the roosters involved, “loved dogs and had families. But they had this one little quirk.” Politicians can trip over these categories—our deep-down feeling that some animals can be killed or hurt, and others cannot—without realizing it until it’s too late.

I had called Herzog to ask what he made of someone like the NRA’s Hamlin—a prominent man who was once involved in the torture of an animal. Should a history of animal cruelty or neglect—or just plain weirdness—be disqualifying for a politician, a corporate leader, or an activist? After the fraternity story came out, Hamlin maintained in his media statement that he had never done anything similar since. “Since that time I served my country, raised a family, volunteered in my community, started a business, worked with Gold Star families, and raised millions of dollars for charity,” he declared. “I’ve endeavored to live my life in a manner beyond reproach.” Could that be true—could someone be involved in such a sadistic act without it being evidence of wider moral depravity?

“What strikes me about animal cruelty is that most people that are cruel to animals are not sadists or sociopaths; they’re everyday people,” Herzog told me. A review of the literature showed that a third of violent offenders had a history of animal abuse—but so did a third of the members of the control group, he said. Then Herzog blew my mind. “To me, the greatest paradox of all is Nazi animal protection.”

I’m sorry?

“The Nazis passed the world’s most progressive animal-rights legislation,” he continued, unfazed. The German regime banned hunting with dogs, the production of foie gras, and docking dogs’ tails without anesthetic. Heinrich Himmler, the head of the SS, “wrote that he would put in a prison camp anyone who was cruel to an animal.” When the Nazis decreed that Jews could no longer own pets, the regime ensured that the animals were slaughtered humanely. It sent their owners to concentration camps.

[Read: A single male cat’s reign of terror]

The Nazis dehumanized their enemies and humanized their animals, but Herzog thinks that the reverse is more common: Many people who are good to other humans are often cruel to animals. And even those who claim to love animals are nonetheless capable of causing them pain. Circus trainers who whip their charges might dote on their pets. People who deliberately breed dogs with painfully flat faces to win competitions insist that they adore their teeny asthmatic fur babies. “These sorts of paradoxes are so common,” Herzog said.

The lines separating cruelty from the acceptable handling of animals have a way of shifting. I’m old enough to remember the 2012 election cycle, when Mitt Romney was reviled for having driven his station wagon with a kennel strapped to the top containing the family dog, Seamus. Midway through the 12-hour drive from Boston to Ontario, the dog had an attack of diarrhea that obscured the rear windshield. Like Noem, Romney was blindsided by the scandal: Animal activists described his actions as cruelty, and a Facebook group called Dogs Against Romney attracted 38,000 fans. By the standards of a dozen years ago, Seamusgate was a big story, but it’s mild in comparison with this year’s headlines. When Romney was asked about Noem’s memoir earlier this year, he said the two incidents were not comparable: “I didn’t eat my dog. I didn’t shoot my dog. I loved my dog, and my dog loved me.”

One of the most reliable sources of strange animal stories this cycle has been Robert F. Kennedy Jr., an environmentalist with a lifelong interest in keeping, training, and eating animals who has frequently transgressed the accepted Western boundaries of interaction with the natural world. In July, Vanity Fair published a photograph that it said Kennedy, then an independent candidate for president, had sent to a friend. In it, he and an unidentified woman are holding a barbecued animal carcass up to their open mouths. The suggestion was that the animal was a dog. “The picture’s intent seems to have been comedic—Kennedy and his companion are pantomiming—but for the recipient it was disturbing evidence of Kennedy’s poor judgment and thoughtlessness,” the magazine reported. (In response, Kennedy said that the animal was a goat.)

A month later, Kennedy admitted that he had once found a dead bear cub on the side of a road in upstate New York and put it in his trunk. He said he had intended to skin it and “put the meat in my refrigerator.” However, that never happened, because, in NPR’s glorious phrasing, Kennedy claimed to have been “waylaid by a busy day of falconry” and a steak dinner, and instead decided to deposit the carcass in Central Park. (He even posed the dead bear so that it appeared to have been run over by a cyclist.) “I wasn’t drinking, of course, but people were drinking with me who thought this was a good idea,” he later told the comedian Roseanne Barr in a video that he released on X. He was 60 when the incident occurred. What made the idea of picking up a dead bear sound so strange to many commentators, when the falconry would have caused, at most, a raised eyebrow—and the steak dinner no comment at all?

Kennedy’s animal antics still weren’t finished. In September, he released a bizarre video in which he fondled an iguana and recounted how in some countries, people slit open the lizards’ stomachs to eat the eggs inside. Then another old anecdote surfaced: His daughter Kick recalled a trip home from the beach with parts of a dead whale strapped to the roof of the car. “Every time we accelerated on the highway, whale juice would pour into the windows of the car, and it was the rankest thing on the planet,” Kick told Town & Country. She added that this was “just normal day-to-day stuff” for her father. Not everyone was so quick to minimize Kennedy’s conduct. “These are behaviors you read about in news articles not about a candidate but about a suspect,” my colleague Caitlin Flanagan observed.

[Pagan Kennedy: New York’s grand dame of dog poisoning]

I’m as guilty as anyone of making illogical distinctions—though I would like to stress that I have never murdered a cat or dismembered a dead whale. Having recently driven across Pennsylvania, where I counted three dead deer by the side of the road on a single trip, I support the right to hunt—population control is essential. Yet the infamous photograph of Donald Trump Jr. and Eric Trump posing with a dead leopard on a safari trip more than a decade ago disturbs me far more than the unproven assertion that one immigrant, somewhere, has eaten a dog or cat for sustenance. You can tell from the Trump sons’ expressions that they are extremely proud of having killed a rare and beautiful creature purely for their own entertainment. The image is grotesque. It reminds me of Atticus Finch’s instruction that it’s a sin to kill a mockingbird, because “mockingbirds don’t do one thing but make music for us to enjoy.”

As it happens, hunters, many of them animal lovers in their everyday life, have a complicated code of ethics about what counts as a fair chase. Hence the backlash over the former Republican vice-presidential nominee Sarah Palin’s support for shooting Alaskan wolves from an aircraft. Most of us are okay with killing animals—or having them killed on our behalf—as long as the process does not involve unnecessary cruelty or excessive enjoyment.

In the end, arbitrary categories can license or restrict our capacity for cruelty and allow us to entertain two contradictory thoughts at once. We love animals and we kill animals. We create boundaries around an us and a them, and treat transgressors of each limit very differently. In a similar way, some of Donald Trump’s crowds applaud his racist rumors about migrants, even as they would never dream of being rude to a neighbor who was born abroad. “What we see in animals,” Herzog told me, “is a microcosm of the big issue of how humans make moral decisions.” In other words, illogically and inconsistently. The same individual is capable of great humanity—and great cruelty or indifference.

Climate Change Comes for Baseball

The Atlantic

www.theatlantic.com › culture › archive › 2024 › 11 › baseball-climate-change-tropicana-field › 680510

It happened fast. Almost as soon as Hurricane Milton bore down on Florida’s Gulf Coast last month, high winds began shredding the roof of Tropicana Field, home for 26 years to the Tampa Bay Rays baseball team. Gigantic segments of Teflon-coated fiberglass flapped in the wind, then sheared off entirely. In the end, it took only a few hours for the Trop to lose most of its roof—a roof that was built to withstand high winds; a roof that was necessary because it exists in a place where people can no longer sit outside in the summer; a roof that was supposed to be the solution.

The problem, of course, is the weather. Of America’s four major professional sports, baseball is uniquely vulnerable to climate change in that it is typically played outside, often during the day, for a long, unrelenting season: six games a week per team, from March to October, which incidentally is when the Northern Hemisphere gets steamy and unpredictable, more so every year. In 1869, when the first professional baseball club was formed, the average July temperature in New York City’s Central Park was 72.8 degrees. In 2023, it was 79. By 2100, it could be as much as 13.5 degrees hotter, according to recent projections, hot enough to make sitting in the sunshine for a few hours unpleasant at best and hazardous at worst. In June, four Kansas City Royals fans were hospitalized for heat illness during an afternoon home game. On a muggy day four seasons ago, Los Angeles Angels starting pitcher Dylan Bundy began sweating so much, you could see it on TV. He then took a dainty puke behind the mound and exited the game with heat exhaustion.

Games have been moved because of wildfire smoke on the West Coast and delayed because of catastrophic flooding in New York. What we used to call generational storms now come nearly every year. Two weeks before the Trop’s roof came off, a different storm ripped through Atlanta, postponing a highly consequential Mets-Braves matchup and extending the season by a day.  

Climate change is already affecting some basic material realities of the sport. Some ball clubs have added misting fans and massive ice-water containers for temporary relief, making the experience of going to the game feel a little less like relaxing and a little more like surviving. A 2021 study found that umpires are more prone to mistaken calls in extreme heat, and one from last year found that decreased air density—the result of hotter temperatures—is changing the fundamental physics of how balls fly through the air.

Baseball just saw its latest season come and go, with the L.A. Dodgers—who play in a city that already experiences extreme storms, deadly heat, and drought—taking the World Series in five games. As we look forward to the next season, and the one after that, the biggest question isn’t whether Shohei Ohtani’s new elbow can make him the greatest player in history (possibly), or whether sports betting has ruined baseball (quite possibly), or whether the Mets will go the distance in 2025 (definitely)—it’s whether the sport will be able to adapt in time to save itself. “It’s becoming difficult for me, as somebody who enjoys the sport, and as somebody who researches climate change,” Jessica Murfree, an assistant professor of sport administration at the University of North Carolina at Chapel Hill, told me. “I don’t know that there’s a way to have it all.”

[Read: Climate collapse could happen fast]

In a scene from the movie Interstellar, the film’s protagonist, a pilot named Joseph Cooper, takes his children and father-in-law to a baseball game in the blight-ravaged, storm-battered year 2067. A few dozen people sit in the stands, eating popcorn, at a dinky diamond that looks like it could belong to a high-school team; Cooper’s father-in-law is grousing about how, in his day, “we had real ballplayers—who are these bums?” And then one such bum turns around to reveal his jersey, and there’s the joke, if you want to call it that: These are the New York Yankees.

Timothy Kellison shows this clip to the students he teaches at Florida State University’s Department of Sport Management. “That’s the future of sport in the long run,” he told me: The most powerful franchise in the history of baseball could become a traveling oddity. “From a Yankees fan’s perspective, from a baseball fan’s perspective, that’s a very troubling future.”

Murfree was even more direct: “I do think sport might be one of the first things to go when we really move past these alarming tipping points about climate.”

Baseball has long been defined, and enriched, by its openness to the world. It gets “better air in our lungs” and allows us to “leave our close rooms,” as Walt Whitman wrote in 1846, during the sport’s earliest days. It is the only major sport in which the point is for the ball to leave the field of play; once in a while—on a lucky night, in an open park—a home run lands in the parking lot or a nearby body of water. Wind, temperature, and precipitation are such a part of the game that the website FanGraphs includes weather in its suite of advanced statistics. The season begins in spring and ends in autumn, in a cycle that binds the sport to all living things: renewal and decay, renewal and decay. “Playing baseball in the fall has a certain smell,” Alva Noë, a Mets fan and philosophy professor at UC Berkeley, told me. “Playing baseball in the spring, in the hot summer, has a certain feel.” In his book The Summer Game, the famed baseball chronicler Roger Angell wrote of the “flight of pigeons flashing out of the barn-shadow of the upper stands”; of “the heat of the sun-warmed iron coming through your shirtsleeve under your elbow”; of “the moon rising out of the scoreboard like a spongy, day-old orange balloon.”

Angell was writing in 1964, in the context of the closure of the Polo Grounds, the “bony, misshapen old playground” that was home to both the Mets and the Yankees at various times. He mourned the future of the sport, when “our surroundings become more undistinguished and indistinguishable.” The next year, baseball’s first indoor stadium, the Houston Astrodome, opened, the argument being that a roof was the only viable way to play baseball in the subtropical Texas climate.

Sixty years later, Houston is much hotter, and eight teams (including the Rays, who are still figuring out where to play next season) have roofs; this includes two of the three newest parks in baseball (in Miami and the Dallas metro area). The next new one (in Las Vegas, which is one of the fastest-warming cities in the country) will have one, too. Most of these roofs are retractable, but in practice, many tend to stay closed during summer’s high heat and heavy rains. During any given week of the season, several games are played on plastic grass in a breezeless hangar, under not sky but steel. In the future, “the aesthetics of the game, the feel of the game, will be so different, if you’re sitting in … a sort of neutral, sanitized, protected” space, Noë said. “There won’t be birds, there won’t be clouds, there won’t be glare from the sun, there won’t be wind, there won’t be rain, there won’t be pollution, there won’t be the sound of overflying airplanes. You’ll be playing baseball in a shopping mall.”

[Read: Why are baseball players always eating?]

This vision is, to be clear, the best answer we have so far to baseball’s climate problem. If anything, it’s actually too ambitious, too far off. Renovating existing parks to add roofs is impractical and expensive; building new ones costs even more: “We’re not talking about one business and relocating it to a different building higher up on the land,” Kellison said. “These are billion-dollar stadiums. They’re intended to be permanent.” Baseball is also highly invested in its own iconography; in cities such as Boston and Chicago, places with famous, century-old, open parks, domes will be a tough sell.

And, obviously, they’re not a perfect solution to extreme weather. In Phoenix, a city that had 113 straight 100-degree-or-more days this summer, the air-conditioning system at Chase Field has been straining; players have left games due to cramps, blaming the heat. Even if teams find the money and the will to build new parks, and even if those parks do the thing they’re supposed to do, they might not do it fast or well enough to make baseball comfortable or safe enough to keep its fans—fans whom baseball is already anxious to retain, as other entertainment becomes more popular.

Kellison is actually pretty optimistic about some adaptation being possible, precisely because baseball, like all sports, is so dependent on its fans. People pay lots of money to be in baseball stadiums—about $3.3 billion in 2023, according to one analysis. Owners and the league have a major incentive to keep them coming. “These are very wealthy and successful business leaders who aren’t just going to let a product like this go away with such a financial stake in it,” he said. Aileen McManamon, a sports-management consultant and a board member of the trade association Green Sports Alliance, told me that Major League Baseball does recognize that examining its relationship to the environment “is fundamental to [its] continued existence.”

But MLB isn’t a monolith—it’s a multibillion-dollar organization composed of 30 teams with 30 ownership groups, in 27 cities across two countries. (The league did not immediately respond to a request for comment.) Kellison doesn’t believe that MLB is thinking as ambitiously or formally as it should be about climate change’s effect on the sport, and neither does Murfree. “There really is no excuse to say this is a once-in-a-lifetime thing, a freak accident,” Murfree said. “The league and its organizations do have a responsibility to be forward-thinking and protect their people and their organizations from something that scientists have been waving their hands in the air about for a long time.”

[Read: A touch revolution could transform pitching]

Experts have all kinds of proposals, both radical and subtle, to go along with domes: Brad Wilkins, the director of the University of Oregon’s Performance Research Laboratory, suggested making changes to the uniforms, which are polyester, highly insulative, and “not very good at dissipating heat.” (The league did change the uniforms slightly this year, in part to incorporate more “breathable” fabric, but many players found the quality lacking.) McManamon talked with me about being more strategic regarding where and how we build new stadiums, looking for sites with natural ventilation and better shade, and using novel materials. She also suggested shortening the season, to make it a little gentler on fans and players. Murfree, meanwhile, has argued for shifting the timing of the season, and for opportunistically moving games based on weather, making baseball less tied to place.

Not all of these ideas are immediately feasible, and none will be popular. All sports like to mythologize themselves, but baseball—this young country’s oldest game—might have one of the most powerful and pernicious mythmaking apparatuses of all. It’s the stuff of poetry, of 18-hour documentaries, of love stories. Baseball people are intensely nostalgic. They love to find ways to be cranky about changes much less consequential than these. But Murfree’s a fan, and a pragmatist. “If we dig our heels into the status quo, we will lose out on the things that we enjoy,” she said. “If baseball is to remain America’s favorite pastime, we have no choice but to be flexible.”

Fans, players, and Major League Baseball think of the sport as something static, but in fact it is changing all the time. The earliest baseball games were played by amateurs, on irregularly sized fields, with inconsistent rules and balls that were made of melted shoes wrapped in yarn and pitched underhand. Since then, we have seen, among other things, the introduction of racial integration, night games, free agency, the designated hitter, instant replay, sabermetrics, and the pitch clock, each new development greeted with skepticism and outrage and then, eventually, acceptance. Now we face the most radical changes of all. Eventually, baseball—the sport of sunbaked afternoons, a sport made beautiful and strange by its exposure to the elements—may be unrecognizable. This will be the best-case scenario, because the alternative is that baseball doesn’t exist.  


What to Watch if You Need a Distraction This Week

The Atlantic

www.theatlantic.com › newsletters › archive › 2024 › 11 › what-to-watch-if-you-need-a-distraction-this-week › 680492

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Welcome back to The Daily’s Sunday culture edition.

The thought of Election Day may bring a twinge of anxiety for some people. “A big event should prompt big feelings,” our staff writer Shayla Love recently observed. But waiting for the results also leaves plenty of downtime for many Americans, whose nerves are unlikely to abate until after the race is called. Today, The Atlantic’s writers and editors answer the question: What should you watch if you’re feeling overwhelmed by election anxiety?

What to Watch

Marcel the Shell With Shoes On (streaming on Max)

When thinking of movies that ease my anxiety, election-related or not, this one is a no-brainer. Allow me to introduce you to Marcel, the shell with shoes on, who will likely give you some hope for the future.

In this mockumentary for all ages, Marcel (co-created and voiced by Jenny Slate) faces tough situations with incredible grace—something we could all aim to do right now. He takes care of his grandmother while also looking for the rest of his family and community, who all disappeared one night. But this heartbreaking situation is no match for Marcel’s relentless positivity, corny sense of humor, and cheesy-but-adorable observations (for example, he says that a documentary is “like a movie, but nobody has any lines and nobody even knows what it is while they’re making it”). And when things don’t go his way or he wants to back down, his grandmother steps in to show us where Marcel got his cheerfulness from—and to tell him to be more like Lesley Stahl from 60 Minutes.

— Mariana Labbate, assistant audience editor

The Verdict (available to rent on YouTube), Darkest Hour (streaming on Netflix)

I should probably recommend something uplifting and funny and distracting, but whenever I feel down or stressed, I return to two rather heavy movies that inspire me. Both of them are about the determination of one person to do the right thing, even when all seems lost.

Start with The Verdict, a 1982 courtroom drama starring Paul Newman as Frank Galvin, a down-and-out lawyer trying to win a medical-malpractice case against a famous Boston hospital. Once a rising legal star, Frank is now just a day-drinking ambulance chaser. But he rediscovers himself—and his sense of justice—as he fights the hospital and its evil white-shoe law firm.

After that, watch Darkest Hour, in which Winston Churchill—magnificently portrayed by Gary Oldman—fights to save Western civilization during the terrifying days surrounding the fall of France in 1940. The United Kingdom stands alone as British politicians around Churchill urge him to make a deal with Hitler. Instead, the prime minister rallies the nation to stand and fight.

No matter what happens on Election Day, both movies will remind you that every one of us can make a difference each day if we stay true to our moral compass.

— Tom Nichols, staff writer

Outrageous Fortune (available to rent on YouTube)

Bette Midler and Shelley Long star in this campy 1987 flick, which starts out as a satire of the New York theater scene before escalating into a buddy comedy slash action thriller (with a healthy dose of girl-power revenge).

Some scenes haven’t aged all that well. But the dynamic between the two stars as they careen into truly absurd situations is winning enough to carry the film. To keep track of who is who—and who mustn’t be trusted—you will need to put down your phone and focus (doubly true because some elements of the plot are slightly underbaked). The blend of slapstick antics and pulpy suspense should help take your mind off the race, as will the costume jewelry, shots of 1980s New York, Shakespeare references, and explosions. Through the plot’s various twists and turns, one takeaway is clear: The power of dance should never be underestimated. This movie may not exactly restore anyone’s faith in humanity, but it will definitely help pass the time as you wait for results to roll in.

— Lora Kelley, associate editor

The Hunt for Red October (streaming on Max)

There are three movies I’ll watch at the drop of a hat: Arrival, a genre-bender in which Amy Adams plays a linguist who learns to speak backward and forward in time; The Devil Wears Prada, as long as we skip through the scenes with Andy’s annoying friends; and the Cold War underwater thriller The Hunt for Red October. I consider all three films a balm in anxious times, but this week, I’m setting sail with Sean Connery and Alec Baldwin.

Maybe because I write about war, I don’t consider a plotline centered on the threat of nuclear Armageddon an unusually nerve-racking experience. This movie transports me. The script is as tight as the hull of a Typhoon-class submarine. James Earl Jones is near perfect as an admiral turned CIA honcho. Baldwin was super hot then. And a bonus: The supporting performances by Scott Glenn, Courtney B. Vance, Sam Neill, and Tim Curry (Tim Curry!) are some of the most memorable of their careers. (Fight me.) If you haven’t seen this movie, treat yourself—if only for the opening minutes, so you can hear Connery, in Edinburgh-tinged Russian, proclaim morning in Murmansk to be “Cold … and hard.”

— Shane Harris, staff writer

How I Met Your Mother (streaming on Netflix and Hulu)

The right sitcom can cure just about anything. If you, like me, somehow missed out on watching How I Met Your Mother when it first aired, it’s the perfect show to transport you back to a not-so-distant past when TV still had laugh tracks and politics was … not this. For the uninitiated, the series is exactly what it sounds like, featuring a dorky romantic named Ted as he tells his kids the seemingly interminable story of, well, how he met their mother.

The roughly 20-minute episodes are both goofy and endearing. Although the plot, which follows Ted and his four best friends, centers on the characters’ romantic entanglements, the story is fundamentally about friendship. As Kevin Craft wrote in The Atlantic in the run-up to the series finale, the show’s unstated mantra is “We’re all in this together.” Over the next few days, this is perhaps the most important thing we can remember.

— Lila Shroff, assistant editor

Here are three Sunday reads from The Atlantic:

Throw out your black plastic spatula.

A future without Hezbollah

What Orwell didn’t anticipate

The Week Ahead

Heretic, a horror-thriller film starring Hugh Grant, about a man who traps two young missionaries in a deadly game inside his house (in theaters Friday)

Season 4 of Outer Banks, a series about a group of teenagers hunting for treasure (part two premieres Thursday on Netflix)

You Can’t Please All, a memoir by Tariq Ali about how his years of political activism shaped his life (out Tuesday)

Essay

Illustration by Jan Buchczik

Why You Might Need an Adventure

By Arthur C. Brooks

Almost everyone knows the first line of Herman Melville’s 1851 masterpiece Moby-Dick: “Call me Ishmael.” Fewer people may remember what comes next—which might just be some of the best advice ever given to chase away a bit of depression:

“Whenever I find myself growing grim about the mouth; whenever it is a damp, drizzly November in my soul; whenever I find myself involuntarily pausing before coffin warehouses, and bringing up the rear of every funeral I meet … then, I account it high time to get to sea as soon as I can.”

Read the full article.

More in Culture

Making new friends is tough. The Golden Bachelorette understands why.

The celebrities are saying the loud part quietly.

MomTok is the apotheosis of 21st-century womanhood.

Eight nonfiction books that will frighten you

“Dear James”: My colleague repeats herself constantly.

Conclave is a crowd-pleaser about the papacy.

Catch Up on The Atlantic

A brief history of Trump’s violent remarks

Trump suggests training guns on Liz Cheney’s face.

The Democratic theory of winning with less

Photo Album

A competitor paddles in a giant hollowed-out pumpkin at the yearly pumpkin regatta in Belgium. (Bart Biesemans / Reuters)

Check out these photos of people around the world dressing up in Halloween costumes and celebrating the holiday with contests, parades, and more.

Explore all of our newsletters.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.