Falling in Love With Reading Will Change Your Life

The Atlantic

www.theatlantic.com/magazine/archive/2024/12/the-commons/680388/

The Elite College Students Who Can’t Read Books

To read a book in college, it helps to have read a book in high school, Rose Horowitch wrote in the November 2024 issue.

I’m an English teacher at a private college-preparatory school, and much of “The Elite College Students Who Can’t Read Books” sounded familiar. My students, too, now struggle to read long texts. Unaddressed in this apt article, though, are changes to the broader high-school context in which reading for homework now occurs. Today, students with elite college aspirations have extracurricular schedules that demand as much time as school itself, if not more. These commitments are necessary, in their eyes, to gain admission to selective institutions. As a result, teachers face considerable pressure from not only students but also parents and school administrators to limit homework time—no matter if the assignment is a calculus problem set or Pride and Prejudice. In combination with considerably slower rates of reading and diminished reading comprehension, curtailed homework time means that an English teacher might not be able to assign more than 10 to 15 pages of relatively easy prose per class meeting, a rate so excruciatingly slow that it diminishes one’s ability to actually grasp a novel’s meaning and structure. I see how anxious and drained my students are, but I think it’s important for them to experience what can grow from immersive reading and sustained written thought. If we want students to read books, we have to be willing to prioritize the time for them to do so.

Anna Clark
San Diego, Calif.

As a professor, I agree with my colleagues who have noticed the declining literacy of American students at elite universities.

However, I am not sure if the schools are entirely to blame. In American universities, selection is carried out by admissions offices with little interest in the qualities that faculty might consider desirable in a college student. If faculty members were polled—something that has never happened to me in my 20-year career—I’m sure we would rank interest and experience in reading books quite highly.

Admissions decisions in the United States are based on some qualities that, however admirable, have little or nothing to do with academic aptitude. In contrast, at Oxford and Cambridge, in the United Kingdom, undergraduate admissions are typically conducted by the same academics who will teach those students. Most personal statements primarily consist of a discussion of which books the student has read and what they learned from them. Students are then expected to discuss these books in more detail in an interview. When considered alongside the undergraduate selection process, the decline in literacy among American undergraduates is totally understandable.

Ione Fine
Psychology Professor, University of Washington
Seattle, Wash.

Having taught English in a public school for 32 years, I am not surprised that colleges and universities are discovering that incoming students lack the skill, focus, and endurance to read novels. Throughout my career, primarily teaching ninth graders, I fostered student readership not by assigning novels for the whole class to read, but rather by allowing students to select young-adult books that they would read independently in class. Thousands of lifelong readers were created as a result.

Ten years ago, however, my district administration told me that I could no longer use class time for independent student reading. Instead, I was to focus on teaching skills and content that the district believed would improve standardized-test scores. Ironically, research showed that the students who read more books scored significantly better than their classmates on standardized reading tests.

I knew that many students were unlikely to read at home. So I doubled down: I found time for students to read during the school day and repurposed class time to allow my students to share their ideas; to question, respond, and react along with their peers. The method was so successful that the district adopted my approach for seventh through ninth grade, and I published a university-level textbook preparing teachers to create similar communities of readers in their own classrooms.

Whole-class novels just aren’t working: Some students will always be uninterested in a teacher’s choice, and perceive the classics as irrelevant and difficult to comprehend. But allowing students to select their books can help them fall in love with reading.

Michael Anthony
Reading, Pa.

I am an educator of 16 years living in New Hampshire. “The Elite College Students Who Can’t Read Books” reflects a lot of what I’ve seen recently. But a large piece of the puzzle is public-school budgets. A major reason novels have been removed from curricula is money: Many districts cannot afford to purchase a book for every student, especially in the upper grades. Typically, districts will buy a “class set” of novels, about 20 to 30 books—that’s it. The books must be used during the English blocks for instruction and reading time. There are not enough books for students to take home and read; if they are reading them only in their class block, a novel will take months and months to finish. I knew of one district that would have teachers make copies of entire novels to share with their students; they’d take turns on copy duty to pull it off. I wish I could teach more complete novels, because students love it. But districts need budgets large enough to buy books for everyone.

Meaghan Kelly
Rumney, N.H.

When teaching my college history courses, I have polled my students to see how many have ever read a book cover to cover. Sometimes, only a few students would raise their hand.

I inquired because I always gave them the option to read a book instead of writing a 10-page research paper. They then would have a one-on-one, hour-long discussion with me about the book they’d selected. Students who chose that option generally had a good experience. But one student shines bright in my mind. In truth, I didn’t remember him well—but he stopped me at an alumni function to say thank you. He had taken my class the second semester of his senior year to fill an elective, and he had chosen to read David McCullough’s 1776. He’d devoured the book—and he’d loved our discussion. He told me that the assignment had changed his life: Up to that point, he had never read a whole book. Since that class, he has read two or three books a month, and now has hundreds of books in his own library. He assured me that he would be a reader for the rest of his life.

It was one of the most gratifying moments of my career. I hope more teachers, professors, and parents give their students a chance to learn what this student did—that books are one of the great joys in life.

Scott Salvato
Mooresville, N.C.

Rose Horowitch replies:

Anna Clark’s letter builds on an idea that I hoped to convey in the article: that the shift away from reading full books is about more than individual students, teachers, or schools. Much of the change can be understood as the consequence of a change in values. The professors I spoke with didn’t think their students were lazy; if anything, they said they were overscheduled and frazzled like never before, facing immense pressure to devote their time to activities that will further their career. Under these circumstances, it can be difficult to see how reading The Iliad in its entirety is a good use of time. Acknowledging this reality can be disheartening, because the solution will not be as simple as changing curricula at the college, high-school, or middle-school level. (And as several of these letters note, changing curricula isn’t all that straightforward.) But letters like Scott Salvato’s are a hopeful reminder of the power of a good—full—book to inspire a student to become a lifelong reader.

Behind the Cover

In this month’s cover story, “How the Ivy League Broke America,” David Brooks describes the failure of the United States’ meritocracy, created in part by James Conant, the influential president of Harvard from 1933 to 1953. Conant and like-minded reformers had hoped to overturn America’s “hereditary aristocracy of wealth”; instead, they helped create a new ruling class—the so-called cognitive elite, selected and credentialed by the nation’s top universities. For our cover image, the artist Danielle Del Plato placed the story’s headline on pennants she created for each of the eight Ivy League schools, which have been instrumental in shaping and perpetuating America’s meritocracy.

Paul Spella, Senior Art Director

Corrections

Due to an editing error, “The Elite College Students Who Can’t Read Books” (November) misstated the year Nicholas Dames started teaching Literature Humanities. He began teaching the course in 1998, not 1988. “What Zoya Sees” (November) misstated where in Nigeria Zoya Cherkassky-Nnadi and her husband, Sunny, have a home. Their home is in Ngwo, not Igwo.

This article appears in the December 2024 print edition with the headline “The Commons.”

A Classic Blockbuster for a Sunday Afternoon

The Atlantic

www.theatlantic.com/newsletters/archive/2024/11/a-classic-blockbuster-for-a-sunday-afternoon/680671/

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Welcome back to The Daily’s Sunday culture edition, in which one Atlantic writer or editor reveals what’s keeping them entertained. Today’s special guest is Jen Balderama, a Culture editor who leads the Family section and works on stories about parenting, language, sex, and politics (among other topics).

Jen grew up training as a dancer and watching classic movies with her mom, which instilled in her a love for film and its artistry. Her favorites include Doctor Zhivago, In the Mood for Love, and Pina; she will also watch anything starring Cate Blanchett, an actor whose “ability to inhabit is simply unmatched.”

The Culture Survey: Jen Balderama

My favorite blockbuster film: I’m grateful that when I was quite young, my mom started introducing me to her favorite classic movies—comedies, romances, noirs, epics—which I’m pretty sure had a lasting influence on my taste. So for a blockbuster, I have to go with a nostalgia pick: Doctor Zhivago. The hours we spent watching this movie, multiple times over the years, each viewing an afternoon-long event. (The film, novelty of novelties, had its own intermission!) My mom must have been confident that the more adult elements—the rape, the politics—would go right over my head, but that I could appreciate the movie for its aesthetics. She had a huge crush on Omar Sharif and swooned over the soft-focus close-ups of his watering eyes. I was entranced by the landscapes and costumes and sets—the bordello reds of the Sventitskys’ Christmas party, the icy majesty of the Varykino dacha in winter. But I was also taken by the film’s sheer scope, its complexity, and the fleshly and revolutionary messiness. I’m certain it helped ingrain in me, early, an enduring faith in art and artists as preservers of humanity, especially in dark, chaotic times. [Related: Russia from within: Boris Pasternak’s first novel]

My favorite art movie: May I bend the rules? Because I need to pick two: Wong Kar Wai’s In the Mood for Love and Wim Wenders’s Pina. One is fiction, the other documentary. Both are propelled by yearning and by music. Both give us otherworldly depictions of bodies in motion. And both delve into the ways people communicate when words go unspoken.

In the Mood for Love might be the dead-sexiest film I’ve ever seen, and no one takes off their clothes. Instead we get Maggie Cheung and Tony Leung in a ravishing tango of loaded phone calls and intense gazes, skin illicitly brushing skin, figures sliding past each other in close spaces: electricity.

Pina is Wenders’s ode to the German choreographer Pina Bausch, a collaboration that became an elegy after Bausch died when the film was in preproduction. Reviewing the movie for The New York Times in 2017, the critic Gia Kourlas, whom I admire, took issue with one of Wenders’s choices: In between excerpts of Bausch’s works, her dancers sit for “interviews,” but they don’t speak to camera; recordings of their voices play as they look toward the audience or off into the distance. Kourlas wrote that these moments felt “mannered, self-conscious”; they made her “wince.” But to me, a (highly self-conscious) former dancer, Wenders nailed it—I’ve long felt more comfortable expressing myself through dance than through spoken words. These scenes are a brilliantly meta distillation of that tension: Dancers with something powerful to say remain outwardly silent, their insights played as inner narrative. Struck by grief, mouths closed, they articulate how Bausch gave them the gift of language through movement—and thus offered them the gift of themselves. Not for nothing do I have one of Bausch’s mottos tattooed on my forearm: “Dance, dance, otherwise we are lost.”

An actor I would watch in anything: Cate Blanchett. Her ability to inhabit is simply unmatched: She can play woman, man, queen, elf, straight/gay/fluid, hero/antihero/villain. Here I’m sure I’ll scandalize many of our readers by saying out loud that I am not a Bob Dylan person, but I watched Todd Haynes’s I’m Not There precisely because Blanchett was in it—and her roughly 30 minutes as Dylan were all I needed. She elevates everything she appears in, whether it’s deeply serious or silly. I’m particularly captivated by her subtleties, the way she turns a wrist or tilts her head with the grace and precision of a dancer’s épaulement. (Also: She is apparently hilarious.)

An online creator I’m a fan of: Elle Cordova, a musician turned prolific writer of extremely funny, often timely, magnificently nerdy poems, sketches, and songs, performed in a winning low-key deadpan. I was tipped off to her by a friend who sent a link to a video and wrote: “I think I’m falling for this woman.” The vid was part of a series called “Famous authors asking you out”—Cordova parroting Jane Austen, Charles Bukowski, Franz Kafka, Edgar Allan Poe (“Should I come rapping at your chamber door, or do you wanna rap at mine?”), Dr. Seuss, Kurt Vonnegut, Virginia Woolf, James Joyce (“And what if we were to talk a pretty yes in the endbegin of riverflow and moon’s own glimpsing heartclass …”). She does literature. She does science. She parodies pretentious podcasters; sings to an avocado; assumes the characters of fonts, planets, ChatGPT, an election ballot. Her brain is a marvel; no way can AI keep up.

Something delightful introduced to me by a kid in my life: Lego Masters Australia. Technically, we found this one together, but I watch Lego Masters because my 10-year-old is a Lego master himself—he makes truly astonishing creations!—and this is the kind of family entertainment I can get behind: Skilled obsessives, working in pairs, turn the basic building blocks of childhood into spectacular works of architecture and engineering, in hopes of winning glory, prize money, and a big ol’ Lego trophy. They can’t churn out the episodes fast enough for us. The U.S. has a version hosted by Will Arnett, which we also watch, but our family finds him a bit … over-the-top. We much prefer the Australian edition, hosted by the comedian Hamish Blake and judged by “Brickman,” a.k.a. Lego Certified Professional Ryan McNaught, both of whom exude genuine delight and affection for the contestants. McNaught has teared up during critiques of builds, whether gobsmacked by their beauty or moved by the tremendous effort put forth by the builders. It’s a show about teamwork, ingenuity, artistry, hilarity, physics, stamina, and grit—with a side helping of male vulnerability. [Related: Solving a museum’s bug problem with Legos]

A poem that I return to: “Joint Custody,” by Ada Limón. My family is living this. Limón, recalling a childhood of being “taken / back and forth on Sundays,” of shifting between “two different / kitchen tables, two sets of rules,” reassures me that even though this is sometimes “not easy,” my kids will be okay—more than okay—as long as they know they are “loved each place.” That beautiful wisdom guides my every step with them.

Something I recently rewatched: My mom died when my son was 2 and my daughter didn’t yet exist, and each year around this time—my mom’s birthday—I find little ways to celebrate her by sharing with my kids the things she loved. Chocolate was a big one, I Love Lucy another. So on a recent weekend, we snuggled up and watched Lucille Ball stuffing bonbons down the front of her shirt, and laughed and laughed and laughed. And then we raided a box of truffles.

Here are three Sunday reads from The Atlantic:

How the Ivy League broke America
The secret to thinking your way out of anxiety
How one woman became the scapegoat for America’s reading crisis

The Week Ahead

Gladiator II, an action film starring Paul Mescal as Lucius, the son of Maximus, who becomes a gladiator and seeks to save Rome from tyrannical leaders (in theaters Friday)
Dune: Prophecy, a spin-off prequel series about the establishment of the Bene Gesserit (premieres today on HBO and Max)
An Earthquake Is a Shaking of the Surface of the Earth, a novel by Anna Moschovakis about an unnamed protagonist who attempts to find—and eliminate—her housemate, who was lost after a major earthquake (out Tuesday)

Essay

Illustration by Raisa Álava

What the Band Eats

By Reya Hart

I grew up on the road. First on the family bus, traveling from city to city to watch my father, Mickey Hart, play drums with the Grateful Dead and Planet Drum, and then later with the various Grateful Dead offshoots. When I was old enough, I joined the crew, working for Dead & Company, doing whatever I could be trusted to handle … Then, late-night, drinking whiskey from the bottle with the techs, sitting in the emptying parking lot as the semitrucks and their load-out rumble marked the end of our day.

But this summer, for the first time in the band’s history, there would be no buses; there would be no trucks. Instead we stayed in one place, trading the rhythms of a tour for the dull ache of a long, endlessly hot Las Vegas summer.

Read the full article.

More in Culture

The exhibit that will change how you see Impressionism
SNL isn’t bothering with civility anymore.
Abandon the empty nest. Instead, try the open door.
Richard Price’s radical, retrograde novel
“Dear James”: How can I find more satisfaction in work?

Catch Up on The Atlantic

Why the Gaetz announcement is already destroying the government
The sanewashing of RFK Jr.
The not-so-woke Generation Z

Photo Album

People feed seagulls in the Yamuna River, engulfed in smog, in New Delhi, India. (Arun Sankar / AFP / Getty)

Check out these photos of the week, showing speed climbing in Saudi Arabia, wildfires in California and New Jersey, a blanket of smog in New Delhi, and more.

The Exhibit That Will Change How You See Impressionism

The Atlantic

www.theatlantic.com/magazine/archive/2024/12/national-gallery-exhibit-paris-1874-impressionist-movement/680401/

For museums and their public, Impressionism is the Goldilocks movement: not too old or too new, not too challenging or too sappy; just right. Renaissance art may baffle with arcane religious symbolism, contemporary art may baffle on purpose, but put people in a gallery with Claude Monet, Edgar Degas, and Camille Pissarro, and explanatory wall texts feel superfluous. Eyes roam contentedly over canvases suffused with light, vibrant with gesture, and alive with affable people doing pleasant things. What’s not to love?

Famously, of course, Impressionism was not greeted with love at the outset. In 1874, the first Impressionist exhibition was derided in the press as a “vexatious mystification for the public, or the result of mental derangement.” A reviewer called Paul Cézanne “a sort of madman, painting in a state of delirium tremens,” while Berthe Morisot was privately advised by her former teacher to “go to the Louvre twice a week, stand before Correggio for three hours, and ask his forgiveness.” The very term Impressionism was born as a diss, a mocking allusion to Monet’s shaggy, atmospheric painting of the Le Havre waterfront, Impression, Sunrise (1872). Few people saw affability: In 1874, the term commonly applied to Monet and his ilk was “intransigent.”

Impressionism’s rom-com arc from spirited rejection to public rapture informs our fondness for the pictures (plucky little underdogs), and has also provided a lasting model for avant-gardism as a mechanism of cultural change. We now take it for granted that young mavericks should team up to foment new ways of seeing that offend the establishment before being vindicated by soaring auction prices and long museum queues. For most of history, however, that wasn’t the way things worked. Thus the 1874 exhibition has acquired legendary status as the origin point of self-consciously modern art.

Its 150th anniversary this year has been celebrated with numerous exhibitions, most notably “Paris 1874: The Impressionist Moment,” organized by the Musée d’Orsay, in Paris, and the National Gallery of Art, in Washington, D.C. (where it is on view until January 19, 2025). Given the masterpieces that these museums could choose from, this might have been an easygoing lovefest, but the curators—Sylvie Patry and Anne Robbins in Paris, and Mary Morton and Kimberly A. Jones in Washington—have delivered something far more intriguing and valuable: a chance to see what these artists were being intransigent about, and to survey the unexpected turns that art and politics may take in a polarized, traumatized time and place.

Nineteenth-century French history was messy—all those republics, empires, and monarchies tumbling one after the other—but it contains a crucial backstory to Impressionism, often overlooked. In the 1860s, France was the preeminent military and cultural power on the continent. Paris was feted as the most sophisticated, most modern, most beautiful of cities, and the Paris Salon was the most important art exhibition on the planet. Then, in 1870, some fatuous chest bumping between Emperor Napoleon III (nephew of the original) and Otto von Bismarck set off an unimagined catastrophe: By the spring of 1871, mighty France had been vanquished by upstart Prussia, its emperor deposed, its sublime capital bombed and besieged for months. When France sued for peace, Paris rebelled and established its own new socialist-anarchist government, the Commune. In May 1871, the French army moved in to crush the Commune, and the ensuing week of urban warfare killed tens of thousands. In the nine months between the start of the siege in September and the destruction of the Commune in May, perhaps as many as 90,000 Parisians died of starvation and violence.

These events and their impact on French painters are detailed in the art critic Sebastian Smee’s absorbing new book, Paris in Ruins: Love, War, and the Birth of Impressionism. His main focus is on the star-crossed not-quite-lovers Morisot and Édouard Manet, but nobody in this tale escaped unscathed. Morisot was in the city through the bombardment, the famine, and the street fighting; Manet and Degas volunteered for the National Guard; Pierre-Auguste Renoir served in the cavalry. Some of their most promising peers were killed. Everyone saw ghastly things.

[From the April 1892 issue: Some notes on French Impressionism]

And yet nothing about Degas’ ballerinas practicing their tendus or Renoir’s frothy scene of sophisticates out on the town suggests recent experience with terror, starvation, or climbing over dead bodies in the street, though they were painted when those events were still fresh. The Boulevard des Capucines, where the first Impressionist show took place, had been the site of “atrocious violence” in 1871, Smee tells us, but in 1874, Monet’s painting of the street is limpid with light and bustling with top hats and hansom cabs. If most fans of Impressionism remain unaware of its intimacy with the horrors of what Victor Hugo dubbed “l’année terrible,” it’s because the Impressionists did not picture them.

Like Sir Arthur Conan Doyle’s unbarking dog, this suggests an absence in search of a story, and indeed, “Paris 1874” ultimately leaves one with a sense of why they chose to turn away, and how that choice helped set a new course for art. The standard version of Impressionism—the one most people will come through the door with—has, however, always emphasized a different conflict: the David-versus-Goliath contest between the young Impressionists and the illustrious Salon.

With more than 3,000 works displayed cheek by jowl, the 1874 Salon was nearly 20 times the size of the first Impressionist show, and attracted an audience of about half a million—aristocrats, members of the bourgeoisie, workers with families in tow. (Of the latter, one journalist sniffed: “If he could, he would even bring his dog or his cat.”) Presided over by the nation’s Académie des Beaux-Arts, an institution whose pedigree went back to Louis XIV, the Salon was allied with the state and had a vested interest in preserving the status quo. The Impressionists, wanting to preside over themselves, had founded their own organization—the Société Anonyme des Artistes Peintres, Sculpteurs, Graveurs, etc.—with a charter they adapted from the bakers’ union in Pissarro’s hometown.

“Paris 1874” is built from these two shows. With a handful of exceptions (mainly documentary photographs of the shattered city), the art on the walls in Washington now was on the walls in Paris then. (Identifying the relevant works to select from was no small achievement, given the 19th-century catalogs’ lack of images or measurements, and their penchant for unhelpful titles like Portrait.) Labels indicate which exhibition each artwork appeared in, beginning with the Salon’s medal-of-honor winner, Jean-Léon Gérôme’s L’Éminence Grise (1873), alongside Monet’s celebrated and pilloried Impression, Sunrise.

L’Éminence Grise (1873), Jean-Léon Gérôme (© 2024 Museum of Fine Arts, Boston)

The two paintings might be mascots for the opposing teams. Impeccably executed, the Gérôme is an umbrous scene in which Cardinal Richelieu’s right-hand monk, François Leclerc du Tremblay, descends a staircase as the high and mighty doff their caps. The fall of light is dramatic and convincing, the dispatch of color deft, the actors choreographed and costumed to carry you through the action. Every satin ribbon, every curl of Baroque metalwork seems palpable.

Beside it, the Monet looks loose and a bit jangly. The muted gray harbor flits between solidity and dissolution. The orange blob of a sun and its shredded reflection are called into being with an almost militant economy of means. And somehow, the painting glows as if light were passing through the canvas to land at our feet. The Gérôme is a perfect portal into another world. But the Monet is a world. More than just displaying different styles, the pictures embody divergent notions of what art could and should do.

Impression, Sunrise (1872), Claude Monet (© Musée Marmottan Monet, Paris / Studio Christian Baraja SLB)

For 200 years, the Académie had defined and defended visual art—both its manual skill set (perspective, anatomy, composition) and its intellectual status as a branch of rhetoric, conveying moral ideals and building better citizens. (L’Éminence Grise is, among other things, an engaging lesson in French history: When Cardinal Richelieu was the flashy power behind the throne of Louis XIII, the somber Capuchin friar was the “gray eminence” behind the cardinal.) Such content is what made “fine art” fine and separated painters and sculptors from decorators and cabinetmakers.

This value system had stylistic consequences. Narrative clarity demanded visual clarity. Figuration ranked higher than landscapes and still lifes in part because human figures instruct more lucidly than trees and grapes. Space was theatrical and coherent, bodies idealized, actions easily identified. Surfaces were smooth, brushstrokes self-effacing. This is still what we mean by “academic art.”

Most visitors confronting the opening wall at the National Gallery will know which painting they’re supposed to like—and it’s not the one with the fawning courtiers. Impressionism is universally admired, while academic art is sometimes treated as the butt of a joke. Admittedly, Jean Jules Antoine Lecomte Du Nouÿ’s huge, body-waxed Eros with surly cupids is easier to laugh at than to love, but most of the academic art on view strives, like the Gérôme, for gripping plausibility. You can see the assiduous archaeological research that went into the Egyptian bric-a-brac pictured in Lawrence Alma-Tadema’s pietà The Death of the Pharaoh’s First-Born Son (1872), or the armor of the sneaky Greeks descending from their giant gift horse in Henri-Paul Motte’s starlit scene of Troy.

[From the July 1900 issue: Impressionism and appreciation]

Today these pictures look like film stills. It’s easy to imagine Errol Flynn dashing up Gérôme’s stairs, or Timothée Chalamet brooding in the Alma-Tadema gloom. Perhaps the reason such paintings no longer move audiences the way they once did is that we have actual movies to provide that immersive storytelling kick. What we want from painting is something different—something personal, handmade, “authentic” (even when we aren’t quite clear what that means).

It’s a mistake, though, to assume that this impulse was new with Impressionism. Beginning in the 1840s, concurrent with the literary “Realism” of Stendhal and Honoré de Balzac, Realist painters turned away from the studio confections of the Académie and began schlepping their easels out into the weather to paint en plein air—peasants toiling in fields, or fields just being fields. Visible brushstrokes and rough finish were the price (or certificate of authenticity) of a real-time response to a real world. These were aesthetic choices, and in turn they suggested political viewpoints. In place of explicit narratives valorizing order, sacrifice, and loyalty, Realist art carried implicit arguments for social equality (“These plain folk are worthy of being seen”) and individual liberty (“My personal experience counts”).

The Salon was the Académie’s enforcement mechanism: In the absence of anything like today’s gallery system, it represented the only practical path for a French artist to establish a reputation. Yet for decades it flip-flopped—sometimes rejecting Realist art, sometimes accepting it and even rewarding it with prizes. Manet, considered a Realist because of his contemporary subjects and ambiguous messaging, had a famously volatile history with the Salon. In 1874, Degas explained the rationale behind the Société Anonyme in these terms: “The Realist movement no longer has to fight with others. It is, it exists, it needs to show itself on its own.”

But nothing in 1874 was quite that simple. A room at the National Gallery is given over to art about the Franco-Prussian War, both academic and Realist. All of it appeared in the Salon. The contrast is instructive: The elegant bronze by Antonin Mercié, conceived (prematurely) as a monument to victory, was altered in the face of actual events and titled Glory to the Vanquished. Although the naked soldier in the clasp of Victory has breathed his last, arms and wings still zoom ecstatically skyward and draperies flutter. He is beautiful even in death. The corpses laid out on the dirt in Auguste Lançon’s Dead in Line! (1873), dressed in the uniforms they were wearing when they fell, are neither naked nor beautiful. Their skin is gray, and their fists are clenched in cadaveric spasm. In the background, troops march by, officers chat, and a village burns. There is no glory, just the banality of slaughter. Unlike Mercié, Lançon had been at the front.

Dead in Line! (1873), Auguste Lançon (© Département de la Moselle, MdG1870&A, Rebourg)

Here also is Manet’s quiet etching of women queuing at a butcher shop in Paris as food supplies dwindled. Black lines, swift and short, capture a sea of shining umbrellas above a snaking mass of black dresses, at the back of which you can just make out the faint lightning-bolt outline of an upthrust bayonet. It’s a picture with no argument, just a set of observations: patience, desperation, rain.

In “Paris 1874,” a model of curatorial discretion, the art is allowed to speak for itself. Visitors are encouraged to look and guess whether a given work appeared in the Salon or the Société before checking the answer on the label. One quickly finds that applying the standard checklist of Impressionist attributes—“urban life,” “French countryside,” “leisure,” “dappled brushwork”—is remarkably unhelpful. The dog-walking ladies in Giuseppe De Nittis’s Avenue du Bois de Boulogne (1874, Salon) sport the same complicated hats, fashionable bustles, and acres of ruched fabric as Renoir’s The Parisian Girl (1874, Société). Charles-François Daubigny’s The Fields in June (1874, Salon) and Pissarro’s June Morning in Pontoise (1873, Société) are both sunny summer landscapes laid out with on-the-fly brushwork. Both sides did flowers.

As for the celebration of leisure, the Salon seems to have been full of moony girls lounging around and people entertaining fluffy white lapdogs, while the artists we now call Impressionists were paying much more attention to the working world. The glinting light of Pissarro’s Hoarfrost (1873, Société) falls on an old man trudging down a road with a large bundle of wood on his back. The backlit fug of Impression, Sunrise was probably smog—the admirably informative exhibition catalog alerts readers to Stendhal’s description of the same vista, “permeated by the sooty brown smoke of the steamboats.” Pictured at labor, not at play, Degas’ dancers stand around splayfooted, bored and tired, adjusting their shoe ribbons, scratching an itch. Even the bourgeois family outing in Degas’ transcendently odd At the Races in the Countryside (1869, Société) is focused on work: Together in a carriage, husband, wife, and dog are all transfixed by the baby’s wet nurse, doing her job. As for the scenes of mothers and children, it is possible that later observers have overestimated the leisure involved.

Hoarfrost (1873), Camille Pissarro (© Musée d’Orsay, Dist. RMN-Grand Palais / Patrice Schmidt)

Jules-Émile Saintin’s Washerwoman (1874, Salon) is assertively a picture of urban working life, but in an entirely academic mode. The scene is “modern” in the same way that Alma-Tadema’s pharaoh was ancient, time-stamped by an array of meticulously rendered accessories. But the Alma-Tadema at least had the gravitas of tragedy. Saintin is content with smarm: He arranges his working girl awkwardly in the street, grinning coquettishly at the viewer while twirling a pole of white linens and hoisting her skirt to give a peek of ankle—the eternal trope of the trollop.

[Read: Why absolutely everyone hates Renoir]

Then there is art so wonderful and so peculiarly modern, it seems unfair that it went to the Salon. In contrast to Saintin’s washerwoman, Manet’s The Railway (1873) is reticent to the point of truculence. Against the backdrop of an iron railing, a little girl stands with her back to us, watching the steam of a train below, while next to her, a poker-faced young woman glances up from the book and sleeping puppy in her lap to meet our gaze. A bunch of grapes sits on the stone footing of the fence. The emotional tenor is ambiguous, the relationships between woman, child, dog, grapes, and train unclear. Everything is perfectly still and completely unsettled. Why was this at the Salon? Manet believed that appearing there was a necessary career move and declined to join in the Société event.

The Railway (1873), Édouard Manet (Courtesy of the National Gallery of Art)

He had a point. The Société chose, in its egalitarian zeal, to have no jury and to give space to anyone who paid the modest membership fee. The exhibit ended up even more of a grab bag than the Salon, so alongside some of the most adventurous and lasting art of the 1870s, you got Antoine Ferdinand Attendu’s conventional still-life pile of dead birds, and Auguste Louis Marie Ottin’s marble head of Jean-Auguste-Dominique Ingres, the great master of hard-edged Neoclassicism, made more than 30 years earlier.

One function of “Paris 1874” is to debunk the tale of the little exhibition that could. The “first Impressionist exhibition,” it turns out, wasn’t all that Impressionist (only seven of its 31 participants are commonly categorized as such). Many artists took part in both shows simultaneously, prioritizing career opportunities over stylistic allegiance. (Not only was organized avant-gardism not a thing before 1874; it appears not to have been a thing in 1874.) As for those famously annoyed reviews, the catalog explains that they came from a handful of critics who specialized in being annoyed, and that most of the modest attention the Société show received was neutral or even friendly. Impression, Sunrise was “barely noticed.” Just four works sold. Goliath wandered off without a scratch, and David went broke.

But debunking is a short-lived thrill. The real rewards of “Paris 1874” lie in the rising awareness one gets walking through the galleries of a new signal in the noise, a set of affinities beyond either the certainties of the Académie or the earthy truths of Realism, and even a hint of how the unpictured traumas of 1870–71 left their mark. We know about the highlights to come (Monet’s water lilies at Giverny are hanging just down the hall), but there is something much more riveting about the moment before everything shifts into focus. By contrast, later Impressionist shows (there were eight in all) knew what they were about. The standard checklist works there. In 1874, it wasn’t yet clear, but you can begin to see a kind of opening up, a sideways slip into letting light be light and paint be paint.

As the Salon-tagged items demonstrate, the battle over subject matter had abated by 1874. Myths and modernity were both admissible. The shift that followed had less to do with what was being painted than how. The most frequent complaint about Impressionist art concerned style—it was too “sketchy.” The preference for loose brushwork, the disregard for clean edges and smooth gradients, was seen as slapdash and lazy, as if the artists were handing in early drafts in place of a finished thesis. More than one painting in the Société show was compared to “palette scrapings.”

Now we like the slap and the dash. We tend to see those independent-minded brushstrokes as evidence not of diminished attention, but of attention homing in on a new target—a fresh fascination with the transitory fall of light, at the expense, perhaps, of the stable object it falls on. Like a shape seen in the distance, sketchiness has the power to suggest multiple realities at once. Monet’s dark-gray squiggle in the Le Havre water might be a rock or a boat; certainly it is a squiggle of paint. Emphasizing the physicality of the image—the gloppiness of the paint, the visible canvas below—calls attention to the instability of the illusion. Step backwards and it’s a harbor; step forward and it’s bits of colorful dried goo.

At the Races in the Countryside (1869), Edgar Degas (© 2024 Museum of Fine Arts, Boston)

Sketchiness wasn’t the only means of undermining pictorial certainty. Degas never went in for fluttering brushstrokes or elusive edges, but his Ballet Rehearsal (1874) is scattered with pentimenti—the ghosts of a former foot, the trace of an altered elbow, the shadow of a male observer removed from the scene. He had sketched the dancers from life, but then used and reused those drawings for years, reconfiguring them like paper dolls, exactly the way an academic artist might go about peopling a crowd scene. The all-important difference is that Degas shows how the trick is played. In At the Races in the Countryside, the carriage and family are placed so far down and to the right that the nose and shoulder of one of the horses fall off the canvas, as if the painting were a snapshot whose taker was jostled just as the shutter clicked. It’s a way of calling attention to the bucket of artifice and conventions on which painterly illusion depends. This is art being disarmingly honest about being dishonest.

What this fledgling Impressionism puts on offer, distinct from the works around it, is a kind of gentle disruption or incompleteness—a willingness to leave things half-said, an admission of ambiguity, not as a problem to be solved but as a truth to be treasured. Nowhere is this more compelling than in Morisot’s The Cradle (1872). A portrait of the artist’s sister Edma watching her sleeping daughter, it takes a soft subject—mother and child, linen and lace—and girds it with a tensile framework of planes, taut lines, and swooping catenaries. Look beyond the “femininity” and you can see the first steps of the dance with abstraction that would dominate 20th-century painting from Henri Matisse to Richard Diebenkorn. At least as astonishing, though, is the neutrality and distance of the expression on Edma’s face. It might be exhaustion, or reverie, or (because before her marriage, she too had been a gifted professional painter) dispassionate study. Think what you will.

The Cradle is not harrowing or angst-ridden. It doesn’t picture unpleasantness. But when Smee writes of Morisot’s pursuit of “a new language of lightness and evanescence—a language based in close observation, devoid of rhetoric or hysteria,” he’s talking about a response to 1870–71. Both the right-wing empire and the left-wing Commune had ended in pointless, bloody, self-inflicted tragedies. The survivors, at least some of them, had learned to mistrust big ideas. An art about nothing might seem a strange defense, but the act of paying attention to what is rather than what should be—to the particular and ephemeral rather than the abstract and eternal—could be a bulwark against the seductions of ideology.

Resistance, of necessity, adapts to circumstance. In China during the Cultural Revolution, when message-laden art was an instrument of the state, artists belonging to the No Name Group took to clandestine plein air painting in the French mode precisely because it “supported no revolutionary goals—it was hand-made, unique, intimate and personal,” the scholar and artist Chang Yuchen has written. “In this context nature was less a retreat than a chosen battlefield.”

I used to think that Impressionism’s just-rightness was simply a function of time’s passage—that its inventions had seeped so deeply into our culture that they felt comfy. But although familiarity might explain our ease, it doesn’t fully explain Impressionism’s continued hold: the sense that beyond being nice to look at, it still has something to say. The more time I spent in “Paris 1874,” the more I cooled on the soft-edged moniker “impressionist” and warmed to the bristlier “intransigent.” It was a term often applied to unrepentant Communards, but the most intransigent thing of all might just be refusing to tell people what to think.

The contemporary art world, like the world at large, has reentered a period of high moral righteousness. Major institutions and scrappy start-ups share the conviction that the job (or at least a job) of art is to instruct the public in values. Educators, publicists, and artists work hard to ensure that nobody gets left behind and nobody misses the point. But what if leaving the point unfixed is the point?

Whether all of this would have developed in the same way without the violence and disillusionment of the Franco-Prussian War and the Commune is impossible to know. But there are worse lessons to derive from trauma than these: Take pleasure in your senses, question authority, look around you. Look again.

This article appears in the December 2024 print edition with the headline “The Dark Origins of Impressionism.”

The Freedom of Quincy Jones

The Atlantic

www.theatlantic.com/culture/archive/2024/11/quincy-jones-obituary-future/680536/

When the 1997 comedy Austin Powers needed a song to send up the swinging ’60s in its joyfully absurd opening sequence, the movie could have opted for obvious touchstones, such as British-invasion rock or sitar-drenched psychedelia. Instead, it used an offbeat bit of samba-jazz by Quincy Jones. This was an inspired choice. Jones’s 1962 song “Soul Bossa Nova” was certainly an artifact of its decade, reflecting a then-emerging international craze for Brazilian rhythms. But the track was more than just a time capsule; its hooting percussion and saucy flutes exploded from the speakers in a way that still sounds original, even alien, decades later.

Jones, the legendary polymath who died at age 91 on Sunday, spent a lifetime making music like this—music that defined its era by transcending it. He’s best associated with the gleaming, lush sound of jazz and pop in the ’70s and ’80s, as most famously heard on Michael Jackson’s albums Off the Wall, Thriller, and Bad. But his impact was bigger than any one sound or epoch, as Jones used his talent and expertise to design a future we’re still catching up to.

Jones was born into wretched conditions in Depression-era Chicago: His mother was sent to a mental hospital when he was 7, leaving him to be temporarily raised by a grandmother who was so poor that she cooked rats to eat. When Jones was 11, after his family moved to Washington State, he and his brother broke into a building looking for food and came across a piano; playing around with the instrument lit a fire in the young Jones. He’d spend his teenage years hanging out with Ray Charles and playing trumpet with the Count Basie Orchestra; at age 20, he started touring the world as a member of Lionel Hampton’s big band. After producing Dinah Washington’s 1955 album, For Those in Love, he went to Paris to study under the famed classical-music teacher Nadia Boulanger, who’d also tutored Igor Stravinsky and Aaron Copland.

These early brushes with genius—and global travels that exposed him to far-flung musical traditions—gave him the skills he’d draw on for the rest of his life. Boulanger, Jones would often later say, drilled into him an appreciation for the endless possibilities contained within the confines of music theory. Mastery, she told him, lay in understanding how previous greats had creatively used the same 12 notes available to everyone else. Jones took this idea to heart. His work was marked by a blend of compositional rigor and freedom; knowing what had come before allowed him to arrange familiar sounds in ways that were, in one way or another, fresh.

Take, for example, Lesley Gore’s 1963 hit “It’s My Party,” which Jones produced. The song is a key text of mid-century girl-group pop—Phil Spector tried to take the song for the Crystals—but what made it soar were the Jonesian touches: harmonic decisions that feel ever so off, Latin syncopation pulsing throughout. You can hear similarly eclectic, colorful elements in another American standard that Jones arranged: Frank Sinatra and Count Basie’s 1964 version of “Fly Me to the Moon” (which Buzz Aldrin listened to before stepping onto the lunar surface in 1969).  

Though schooled by classical academics and jazz insiders, Jones seemed to have a pop soul: He used precise technique not to impress aficionados but to convey emotion in an accessible, bold way. “The Streetbeater,” the theme song for Sanford and Son, used prickly, interlaced percussion to conjure sizzling excitement; a tempo change in “Killer Joe,” from Jones’s 1969 album, Walking in Space, opened up an oasis of cooling flute. The 1985 African-famine-relief anthem “We Are the World” was a particularly gracious use of talent. Not just any producer could have brought 46 vocalists—including such distinctive voices as Bob Dylan, Cyndi Lauper, and Tina Turner—into one coherent, catchy whole.

Jones’s signature collaborator was Michael Jackson. It was a kinship that made sense: The two men shared a knack for rhythm, a sense of history, and perfectionism. “He had a perspective on details that was unmatched,” Jones said of Jackson in a 2018 GQ interview. “His idols are Fred Astaire, Gene Kelly, James Brown, all of that. And he paid attention, and that’s what you’re supposed to do.” For all of Jackson’s scandals and eccentricities, the music he made with Jones has never been overshadowed. The songs are just too intricately lovely, delighting hips and hearts and heads all at once, to be denied.

[Read: AI can’t make music]

As Jones settled into living-icon status, he tried to pass his wisdom to new generations. In 1992, he founded the hip-hop magazine Vibe; in 2017, he launched Qwest TV, a streaming service for videos of jazz performances. He kept working with young talents, such as Amy Winehouse in 2010 and the avant-pop composer Jacob Collier much more recently. Even so, later in life, Jones liked to gripe about the state of pop music. In his view, modern artists weren’t educated or broad-minded enough to break new ground. “Musicians today can’t go all the way with the music because they haven’t done their homework with the left brain,” he told New York magazine in 2018. “Music is emotion and science.” He added, “Do these musicians know tango? Macumba? Yoruba music? Samba? Bossa nova? Salsa? Cha-cha?”

Yet clearly, he still has disciples today—though perhaps some of them are misunderstanding his lessons, trying nostalgically to imitate his work rather than studying his techniques to create something different. I feel, for example, conflicted about the Weeknd, a pastiche-y pop star who’s obsessed with recapturing the magic of Jones and Jackson’s hot streak. Jones himself appeared on an interlude on the Weeknd’s 2022 release, Dawn FM. He relayed a story about childhood trauma rippling throughout his adult life, and concluded by saying, “Looking back is a bitch, isn’t it?” The point, he seemed to say, was to use the past to keep moving forward.

The Magic Mountain Saved My Life

The Atlantic

www.theatlantic.com/magazine/archive/2024/12/thomas-mann-magic-mountain-cultural-political-relevance/680400/

Just after college, I went to teach English as a Peace Corps volunteer in a small village school in West Africa. To help relieve the loneliness, I packed a shortwave radio, a Sony Walkman, and, among other books, a paperback copy of Thomas Mann’s very long novel The Magic Mountain. As soon as I set foot in Togo, something began to change. My pulse kept racing; my mouth went dry and prickly; dizzy spells came on. I developed a dread of the hot silence of the midday hours, and an awareness of each moment of time as a vehicle for mental pain. It might have helped if I’d known that my weekly antimalarial medicine could have disturbing effects, especially on dreams (mine were frighteningly vivid), or if someone had mentioned the words anxiety and depression to me. At 22, I was a psychological innocent. Without the comfort of a diagnosis, I experienced these changes as a terrifying void of meaning in the universe. I had never noticed the void before, because I had never been moved to ask the questions Who am I? What is life for? Now I couldn’t seem to escape them, and I received no answers from an empty sky.

I might have lost my mind if not for The Magic Mountain. By luck or fate, the novel—which was published 100 years ago, in November 1924—seemed to tell a story a little like mine, set not in the West African rainforest but in the Swiss Alps. Hans Castorp, a 23-year-old German engineer, leaves the “flatlands” for a three-week visit to his cousin Joachim, a tuberculosis patient who is taking the cure in one of the high-altitude sanatoriums that flourished in Europe before the First World War. Hans Castorp (Mann’s detached and amused, yet sympathetic, narrator always refers to the protagonist by his full name) is “a perfectly ordinary, if engaging young man,” a slightly comical young bourgeois.

Arriving on the mountain, he immediately loses his bearings. In the thin air, his face goes hot and his body cold; his heart pounds, and his favorite cigar tastes like cardboard. His sense of time becomes warped. Many of the patients spend years “up here.” No one speaks or thinks in terms of days. “ ‘Home in three weeks,’ that’s a notion from down below,” his ailing cousin warns. Hans Castorp’s companions at the sanatorium’s five lavish daily meals are a cosmopolitan and macabre gallery of mostly young people who fill the endless hours gossiping, flirting, quarreling, philosophizing, and waiting to recover or die. The proximity of death is unsettling; it’s also funny (when the roads are blocked by snow, corpses are sent flying down the mountain on bobsleds) and strangely alluring.

[From the January 1953 issue: Thomas Mann on the Making of The Magic Mountain]

When Hans Castorp catches a cold, the sanatorium’s director examines him and finds a “moist spot” on one of his lungs. That and a slight fever suggest tuberculosis, requiring him to remain for an indeterminate time. Both diagnosis and treatment are dubious, but they thrill Hans Castorp: This hermetic world has begun to cast a spell on him and provoke questions “about the meaning and purpose of life” that he’d never asked down in the flatlands. Answered at first with “hollow silence,” they demand extended contemplation that’s possible only on the magic mountain.

The director’s assistant, trained in psychoanalysis, explains in one of his biweekly lectures that sickness is “merely transformed love,” the body’s response to repressed desire. Fever is the mark of eros; the decay of a diseased body signifies life itself. Mann had ventured onto this terrain before. In his novella Death in Venice (1912), the famous writer Gustav von Aschenbach, infatuated with a Polish boy at his hotel, stays in the plague-ridden city while other visitors flee. Hans Castorp stays too, obsessed with his own temperature chart, and with the entrancing Clavdia Chauchat, a young tubercular Russian with “Kirghiz eyes,” bad posture, and a habit of letting the dining-room door slam behind her. Almost half the novel goes by before Hans Castorp—who has by now been on the mountain for seven months—talks with Clavdia, just as she’s about to depart. On the night before she leaves, he makes one of the most bizarre declarations of love in literature: “Let me take in the exhalation of your pores and brush the down—oh, my human image made of water and protein, destined for the contours of the grave, let me perish, my lips against yours!” Clavdia leaves Hans Castorp with a framed X-ray of her tubercular lung.

I fell under the spell of Hans Castorp’s quest story, as the Everyman hero is transformed by his explorations of time, illness, sciences and séances, politics and religion and music. The climactic chapter, “Snow,” felt as though it were addressed to me. Hans Castorp, lost in a snowstorm, falls asleep and then awakens from a mesmerizing and monstrous dream with an insight toward which the entire story has led him: “For the sake of goodness and love, man shall grant death no dominion over his thoughts.”

Hans Castorp remains on the mountain for seven years—a mystical number. The Magic Mountain is an odyssey confined to one place, a novel of ideas like no other, and a masterpiece of literary modernism. Mann analyzes the nature of time philosophically and also conveys the feeling of its passage, slowing down his narrative in some spots to take in “the entire world of ideas”—a day can fill 100 pages—and elsewhere omitting years. Reading this dense yet miraculously seductive book becomes an experience like Hans Castorp’s interlude on the mountain. As I made my way through the novel by kerosene lamplight, I took Mann’s bildungsroman as a guide to my own education among the farmers, teachers, children, and market women who became my closest companions, hoping to find myself on a journey toward enlightenment as rich and meaningful as its hero’s. That was asking too much of even great literature; afraid of my own suicidal thoughts, I went home before the end of my two years. But on a few particularly dark nights, The Magic Mountain probably saved my life.

I recently returned to The Magic Mountain, without the intense identification of the first time (you have to be young for a book to inspire that), but with a larger sense that, a century later, Mann has something important to tell us as a civilization. The Mann who began writing the novel was an aristocrat of art, hostile to democracy—a reactionary aesthete. Working on The Magic Mountain was a transformative experience, turning him—as it turned his protagonist—into a humanist. What Hans Castorp arrives at, lost and asleep in the snow, “is the idea of the human being,” Mann later wrote, “the conception of a future humanity that has passed through and survived the profoundest knowledge of disease and death.” In our age of brutal wars, authoritarian politics, cultures of contempt, and technology that promises to replace us with machines, what is left of the idea of the human being? What can it mean to be a humanist?

Mann conceived of The Magic Mountain in 1912, when he was 37, after a three-week visit to a sanatorium in Davos where his wife, Katia, was a patient. “It was meant as a humorous companion-piece to Death in Venice and was to be about the same length: a sort of satire on the tragedy just finished,” he later wrote. He soon discovered that his story resisted the confines of a comic novella. But before he could realize its possibilities, World War I broke out, in August 1914. With Hans Castorp still in his first week at the sanatorium, Mann abandoned the manuscript as Europe plunged into unprecedented destruction. In a letter to a friend in the summer of 1915, he left a clue as to where things stood with his unfinished novel: “On the whole the story inclines towards sympathy with death.” And he now saw an ending—the war itself.

Mann published no fiction for the duration of the war. Instead, he became a very public defender of imperial Germany against its adversaries. For Mann, the Great War was more than a contest among rival European powers or a patriotic cause. It was a struggle between “civilization” and “culture”—between the rational, politicized civilization of the West and Germany’s deeper culture of art, soul, and “genius,” which Mann associated with the irrational in human nature: sex, aggression, mythical belief. The kaiser’s Germany—strong in arms, rich in music and philosophy, politically authoritarian—embodied Mann’s ideal. The Western powers “want to make us happy,” he wrote in the fall of 1914—that is, to turn Germany into a liberal democracy. Mann was more drawn to death’s mystery and profundity than to reason and progress, which he considered facile values. This sympathy wasn’t simply a fascination with human evil—with a death instinct—but an attraction to a deeper freedom, a more intense form of life than parliaments and pamphleteering offered.

Mann scorned the notion of the writer as political activist. The artist should remain apart from politics and society, he believed, free to represent the deep and contradictory truths of reality rather than using art as a means to advance a particular view. In his wartime nonfiction writing, he mocked “civilization’s literary man,” a self-important poseur who takes sides on public issues and signs petitions. Mann was aiming at his brother Heinrich, a novelist and an essayist of nearly equal renown, whose liberal politics led him to support Germany’s enemies, France and Britain. The brothers exchanged indirect but caustic volleys in print, and their fraternal dispute became so bitter that they didn’t speak for seven years.

Before setting aside The Magic Mountain, Mann had created a version of this writer figure in a character named Lodovico Settembrini, another patient at the sanatorium, who is an irascible and hyper-articulate advocate for all things progressive: reason, liberty, virtue, health, the active life, social improvement. He declares music, the most emotionally overpowering of the arts, “politically suspect.” Mann at his most satiric has Settembrini contributing an essay to a multivolume project whose purpose is to end suffering. In short, Settembrini, like Heinrich, is a “humanist”—but in Mann’s usage, the term has an ironic sound. As he wrote elsewhere, it implies “a repugnant shallowness and castration of the concept of humanity,” pushed by “the politician, the humanitarian revolutionary and radical literary man, who is a demagogue in the grand style, namely a flatterer of mankind.”

Settembrini becomes a philosophical tutor to Hans Castorp, who listens with respectful interest but resists the liberal catechism. He responds more powerfully to the erotic allure of Clavdia Chauchat, the careless door slammer, who believes in “abandoning oneself to danger, to whatever can harm us, destroy us.” Yet Settembrini also has the wisdom to warn our hero against the seductions of the sanatorium, which separates young people from the society “down there,” infecting them with lassitude and rendering them incapable of ordinary life. As an artist above politics, Mann didn’t want simply to criticize “civilization’s literary man,” but to show him as “equally right and wrong.” He intended to create an intellectual opponent to Settembrini in a conservative Protestant character named Pastor Bunge—but the war intruded.

Mann spent the war years making his case for the German soul, steeped in the “passion” of Wagner and “manliness” of Nietzsche, amid a global catastrophe that remained bloodlessly abstract to him at his desk in Munich. He published his wartime writings in the genre-defying Reflections of a Nonpolitical Man in October 1918, one month before the armistice. Katia Mann later wrote, “In the course of writing the book, Thomas Mann gradually freed himself from the ideas which had held sway over him … He wrote Reflections in all sincerity and, in doing so, ended by getting over what he had advocated in the book.”

When Mann unpacked the four-year-old manuscript of The Magic Mountain in the spring of 1919, the novel and its creator were poised to undergo a metamorphosis. The war that had just ended enlarged the novel’s theme into “a worldwide festival of death”; the devastation, he would go on to write in the book’s last pages, was “the thunderbolt that bursts open the magic mountain and rudely sets its entranced sleeper outside the gates,” soon to become a German soldier. It also confronted Mann himself with a new world to which he had to respond.

Defeated Germany was in a state of revolution. In Munich, demobilized soldiers, right-wing paramilitaries, and Communist militants fought in the streets, while leaders of the new Weimar Republic were routinely assassinated. A local war veteran named Adolf Hitler began to electrify crowds in cramped halls with speeches denouncing the “traitors”—republican politicians, leftists, Jews—who had stabbed Germany in the back. The National Socialist German Workers’ Party was born in Munich; Hitler’s attempted coup in November 1923, known as the Beer Hall Putsch, took place less than two miles from the Mann house.

Some German conservatives, in their hatred of the Weimar Republic and the Treaty of Versailles, embraced right-wing mass politics. Mann, nearing 50, vacillated, hoping to salvage the old conservatism from the new extremism. In early 1922, he and Heinrich reconciled, and, as Mann later wrote, he began “to accept the European-democratic religion of humanity within my moral horizon, which so far had been bounded solely by late German romanticism, by Schopenhauer, Nietzsche, Wagner.” In April of that year, in a review of a German translation of Walt Whitman’s selected poetry and prose, he associated the American poet’s mystical notion of democracy with “the same thing that we in our old-fashioned way call ‘humanity’ … I am convinced there is no more urgent task for Germany today than to fill out this word, which has been debased into a hollow shell.”

The key event of Mann’s conversion came in June, when ultranationalists in Berlin murdered his friend Walther Rathenau, the Weimar Republic’s Jewish foreign minister. Shocked into taking a political stand, Mann turned a birthday speech in honor of the Nobel Prize–winning author Gerhart Hauptmann into a stirring call for democracy. To the amazement of his audience and the German press, Mann ended with the cry “Long live the republic!”

Mann the novelist had meanwhile returned to The Magic Mountain, and his work on it took a swerve in the same crucial year of 1922. His hero would have to struggle with the political battle that had beset Mann during the war. Abandoning Pastor Bunge as outmoded, he created a new counterpart to Settembrini who casts a sinister shadow over the second half of the novel: an ugly, charismatic, and (of course) tubercular Jesuit of Jewish origin named Leo Naphta. The intellectual combat between him and Settembrini—which ends physically, in a duel—provides some of the most dazzling passages in The Magic Mountain.

Just when you want to give up on their high-level dialectics, one of them, usually Naphta, says something that shocks you into a new way of thinking. Naphta is neither conservative nor liberal. Against capitalist modernity, whose godless greed and moral vacuity he hates with a sulfurous rage, Naphta offers a synthesis of medieval Catholicism and the new ideology of communism. Both place “anonymous and communal” authority over the individual, and both are intent on saving humanity from Settembrini’s soft, rational humanism. Hans Castorp calls Naphta “a revolutionary of reaction.” At times sounding like a fanatical parody of the Mann of Reflections, Naphta argues that love of freedom and pleasure is weaker than the desire to obey. “The mystery and precept of our age is not liberation and development of the ego,” he says. “What our age needs, what it demands, what it will create for itself, is—terror.” Mann understood the appeal of totalitarianism early on.

It’s Naphta, a truly demonic figure—not Settembrini, the voice of reason—who precipitates the end of the hero’s romance with death. His jarring arrival allows Hans Castorp to loosen himself from its grip and begin a journey toward—what? Not toward Settembrini’s international republic of letters, and not back toward his simple bourgeois life down in the flatlands. The answer comes 300 pages before the novel’s end, when Hans Castorp puts on a new pair of skis and sets out for a few hours of exercise that lead him into the fateful blizzard and “a very enchanting, very dreadful dream.”

In it, he encounters a landscape of human beings in all their kindness and beauty, and all their hideous evil. “I know everything about humankind,” he thinks, still dreaming, and he resolves to reject both Settembrini and Naphta—or rather, to reject the stark choice between life and death, illness and health, recognizing that “man is the master of contradictions, they occur through him, and so he is more noble than they.” During his years on the mountain, he’s become one of death’s intimates, and his initiation into its mysteries has immeasurably deepened his understanding of life—but he won’t let death rule his thoughts. He won’t let reason either, which seems weak and paltry before the power of destruction. “Love stands opposed to death,” he dreams; “it alone, and not reason, is stronger than death.”

The Magic Mountain makes no clear political statement. The novel remains true to Mann’s belief that art must include everything, allowing life its complexity and ambiguity. But the vision of “love” that Hans Castorp embraces just before waking up is “brotherly love”—the bond that unites all human beings. The creation of this novel, which won Mann international fame, is “a tale of two Thomas Manns,” in the words of Morten Høi Jensen, a Danish critic whose The Master of Contradictions: Thomas Mann and the Making of “The Magic Mountain” is due to be published next year. The Mann of wartime could not have written the sentence that awakens Hans Castorp from his dream.

[From the October 1944 issue: Thomas Mann’s “In My Defense”]

Mann now recognized political freedom as necessary to ensure the freedom of art, and he became a sworn enemy of the Nazis. A Nobel Prize winner in exile, he emerged as the preeminent German spokesman against Hitler, warning Americans, in lectures across the United States in 1938, of the rising threat to democracy, which for him was inseparable from humanism: “We must define democracy as that form of government and of society which is inspired above every other with the feeling and consciousness of the dignity of man.”

He was speaking at a moment when the dignity of man was locked up in Nazi concentration camps, liquidated in Soviet show trials, buried under piles of corpses. Yet Mann urged his audiences to resist the temptation to deride humanity. “Despite so much ridiculous depravity, we cannot forget the great and the honorable in man,” he said, “which manifest themselves as art and science, as passion for truth, creation of beauty, and the idea of justice.”

Could anyone utter these lofty words today without courting a chorus of snickers, a social-media immolation? We live in an age of human self-contempt. We’re hardly surprised when our leaders debase themselves with vile behavior and lies, when combatants desecrate the bodies of their enemies, when free people humiliate themselves under the spell of a megalomaniacal fraud. It takes a constant effort not to accept this as normal. We might even feel, without acknowledging it to ourselves, that we deserve it: After all, we’re human, the lowest of the low.

In driving our democracy into hatred, chaos, and violence, we, too, grant death dominion over our thoughts. We succumb to the impulse to escape our humanness. That urge, ubiquitous today, thrives in the utopian schemes of technologists who want to upload our minds into computers; in the pessimism of radical environmentalists who want us to disappear from the Earth in order to save it; in the longing of apocalyptic believers for godly retribution and cleansing; in the daily sense of inadequacy, of shame and sin, that makes us disappear into our devices.

The need for political reconstruction, in this country and around the world, is as obvious as it was in Thomas Mann’s time. But Mann also knew that, to withstand our attraction to death, a decent society has to be built on a foundation deeper than politics: the belief that, somewhere between matter and divinity, we human beings, made of water, protein, and love, share a common destiny.

This article appears in the December 2024 print edition with the headline “The Magic Mountain Saved My Life.”

Ode to Uncertainty

The Atlantic

www.theatlantic.com › culture › archive › 2024 › 11 › ode-to-uncertainty-election › 680524

A twist in the guts, a shift in the tide,
there are cartons of eggs getting broken worldwide.   
I’m not sleeping and neither are you.
Boo-hoo.
In fear, in fear, the stars are spread,
they shine in isolate rings of dread,
and should the heavens get too tight
they’ll hiss and disengage their light.

Were we helpless? Were we blind?
Were we out of our fucking minds?
Should we have got that booster shot
from the screaming man in the parking lot?
And is he among us, the Father of Lies,
his presence announced by a buzzing of flies,
with all of his reptile retinue?
America, nice knowing you.

Oligarchs be gentle, oligarchs be nice,
oligarchs don’t make us say it twice.
The smoke descends, the options narrow,
this is a moment seeking its tarot,
its Devil, its Hanged Man, its Ten of Swords.
Can you tell the tale? Do you have the words?
Come on, give me the pill, Jill,
and we’ll roll unconscious down the hill.

Maintained in this state of wild vexation
by volleys of planetary radiation—
what if a genie replaced your phone
with the club of somebody’s tibia bone?
Love alone is the medicine for asshole-ism,
Love the elixir that settles the schism,
Love the securest biodefense.
O keep us together, Love. Make us make sense.