Donald Trump and the Politics of Looking Busy

The Atlantic

www.theatlantic.com/politics/archive/2025/02/trump-busy-second-term/681664

Let us pause the various constitutional crises, geopolitical showdowns, and DOGE dramas to make a simple observation: Donald Trump seems kind of busy, no?

In recent days, he kicked off what the media have dubbed “Tariff Week” by declaring Sunday, February 9, Gulf of America Day. This occurred as he flew to New Orleans to become the first sitting U.S. president to attend the Super Bowl, and just before Fox News aired a Super Bowl Sunday/Gulf of America Day interview (a presidential news-making tradition that Joe Biden had blown off the past two years) in which Trump, among other things, (1) reiterated that Canada should become the 51st U.S. state, (2) declined to endorse Vice President J. D. Vance as his successor (“but he’s very capable”), and (3) referred to Gaza as a “demolition site.”

Trump spent much of the afternoon and evening getting fussed over by billionaires, celebrities, and other dignitaries in front of 127.7 million viewers, during the most-watched television broadcast in history. He received mostly cheers when his ubiquitous mug was shown on the Caesars Superdome big screen before the game, which he watched with his daughter Ivanka and NFL Commissioner Roger Goodell from a 50-yard-line suite. He closed out his weekend by stirring up bad blood with Kamala Harris supporter Taylor Swift via Truth Social (“BOOED out of the Stadium”) and ordering his Treasury secretary to terminate the bipartisan menace of the penny.

[Read: A Super Bowl spectacle over the gulf]

After a brief overnight respite, the Trump-centric events kept hurtling forth in a flurry of perpetual motion—also known as Monday and Tuesday. Trump imposed 25 percent duties on all steel and aluminum imports; pardoned former Illinois Governor Rod Blagojevich; and threatened that “all hell is gonna break out” if Hamas does not release all Israeli hostages by Saturday at noon. He signed an executive order that calls for a halt to all federal purchases of those flaccid paper straws (which, let’s face it, are as annoying as pennies), and another directing all federal agencies to cooperate with Elon Musk’s Department of Government Efficiency to “significantly” reduce the federal workforce. This came a few hours after he held an Oval Office meeting with Jordan’s King Abdullah II in which the president reasserted, in reference to Gaza, “We’re going to take it, we’re going to hold it, we’re going to cherish it.”

In summation: Yes, Trump definitely does seem kind of busy.

Opinions, of course, vary about whether this is a good or a catastrophic kind of busy. And for what it’s worth, several federal judges have declared themselves hostile to Trump’s executive orders. Regardless, these rapid-fire feedings of attention-seizing fodder represent a fundamental ethic of Trump 2.0: Frenetic action—or at least the nonstop impression thereof—seems very much the point. And notwithstanding the whiplash, turbulence, and contradiction of it all, people seem to like it so far.

In a CBS News/YouGov poll released Sunday, 53 percent of the 2,175 U.S. adults surveyed said that they approved of the job Trump is doing, a higher share than at any point in his first go-round. Perhaps more revealing, the poll’s respondents described these first weeks of the 78-year-old president’s term as “energetic,” “focused,” and “effective.” They might not necessarily approve of what Trump has been energetic, focused, and effective about doing (pardoning the January 6 perpetrators, for example) or not doing (66 percent said Trump hasn’t paid enough attention to lowering prices for goods and services). But Trump has created a sense of action, commotion, disruption, and maybe even destruction that many voters seem to welcome for now. At the very least, there is nothing sleepy about any of this.

“He said he was going to do something, and he’s doing it,” one woman told a Bulwark focus group of Biden-turned-Trump voters conducted in the days after Trump returned to the White House. At this point, the fact of this “something” seems to be trumping the substance of it. The woman said she works in clinical research at a hospital and interacts with people who might lose National Institutes of Health grants to Trump and Musk’s barrage of cuts; she described a work environment that has been thrown into chaos.

“Like, what do we do? We have no idea, the CEO has no idea. We’re confused a little bit,” the woman said. “I’m not saying it’s the right move, the wrong move,” she added. “But it’s definitely like, Something’s happening. He’s actually doing something.”

[Read: The strategy behind Trump’s policy blitz]

Sarah Longwell, the Bulwark publisher who runs the focus groups, told me that Trump appears to be benefiting from “Joe Biden’s complete lack of communication” during his time in office. Longwell said she repeatedly heard from voters that they had no idea what Biden wanted to do in office, or what he was doing. “He created this huge vacuum of presidential communication that Trump is now filling,” Longwell said.

She added that Biden also presents a cautionary example of how a president’s initial popularity can be fleeting. Four years ago, at this same point, voters were sounding quite appreciative of having someone in office who was not constantly in their faces. Biden was seen as restoring “normalcy” after the tumultuous, COVID-dominated, and violent end of Trump’s first term. He polled in the low 60s in a March 2021 CBS survey, was still getting compared to Franklin D. Roosevelt, and enjoyed a popularity that would last until the summer of 2021, when Afghanistan went south and inflation headed north.

A hallmark of presidential honeymoons is that presidents tend to look better when they act in ways that contrast with their predecessor, especially when their predecessor was unpopular. Another hallmark of those honeymoon periods: They tend not to last. In other words, Trump should cherish this while he can—or until all hell breaks out and people start pining again for normalcy.

The Wrong Case for Foreign Aid

The Atlantic

www.theatlantic.com/international/archive/2025/02/foreign-aid-trump-usaid/681652

As Elon Musk and President Donald Trump attempt to unlawfully obliterate USAID, its advocates have focused on the many ways that shutting off foreign aid damages U.S. interests. They argue that it exposes Americans to a greater risk of outbreaks such as Ebola and bird flu, stifles future markets for domestic producers, and cedes the great-power competition to China. These arguments are accurate and important, but they have overtaken a more fundamental—and ultimately more persuasive—reason for the U.S. to invest in foreign aid: It’s essential to America’s identity.

Following World War II, every U.S. president until Trump used his inaugural address to champion foreign aid and invoke the country’s long-held ideals of decency and generosity. They maintained that Americans had a moral duty to help the deprived. Once Trump was elected in 2016, however, U.S. leaders and aid advocates grew reluctant to talk about altruism. President Joe Biden made no mention of the world’s needy in his inaugural address.

I’m as much to blame for this shift as anyone. I served as USAID’s head speechwriter for six years under the past two Democratic administrations. In that role, I prioritized tactical arguments about America’s safety and well-being in order to persuade the shrinking segment of Republicans who were sympathetic to foreign aid. For a time, it worked. During the Biden administration, Congress spared USAID’s budget from the most drastic proposed cuts, and the agency received unprecedented emergency funding to deal with a series of humanitarian disasters, conflicts, and climate catastrophes.

[Read: The cruel attack on USAID]

Today, however, that line of reasoning is failing. Trump, Musk, and their allies are convinced that administering foreign aid weakens America, rather than enriching or securing it. Marco Rubio used to be one of the agency’s biggest supporters; now, as secretary of state, he’s maligning its staff and abetting its demolition.

A more compelling message is that Trump and Musk’s foreign-aid freeze could be one of the cruelest acts that a democracy has ever undertaken. In 2011, when Republican members of Congress proposed a 16 percent cut in annual foreign aid, then–USAID Administrator Rajiv Shah conservatively estimated that it would lead to the deaths of 70,000 children. That is more children than died in Hiroshima and Nagasaki. Depending on how thoroughly Trump and Musk are allowed to dismantle USAID, the casualties this time could be worse. (A federal judge has temporarily blocked their plan to put staffers on leave.)

By assaulting the foreign-aid system, Rubio, Musk, and Trump are redefining what it means to be American: small-hearted rather than generous; unexceptional in our selfishness. To respond by arguing that foreign aid simply benefits Americans is to accede to their view, not combat it.

Instead, advocates of foreign aid should appeal to a higher principle: To be American is to care about those in need. The country is already primed for this message. Americans are an exceptionally charitable people, donating more than $500 billion each year. And although polling shows that a narrow majority of Americans want to cut foreign aid in the abstract, they strongly support the specific programs it funds, including disaster relief, food and medicine, women’s education, and promoting democracy.

[Read: Trump’s assault on USAID makes Project 2025 look like child’s play]

That support derives above all from a moral belief. According to a poll by KFF, only 25 percent of respondents cited economic or national-security interests as the most important reason for America to invest in the public health of developing countries. Nearly double—46 percent—said that it’s the right thing to do.

A modern blueprint exists for tapping into Americans’ concern for the world’s poor. During the George W. Bush and Obama administrations, proponents of foreign aid emphasized America’s values ahead of its interests, inspiring communities of faith and galvanizing a nationwide youth movement. Rock stars and celebrities echoed the message, which penetrated pop culture. When an earthquake struck Haiti in 2010, a telethon featuring performances by Beyoncé and Taylor Swift raised $61 million; stars including Leonardo DiCaprio and Julia Roberts staffed the phones. No one mentioned security or prosperity. Empathy was enough.

Today, the political and cultural coalitions that championed foreign aid are severely diminished. The Republicans whom USAID once counted on have gone silent. Few faith leaders or celebrities are calling for foreign aid to resume. No widespread youth movement is demanding that we end poverty now. Proponents, myself included, stopped focusing on inspiring the American people, so it’s no surprise that they are uninspired. But we can motivate them again. We just need to appeal to their hearts as much as their minds.

The Game That Shows We’re Thinking About History All Wrong

The Atlantic

www.theatlantic.com/culture/archive/2025/02/civilization-7-review/681656

This is an era of talking about eras. Donald Trump says we’ve just begun a “golden age.” Pundits—responding to the rise of streaming, AI, climate change, and Trump himself—have announced the dawn of post-literacy, post-humanism, and post-neoliberalism. Even Taylor Swift’s tour name tapped into the au courant way of depicting time: not as a river, but as a chapter book. A recent n+1 essay asked, “What does it mean to live in an era whose only good feelings come from coining names for the era (and its feelings)?”

Oddly enough, the new edition of Civilization, Sid Meier’s beloved video-game franchise, suggests an answer to that question. In the six previous Civ installments released since 1991, players guide a culture—such as the Aztecs, the Americans, or the French—from prehistory to modernity. Tribes wielding spears and scrolls grow into global empires equipped with nukes and blue jeans. But Civilization VII, out this month, makes a radical change by firmly segmenting the experience into—here’s that word—eras. At times, the resulting gameplay mirrors the pervasive mood of our present age-between-ages: tedious, janky, stranded on the way to somewhere else.

In many ways, the game plays like a thoughtful cosmetic update. You select a civilization and a leader, with options that aren’t only the obvious ones (all hail Empress Harriet Tubman!). The world map looks ever so fantastical, with postcard-perfect coastlines and mountains resembling tall sandcastles. Then, in addictive turn after turn, you befriend or conquer neighboring tribes (using sleek new systems for war and diplomacy), discover technologies such as the wheel and bronze-working, and cultivate cities filled with art and industry. The big twist is that all the while, an icon on-screen accumulates percentage points. When it gets somewhere above 70 percent, a so-called crisis erupts: Maybe your citizens rebel; maybe waves of outsiders attack. At 100 percent, the game pauses to announce that the “Antiquity Age” is over. Time isn’t just marching on—your civilization is about to molt, caterpillar-style.

[Read: Easy mode is actually for adults]

In each of the two subsequent ages—Exploration, Modern—players pick a new society to transform into. In my first go, my ancient Romans became the Spanish, who sent galleons to distant lands. Then I founded modern America and got to work laying down a railroad network. Over time, my conquistadors retired, and my pagan temples got demolished to make way for grocery stores. Yet certain attributes persisted. For example, the Roman tradition of efficiently constructing civic works made building the Statue of Liberty easier. As I played, the word civilization came to feel newly expansive. I wasn’t running a country; I was tending to a lineage of peoples who had gone by a few names but shared a past, a homeland, self-interest, and that hazy thing called culture.

In the run-up to the game’s release, Civilization’s developers argued that the eras system is realistic. No nation-state has continuously spanned the thousands of years that a typical Civ game simulates; the closest counterexample might be China, which is playable as three different dynastic forms (plus Mongolia) in this game. Although Civ’s remix of history is always a bit wacky, I could maintain a plausible-ish narrative in my head to explain why my America’s cities featured millennia-old colonnades (to quote a colleague: Are We Rome?). Each era-ending crisis created a credible kind of drama: In real life, revolutions, reformations, migration, invasion, disasters, and so much else can reshape societies in fundamental ways. The game succeeds at making the case that, as its creators like to say, “history is built in layers.”

Unfortunately, in the most recent version of the game, history also feels overdetermined. Winning in previous Civs meant accomplishing one self-evidently climactic feat—conquering Earth, say, or mastering spaceflight. During the many hours it took to get to that goal, you enjoyed immense freedom to improvise your own path. Civ VII, however, adds a menu of goals for each era. To succeed in the Antiquity Age, for example, you might build seven Wonders of the World; in modernity, you could mass-produce a certain number of factory goods and then form a world bank. The micro objectives lend each era a sense of narrative cohesion—but a limiting and predictable kind, less epic novel than completed checklist. Playing Civilization used to feel like living through an endless dawn of possibility. But this time, you’re not in command of history; history is in command of you, and it’s assigning you busywork.

[Read: What will become of American civilization?]

Making matters worse, the complexity of the eras mechanism seems to have encouraged the game’s designers to simplify other features—or, less charitably, to give those features less care. I played on what should have been a challenging level of difficulty—four on a six-point scale—but I still smoked the computer-controlled opponents, who seemed programmed to act meekly and unambitiously. Picking your form of government used to feel like an existential choice, but now despotism and oligarchy are hardly differentiated. Complicated ideas have been reduced to childish mini-games: Achieving cultural hegemony in Civ VI meant fostering soft power through a variety of options—curating art museums, building iconic monuments, shipping rock bands off on global tours—but in Civ VII, it’s mostly a matter of sending explorers to random places to dig up artifacts. Luckily, many of these problems seem fixable, and later downloadable updates may make the game richer and more satisfying.

Still, I worry that the dull anxiety that can creep in over a session of Civ VII results from a deeper flaw: the strictly defined ages. I like that the game wants to honor how societies really can change in sweeping, sudden ways. But in gaming and in life, fixating on an episodic view of time—prophecies of rise and fall, cycles of malaise and renewal—can have a diminishing effect on the present. Civilization VII suggests why the what’s-next anxieties of our times, stuck between mourning yesterday and anticipating tomorrow, can be so draining. Time actually doesn’t move in chunks. At best, eras are an imprecise tool to make sense of the messy past, and at worst, they rob us of our sense of agency. It’s healthiest to buy into the old Civilization fantasy, the dream that’s always propelled humans forward: We’re going to last.

Swift, Trump and a dynasty in ruins - how Super Bowl 59 unfolded

BBC News

www.bbc.com/sport/american-football/articles/ckgyvyygd12o

President Trump and Taylor Swift in attendance as Philadelphia Eagles beat Kansas City Chiefs in Super Bowl 59 at Caesars Superdome, New Orleans

Stop Listening to Music on a Single Speaker

The Atlantic

www.theatlantic.com/technology/archive/2025/02/bluetooth-speakers-ruining-music/681571

When I was in my early 20s, commuting to work over the freeways of Los Angeles, I listened to Brian Wilson’s 2004 album, Smile, several hundred times. I like the Beach Boys just fine, but I’m not a superfan, and the decades-long backstory of Smile never really hooked me. But the album itself was sonic mesmerism: each hyper-produced number slicking into the next, with Wilson’s baroque, sometimes cartoonish tinkering laid over a thousand stars of sunshine. If I tried to listen again and my weathered Mazda mutely regurgitated the disc, as it often did, I could still hear the whole thing in my head.

Around this time, a friend invited me to see Wilson perform at the Hollywood Bowl, which is a 17,000-seat outdoor amphitheater tucked into the hills between L.A. and the San Fernando Valley. Elsewhere, this could only be a scene of sensory overload, but its eye-of-the-storm geography made the Bowl a kind of redoubt, cool and dark and almost hushed under the purple sky. My friend and I opened our wine bottle, and Wilson and his band took the stage.

From the first note of the a cappella opening, they … well, they wobbled. The instruments, Wilson’s voice, all of it stretched and wavered through each beat of the album (which constituted their set list) as if they were playing not in a bandshell but far down a desert highway on a hot day, right against the horizon. Wilson’s voice, in particular, verged on frail—so far from the immaculate silk of the recording as to seem like a reinvention. Polished and rhythmic, the album had been all machine. But the performance was human—humans, by the thousand, making and hearing the music—and for me it was like watching consciousness flicker on for the first time in the head of a beloved robot.

Music is different now. Finicky CD players are a rarity, for one thing. We hold the divine power instead to summon any song we can think of almost anywhere. In some respects, our investment in how we listen has kept pace: People wear $500 headphones on the subway; they fork out the GDP of East Timor to see Taylor Swift across an arena. But the engine of this musical era is access. Forever, music was tethered to the human scale, performers and audience in a space small enough to carry an organic or mechanical sound. People alive today knew people who might have heard the first transmitted concert, a fragile experiment over telephone lines at the Paris Opera in 1881. Now a library of music too big for a person to hear in seven lifetimes has surfed the smartphone to most corners of the Earth.

In another important way, though, how we listen has shrunk. Not in every instance, but often enough to be worthy of attention. The culprit is the single speaker—as opposed to a pair of them, like your ears—and once you start looking for it, you might see it everywhere, an invasive species of flower fringing the highway. Every recorded sound we encounter is made up of layers of artifice, of distance from the originating disturbance of air. So this isn’t an argument about some standard of acoustic integrity; rather, it’s about the space we make with music, and what (and who) will fit inside.

From the early years of recorded music, the people selling it have relied on a dubious language of fidelity—challenging the listener to tell a recording apart from the so-called real thing. This is silly, even before you hear some of those tinny old records. We do listen to sound waves, of course, but we also absorb them with the rest of our body, and beyond the sound of the concert are all the physical details of its production—staging, lighting, amplification, decor. We hear some of that happening, too, and we see it, just as we see and sense the rising and falling of the people in the seats around us, as we feel the air whipping off their applauding hands or settling into the subtly different stillnesses of enrapturement or boredom. People will keep trying to reproduce all of that artificially, no doubt, because the asymptote of fidelity is a moneymaker. But each time you get one new piece of the experience right, you’ve climbed just high enough to crave the next rung on the ladder. Go back down, instead, to the floor of the most mundane auditorium, and you’ll feel before you can name all the varieties of sensation that make it real.

For a long time, the fidelity sell was a success. When American men got home from World War II, as the cultural historian Tony Grajeda has noted, they presented a new consumer class. Marketing phrases such as “concert-hall realism” got them buying audio equipment. And the advent of stereo sound, with separated left and right channels—which became practical for home use in the late ’50s—was an economic engine for makers of both recordings and equipment. All of that needed to be replaced in order to enjoy the new technology. The New York Times dedicated whole sections to the stereo transition: “Record dealers, including a considerable number who do not think that stereo is as yet an improvement over monophonic disks, are hopeful that, with sufficient advertising and other forms of publicity, the consumer will be converted,” a 1958 article observed.

Acoustic musicians were integral to the development of recorded sound, and these pioneers understood that the mixing panel was now as important as any instrument. When Bell Laboratories demonstrated its new stereophonic technology in a spectacle at Carnegie Hall, in 1940, the conductor Leopold Stokowski ran the audio levels himself, essentially remixing live the sounds he’d recorded with his Philadelphia Orchestra. Stokowski had worked, for years, with his pal Walt Disney to create a prototype of surround sound for Fantasia. The result was a system too elaborate to replicate widely, which had to be abandoned (and its parts donated to the war effort) before the movie went to national distribution.

Innovators like Stokowski recognized a different emerging power in multichannel sound, more persuasive and maybe more self-justifying than the mere simulation of a live experience: to make, and then remake in living rooms and dens across the country, an aural stage without a physical correlate—an acoustic space custom-built in the recording studio, with a soundtrack pieced together from each isolated instrument and voice. The musical space had always been monolithic, with players and listeners sharing it for the fleeting moment of performance. The recording process divided that space into three: one for recording the original sound, one for listening, and an abstract, theoretical “sound stage” created by the mixing process in between. That notional space could have a size and shape of its own, its own warmth and coolness and reverberance, and it could reposition each element of the performance in three dimensions, at the inclination of the engineer—who might also be the performer.

Glenn Gould won permanent fame with his recordings of Bach’s keyboard works in the 1950s. Although he was as formidable and flawless a live performer as you’ll get, his first recording innovation—and that it was, at the time—was to splice together many different takes of his performances to yield an exaggerated, daring perfection in each phrase of every piece, as if LeBron James only ever showed up on TV in highlight reels. (“Listen, we’ve got lots of endings,” Gould tells his producer in one recording session, a scene recalled in Paul Elie’s terrific Reinventing Bach.) By the ’70s, the editors of the anthology Living Stereo note, Gould had hacked the conventional use of multi-mic recording, “but instead of using it to render the conventional image of the concert hall ‘stage,’ he used the various microphone positions to create the effect of a highly mobile acoustic space—what he sometimes referred to as an ‘acoustic orchestration’ or ‘choreography.’” It was akin to shooting a studio film with a handheld camera, reworking the whole relationship of perceiver to perceived.

Pop music was surprisingly slow to match the classical musicians’ creativity; many of the commercial successes of the ’60s were mastered in mono, which became an object of nostalgic fascination after the record companies later reengineered them—in “simulated stereo”—to goose sales. (Had it been released by the Beach Boys back then, Smile would have been a single-channel record, and, in fact, Brian Wilson himself is deaf in one ear.) It wasn’t really until the late ’60s, when Pink Floyd championed experiments in quadraphonic sound—four speakers—that pop music became a more reliable scene of fresh approaches in both recording and production.

Nowadays, even the most rudimentary pop song is a product of engineering you couldn’t begin to grasp without a few master’s degrees. But the technologization of music production, distribution, and consumption is full of paradoxes. For the first 100 years, from that Paris Opera telephone experiment to the release of the compact disc in the early 1980s, recording was an uneven but inexorable march toward higher quality—as both a selling point and an artistic aim. Then came file sharing, in the late ’90s, and the iPod and its descendant, the iPhone, all of which compromised the quality of the music in favor of smaller files that could flourish on a low-bandwidth internet—convenience and scale at the expense of quality. Bluetooth, another powerful warrior in the forces of convenience, made similar trade-offs in order to spare us a cord. Alexa and Siri gave us new reasons to put a multifunctional speaker in our kitchens and bathrooms and garages. And the ubiquity of streaming services brought the whole chain together, one suboptimal link after another, landing us in a pre-Stokowski era of audio quality grafted onto a barely fathomable utopia of access: all music, everywhere, in mediocre form.

People still listen to music in their car or on headphones, of course, and many others have multichannel audio setups of one kind or another. Solitary speakers tend to be additive, showing up in places you wouldn’t think to rig for the best sound: in the dining room, on the deck, at the beach. They’re digital successors to the boombox and the radio, more about the presence of sound than its shape.

Yet what many of these places have in common is that they’re where people actually congregate. The landmark concerts and the music we listen to by ourselves keep getting richer, their real and figurative stages more complex. (I don't think I've ever felt a greater sense of space than at Beyoncé’s show in the Superdome two Septembers ago.) But our everyday communal experience of music has suffered. A speaker designed to get you to order more toilet paper, piping out its lonely strain from the corner of your kitchen—it’s the first time since the arrival of hi-fi almost a century ago that we’ve so widely acceded to making the music in our lives smaller.

For Christmas, I ordered a pair of $60 Bluetooth speakers. (This kind of thing has been a running joke with my boyfriend since a more ambitious Sonos setup showed up in his empty new house a few days after closing, the only thing I needed to make the place livable. “I got you some more speakers, babe!”) We followed the instructions to pair them in stereo, then took them out to the fire pit where we’d been scraping by with a single unit. I hung them from opposite trees, opened up Spotify, and let the algorithmic playlist roll. In the flickering darkness, you could hear the silence of the stage open up, like the moments when the conductor mounts the podium in Fantasia. As the music began, it seemed to come not from a single point on the ground, like we were used to, but from somewhere out in the woods or up in the sky—or maybe from a time before all this, when the musician would have been one of us, seated in the glow and wrapping us in another layer of warmth. This wasn’t high-fidelity sound. There wasn’t a stereo “sweet spot,” and the bass left something to be desired. But the sound made a space, and we were in it together.