The Government’s Computing Experts Say They Are Terrified

The Atlantic

www.theatlantic.com/technology/archive/2025/02/elon-musk-doge-security/681600

Elon Musk’s unceasing attempts to access the data and information systems of the federal government range so widely, and are so unprecedented and unpredictable, that government computing experts believe the effort has spun out of control. This week, we spoke with four federal-government IT professionals—all experienced contractors and civil servants who have built, modified, or maintained the kind of technological infrastructure that Musk’s inexperienced employees at his newly created Department of Government Efficiency are attempting to access. In our conversations, each expert was unequivocal: They are terrified and struggling to articulate the scale of the crisis.

Even if the president of the United States, the head of the executive branch, supports (and, importantly, understands) these efforts by DOGE, these experts told us, they would still consider Musk’s campaign to be a reckless and dangerous breach of the complex systems that keep America running. Federal IT systems facilitate operations as varied as sending payments from the Treasury Department and making sure that airplanes stay in the air, the sources told us.

Based on what has been reported, DOGE representatives have obtained or requested access to certain systems at the U.S. Treasury, the Department of Health and Human Services, the Office of Personnel Management, and the National Oceanic and Atmospheric Administration, with eyes toward others, including the Federal Aviation Administration. “This is the largest data breach and the largest IT security breach in our country’s history—at least that’s publicly known,” one contractor who has worked on classified information-security systems at numerous government agencies told us this week. “You can’t un-ring this bell. Once these DOGE guys have access to these data systems, they can ostensibly do with it what they want.”

[Read: If DOGE goes nuclear]

What exactly they want is unclear, and much about what is happening remains unknown. The contractor emphasized that nobody yet knows which information DOGE has access to, or what it plans to do with it. Spokespeople for the White House, and Musk himself, did not respond to emailed requests for comment. Some reports have revealed the scope of DOGE’s incursions at individual agencies; still, it has been difficult to see the broader context of DOGE’s ambition.

The four experts laid out the implications of giving untrained individuals access to the technological infrastructure that controls the country. Their message is unambiguous: These are not systems you tamper with lightly. Musk and his crew could act deliberately to extract sensitive data, alter fundamental aspects of how these systems operate, or provide further access to unvetted actors. Or they may act with carelessness or incompetence, breaking the systems altogether. Given the scope of what these systems do, key government services might stop working properly, citizens could be harmed, and the damage might be difficult or impossible to undo. As one administrator for a federal agency with deep knowledge about the government’s IT operations told us, “I don’t think the public quite understands the level of danger.”

Each of our four sources, three of whom requested anonymity out of fear of reprisal, made three points very clear: These systems are immense, they are complex, and they are critical. A single program run by the FAA to help air-traffic controllers, En Route Automation Modernization, contains nearly 2 million lines of code; an average iPhone app, for comparison, has about 50,000. The Treasury Department disburses trillions of dollars in payments per year.

Many systems and databases in a given agency feed into others, but access to them is restricted. Employees, contractors, civil-service government workers, and political appointees have strict controls on what they can access and limited visibility into the system as a whole. This is by design, as even the most mundane government databases can contain highly sensitive personal information. A security-clearance database such as those used by the Department of Justice or the Bureau of Alcohol, Tobacco, Firearms and Explosives, one contractor told us, could include information about a person’s mental-health or sexual history, as well as disclosures about any information that a foreign government could use to blackmail them.

Even if DOGE has not tapped into these particular databases, The Washington Post reported on Wednesday that the group has accessed sensitive personnel data at OPM. Mother Jones also reported on Wednesday that an effort may be under way to effectively give Musk control over IT for the entire federal government, broadening his access to these agencies. Trump has said that Musk is acting only with his permission. “Elon can’t do and won’t do anything without our approval,” he said to reporters recently. “And we will give him the approval where appropriate. Where it’s not appropriate, we won’t.” The specter of what DOGE might do with that approval is still keeping the government employees we spoke with up at night. With relatively basic “read only” access, Musk’s people could easily find individuals in databases or clone entire servers and transfer that secure information somewhere else. Even if Musk eventually loses access to these systems—owing to a temporary court order such as the one approved yesterday, say—whatever data he siphons now could be his forever.

[Read: Trump advisers stopped Musk from hiring a noncitizen at DOGE]

With a higher level of access—“write access”—a motivated person may be able to put their own code into the system, potentially without any oversight. The possibilities here are staggering. They could alter the data these systems process, or change the way the software operates—without any of the testing that would normally accompany changes to a critical system. Still another level of access, administrator privileges, could grant the broad ability to control a system, including hiding evidence of other alterations. “They could change or manipulate treasury data directly in the database with no way for people to audit or capture it,” one contractor told us. “We’d have very little way to know it even happened.”

The specific levels of access that Musk and his team have remain unclear and likely vary between agencies. On Tuesday, the Treasury said that DOGE had been given “read only” access to the department’s federal payment system, though Wired then reported that one member of DOGE was able to write code on the system. Any focus on access tiers, for that matter, may actually oversimplify the problem at hand. These systems aren’t just complex at the code level—they are multifaceted in their architecture. Systems can have subsystems; each of these can have its own permission structures. It’s hard to talk about any agency’s tech infrastructure as monolithic. It’s less a database than it is a Russian nesting doll of databases, the experts said.
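
None of these systems is public, so any concrete example is necessarily hypothetical. Still, the distinction the experts keep drawing, between “read only,” “write,” and administrator access, and between a parent system and the subsystems nested inside it, can be sketched in a few lines of Python. The tier names and the toy agency layout below are invented for illustration; they describe the general pattern, not any actual federal system.

    from dataclasses import dataclass, field
    from enum import IntEnum

    class Access(IntEnum):
        NONE = 0
        READ = 1    # view or copy records ("read only")
        WRITE = 2   # insert or alter records and code
        ADMIN = 3   # change permissions, hide or disable audit trails

    @dataclass
    class Scope:
        """A system or subsystem with its own permission table (hypothetical)."""
        name: str
        grants: dict = field(default_factory=dict)      # user -> Access
        subsystems: list = field(default_factory=list)  # nested Scopes

        def allowed(self, user: str, needed: Access) -> bool:
            # Each Scope checks only its own grants: access to a parent
            # system says nothing about the subsystems nested inside it.
            return self.grants.get(user, Access.NONE) >= needed

    # A toy "nesting doll" layout: a payment subsystem inside a larger system.
    payments = Scope("payment-disbursement", grants={"engineer": Access.WRITE})
    core = Scope("agency-core", grants={"analyst": Access.READ}, subsystems=[payments])

    print(core.allowed("analyst", Access.READ))      # True:  can view, and copy, what this scope holds
    print(core.allowed("analyst", Access.WRITE))     # False: cannot alter records or code here
    print(payments.allowed("analyst", Access.READ))  # False: parent access doesn't reach the nested system

Even in this toy version, the experts’ warning is visible: the lowest tier is already enough to copy everything a scope holds, and real systems add the layers the sketch leaves out, such as audit logging, network segmentation, and vetting of the people who hold the credentials.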

Musk’s efforts represent a dramatic shift in the way the government’s business has traditionally been conducted. Previously, security protocols were so strict that a contractor plugging a non-government-issued computer into an ethernet port in a government agency office was considered a major security violation. Contrast that with DOGE’s incursion. CNN reported yesterday that a 23-year-old former SpaceX intern without a background check was given a basic, low tier of access to Department of Energy IT systems, despite objections from department lawyers and information experts. “That these guys, who may not even have clearances, are just pulling up and plugging in their own servers is madness,” one source told us, referring to an allegation that DOGE had connected its own server at OPM. “It’s really hard to find good analogies for how big of a deal this is.” The simple fact that Musk loyalists are in the building with their own computers is the heart of the problem—and helps explain why activities ostensibly authorized by the president are widely viewed as a catastrophic data breach.

The four systems professionals we spoke with do not know what damage might already have been done. “The longer this goes on, the greater the risk of potential fatal compromise increases,” Scott Cory, a former CIO for an agency within HHS, told us. At the Treasury, this could mean stopping payments to government organizations or to outside contractors it doesn’t want to pay. It could also mean diverting funds to other recipients. Or gumming up the works in the attempt to do those, or other, things.

At the FAA, even a small systems disruption could cause mass grounding of flights, a halt in global shipping, or, worse, downed planes. For instance, the agency oversees the Traffic Flow Management System, which calculates the overall demand for airspace at U.S. airports and which airlines depend on. “Going into these systems without an in-depth understanding of how they work both individually and interconnectedly is a recipe for disaster that will result in death and economic harm to our nation,” one FAA employee who has nearly a decade of experience with its system architecture told us. “‘Upgrading’ a system of which you know nothing about is a good way to break it, and breaking air travel is a worst-case scenario with consequences that will ripple out into all aspects of civilian life. It could easily get to a place where you can’t guarantee the safety of flights taking off and landing.” Nevertheless, on Wednesday Musk posted that “the DOGE team will aim to make rapid safety upgrades to the air traffic control system.”

Even if DOGE members are looking to modernize these systems, they may find themselves flummoxed. The government is big and old and complicated. One former official with experience in government IT systems, including at the Treasury, told us that old could mean that the systems were installed in 1962, 1992, or 2012. They might use a combination of software written in different programming languages: a little COBOL in the 1970s, a bit of Java in the 1990s. Knowledge about one system doesn’t give anyone—including Musk’s DOGE workers, some of whom were not even alive for Y2K—the ability to make intricate changes to another.

[Read: The “rapid unscheduled disassembly” of the United States government]

The internet economy, characterized by youth and disruption, favors inventing new systems and disposing of old ones. And the nation’s computer systems, like its roads and bridges, could certainly benefit from upgrades. But old computers don’t necessarily make for bad infrastructure, and government infrastructure isn’t always old anyway. The former Treasury official told us that mainframes—and COBOL, the ancient programming language they often run—are really good for what they do, such as batch processing for financial transactions.
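
The point about batch processing is worth making concrete. Here is a rough sketch, in Python rather than COBOL, of the pattern the former official is describing: payment records accumulate during the day and are validated and posted in a single pass, with the whole batch rejected if any record fails. The record layout and the rules are invented for illustration, not drawn from any Treasury system.

    from dataclasses import dataclass
    from decimal import Decimal

    @dataclass
    class Payment:
        payee: str
        amount: Decimal  # fixed-point decimals, a rough analogue of COBOL's packed-decimal fields

    def post_batch(batch):
        """Validate every record, then post the batch as a unit: all or nothing."""
        bad = [p for p in batch if not p.payee or p.amount <= 0]
        if bad:
            raise ValueError(f"batch rejected: {len(bad)} bad record(s)")
        # A real disbursement system would write to a ledger here.
        return sum((p.amount for p in batch), Decimal("0"))

    day_file = [Payment("VENDOR-001", Decimal("1200.00")),
                Payment("VENDOR-002", Decimal("87.50"))]
    print(post_batch(day_file))  # 1287.50

The virtue of the pattern, and of the mainframes that run it at enormous scale, is predictability: the same fixed-format records, processed the same way, every night.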

Like the FAA employee, the payment-systems expert also fears that the most likely result of DOGE activity on federal systems will be breaking them, especially because of incompetence and lack of proper care. DOGE, he observed, may be prepared to view or hoover up data, but it doesn’t appear to be prepared to carry out savvy and effective alterations to how the system operates. This should perhaps be reassuring. “If you were going to organize a heist of the U.S. Treasury,” he said, “why in the world would you bring a handful of college students?” They would be useless. Your crew would need, at a minimum, a couple of guys with a decade or two of experience with COBOL, he said.

Unless, of course, you had the confidence that you could figure anything out, including a lumbering government system you don’t respect in the first place. That interpretation of DOGE’s theory of self seems both likely and even scarier, at the Treasury, the FAA, and beyond. Would they even know what to do after logging in to such a machine? we asked. “No, they’d have no idea,” the payment expert said. “The sanguine thing to think about is that the code in these systems and the process and functions they manage are unbelievably complicated,” Scott Cory said. “You’d have to be extremely knowledgeable if you were going into these systems and wanting to make changes with an impact on functionality.”

But DOGE workers could try anyway. Mainframe computers have a keyboard and display, unlike the cloud-computing servers in data centers. According to the former Treasury IT expert, someone who could get into the room and had credentials for the system could access it and, via the same machine or a networked one, probably also deploy software changes to it. It’s far more likely that they would break, rather than improve, a Treasury disbursement system in so doing, one source told us. “The volume of information they deal with [at the Treasury] is absolutely enormous, well beyond what anyone would deal with at SpaceX,” the source said. Even a small alteration to a part of the system that has to do with the distribution of funds could wreak havoc, preventing those funds from being distributed or distributing them wrongly, for example. “It’s like walking into a nuclear reactor and deciding to handle some plutonium.”

DOGE is many things—a dismantling of the federal government, a political project to flex power and punish perceived enemies—but it is also the logical end point of a strain of thought that’s become popular in Silicon Valley during the boom times of Big Tech and easy money: that building software and writing code aren’t just dominant skills for the 21st century, but proof of competence in any realm. In a post on X this week, John Shedletsky, a developer and an early employee at the popular gaming platform Roblox, summed up the philosophy nicely: “Silicon Valley built the modern world. Why shouldn’t we run it?”

This attitude disgusted one of the officials we spoke with. “There’s this bizarre belief that being able to do things with computers means you have to be super smart about everything else.” Silicon Valley may have built the computational part of the modern world, but the rest of that world—the money, the airplanes, the roads, and the waterways—still exists. Knowing something, even a lot, about computers guarantees no knowledge about the world beyond them.

“I’d like to think that this is all so massive and complex that they won’t succeed in whatever it is they’re trying to do,” one of the experts told us. “But I wouldn’t want to wager that outcome against their egos.”

Stop Listening to Music on a Single Speaker

The Atlantic

www.theatlantic.com/technology/archive/2025/02/bluetooth-speakers-ruining-music/681571

When I was in my early 20s, commuting to work over the freeways of Los Angeles, I listened to Brian Wilson’s 2004 album, Smile, several hundred times. I like the Beach Boys just fine, but I’m not a superfan, and the decades-long backstory of Smile never really hooked me. But the album itself was sonic mesmerism: each hyper-produced number slicking into the next, with Wilson’s baroque, sometimes cartoonish tinkering laid over a thousand stars of sunshine. If I tried to listen again and my weathered Mazda mutely regurgitated the disc, as it often did, I could still hear the whole thing in my head.

Around this time, a friend invited me to see Wilson perform at the Hollywood Bowl, which is a 17,000-seat outdoor amphitheater tucked into the hills between L.A. and the San Fernando Valley. Elsewhere, this could only be a scene of sensory overload, but its eye-of-the-storm geography made the Bowl a kind of redoubt, cool and dark and almost hushed under the purple sky. My friend and I opened our wine bottle, and Wilson and his band took the stage.

From the first note of the a cappella opening, they … well, they wobbled. The instruments, Wilson’s voice, all of it stretched and wavered through each beat of the album (which constituted their set list) as if they were playing not in a bandshell but far down a desert highway on a hot day, right against the horizon. Wilson’s voice, in particular, verged on frail—so far from the immaculate silk of the recording as to seem like a reinvention. Polished and rhythmic, the album had been all machine. But the performance was human—humans, by the thousand, making and hearing the music—and for me it was like watching consciousness flicker on for the first time in the head of a beloved robot.

Music is different now. Finicky CD players are a rarity, for one thing. We hold the divine power instead to summon any song we can think of almost anywhere. In some respects, our investment in how we listen has kept pace: People wear $500 headphones on the subway; they fork out the GDP of East Timor to see Taylor Swift across an arena. But the engine of this musical era is access. Forever, music was tethered to the human scale, performers and audience in a space small enough to carry an organic or mechanical sound. People alive today knew people who might have heard the first transmitted concert, a fragile experiment over telephone lines at the Paris Opera in 1881. Now a library of music too big for a person to hear in seven lifetimes has surfed the smartphone to most corners of the Earth.

In another important way, though, how we listen has shrunk. Not in every instance, but often enough to be worthy of attention. The culprit is the single speaker—as opposed to a pair of them, like your ears—and once you start looking for it, you might see it everywhere, an invasive species of flower fringing the highway. Every recorded sound we encounter is made up of layers of artifice, of distance from the originating disturbance of air. So this isn’t an argument about some standard of acoustic integrity; rather, it’s about the space we make with music, and what (and who) will fit inside.

From the early years of recorded music, the people selling it have relied on a dubious language of fidelity—challenging the listener to tell a recording apart from the so-called real thing. This is silly, even before you hear some of those tinny old records. We do listen to sound waves, of course, but we also absorb them with the rest of our body, and beyond the sound of the concert are all the physical details of its production—staging, lighting, amplification, decor. We hear some of that happening, too, and we see it, just as we see and sense the rising and falling of the people in the seats around us, as we feel the air whipping off their applauding hands or settling into the subtly different stillnesses of enrapturement or boredom. People will keep trying to reproduce all of that artificially, no doubt, because the asymptote of fidelity is a moneymaker. But each time you get one new piece of the experience right, you’ve climbed just high enough to crave the next rung on the ladder. Go back down, instead, to the floor of the most mundane auditorium, and you’ll feel before you can name all the varieties of sensation that make it real.

For a long time, the fidelity sell was a success. When American men got home from World War II, as the cultural historian Tony Grajeda has noted, they presented a new consumer class. Marketing phrases such as “concert-hall realism” got them buying audio equipment. And the advent of stereo sound, with separated left and right channels—which became practical for home use in the late ’50s—was an economic engine for makers of both recordings and equipment. All of that needed to be replaced in order to enjoy the new technology. The New York Times dedicated whole sections to the stereo transition: “Record dealers, including a considerable number who do not think that stereo is as yet an improvement over monophonic disks, are hopeful that, with sufficient advertising and other forms of publicity, the consumer will be converted,” a 1958 article observed.

Acoustic musicians were integral to the development of recorded sound, and these pioneers understood that the mixing panel was now as important as any instrument. When Bell Laboratories demonstrated its new stereophonic technology in a spectacle at Carnegie Hall, in 1940, the conductor Leopold Stokowski ran the audio levels himself, essentially remixing live the sounds he’d recorded with his Philadelphia Orchestra. Stokowski had worked, for years, with his pal Walt Disney to create a prototype of surround sound for Fantasia. The result was a system too elaborate to replicate widely, which had to be abandoned (and its parts donated to the war effort) before the movie went to national distribution.

Innovators like Stokowski recognized a different emerging power in multichannel sound, more persuasive and maybe more self-justifying than the mere simulation of a live experience: to make, and then remake in living rooms and dens across the country, an aural stage without a physical correlate—an acoustic space custom-built in the recording studio, with a soundtrack pieced together from each isolated instrument and voice. The musical space had always been monolithic, with players and listeners sharing it for the fleeting moment of performance. The recording process divided that space into three: one for recording the original sound, one for listening, and an abstract, theoretical “sound stage” created by the mixing process in between. That notional space could have a size and shape of its own, its own warmth and coolness and reverberance, and it could reposition each element of the performance in three dimensions, at the inclination of the engineer—who might also be the performer.
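
One concrete tool behind that repositioning is the pan law: an engineer places a single mono source anywhere between the left and right channels by splitting it into two gains. The sketch below, in Python, shows the standard “equal-power” version; it is a textbook technique, not a description of how Stokowski or any particular engineer worked.

    import math

    def pan(sample, position):
        """Equal-power pan: position runs from -1.0 (hard left) to +1.0 (hard right).

        The cosine/sine split keeps perceived loudness roughly constant
        as a source moves across the stereo stage.
        """
        theta = (position + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
        return sample * math.cos(theta), sample * math.sin(theta)

    print(pan(1.0, -0.4))  # louder in the left channel
    print(pan(1.0, +0.4))  # louder in the right channel
    print(pan(1.0, 0.0))   # about 0.707 in each channel, so the center doesn't sound quieter

Run across dozens of instrument and voice tracks, with different positions, levels, and artificial reverberation for each, this is the kind of machinery that builds the notional stage the engineer controls.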

Glenn Gould won permanent fame with his recordings of Bach’s keyboard works in the 1950s. Although he was as formidable and flawless a live performer as you’ll get, his first recording innovation—and that it was, at the time—was to splice together many different takes of his performances to yield an exaggerated, daring perfection in each phrase of every piece, as if LeBron James only ever showed up on TV in highlight reels. (“Listen, we’ve got lots of endings,” Gould tells his producer in one recording session, a scene recalled in Paul Elie’s terrific Reinventing Bach.) By the ’70s, the editors of the anthology Living Stereo note, Gould had hacked the conventional use of multi-mic recording, “but instead of using it to render the conventional image of the concert hall ‘stage,’ he used the various microphone positions to create the effect of a highly mobile acoustic space—what he sometimes referred to as an ‘acoustic orchestration’ or ‘choreography.’” It was akin to shooting a studio film with a handheld camera, reworking the whole relationship of perceiver to perceived.

Pop music was surprisingly slow to match the classical world’s creativity; many of the commercial successes of the ’60s were mastered in mono, which became an object of nostalgic fascination after the record companies later reengineered them—in “simulated stereo”—to goose sales. (Had it been released by the Beach Boys back then, Smile would have been a single-channel record, and, in fact, Brian Wilson himself is deaf in one ear.) It wasn’t really until the late ’60s, when Pink Floyd championed experiments in quadraphonic sound—four speakers—that pop music became a more reliable scene of fresh approaches in both recording and production.

Nowadays, even the most rudimentary pop song is a product of engineering you couldn’t begin to grasp without a few master’s degrees. But the technologization of music production, distribution, and consumption is full of paradoxes. For the first 100 years, from that Paris Opera telephone experiment to the release of the compact disc in the early 1980s, recording was an uneven but inexorable march toward higher quality—as both a selling point and an artistic aim. Then came file sharing, in the late ’90s, and the iPod and its descendant, the iPhone, all of which compromised the quality of the music in favor of smaller files that could flourish on a low-bandwidth internet—convenience and scale at the expense of fidelity. Bluetooth, another powerful warrior in the forces of convenience, made similar trade-offs in order to spare us a cord. Alexa and Siri gave us new reasons to put a multifunctional speaker in our kitchens and bathrooms and garages. And the ubiquity of streaming services brought the whole chain together, one suboptimal link after another, landing us in a pre-Stokowski era of audio quality grafted onto a barely fathomable utopia of access: all music, everywhere, in mediocre form.
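
The trade-off is easy to put numbers on. Uncompressed CD audio runs at 44,100 samples per second, 16 bits per sample, across two channels, while the MP3s that circulated on early file-sharing networks were commonly encoded at around 128 kilobits per second; a quick back-of-the-envelope calculation, sketched in Python for concreteness, shows roughly a tenfold difference in size for a three-minute song.

    SECONDS = 3 * 60                    # a three-minute song

    cd_bits_per_sec = 44_100 * 16 * 2   # sample rate x bit depth x channels, about 1.4 Mbps
    mp3_bits_per_sec = 128_000          # a typical late-'90s MP3 bitrate

    cd_megabytes = cd_bits_per_sec * SECONDS / 8 / 1_000_000
    mp3_megabytes = mp3_bits_per_sec * SECONDS / 8 / 1_000_000

    print(round(cd_megabytes, 1))   # about 31.8 MB uncompressed
    print(round(mp3_megabytes, 1))  # about 2.9 MB at 128 kbps

Those missing megabytes are the detail the encoder decided our ears, or our dial-up connections, could live without.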

People still listen to music in their car or on headphones, of course, and many others have multichannel audio setups of one kind or another. Solitary speakers tend to be additive, showing up in places you wouldn’t think to rig for the best sound: in the dining room, on the deck, at the beach. They’re digital successors to the boombox and the radio, more about the presence of sound than its shape.

Yet what many of these places have in common is that they’re where people actually congregate. The landmark concerts and the music we listen to by ourselves keep getting richer, their real and figurative stages more complex. (I don’t think I’ve ever felt a greater sense of space than at Beyoncé’s show in the Superdome two Septembers ago.) But our everyday communal experience of music has suffered. A speaker designed to get you to order more toilet paper, piping out its lonely strain from the corner of your kitchen—it’s the first time since the arrival of hi-fi almost a century ago that we’ve so widely acceded to making the music in our lives smaller.

For Christmas, I ordered a pair of $60 Bluetooth speakers. (This kind of thing has been a running joke with my boyfriend since a more ambitious Sonos setup showed up in his empty new house a few days after closing, the only thing I needed to make the place livable. “I got you some more speakers, babe!”) We followed the instructions to pair them in stereo, then took them out to the fire pit where we’d been scraping by with a single unit. I hung them from opposite trees, opened up Spotify, and let the algorithmic playlist roll. In the flickering darkness, you could hear the silence of the stage open up, like the moments when the conductor mounts the podium in Fantasia. As the music began, it seemed to come not from a single point on the ground, like we were used to, but from somewhere out in the woods or up in the sky—or maybe from a time before all this, when the musician would have been one of us, seated in the glow and wrapping us in another layer of warmth. This wasn’t high-fidelity sound. There wasn’t a stereo “sweet spot,” and the bass left something to be desired. But the sound made a space, and we were in it together.