What Trump and Musk Want With Social Security

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

The idea that millions of dead Americans are receiving Social Security checks is shocking, and bolsters the argument that the federal bureaucracy needs radical change to combat waste and fraud. There’s one big problem: No evidence exists that it’s true.

Despite being told by agency staff last month that this claim has no basis in fact, Elon Musk and President Donald Trump have continued to use the talking point as a pretext to attack America’s highest-spending government program. Musk seems to have gotten this idea from a list of Social Security recipients who did not have a death date attached to their record. Agency employees reportedly explained to Musk’s DOGE team in February that the impossibly ancient individuals on the list were not necessarily receiving benefits (the lack of death dates was related to an outdated system).

And yet, in his speech to Congress last week, Trump stated: “Believe it or not, government databases list 4.7 million Social Security members from people aged 100 to 109 years old.” He said the list includes “3.5 million people from ages 140 to 149,” among other 100-plus age ranges, and that “money is being paid to many of them, and we’re searching right now.” In an interview with Fox Business on Monday, Musk discussed the existence of “20 million people who are definitely dead, marked as alive” in the Social Security database. And DOGE has dispatched 10 employees to try to find evidence of the claims that dead Americans are receiving checks, according to documents filed in court on Wednesday.

Musk and Trump have long maintained that they do not plan to attack Social Security, Medicare, and Medicaid, the major entitlement programs. But their repeated claims that rampant fraud exists within these entitlement systems undermine those assurances. In his Fox interview on Monday, Musk said, “Waste and fraud in entitlement spending—which is most of the federal spending, is entitlements—so that’s like the big one to eliminate. That’s the sort of half trillion, maybe $600, $700 billion a year.” Some observers interpreted this confusing sentence to mean that Musk wants to cut the entitlement programs themselves. But the Trump administration quickly downplayed Musk’s comments, insisting that the federal government will continue to protect such programs and suggesting that Musk had been talking about the need to eliminate fraud in the programs, not about axing them. “What kind of a person doesn’t support eliminating waste, fraud, and abuse in government spending?” the White House asked in a press release.

The White House’s question would be a lot easier to answer if Musk, who has called Social Security a “Ponzi scheme,” wasn’t wildly overestimating the amount of fraud in entitlement programs. Musk is claiming waste in these programs on the order of hundreds of billions of dollars a year, but a 2024 Social Security Administration report found that the agency lost closer to $70 billion total in improper payments from 2015 to 2022, which accounts for about 1 percent of Social Security payments. Leland Dudek, a mid-level civil servant elevated to temporarily lead Social Security after being put on administrative leave for sharing information with DOGE, pushed back last week on the idea that the agency is overrun with fraud and that dead people older than 100 are getting payments, ProPublica reported after obtaining a recording of a closed-door meeting. DOGE’s false claim about dead people receiving benefits “got in front of us,” one of Dudek’s deputies reportedly said, but “it’s a victory that you’re not seeing more [misinformation], because they are being educated.” (Dudek did not respond to ProPublica’s request for comment.)

Some 7 million Americans rely on Social Security benefits for more than 90 percent of their income, and 54 million individuals and their dependents receive retirement payments from the agency. Even if Musk doesn’t eliminate the agency, his tinkering could still affect all of those Americans’ lives. On Wednesday, DOGE dialed back its plans to cut off much of Social Security’s phone services (a commonly used alternative to its online programs, particularly for elderly and disabled Americans), though it still plans to restrict recipients’ ability to change bank-deposit information over the phone.

In recent weeks, confusion has rippled through the Social Security workforce and the public; many people drop off forms in person, but office closures could disrupt that. According to ProPublica, several IT contracts have been cut or scaled back, and several employees reported that their tech systems are crashing every day. Thousands of jobs are being cut, including in regional field offices, and the entire Social Security staff has been offered buyouts (today is the deadline for workers to take them). Martin O’Malley, a former commissioner of the agency, has warned that the workforce reductions that DOGE is seeking at Social Security could trigger “system collapse and an interruption of benefits” within the next one to three months.

In going anywhere near Social Security—in saying the agency’s name in the same sentence as the word eliminate—Musk is venturing further than any presidential administration has in recent decades. Entitlement benefits are extremely popular, and cutting the programs has long been a nonstarter. When George W. Bush raised the idea of partially privatizing entitlements in 2005, the proposal died before it could make it to a vote in the House or Senate.

The DOGE plan to cut $1 trillion in spending while leaving entitlements, which make up the bulk of the federal budget, alone always seemed implausible. In the November Wall Street Journal op-ed announcing the DOGE initiative, Musk and Vivek Ramaswamy (who is no longer part of DOGE) wrote that those who say “we can’t meaningfully close the federal deficit without taking aim at entitlement programs” are deflecting “attention from the sheer magnitude of waste, fraud and abuse” that “DOGE aims to address.” But until there’s clear evidence that this “magnitude” of fraud exists within Social Security, such claims enable Musk to poke at what was previously untouchable.

Related:

DOGE’s fuzzy math
Is DOGE losing steam?

Here are four new stories from The Atlantic:

Democrats have a man problem.
There was a second name on Rubio’s target list.
The crimson face of Canadian anger
The GOP’s fears about Musk are growing.

Today’s News

Senate Minority Leader Chuck Schumer said that Democrats will support a Republican-led short-term funding bill to help avoid a government shutdown.
A federal judge ruled that probationary employees fired by 18 federal agencies must be temporarily rehired.
Mark Carney was sworn in as Canada’s prime minister, succeeding Justin Trudeau as the Liberals’ leader.

Dispatches

Atlantic Intelligence: The Trump administration is embracing AI. “Work is being automated, people are losing their jobs, and it’s not at all clear that any of this will make the government more efficient,” Damon Beres writes.
The Books Briefing: Half a decade on, we now have at least a small body of literary work that takes on COVID, Maya Chung writes.

Explore all of our newsletters here.

Evening Read

Illustration by John Gall

I’d Had Jobs Before, but None Like This

By Graydon Carter

I stayed with my aunt the first night and reported to the railroad’s headquarters at 7 o’clock the next morning with a duffel bag of my belongings: a few pairs of shorts, jeans, a jacket, a couple of shirts, a pair of Kodiak work boots, and some Richard Brautigan and Jack Kerouac books, acceptable reading matter for a pseudo-sophisticate of the time. The Symington Yard was one of the largest rail yards in the world. On some days, it held 7,000 boxcars. Half that many moved in and out on a single day. Like many other young men my age, I was slim, unmuscled, and soft. In the hall where they interviewed and inspected the candidates for line work, I blanched as I looked over a large poster that showed the outline of a male body and the prices the railroad paid if you lost a part of it. As I recall, legs brought you $750 apiece. Arms were $500. A foot brought a mere $250. In Canadian dollars.

Read the full article.

More From The Atlantic

The kind of thing dictators do
Trump is unleashing a chaos economy.
RFK Jr. has already broken his vaccine promise.
The NIH’s grant terminations are “utter and complete chaos.”
Netanyahu doesn’t want the truth to come out.
Republicans tear down a Black Lives Matter mural.

Culture Break

Music Box Films

Watch. The film Eephus (in select theaters) is a “slow movie” in the best possible way, David Sims writes.

Read. Novels about women’s communities tend toward utopian coexistence or ruthless backbiting. The Unworthy does something more interesting, Hillary Kelly writes.

Play our daily crossword.

Stephanie Bai contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.

The AI Era of Governing Has Arrived

This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

President Donald Trump’s administration is embracing AI. According to reports, agencies are using the technology to identify places to cut costs, figure out which employees can be terminated, and comb through social-media posts to determine whether student-visa holders may support terror groups. And as my colleague Matteo Wong reported this week, employees at the General Services Administration are being urged to use a new chatbot to do their work, while simultaneously hearing from officials that their jobs are far from secure; Thomas Shedd, the director of the GSA division that produced the AI, told workers that the department will soon be “at least 50 percent smaller.”

This is a haphazard leap into a future that tech giants have been pushing us toward for years. Work is being automated, people are losing their jobs, and it’s not at all clear that any of this will make the government more efficient, as Elon Musk and DOGE have promised.

Illustration by The Atlantic. Sources: pressureUA / Getty; Thanasis / Getty.

DOGE’s Plans to Replace Humans With AI Are Already Under Way

By Matteo Wong

A new phase of the president and the Department of Government Efficiency’s attempts to downsize and remake the civil service is under way. The idea is simple: use generative AI to automate work that was previously done by people.

The Trump administration is testing a new chatbot with 1,500 federal employees at the General Services Administration and may release it to the entire agency as soon as this Friday—meaning it could be used by more than 10,000 workers who are responsible for more than $100 billion in contracts and services. This article is based in part on conversations with several current and former GSA employees with knowledge of the technology, all of whom requested anonymity to speak about confidential information; it is also based on internal GSA documents that I reviewed, as well as the software’s code base, which is visible on GitHub.

Read the full article.

What to Read Next

Elon Musk looks desperate: “Musk has wagered the only thing he can’t easily buy back—the very myth he created for himself,” Charlie Warzel writes.
Move fast and destroy democracy: “Silicon Valley’s titans have decided that ruling the digital world is not enough,” Kara Swisher writes.

P.S.

The internet can still be good. In a story for The Atlantic’s April issue, my colleague Adrienne LaFrance explores how Reddit became arguably “the best platform on a junky web.” Reading it in between editing stories about AI, I was struck by how much of what Adrienne described was fundamentally human: “There is a subreddit where violinists gently correct one another’s bow holds, a subreddit for rowers where people compare erg scores, and a subreddit for people who are honest-to-God allergic to the cold and trade tips about which antihistamine regimen works best,” she writes. “One subreddit is for people who encounter cookie cutters whose shapes they cannot decipher. The responses reliably entail a mix of sincere sleuthing to find the answer and ridiculously creative and crude joke guesses.” How wholesome!

— Damon

Was Sam Altman Right About the Job Market?

The automated future just lurched a few steps closer. Over the past few weeks, nearly all of the major AI firms—OpenAI, Anthropic, Google, xAI, Amazon, Microsoft, and Perplexity, among others—have announced new products that are focused not on answering questions or making their human users somewhat more efficient, but on completing tasks themselves. They are being pitched for their ability to “reason” as people do and serve as “agents” that will eventually carry out complex work from start to finish.

Humans will still nudge these models along, of course, but they are engineered to help fewer people do the work of many. Last month, Anthropic launched Claude Code, a coding program that can do much of a human software developer’s job but far faster, “reducing development time and overhead.” The program actively participates in the way that a colleague would, writing and deploying code, among other things. Google now has a widely available “workhorse model,” and three separate AI companies have products named Deep Research, all of which quickly gather and synthesize huge amounts of information on a user’s behalf. OpenAI touts its version’s ability to “complete multi-step research tasks for you” and accomplish “in tens of minutes what would take a human many hours.”

AI companies have long been building and benefiting from the narrative that their products will eventually be able to automate major projects for their users, displacing jobs and perhaps even entire professions or sectors of society. As early as 2016, Sam Altman, who had recently co-founded OpenAI, wrote in a blog post that “as technology continues to eliminate traditional jobs,” new economic models might be necessary, such as a universal basic income; he has warned repeatedly since then that AI will disrupt the labor market, telling my colleague Ross Andersen in 2023 that “jobs are definitely going to go away, full stop.”

Despite the foreboding nature of these comments, they have remained firmly in the realm of speculation. Two years ago, ChatGPT couldn’t perform basic arithmetic, and critics have long harped on the technology’s biases and mythomania. Chatbots and AI-powered image generators became known for helping kids cheat on homework and flooding the web with low-grade content. Meaningful applications quickly emerged in some professions—coding, fielding customer-service queries, writing boilerplate copy—but even the best AI models were clearly not capable enough to precipitate widespread job displacement.

[Read: A chatbot is secretly doing my job]

Since then, however, two transformations have taken place. First, AI search became standard. Chatbots exploded in popularity because they could lucidly—though frequently inaccurately—answer human questions. Billions of people were already accustomed to asking questions and finding information online, making this an obvious use case for AI models that might otherwise have seemed like research projects: Now 300 million people use ChatGPT every week, and more than 1 billion use Google’s AI Overview, according to the companies. Further underscoring the products’ relevance, media companies—including The Atlantic—signed lucrative deals with OpenAI and others to add their content to AI search, bringing both legitimacy and some additional scrutiny to the technology. Hundreds of millions were habituated to AI, and at least some portion have found the technology helpful.

But although plain chatbots and AI search introduced a major cultural shift, their business prospects were always small potatoes for the tech giants. Compared with traditional search algorithms, AI algorithms are more expensive to run. And search is an old business model that generative AI could only enhance—perhaps resulting in a few more clicks on paid advertisements or producing a bit more user data for targeting future advertisements.

Refining and expanding generative AI to do more for the professional class—not just students scrambling on term papers—is where tech companies see the real financial opportunity. And they’ve been building toward seizing it. The second transformation that has led to this new phase of the AI era is simply that the technology, while still riddled with biases and inaccuracies, has legitimately improved. The slate of so-called reasoning models released in recent months, such as OpenAI’s o3-mini and xAI’s Grok 3, has impressed in particular. These AI products can be genuinely helpful, and their applications to advancing scientific research could prove lifesaving. Economists, doctors, coders, and other professionals are widely commenting on how these new models can expedite their work; a quarter of tech start-ups in this year’s cohort at the prestigious incubator Y Combinator said that 95 percent of their code was generated with AI. Major firms—McKinsey, Moderna, and Salesforce, to name just a handful—are now using it in basically every aspect of their businesses. And the models continue getting cheaper, and faster, to deploy.

[Read: The GPT era is already ending]

Tech executives, in turn, have grown blunt about their hopes that AI will become good enough to do a human’s work. In a Meta earnings call in late January, CEO Mark Zuckerberg said, “2025 will be the year when it becomes possible to build an AI engineering agent” that’s as skilled as “a good, mid-level engineer.” Dario Amodei, the CEO of Anthropic, recently said in a talk with the Council on Foreign Relations that AI will be “writing 90 percent of the code” just a few months from now—although still with human specifications, he noted. But he continued, “We will eventually reach the point where the AIs can do everything that humans can,” in every industry. (Amodei, it should be mentioned, is the ultimate techno-optimist; in October, he published a sprawling manifesto, titled “Machines of Loving Grace,” that posited AI development could lead to “the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights.”) Altman has used similarly grand language recently, imagining countless virtual knowledge workers fanning out across industries.

These bright visions have dimmed considerably in practice: Elon Musk and the Department of Government Efficiency’s efforts to replace human civil servants with AI may be the clearest and most dramatic execution of this playbook yet, with massive job loss and little more than chaos to show for it so far. Meanwhile, generative-AI models’ issues with bias, inaccuracy, and poor citations all remain, even as the technology has advanced. OpenAI’s image-generating technology still struggles at times to produce people with the right number of appendages. Salesforce is reportedly struggling to sell its AI agent, Agentforce, to customers because of issues with accuracy and concerns about the product’s high cost, among other things. Nevertheless, the corporation has pressed on with its pitch, much as other AI companies have continued to iterate on and promote products with known issues. (In a recent earnings call, Salesforce CEO Marc Benioff said the firm has “3,000 paying Agentforce customers who are experiencing unprecedented levels of productivity.”) In other words, flawed products won’t stop tech companies’ push to automate everything—the AI-saturated future will be imperfect at best, but it is coming anyway.

The industry’s motivations are clear: Google’s and Microsoft’s cloud businesses, for instance, grew rapidly in 2024, driven substantially by their AI offerings. Meta’s head of business AI, Clara Shih, recently told CNBC that the company expects “every business” to use AI agents, “the way that businesses today have websites and email addresses.” OpenAI is reportedly considering charging $20,000 a month for access to what it describes as Ph.D.-level research agents.

Google and Perplexity did not respond to a request for comment, and a Microsoft spokesperson declined to comment. An OpenAI spokesperson pointed me to an essay from September in which Altman wrote, “I have no fear that we’ll run out of things to do.” He could well be right; the Bureau of Labor Statistics projects that AI will substantially increase demand for computer and business occupations through 2033. A spokesperson for Anthropic referred me to the start-up’s initiative to study and prepare for AI’s effect on the labor market. The effort’s first research paper analyzed millions of conversations with Anthropic’s Claude model and found that it was used to “automate” human work in 43 percent of cases, such as identifying and fixing a software bug.

Tech companies are revealing, more clearly than ever, their vision for a post-work future. ChatGPT started the generative-AI boom not with an incredible business success, but with a psychological one. The chatbot was, and may still be, losing the company money, but it exposed internet users around the world to the first popular computer program that could hold an intelligent conversation on any subject. The advent of AI search may have performed a similar role, presenting limited opportunity for immediate profits but habituating—or perhaps inoculating—millions of people to bots that can think, write, and live for you.