The Supreme Court Cases That Could Redefine the Internet

The Atlantic

www.theatlantic.com/technology/archive/2023/09/scotus-social-media-cases-first-amendment-internet-regulation/675520

In the aftermath of the January 6 attack on the U.S. Capitol, both Facebook and Twitter decided to suspend lame-duck President Donald Trump from their platforms. He had encouraged violence, the sites reasoned; the megaphone was taken away, albeit temporarily. To many Americans horrified by the attack, the decisions were a relief. But for some conservatives, the move marked an escalation in a different kind of assault: It was, to them, a clear sign of Big Tech’s anti-conservative bias.

That same year, Florida and Texas passed bills to restrict social-media platforms’ ability to take down certain kinds of content. (Each is described in this congressional briefing.) In particular, the laws aim to make political “deplatforming” illegal, a move that would ostensibly have prevented the removal of Trump from Facebook and Twitter. The constitutionality of these laws has since been challenged in lawsuits—the tech platforms maintain that they have a First Amendment right to moderate content posted by their users. As the separate cases wound their way through the court system, federal judges (all of whom were nominated by Republican presidents) were divided on the laws’ legality. And now they’re going to the Supreme Court.

On Friday, the Court announced it would be putting these cases on its docket. The resulting decisions could be profound: “This would be—I think this is without exaggeration—the most important Supreme Court case ever when it comes to the internet,” Alan Rozenshtein, a law professor at the University of Minnesota and a senior editor at Lawfare, told me. At stake are tricky questions about how the First Amendment should apply in an age of giant, powerful social-media platforms. Right now, these platforms have the right to moderate the posts that appear on them; they can, for instance, ban someone for hate speech at their own discretion. Restricting their ability to pull down posts would cause, as Rozenshtein put it, “a mess.” The decisions could reshape online expression as we currently know it.

[Read: Is this the beginning of the end of the internet?]

Whether or not these particular laws are struck down is not what’s actually important here, Rozenshtein argues. “What’s much, much more important is what the Court says in striking down those laws—how the Court describes the First Amendment protections.” Whatever they decide will set legal precedents for how we think about free speech when so much of our lives take place on the web. Rozenshtein and I caught up on the phone to discuss why these cases are so interesting—and why the decision might not fall cleanly along political lines.

Our conversation has been condensed and edited for clarity.

Caroline Mimbs Nyce: How did we get here?

Alan Rozenshtein: If you ask the companies and digital-civil-society folks, we got here because the crazy MAGA Republicans need something to do with their days, and they don’t have any actual policy proposals. So they just engage in culture-war politics, and they have fastened on Silicon Valley social-media companies as the latest boogeyman. If you ask conservatives, they’re going to say, “Big Tech is running amok. The liberals have been warning us about unchecked corporate power for years, and maybe they had a point.” This really came to a head when, in the wake of the January 6 attack on the Capitol, major social-media platforms threw Donald Trump, the president of the United States, off of their platforms.

Nyce: Based on what we know about the Court, do we have any theories about how they’re going to rule?

Rozenshtein: I do think it is very likely that the Texas law will be struck down. It is very broad and almost impossible to implement. But I think there will be some votes to uphold the Florida law. There may be votes from the conservatives, especially Justices Samuel Alito and Clarence Thomas, but you might also get some support from some folks on the left, in particular Justices Ketanji Brown Jackson and Sonia Sotomayor—not because they believe conservatives are being discriminated against, but because they themselves have a lot of skepticism of private power and big companies.

But what’s actually important is not whether these laws are struck down or not. What’s much, much more important is what the Court says in striking down those laws—how the Court describes the First Amendment protections.

Nyce: What are the important things for Americans to consider at this moment?

Rozenshtein: This would be—I think this is without exaggeration—the most important Supreme Court case ever when it comes to the internet.

The Supreme Court in 1997 decided a very famous case called Reno v. ACLU. And this was a constitutional case about what was called the Communications Decency Act. This was a law that purported to impose criminal penalties on internet companies and platforms that transmitted indecent content to minors. So this is part of the big internet-pornography scare of the mid-’90s. The Court said this violates the First Amendment because to comply with this law, platforms are going to have to censor massive, massive, massive amounts of information. And that’s really bad. And Reno v. ACLU has always been considered the kind of Magna Carta of internet–First Amendment cases, because it recognized the First Amendment is really foundational and really important. The Court has recognized this in various forms since then. But, in the intervening almost 30 years, it’s never squarely taken on a case that deals with First Amendment issues on the internet so, so profoundly.

Even if the Court strikes these laws down, if it does not also issue very strong language about how platforms can moderate—that the moderation decisions of platforms are almost per se outside the reach of government regulation under the First Amendment—this will not be the end of this. Whether it’s Texas or Florida or some blue state that has its own concerns about content moderation of progressive causes, we will continue to see laws like this.

This is just the beginning of a new phase in American history where, rightly, it is recognized that because these platforms are so important, they should be the subject of government regulation. For the next decade, we’ll be dealing with all sorts of court challenges. And I think this is as it should be. This is the age of Big Tech. This is not the end of the conversation about the First Amendment, the internet, and government regulation over big platforms. It’s actually the beginning of the conversation.

Nyce: This could really influence the way that Americans experience social media.

Rozenshtein: Oh, it absolutely could, in very unpredictable ways. If you believe the state governments, they’re fighting for internet freedom, for the freedom of users to be able to use these platforms, even if users express unfriendly or unfashionable views. But if you listen to the platforms and most of the tech-policy and digital-civil-society crowd, they’re the ones fighting for internet freedom, because they think that the companies have a First Amendment right to decide what’s on the platforms, and that the platforms only function because companies aggressively moderate.

Even if the conservative states are arguing in good faith, this could backfire catastrophically. Because if you limit what companies can do to take down harmful or toxic content, you’re not going to end up with a freer speech environment. You’re going to end up with a mess.

The GOP’s New Obsession With Attacking Mexico

The Atlantic

www.theatlantic.com/ideas/archive/2023/09/us-military-intervention-mexico-fentanyl-crisis/675487

Today’s Republican Party has made a turn toward foreign-policy isolationism or, less pejoratively, realism and restraint. After Donald Trump shattered the GOP’s omertà about the disastrous Iraq War—a “big fat mistake,” he called it in 2016—Republicans quickly learned to decry “endless wars” and, often quite sensibly, argue for shrinking America’s global military footprint. During the 2020 election, Trump’s supporters touted his refusal to start any new wars while in office (though he got very close).

When it comes to America’s southern neighbor, however, Republicans have grown more hawkish. Party leaders, including members of Congress and presidential candidates, now regularly advocate for direct U.S. military intervention in Mexico to attack drug cartels manufacturing the deadly fentanyl flooding into America. “Building the wall is not enough,” Vivek Ramaswamy said at Wednesday night’s GOP-primary debate. The best defense is now a good offense.

The strategic stupidity of any potential U.S. military intervention in Mexico is difficult to overstate. The calls for such an intervention are also deeply ironic: Even as Trump’s epigones inveigh against the possibility of an “endless war” in Ukraine similar to those in Iraq and Afghanistan, they are reprising the arguments, tools, and rhetoric of the global War on Terror that many of them belatedly turned against.

The War on Terror was a disaster, devastating countries and leaving hundreds of thousands of people dead and millions of refugees adrift. A botched U.S. attack on Mexico, America’s largest trading partner, could create a failed state on the 2,000-mile U.S. southern border, an outcome that would be far, far worse for the United States. The toll of the U.S. fentanyl epidemic is staggering: More than 100,000 Americans died of an overdose in 2022. But a unilateral military “solution” holds the potential, if not the near certainty, of causing far more death and destruction than any drug.

[Read: ‘Every time I hear you, I feel a little bit dumber’]

Trump, not surprisingly, sowed the seeds for this new jingoism. After launching his presidential campaign in 2015 with an infamous verbal attack against Mexican migrants, in office he mused about shooting missiles at Mexican fentanyl labs, according to the memoir of his then–defense secretary, Mark Esper. “No one would know it was us,” Trump assured a stunned Esper.

Fast-forward to last month, at this election cycle’s first Republican presidential debate: Florida Governor Ron DeSantis pledged to launch Special Operations raids into Mexico on his first day in office. His rivals for the nomination have issued similar promises to wage war against the cartels—in the form of drone strikes, blockades, and military raids. Former Ambassador to the United Nations Nikki Haley breezily promised at this week’s debate to “send in our Special Operations” to Mexico. Republican senators and representatives have introduced bills to classify fentanyl as a chemical weapon, designate Mexican drug cartels as foreign terrorist organizations, and authorize the use of military force in Mexico.

If you’re inclined to dismiss this saber-rattling as primary-season bluster, don’t be so sure. Pundits and voters seem to be falling in line behind the politicians. The conservative commentator Ben Domenech recently said that he is “close to becoming a single issue voter” on the issue of attacking Mexico (he’s for it). A recent poll found that as many GOP voters consider Mexico an enemy of the United States as an ally, a marked shift from just a few years ago.

The parallels to the War on Terror aren’t exact—no prominent Republican has advocated a full-scale invasion and occupation of Mexico, at least not yet. But the rhetorical similarities are hard to ignore. America’s tragic interventions in Iraq and Afghanistan began with politicians inflating threats; seeking to militarize complex international problems; and promising clean, swift, decisive military victories. The language regarding Mexico today is eerily similar. The Fox News personality Greg Gutfeld recently assured his viewers that a unilateral attack on Mexico would “be over in minutes.” The labeling of Mexican cartel leaders as “terrorists” sidelines even the most basic analysis of the costs and consequences of a potential war. Just like in Iraq, a war on Mexico would be a war of choice, with American moral culpability for whatever furies it unleashes.

[David Frum: The new Republican litmus test is very dangerous]

It’s worth remembering that the war in Afghanistan included a failed counter-drug campaign. In my time there as a Marine lieutenant a decade ago, U.S. troops engaged in erratic, futile attempts to interrupt opium-poppy cultivation. Partnered with Afghanistan’s version of the U.S. Drug Enforcement Administration, my company wasted days fruitlessly searching motorcycles at checkpoints on dusty village trails, finding no drugs. On one occasion, I was ordered to confiscate farmers’ wooden poppy scorers, simple finger-mounted tools used to harvest opium; at a cost of maybe a penny apiece, they were immediately replaced. U.S. planes bombed 200 Afghan drug labs during the occupation. Yet opium production skyrocketed—Afghanistan produced more than 80 percent of the global supply of the drug in the last years of the war.

Mexico would be an even riskier proposition. Start with the obvious: proximity. The direct costs to the United States of the War on Terror were enormous: $8 trillion squandered, more than 7,000 U.S. troops killed in action, tens of thousands wounded. Across the Middle East, North Africa, and Central Asia, hundreds of thousands of people, most of them civilians, were killed in counterinsurgency campaigns and civil wars. Governments were toppled, leaving behind anarchy and nearly 40 million refugees, who have further destabilized the region and its neighbors. But America itself was shielded from the worst effects of its hubris and militarism. Flanked by oceans and friendly neighbors, Americans didn’t have to worry about the conflicts coming home.

Any unilateral U.S. military action in Mexico would risk the collapse of a neighboring country of 130 million people. It could unleash civil war and a humanitarian crisis that would dwarf those in Iraq and Syria. This carnage would not be confined to Mexico. Some of America’s largest and wealthiest cities are a few hours’ drive from the border; nearly 40 million Americans are of Mexican descent, many of them with family members still living across the border. The cartels would not have far to travel to launch retaliatory terrorist attacks on U.S. soil. And the refugee crisis that many Republicans consider the preeminent national-security crisis would worsen.

The United States would also lack one major War on Terror asset: partners. A host of NATO and non-NATO partners contributed troops and resources to the fighting in Afghanistan; none would be willing to participate in an American attack on Mexico. Despite government corruption in Iraq and Afghanistan and dependency on U.S. weapons and technology, soldiers from those countries did the lion’s share of the fighting and dying in the long struggle against insurgents there. But Mexican President Andrés Manuel López Obrador has publicly rejected U.S. military intervention. One can easily imagine uniformed Mexican soldiers and policemen firing on American troops and aiding the cartels. If the U.S. were to attempt to build competing Mexican militias or proxies in response, it would further fracture the Mexican state.

[David Frum: The autocrat next door]

If there is an overriding lesson of America’s post-9/11 conflicts, it is that war unleashes a host of unintended consequences. A war of choice seldom respects the goals or limits set by its architects. External military intervention in a country fighting an insurgency—ideological, criminal, or otherwise—is particularly fraught. Foreign troops are far more likely to be an accelerant of violence than a dampener. As in Iraq and Afghanistan, cartel members would be likely to hide out among civilians, infiltrate Mexico’s already compromised security services, and find havens in bordering countries (the United States included). American forces would in turn be susceptible to corruption and infiltration, especially if an intervention were to drag on longer than expected.

Lacking a definable end state, a counter-cartel campaign would likely devolve into a manhunt for a few narco kingpins. Such an operation would be liable to create folk heroes out of brutal drug traffickers, one accidental wedding-party drone strike at a time. Some of the worst men on Earth could become global symbols of resistance to U.S. imperialism, especially if they are able to evade U.S. forces for a decade, as Osama bin Laden did. A U.S.-Mexico conflict would then become an opportunity for other American adversaries. Russia and China would undoubtedly be happy to arm the cartel insurgents, perhaps even overtly. Mexico already hosts more members of the GRU—Russia’s military intelligence—than any other foreign country. American arms and assistance are taking Russian lives in Ukraine, as they did in Afghanistan a generation before. The Russians would welcome an easy opportunity to return the favor.

Since Trump’s ascent in 2016, the most bellicose neoconservatives in the GOP have been ousted, the Republican Party’s views on Russia and China have become muddled, and the Iraq War is now widely accepted as a disaster. But Republicans’ enthusiasm for launching a war on Mexico reveals the shallowness of their conversion. The rise of fentanyl is mostly a demand-side problem. Whatever Republican leaders say about “endless wars,” they’re once again pulling out the military hammer first, then looking for nails.

America’s Eyes Are on Unions

The Atlantic

www.theatlantic.com/newsletters/archive/2023/09/uaw-strike-biden-unions/675490

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

The president was on the picket line, and the American public is paying attention to unions. This moment of renewed interest in organizing could energize labor activity in the U.S., but it also turns up the pressure on union leaders.

First, here are three new stories from The Atlantic:

“Every time I hear you, I feel a little bit dumber.”

Eight ways to banish misery

The best thing about Amazon was never going to last.

“A Genuinely Historic Moment”

“Unions built the middle class,” the president of the United States bellowed this week through a bullhorn emblazoned with an American flag. “You deserve what you’ve earned, and you’ve earned a hell of a lot more than you’re getting paid now.” On Tuesday, Joe Biden became the first sitting president to join striking workers on a picket line. In standing with the United Auto Workers, who have been on strike against the Big Three car companies for almost two weeks, he has picked a side. As my colleague Adam Serwer wrote today, “A president on the picket line, telling workers they deserved to share in the wealth they had helped create, was a genuinely historic moment.”

Public approval of unions is near its highest level in decades. Data from Gallup last month found that, after dipping to a low of 48 percent in 2009, around the time of the recession, Americans’ union-approval rating is now at 67 percent, down slightly from 71 percent last year. Three-quarters of respondents said that they sided with autoworkers over management in their negotiations (this was before the UAW strike had actually begun), and support for striking television writers over their studios was nearly as high. A record-high number, 61 percent, said that unions help rather than hurt the economy.

Organized labor has contracted dramatically in the past 50 years: In 1981, President Ronald Reagan fired 11,000 striking Professional Air Traffic Controllers Organization (PATCO) workers, ushering in a period of union decline that has continued since. Now a successful UAW strike could inspire other workers to stand up, potentially even serving as “a reverse PATCO moment,” says Johnnie Kallas, a doctoral candidate at Cornell University’s School of Industrial and Labor Relations and the project director of its Labor Action Tracker. Kallas’s research shows that so far this year, there have been 291 strikes involving about 367,600 workers. That is an uptick from a few years ago, when his team began documenting strikes. And beyond the numbers, there are other indicators that we are in a strong labor moment, he told me: High-profile victories at Starbucks and Amazon point to a rise in labor interest in private industries. And, of course, there’s the president on the picket line.

Recent strikes may make the public more curious about unions. Many Americans don’t fully understand the potential benefits of unions, Suresh Naidu, an economics professor at Columbia, told me. For decades, “one reason the labor movement has not had so much energy is that it’s been taken for granted that it can’t win strikes,” he said. But given how publicized the UAW’s effort has become, Naidu observed, a successful strike could send onlookers the message that “when you actually have a union that’s willing to go to bat for you, it can really deliver good wages and working conditions.” The high level of current public interest in unions also means that the pressure is on: If the UAW workers do not end up winning a strong contract, it may damage public perception of strikes, Naidu explained. And in strikes like the UAW’s, union leaders need to thread a needle: If they settle for a weak contract or let the strike drag on long enough that it significantly affects workers and their communities, they could lose public support.

As the labor movement gains momentum, workers in such seemingly different industries as Hollywood and mail delivery are making real gains, often on related issues. “We’re seeing a confluence of concerns around the high cost of living, the role of technology in degrading our work, and what people call work-life balance,” Tobias Higbie, the faculty chair of labor studies at UCLA, told me. “These strikes have a way of defining the key conflicts of a particular historical moment.” The coronavirus pandemic has changed the way many people view their lives, he added—and the role that work should play in them. The past few years have also exacerbated public concerns about income inequality, as many bosses and corporations have grown wealthier while workers have struggled with inflation.

Where America’s labor movement will go next is impossible to predict. After months of picketing, Hollywood writers returned to work yesterday with a strong contract in hand; meanwhile, UAW workers are holding the line, and may even expand their strike this week. “Any kind of negotiation is about power,” Higbie explained. “The UAW is giving a master class on how to strategically utilize the power that you do have so that you can get what you need.”

Related:

Trump didn’t go to Michigan to support autoworkers.

The Big Three’s inevitable collision with the UAW

Today’s News

As tensions continue among congressional Republicans, the U.S. government has begun notifying federal employees that a shutdown appears imminent.

The House held its first hearing in the Biden-impeachment inquiry; witnesses chosen by Republicans stated that there is currently no evidence of a crime, but that more bank records from the president and his son are still needed.

The Senate unanimously passed a dress-code resolution after controversy over Senator John Fetterman’s casual attire.

Evening Read


Paul Spella / The Atlantic

Group-Chat Culture Is Out of Control

By Faith Hill

Here’s just a sample of group chats that have been messaging me recently: college friends, housemates, camp friends, friends I met in adulthood, high-school friends, a subset of high-school friends who live in New York City, a subset of high-school friends who are single, a group of friends going to a birthday party, a smaller group of friends planning a gift for that person’s birthday, co-workers, book club, another book club, family, extended family, a Wordle chat with friends, a Wordle chat with family.

I love a group text—a grext, if you’ll permit me—but lately, the sheer number of them competing for my attention has felt out of control. By the time I wake up, the notifications have already started rolling in; as I’m going to bed, they’re still coming. In between, I try to keep up, but all it takes is one 30-minute meeting before I’ve somehow gotten 100 new messages, half of them consisting of “lol” or “right!” I scroll up and up and up, trying to find where I left off, like I’ve lost my place in a book that keeps getting longer as I read.

For better or for worse, we might be in the Age of the Group Chat.

Read the full article.

More From The Atlantic

The 24-year-old who outsold Oprah this week

Basil the one-eyed opossum is the perfect zoo animal.

The weirdos living inside our phones

Culture Break

Fondation Henri Cartier-Bresson / Magnum

Read. These seven books for the lifelong learner may tempt you to take up a new pursuit.

Listen. Tony Bennett, who died in July, reportedly sang one last song while sitting at his piano. It’s also the one that made him a star.

Play our daily crossword.

Did someone forward you this email? Sign up here.

Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.

Iran’s Influence Operation Pays Off

The Atlantic

www.theatlantic.com/ideas/archive/2023/09/tehran-times-classified-documents-leak-investigation-robert-malley/675480

When news comes out that someone has suffered an email breach, my first instinct is to pity them and practice extreme charity. I don’t remember any emails I wrote a decade ago, but I’m sure there’s something in there appalling enough to sour my relationships with every friend, ex, or co-worker I ever had. Give me your email password, and I will ruin your career.

This week, the careers in jeopardy belong to a handful of Americans and Europeans who were, by the looks of their emails, groomed by the Iranian government to promote conciliatory policies toward Tehran. According to reports by Semafor and Iran International, Iranian foreign-policy bigwigs such as Mohammad Javad Zarif identified think-tank staffers of Iranian origin, sponsored meetings with them, and used the group to coordinate and spread messages helpful to Iran. The emails, which date from 2014, suggest that those in their group—the “Iran Experts Initiative”—reacted to Iranian outreach in a range of ways, including cautious engagement and active coordination. The Iranian government then paid expenses related to this group’s internal meetings; cultivated its members with “access to high-ranking officials and extended invitations to visit Tehran,” according to Iran International; and later gloated over how effectively it had used its experts to propagate the Islamic Republic’s positions.

Graeme Wood: Talk to coldhearted criminals

The government had reason to gloat. It picked excellent prospects, some of whom sucked up to Tehran over email and echoed its negotiating positions publicly. A few of them ended up in and near positions of prominence in the U.S. government through connections to Robert Malley, a veteran Middle East hand in Democratic administrations. Malley, who led Obama teams focusing on the Islamic State, Syria, and Iraq, is known to favor negotiation with unfriendly governments in the region and to scorn the “maximum pressure” approach that replaced nuclear negotiation when Donald Trump entered office. Earlier this year, Malley lost his security clearance for reasons still not explained, and he is on leave from government service. (He did not reply to a request for comment.)

One of Tehran’s targets, Ariane M. Tabatabai, joined the Biden administration’s Iran team with Malley and is now the chief of staff for the assistant secretary of defense for special operations. Another, Ali Vaez, formerly worked as an aide to Malley on Iran issues. That is the disturbing upshot to the reports: Witting participants in an Iranian influence operation have been close colleagues with those setting the Biden administration’s Iran policy, or have even served in government and set it themselves.

On Tuesday, President Joe Biden’s State Department spokesperson, Matthew Miller, dismissed the reports as “an account of things that happened almost a decade ago, most of which involved people that do not currently work for the government.” I assume he meant the U.S. government. Anyway, the accusations are serious and can’t be batted away by the suggestion that 2014 was a long time ago.

One sign of the gravity of these accusations is the unconvincing attempts to minimize them. The commentator Esfandyar Batmanghelidj said opponents of Tehran had smeared the analysts merely because they “maintained dialogue and exchanged views with Iranian officials.” He went on to note Semafor’s links to Qatar and Iran International’s to Iran’s archenemy, Saudi Arabia. The journalist Laura Rozen tweeted that the stories were “McCarthyistic” and targeted blameless analysts “because they try to talk to everybody and because of their Iranian heritage.”

Defending the emails as maintaining “dialogue” so ludicrously misrepresents the accusation that I am forced to conclude that these defenders find the actual accusation indefensible. No one is alarmed that Americans of Iranian descent are talking with Iranian-government officials. What’s alarming is the servile tone of the Iranian American side of that dialogue, and the apparent lack of concern that the Iranian government views them as tools for its political ends. Rozen and Batmanghelidj don’t dispute the emails’ authenticity. Comparing the Iranian influence operation to supposed Qatari and Saudi ones is, in turn, tacit admission that the emails are probably real.

Cultivating a source is fine. But any self-respecting analyst, journalist, or politician wants to be the one cultivating, not the one being cultivated. This mutual back-scratching can erode one’s integrity and independence. That is why the Iranians do it: to turn influential and otherwise smart people into their pets, and eventually condition them to salivate at the issuance of a visa, or an email from Javad Zarif. Responding to these overtures is fine. You can butter up an official (“Your Excellency”), maybe grovel a little for a visa. But the writing itself, and the analysis behind it, must be independent to the point that even the most cynical observer could not accuse you of altering your views to please a subject.

By this standard, some of the reported exchanges between the Iran Experts and their convenor are mortifying. After the report, Vaez, a deputy to Malley, admitted on X (formerly Twitter) that he’d sent a full draft of an op-ed to the Iranian government. “I look forward to your comments and feedback,” his email to the Iranians read. If I sent a source a draft of a story, I would be fired. (I asked The National Interest, where the article appeared, if its policy also forbids sharing drafts. The editor, Jacob Heilbrunn, did not reply.) Sending questions is laudable. Checking facts is standard practice. But a magazine article is not a Wiki whose contributors are also its subjects. Sharing a full draft of an article, whether for approval or just improvement, makes the recipient an unacknowledged co-author.

Vaez later pledged to the Iranian foreign minister to “help you in any way,” by proposing “a public campaign” to promote Iran’s views on its nuclear program. He offered these services “as an Iranian, based on my national and patriotic duty.” Vaez, like his former boss Malley, has written widely about Iran and U.S.-Iran relations, for magazines including this one. (Attempts to reach Vaez through his employer to verify the authenticity of the emails and their context were not answered by the time of publication.)

According to the same reports, Adnan Tabatabai, CEO and founder of the German think tank CARPO, “offered to prepare articles for Iran’s foreign ministry.” “We as a group [could] work on an essay,” he suggested. “It could, for example, be published under a former official’s name.” Tabatabai, the report says, worked as a contractor for Malley’s International Crisis Group. (He did not respond to a request for comment.)

Ariane Tabatabai (who is not related to Adnan) wrote to her contact at the Iranian foreign ministry and asked his advice on whether to work with officials in Saudi Arabia and attend a meeting in Israel. “I would like to ask your opinion too and see if you think I should accept the invitation and go,” she asked Mostafa Zahrani of the foreign ministry. She made clear that she personally “had no inclination to go” to a workshop at Ben-Gurion University, but she thought it might be better if she went, rather than “some Israeli,” such as Emily Landau of Tel Aviv University. Zahrani told Tabatabai to look into Saudi Arabia and avoid Israel. She thanked him for the guidance, and she went to Tehran herself in 2014. In another email to the Iranians, she noted that she had recently published an article arguing that Tehran should be given more leeway to spin up centrifuges for uranium enrichment.

These emails look bad. So would mine, if they came out in a selective leak, and so would yours. But I’m not sure that they would look this bad, or that my excuses would be so weak.

Vaez tweeted that he “shared the draft as a courtesy after [Iranian] officials claimed I had been too harsh on their position in my writing.” Even if sharing a draft were permissible, would he extend the courtesy to Trump officials? “[ICG’s Iran] work has always been informed by the perspectives of all relevant stakeholders,” he claims. I am confident that if you plumbed his inbox, you would find no fan mail addressed to “Your Excellency” Mike Pompeo, offering his devoted and patriotic service. Nor would he soften the blow of criticism of Trump officials (whose Iran policy was built on sanctions and drone strikes) by giving them a “courtesy” peek at his next work.

Roya Hakakian: Ebrahim Raisi has blood on his hands

For once, the Iranians themselves are blameless. As conspiracies go, the one alleged here is mild. They found Westerners of Iranian extraction who did not despise their religious government, as so many Iranian expatriates do. They made a list. They flattered its members and waited to see who welcomed the flattery and reciprocated with offers of service. These techniques paid off splendidly when the Biden administration started appointing the very people Tehran had been grooming. (Vaez was poised to join Malley at State, but the appointment was never made.)

The emails do not demonstrate or suggest that Ariane Tabatabai, now in the Defense Department, or others not in government, became agents of Tehran. The Pentagon says that Tabatabai was “thoroughly and properly vetted” for her current job but refuses to say whether her emails were accurately and fairly quoted. Even if they do not show that she is a security risk, they do show that she and others responded to Tehran’s blandishments and sought its approval. The administration should find staff who know Iran and its leaders, ideally well enough to recognize Zarif by the smell of his cologne or the sound of his footfall. To get that close takes some ingratiation. The method of ingratiation matters, though, and in this case, it stinks.

Why Driverless Cars Are a Tough Sell

The Atlantic

www.theatlantic.com/newsletters/archive/2023/09/why-driverless-cars-are-a-tough-sell/675468

Welcome to Up for Debate. Each week, Conor Friedersdorf rounds up timely conversations and solicits reader responses to one thought-provoking question. Later, he publishes some thoughtful replies. Sign up for the newsletter here.

Last week, I asked for your thoughts on self-driving cars.

Replies have been edited for length and clarity.

Kathryn is bullish and looks forward to shedding the responsibility of driving:

Yes, driverless cars are the future, at least for people alive today. I’m sure there will be some later innovation in transportation we can’t even imagine yet. Cities should allow them to be tested on the street now, assuming the vehicle has passed something analogous to the driving test humans take to receive a driver’s license. I’d love to have these vehicles in my neighborhood. I live in an urban area and use our sidewalks and crosswalks as my primary mode of transportation. Given all the close calls I’ve had over the years with human drivers, I’d welcome anything that is safer. Driverless cars don’t need to be even close to perfect to be an improvement on the status quo.

I don’t like driving, although I have a license and access to a car and I drive a few times a month. What I like least about driving is that I could make one wrong move and possibly kill someone, including myself, or destroy my family’s financial stability. I appreciate that I can commute by bus and avoid facing that responsibility on a daily basis. I also enjoy relaxing on the bus. I can read and write for work or fun, listen to music and podcasts, and watch the world go by. I wonder if today’s drivers will start to value the decrease in legal liability and the increase in downtime that driverless cars can provide.

Mike is bearish:

Driverless vehicles of any type should be banned in any setting where they will have to interact with vehicles controlled by humans. They shouldn’t be allowed to be tested, and the technology should not be pursued. Where every vehicle is autonomous, there is in theory nothing wrong with them, but on the public roads, that environment will never exist, at least not in our lifetimes. I, for one, will never get in a vehicle that does not have a human driver.

Chris points out that innovation is rarely, if ever, stopped by its critics or regulators acting on their behalf:

Driverless cars will flood the streets regardless of whether they are ready for prime time. There will be accidents, injuries, and deaths. Victims and their lawyers will haggle with and sue multiple parties––and compensation will be slow in coming while everyone is still figuring out who is actually responsible for an accident (the vehicle maker, the software company, third-party technology, the passenger somehow, etc.).

Due to the intricacies of multiple-party liability, insurance companies covering driverless vehicles may be very slow to compensate victims, leaving the latter in medical and financial purgatory. The business of America is business, and driverless cars and trucks represent a business opportunity. The consumer and his or her safety are always an afterthought.

Leigh anticipates safer roads:

I live in New Orleans, where just in the past five days of driving my kids to school, I have experienced drivers blowing through stoplights; drivers who were clearly watching their phones and not the road; drivers doing 50 in a 20 mph school zone, and not worried about getting caught because they have no tag on their car. Driverless cars will end all this. They will be programmed to obey traffic laws. This will make life easier and more predictable for pedestrians, cyclists, and other drivers. Plus, Baby Boomers are getting older, and it would be great to have a driverless car take them to their doctor appointments instead of them continuing to drive past the point of safety. And how great would it be to sleep in a driverless car on your way to your travel destination? You could wake up refreshed, having traveled at night when the roads aren’t busy.

Cameron is skeptical of anything tech companies touch:

I think the discussion around autonomous vehicles—and their viability as a mode of transportation in the not-so-distant future—encapsulates bigger questions that The Atlantic has covered surrounding different elements of American social and political life. The oft-discussed degradation of civil society in the U.S., combined with a legacy of automobile-centric infrastructure development and noxious residential zoning regulations, has resulted in starved public transportation networks (where they even exist) and urban and suburban layouts that aren’t terribly traversable on foot to begin with.

Now, I realize that tackling these issues will require a significant amount of effort, investment, and political willpower, but I find myself increasingly disheartened whenever Silicon Valley—which is not accountable to the public outside of “market forces”—gets the opportunity to treat the U.S. as a playground while our civic institutions convulse.

Maureen is betting on old-fashioned car culture:

Our century-long love affair with all things automotive dooms the driverless concept to a niche market: people whose physical condition precludes driving and those who prefer to be driven.

The vast majority of us regard driving as a birthright, obtaining a license as a rite of passage, and operating a vehicle as an expression of control––and as much fun as you can have while fully dressed. In short, we love control and speed. The less mature among us love road games like Cut You Off and Tailgate. An astonishing number love to work on cars, restore cars, race cars, and watch others race. Driverless cars will have their place, when they iron out the kinks, but it is not on the roads of this vast and beautiful nation.

Alan makes the case for human drivers:

A couple years ago, I was heading west on I-66 in the late afternoon. The road was under construction, recently repaved and with no lines yet painted. It had just finished raining, so the new asphalt was wet, but the sun was coming out, very low on the horizon and reflecting off the wet asphalt. I was effectively blinded, as were all the other drivers. Visor pulled low, I could focus on the car in front and make judgments on where to be. Stay in my unmarked “lane,” keep a reasonable distance, and just drive. But what would a computer do? Would it have the “intuition” to adopt a defensive driving mode and guesstimate where the lanes should be? Or would it just freak out and stop? And how would a computer drive a car on the snow-covered roadways of Buffalo or Bismarck in February? No lane markings to follow, just human understanding of how we navigate difficult conditions.

Richard offers additional examples of nonstandard road conditions:

Construction reduces a two-lane road to one lane. Some flag man in a high-visibility vest is holding a pole with a small sign that says “SLOW,” which means you can enter the opposite lane. Then he flips it to the other side, which says “STOP.” Will self-driving cars figure this out?

Stuck behind a postal delivery vehicle stopping at every house on a two-lane road with double yellow lines. Does the self-driving vehicle know it can pass the mailman? Ditto for the garbage truck, UPS, FedEx.

A car accident requires a police officer to take control of an intersection. Humans recognize the presence of a police officer and know to obey his hand signals and ignore the traffic light. Will a self-driving vehicle recognize the man as a police officer and understand his hand signals for “go” and “stop”?

A power outage causes nonworking traffic lights at intersections. Humans know to treat this intersection as a four-way stop. How does a self-driving vehicle interpret this situation? Does it even recognize that there is a nonworking traffic light?

A school bus is stopped on the other side of the road. In Ohio, if it is three lanes or fewer, traffic must stop on both sides. If it is four lanes or more, traffic on the opposite side can keep moving. Does a self-driving vehicle know this? Does a self-driving vehicle recognize a school bus?

I could go on in that vein, but you get the drift.

Steve contends with one of the Northeast’s most hazardous road conditions: Massachusetts drivers. He writes:

Living near Boston, I can tell you that, within 10 minutes of every drive, I run into a scenario that would be nearly impossible for a driverless car to navigate. Beyond the nearly unnavigable cow paths we call roads, there are so many times that eye contact with the driver, pedestrian, or pet is the only real way to avoid calamity. Not to mention the average Boston driver seems to find new ways every day to do something irrational. It will take decades to master that and even longer for Bostonians to trust any software to solve it. Instead, car companies should focus on two things: First, make driver-assist technologies amazing. Imagine a windshield that enhances everything (especially at night) and highlights potential risks; 360-degree cameras that help spot issues; and accident-avoidance technologies that give 80-year-olds the reflexes of a teenager. Said simply, keep the person at the wheel, but make them awesome drivers.

Second, situational autopilot: Designated areas where driverless cars move on preprogrammed routes (think shuttles in airport lots or parts of Rome filled with driverless vehicles). Also, why not special lanes on highways (repurpose HOV lanes) that allow cars to link up, form a dance line of sorts, and speed down the highway?  Instead of focusing on an unreasonable goal not reachable for 20 years or more, why not take the remarkable technologies we’ve developed and get us to a much safer place than we are now?

Leo suggests that we shouldn’t count on driverless cars winning the day politically, even if they perform better than humans:

This debate may play out differently in other countries and cultures, but in America, freedom will trump safety in the end. There are, of course, any number of laws and regulations in our society, but the underlying ethos, the dominant paradigm, is that we live in the land of the free. Laws, regulations, and limitations are not prized as arbiters of a functional society so much as endured as necessary evils. And any person, community, or movement that pushes too hard for too many restrictions will pay a heavy price.

What politician or political party is going to sign their own death warrant by limiting or, god forbid, outlawing our right to drive our own cars? Even the limitations that already exist (such as speed limits) are flouted so regularly that in many cases it’s unclear why they exist at all (other than to raise money for local governments through ticket citations). The decades-long effort to stigmatize and heavily fine drunk drivers has indeed yielded some results, but there are still drunk drivers, and there always will be. For better and for worse, Americans will only tolerate so many infringements on their individual liberty.

So driverless cars will likely be deployed to some extent. They will penetrate our society at some level. But they are not the future. At best, they are one aspect of the future.

Karen won’t be buying a self-driving car:

I enjoy driving a manual transmission. I have found that it forces me to pay attention only to driving. I feel engaged with the car. Power steering, power brakes, fine; but I still don’t mind winding my car windows up and down. Power seats? Entirely unnecessary. I don’t even like the whole touchscreen thing. Some cars force you to use the touchscreen to open the glovebox! Why? What’s the deal with putting the heating and AC controls on the touchscreen? Or the radio, for that matter. And I will decide what music I want to hear. No music apps! You now have to buy a used car to get a CD player. Yes, I’m old. But I like to drive—not be driven, even if the systems get better and more accurate.

Tanner writes that “autonomous cars are still cars,” which he sees as a bad thing:

We would be better off investing in low-tech, less car-centric ideas: designing our communities so that car trips are less necessary, supporting robust public transportation, making streets safer for all users, and dedicating less space to cars and more to community-oriented uses. Some will argue that driverless cars will solve the issues above by reducing the need to own a car, reducing crashes (with cars at least), reducing congestion, etc.

Even if true (I have my doubts), do we really want to be even more dependent on giant tech companies than we are? How do I, as a pedestrian or a bicyclist, communicate with a machine about my intentions at an intersection (no more gadgets, please)? Is the future one where everyone and everything requires sensors and gadgets to work safely?

There is a place for driverless cars. But the future, for me, is a 30-year-old bicycle that I can take anywhere.

The Man Who Created America’s Most Controversial Gun

The Atlantic

www.theatlantic.com/ideas/archive/2023/09/ar-15-rifle-gun-history/675449


Eugene Stoner was an unassuming family man in postwar America. He wore glasses and had a fondness for bow ties. His figure was slightly round; his colleagues called him a teddy bear. He refused to swear or spank his children. “Boy, that frosts me,” he’d say when he was upset. He liked to tweak self-important people with a dry sense of humor. He hated attention.

A lifelong tinkerer and a Marine veteran, he was also fascinated by the question of how to make guns shoot better. When an idea came to him, he scribbled it down on anything he could find—a pad of paper, a napkin, the tablecloth at a restaurant. He had no formal training in engineering or in firearms design. Yet it was inside Stoner’s detached garage in Los Angeles, during the 1950s, that the amateur gunsmith, surrounded by piles of sketches and prototypes, came up with the idea for a rifle that would change American history.

Today, this weapon is the most popular rifle in America—and the most hated. The AR-15 is a symbol of Second Amendment rights to millions of Americans and an emblem of a violent gun culture run amok to millions more. With a lightweight frame and an internal gas system, the military version can be fired as an automatic, unleashing a stream of bullets from a single pull of the trigger, or as a semiautomatic, allowing for one shot per trigger pull. The civilian semiautomatic version is now the best-selling rifle in the country; more than 20 million such guns are in civilian hands. And it is a weapon of choice for mass shooters—including the white supremacist who killed three Black people last month at a store in Jacksonville, Florida, armed with a handgun and an AR-15-style rifle emblazoned with a swastika.

[Juliette Kayyem: The Jacksonville killer wanted everyone to know his message of hate]

The consequences of the AR-15’s creation have coursed through our society and politics for generations in ways that Stoner never foresaw. He created the gun with a simple goal: to build a better rifle for the U.S. military and its allies during the Cold War. He wanted to protect the country he loved. Now his invention is fused in Americans’ minds with the horror of people going about their daily tasks—at school, the movies, the store, a concert—and suddenly finding themselves running for their lives. Few of the participants in America’s perpetual gun debate know the true, complicated history of this consequential creation—or of the man behind it. The saga of the AR-15 is a story of how quickly an invention can leave the control of the inventor, how it can be used in ways the creator never imagined.

We interviewed Stoner’s family members and close colleagues about his views of his gun. They gave us insight into what the inventor might have thought about the way the AR-15 is being used today, though we’ll never know for sure; Stoner died before mass shootings with AR-15s were common. Later in life, after years of working in the gun industry, he was asked about his career in an interview for the Smithsonian Institution. “It was kind of a hobby that got out of hand,” he said.

As a boy growing up in the Coachella Valley, in Southern California, in the 1920s and ’30s, Stoner was fascinated by explosions. Before the age of 10, he had designed rockets and rudimentary weapons. On one occasion, he begged a friend’s father for a metal pipe and the local drugstore owner for magnesium. Stoner built a primitive cannon and pointed it at a house across the street, but before he could open fire, his father ran to stop him. “I told you to do this at the city dump,” scolded Lloyd Stoner, a veteran of the Great War who had moved the family to California from the farmlands of Indiana in search of a better life.

Eugene Stoner never went to college. He joined the Marines during World War II and was tasked with repairing weapons on aircraft in the Philippines. When he came home, he brought his wife, Jean, an adventurous woman who idolized Amelia Earhart, a special present: gun parts from Asia that he assembled into a rifle. She loved it. The couple often went hunting and shooting together. “He was a very quiet person,” Jean said in an unpublished interview that the Stoner family shared with us. “But if you talked about guns, cars, or planes, he’d talk all night.”

After the war, Stoner got a job as a machinist making aircraft parts. Every day after he came home, he would eat the dinner that Jean had prepared (beef Stroganoff was his favorite), take a quick nap, and then walk to the garage to work on his gun designs. Like other hobbyist inventors of the era, he believed he could move the country forward by the power of his ingenuity. “We were like the 1950s family. It was California. It was booming after the war,” his daughter Susan told us. “I knew from my dad—I felt from him—the future was wide open.”

[Conor Friedersdorf: The California dream is dying]

Stoner had the ability, common among inventors, to imagine engineering solutions that others stuck in the dogmas of the field could not. For centuries, gunmakers had built their rifles out of wood and steel, which made them very heavy. At the time, the U.S. military was searching for a lighter rifle, and Stoner wondered if he could build one using modern materials. If humans were soaring into the atmosphere in airplanes made of aluminum, he figured, couldn’t the lightweight metal tolerate the pressures of a gun firing? By the early 1950s, he had figured out how to replace one of the heaviest steel components of a rifle with aluminum. Then he devised a way of using the force of the gas from the exploding gunpowder to move parts inside the gun so that they ejected spent casings and loaded new rounds. This allowed him to eliminate other, cumbersome metal parts that had been used in the past. The first time he tried firing a gun using this new system, it blew hot gas into his face. But he perfected the design and eventually received a patent for it.

In 1954, Stoner got the opportunity to bring his radical gun concepts to life. That year, as Stoner later recalled, he had a chance encounter at a local gun range with George Sullivan. A relentless pitchman, Sullivan was then the head of a Hollywood start-up called ArmaLite, a subsidiary of Fairchild Engine and Aircraft Corporation whose mission was to design futuristic weapons. Impressed with the homemade guns Stoner was shooting, Sullivan hired him as ArmaLite’s chief engineer.

The small yet brilliant ArmaLite team worked at a fevered pace, designing a series of lightweight guns made of aluminum and plastic. Most went nowhere. Nevertheless, the ambitious Sullivan set the firm’s sights on an improbable target: the U.S. Army’s standard-issue rifle. The Eisenhower administration’s “New Look”—an effort to rein in Pentagon spending and shift it toward newer technologies—opened the door for private companies to get big military contracts. The outsiders from Hollywood decided to take on Springfield Armory, the military’s citadel of gun making in western Massachusetts that had equipped American soldiers since the Revolutionary War. Springfield’s own efforts to develop a new rifle had resulted in a heavy wood-and-steel model that wasn’t much more advanced than the M1 Garand used by GIs in World War II.

Eugene Stoner, wearing his trademark bow tie, holds his creation, the AR-10. The AR-15 was a scaled-down version of this gun. (Photograph courtesy of Susan Kleinpell via Farrar, Straus and Giroux)

ArmaLite’s first serious attempt at a rapid-fire rifle made of plastic and aluminum was the AR-10—AR for ArmaLite or ArmaLite Research (accounts differ), and 10 because the weapon was the company’s tenth creation. The rifle combined the efficient internal gas system Stoner had devised in his garage and lightweight modern materials with a design that made the gun easy to shoot and keep on target. In December 1956, Time heralded the AR-10 as a potential savior for the bumbling U.S. military and listed Sullivan as the gun’s inventor, a claim that infuriated Stoner’s wife. Sullivan had also meddled with the design, insisting that more aluminum be used in making the gun’s barrel, a move Stoner resisted. During military trials, the AR-10 fared poorly. At one point, a bullet erupted from the side of the gun’s barrel, just missing the hand of the soldier firing the weapon—and seemingly dooming ArmaLite’s chances of landing a military contract.

But within the Pentagon, a cabal of high-ranking officers led by General Willard Wyman launched a back-channel effort to save Stoner’s gun. Wyman was a legendary military leader who, at age 46, had joined the D-Day invasion at Omaha Beach as an assistant commander of the First Infantry Division. He knew that the United States needed better firepower as the Cold War flashed hot. America’s enemies around the globe were being armed by the Soviet Union with millions of rugged AK-47s that could spray bullets in automatic mode and were highly effective in guerilla warfare. Wyman was certain that modern wars would be won not by long-range marksmen but by soldiers firing lots of bullets in close combat. They needed a rifle that used small-caliber bullets so they could carry more ammo. And he was worried that the tradition-bound gun designers at Springfield Armory weren’t innovative enough to meet the challenge. When Wyman’s superiors brushed him off, he secretly flew to Los Angeles and stunned Stoner and his team by striding into the ArmaLite office unannounced. Wyman told Stoner that he wanted ArmaLite to build a new version of the AR-10 that fired a smaller bullet.

[James Fallows: Why the AR-15 is so lethal]

Stoner and an ArmaLite draftsman named Jim Sullivan (no relation to George) set about designing the gun. It was simple, efficient, and easy to use. Early versions of the AR-15 weighed just more than five pounds unloaded, less than the hedge trimmers and handheld vacuums of the era. With all of Stoner’s innovations—lighter material, fewer parts, and the gas system, as well as an in-line stock and a pistol grip—Jim Sullivan found shooting the prototype AR-15 to be easy, even after he flipped the selector switch to automatic. “That made it so well handling,” he told us. “If you’re firing full auto, you don’t want a gun that lifts.” Sullivan found the rifle’s recoil to be minimal. As a result, follow-up shots were quick when he switched it to semiautomatic. “It looked a little far-out for that time in history,” Stoner later said in the Smithsonian interview.

As Stoner and his backers sought to persuade the military to adopt the AR-15 in place of Springfield’s rifle, they were often met with skepticism about the gun’s small bullets. During secret military hearings about the rifle in the winter of 1958, Stoner explained to a panel of generals that the AR-15 had “a better killing cartridge with a higher velocity” than the Soviet AK-47. The generals asked Stoner how a smaller bullet fired from his rifle could do so much damage. “The wound capability is extremely high,” Stoner answered. “It blows up on contact rather than drilling a nice neat hole.” A slower .30 caliber round, similar to the one used by Springfield’s wood-and-steel rifles, “will go right through flesh,” but the faster, smaller bullet from the AR-15 “will tumble and tear,” he said.

Those in the military who wanted Springfield’s rifle to prevail tried to sabotage Stoner’s gun, rigging tests and shading reports so that it would seem like it wasn’t ready for the battlefield. During official trials in Alaska, Stoner arrived to find that the aiming sights on his guns had been replaced with bits of metal that were badly misaligned, causing soldiers to miss their targets. The guileless inventor was caught up in the murky world of Pentagon intrigue.

[From June 1981: James Fallows’s ‘M-16: A Bureaucratic Horror Story’]

Eventually, through persistence and luck, and with the help of a cast of lobbyists, spies, and analytics-driven military leaders, Stoner’s rifle would be adopted. At a key moment when it seemed that the AR-15 would be killed off by military bureaucrats, the powerful, cigar-chomping Air Force General Curtis LeMay, the architect of the U.S. bombing campaign in Japan during World War II, was asked if he wanted to shoot the gun. On July 4, 1960, at a birthday party for Richard Boutelle, the onetime head of Fairchild, the gun’s backers set up ripe watermelons as targets at Boutelle’s estate in western Maryland. LeMay fired, causing a red-and-green explosion. The general marched into the Pentagon soon after and demanded that the military purchase the weapon. It would become the standard-issue rifle—renamed the M16, for the prosaic “Model 16”—just in time for the rise of U.S. involvement in Vietnam.   

A U.S. Marine holds his M16 rifle at the ready after being fired on by North Vietnamese soldiers in the jungle southwest of Da Nang on April 22, 1969. (Yvon Cornu / AP)

In Eugene Stoner’s and Jim Sullivan’s minds, their work was not just intellectually engaging but also noble, a way to help America defeat the Communists. At school, in the 1950s, the Stoner children learned what to do in the event of a Soviet nuclear attack. Sirens and bells went off regularly, and teachers ordered kids to hide under their desks and cover their heads, Stoner’s daughter Susan recalled. For her father, the task of making the best rifle for the U.S. military wasn’t burdened with moral quandaries. Many weapons inventors at the time thought about the technical challenges of their weapons first, and wrestled with the consequences of their creations only afterward. “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success,” J. Robert Oppenheimer, the lead developer of the atomic bomb, said almost a decade after bombs were dropped on Hiroshima and Nagasaki.

[From February 1949: J. Robert Oppenheimer’s ‘The Open Mind’]

After Stoner created the AR-15, he continued designing guns and artillery for a variety of gunmakers. Through a company he co-founded, he worked on antiaircraft weapons for the Shah of Iran, before the 1979 revolution scuttled the deal. He helped design a handgun for the venerable gunmaker Colt that the company tried to sell on the civilian market, without much success. But none of his creations came close to the prominence of the AR-15. By the 1990s, he’d become a superstar in the gun world. Royalties from the M16 made him wealthy; Colt, which purchased the rights to the gun from ArmaLite, sold millions of the weapons to the military. Stoner was “a Second Amendment guy,” his daughter said, but he didn’t talk much about the messy world of politics, either privately or publicly. He preferred thinking about mechanisms.

Throughout his life, Stoner was troubled by losing control over the production of his most famous gun. In the 1960s, as the U.S. ramped up production of the rifle for the war in Vietnam, a Pentagon committee made changes to the gun and its ammunition without proper testing. The results on the battlefields in Vietnam were disastrous. Stories of GIs dying with jammed M16s in their hands horrified the public and led to congressional hearings. The shy inventor was called to testify and found himself thrust into an uncomfortable spotlight. Declassified military documents that we reviewed show that Stoner tried in vain to warn Pentagon officials against the changes.

Stoner paid far less attention to the semiautomatic version of his rifle that Colt began marketing to the public in the 1960s as “a superb hunting partner.” Even after Stoner’s patent expired, in 1977, the rifle was a niche product made by a handful of companies and was despised by many traditional hunters, who tended to prefer polished wood stocks and prided themselves on felling game with a single shot. But the rifle’s status shifted after 9/11. Many Americans wanted to own the gun that soldiers were carrying in the War on Terror. When the 1994 federal assault-weapons ban expired after a decade, the AR-15 became palatable for mainstream American gunmakers to sell. Soon, it was a symbol of Second Amendment rights and survivalist chic, and gun owners rushed to buy AR-15s, fearful that the government would ban them again. By the late 2000s, the gun was enjoying astounding commercial success.

AR-15-style weapons are displayed for sale at the 2022 Rod of Iron Freedom Festival, an open-carry event to celebrate the Second Amendment, in Greeley, Pennsylvania. (Jabin Botsford / The Washington Post / Getty)

When Stoner died from cancer, in 1997, obituaries hailed him as the inventor of the long-serving military rifle; they made no mention of the civilian version of the weapon. Stoner left clues about his thoughts on the gun in a long letter, sent to a Marine general, in which he outlined his wishes for his funeral and burial at Quantico National Cemetery, in Virginia. He saw the creation of a rifle for the U.S. military as his greatest triumph. He didn’t mention the civilian version. The government had wanted a “small caliber/high velocity, lightweight, select fire rifle which engaged targets with salvos of rounds from one trigger pull,” Stoner wrote. “That is what I achieved for our servicemen.”

[Ryan Busse: The rifle that ruined America]

The inventor wouldn’t get to control how his proudest achievement would be used after his death, or the fraught, outsize role it would come to play in American society and politics. Since 2012, some of the deadliest mass shootings in the nation’s history—Sandy Hook, Las Vegas, Sutherland Springs, Uvalde—have been carried out by men armed with AR-15s. Now children practice drills to avoid being gunned down by attackers with AR-15s at their school.

The last surviving member of that ArmaLite team, the draftsman Jim Sullivan, was at times haunted by the invention’s later impact. When we visited him at his workshop in Arizona in 2019, Sullivan pulled out the original drawings for the AR-15 and smiled broadly as he described how he and Stoner had designed the gun. He picked up parts to demonstrate how it worked, explaining its functions like an excited professor. He was proud of the weapon and loved Stoner. He said that his years working at ArmaLite were the best of his life. After hours of talking about barrels, bolts, receivers, and Stoner’s gas system, he paused and looked down at the floor. He said he’d grown deeply disturbed by the violence being wrought with the invention he had helped create. He said that mass shooters wouldn’t be able to do what they do without weapons such as the AR-15.

“Every gun designer has a responsibility to …” he said, pausing before finishing his thought, “to think about what the hell they’re creating.”

This article has been adapted from Zusha Elinson and Cameron McWhirter’s book, American Gun: The True Story of the AR-15.

Judicial Ethics in a Populist Age

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 09 › supreme-court-ethics-oversight-criticism › 675460

The contemporary ethical standards that many Americans want to see the Supreme Court adhere to are exactly that—contemporary. Throughout the Court’s long history, justices have had conflicts of interest that we would find unacceptable today. And in the past, people didn’t seem to mind quite so much.

In 1803, Chief Justice John Marshall, who wrote the Court’s landmark opinion in Marbury v. Madison, should have recused himself by contemporary standards. The case concerned the validity of judicial commissions that he had himself signed and sealed, and that his brother James Marshall had been charged with delivering. But Chief Justice Marshall didn’t recuse himself—and nobody objected at the time. In 1972, Chief Justice Warren Burger spoke by telephone with President Richard Nixon about cases and issues that were before or could come before the Court, including school busing and obscenity. The news became public in 1981, while Burger was still chief justice—and was met with a relative shrug.

Nor are potential financial conflicts anything new. The justices have long benefited from the generosity of rich friends, which until recently generated little concern. Justice William J. Brennan’s acceptance of $140,000 in gifts and forgiven debts from a wealthy businessperson in the 1990s, far from making front-page news, showed up in a tiny article at the bottom of page A9 of The New York Times. In 1995, reports that seven different justices enjoyed luxurious trips over a 13-year period, courtesy of a major legal publisher (and Supreme Court litigant), generated little interest from Congress. More recent instances in which millionaires and billionaires bankrolled trips taken by Justices Antonin Scalia, Ruth Bader Ginsburg, and Stephen Breyer spurred generally mild media coverage and hardly any outrage. (Although Justice Abe Fortas’s financial entanglements with the financier Louis Wolfson ultimately caused Fortas to resign, the allegations against him—he had agreed to accept large cash payments from Wolfson, for the rest of his and his wife’s lives, in exchange for providing unspecified “services” to the subsequently convicted felon—were far more serious than any made recently.)

[Bob Bauer: The Supreme Court needs an ethics code]

The current climate is very different. Last year, critics lambasted Justice Clarence Thomas for not recusing in cases involving the January 6 attack on the Capitol after text messages from his wife, Virginia, revealed her involvement in the effort to overturn the 2020 presidential election. Then, in April, ProPublica reported on the relationship between Justice Thomas and the real-estate tycoon Harlan Crow, which was followed by more reports of other financial entanglements between justices and wealthy benefactors. These reports stoked public anger; politicians of both parties, newspaper editorial boards, and numerous commentators called for a formal code of ethics at the Supreme Court, possibly including limits on the gifts the justices can accept and more robust disclosure requirements.

So the question is not why today’s Court has so many potential conflicts and controversies, some of them problematic (the Ginni Thomas texts), some of them less so (Venmogate). The question is why they have generated so much attention and outrage compared with decades past.

Part of this is undoubtedly partisan opportunism, with critics on the Court’s left and right seeking additional reason to delegitimize the decisions of their disfavored justices, amplified through a hyper-politicized media environment. But a more fundamental, albeit interrelated, reason is at play: the rise in recent years of a strong anti-elitism in American politics, what David Brooks has dubbed a “distrustful populism.”

One principal feature of this form of populism is a rejection of an earlier narrative that the powerful attained their posts because of “merit.” Instead, on both the left and right, an increasing suspicion has emerged that meritocracy is toxic, a system that rewards power and privilege with yet more power and privilege.

Attitudes toward Supreme Court justices reflect this shift. Back when Justices Clarence Thomas and Sonia Sotomayor were nominated, their paths—from childhood poverty to Ivy League law schools to the highest court in the land—were celebrated as American success stories. But these days, when commentators note that eight of the nine justices graduated from Harvard or Yale Law School, it’s almost always the subject of complaint rather than acclaim.

This anti-elitist turn extends even to the hiring of the justices’ law clerks. Earlier this year, when a study found that going to an elite college greatly enhanced one’s chances of landing a Supreme Court clerkship, an author of the study complained that it reflected “some of the worst pathologies in American society.” When it became public in July 2021 that Justice Elena Kagan offered a clerkship to Jessica Garland, the daughter of former D.C. Circuit Chief Judge and current Attorney General Merrick Garland, the news was condemned as a glaring example of “nepotism” and “another justice not caring about conflicts of interest.” (Jessica Garland’s clerkship has been postponed to a time when her father is no longer attorney general.)

Which is not to say that all distrust and calling out of elites is a bad thing; much of it represents a belated and worthy recognition of deep unfairness in many parts of American society. But recognizing the relative recency of such concerns should also affect the approach to ethics reforms for the Court.

[Glenn Fine: The Supreme Court needs real oversight]

First, although greater scrutiny of the justices is salutary, blaming them for conduct based on standards developed after the actions at issue may be counterproductive. Hyperbolic condemnation of the justices, including calls for impeachment, has the potential to backfire. It makes the justices more defensive—as reflected in a recent Wall Street Journal interview of Justice Samuel Alito, in which he asserted that “no provision in the Constitution gives [Congress] the authority to regulate the Supreme Court”—and less likely to voluntarily adopt an ethics code. And given questions surrounding Congress’s ability to impose ethics requirements on the Court, both constitutionally (because of separation of powers) and politically (because of Republican opposition), getting the justices to adopt a code on their own is still the most likely path to reform.

Second, as history has made clear, as long as the justices are real people underneath their robes, they will have potential conflicts of interest. The justices are human (and Americans want them that way—research shows that Americans trust human judges more than artificial-intelligence judges). The justices will have friends—who might be inclined to entertain or help them, as friends do. The justices will have spouses—who might have lucrative careers and outside clients. The justices will have human desires—perhaps for the finer things of life, perhaps for fame.

Given this, strengthening disclosure requirements—and imposing real consequences for violations, such as serious financial penalties—may be more productive than trying to police the justices’ friendships or the gifts they can receive. An ethics regime that gives the justices broad leeway in their own and their spouses’ outside relationships, tied to greater disclosure of those relationships, could be a reasonable compromise acceptable to both Congress and the Court.

Despite its issues past and present, the federal judiciary is one of the world’s best in terms of independence and integrity. We know this firsthand, having clerked for three federal judges between the two of us and having appeared as lawyers before many more. We have also followed and written about the Supreme Court for years, for both scholarly and general-interest publications (separately and together, as a married couple).

Yes, the Supreme Court should adopt an ethics code, at the very least to convey to the public that it is, in Kagan’s words, “adhering to the highest standards of conduct.” But Americans should also proceed with caution and humility when advocating for what such a code should contain, tempering today’s populist sympathies with an understanding of history and a recognition that if the public wants justices to be humans, not Platonic Guardians or AI creations, they must accept the burdens as well as the benefits of that bargain.

American Democracy Requires a Conservative Party

The Atlantic

www.theatlantic.com › newsletters › archive › 2023 › 09 › america-us-democracy-conservative-party › 675463

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Sign up for it here.

Every nation needs parties of the left and the right, but America’s conservative party has collapsed—and its absence will undermine the recovery of American democracy even when Donald Trump is gone.

First, here are four new stories from The Atlantic:

So much for “learn to code”
Where the new identity politics went wrong
The origins of the socialist slur
The coming attack on an essential element of women’s freedom

The Danger That Will Outlast Trump

The American right has been busy the past few days. The Republicans in Congress are at war with one another over a possible government shutdown that most of them don’t really want. Representative Paul Gosar of Arizona (channeling the warden from The Shawshank Redemption, apparently) railed about “quislings” such as the “sodomy-promoting” Mark Milley, the chairman of the Joint Chiefs of Staff, and said he should be hanged. Gosar, of course, was merely backing up a similar attack from the likely GOP presidential nominee Donald Trump, who over the weekend floated the idea of executing Milley and swore to use government power to investigate a major television network for “treason.”

Normally, this is the kind of carnival of abominable behavior that would lead me to ask—again—how millions of Americans not only tolerate but support such madness.

But today I’m going to ask a different question: Is this the future of “conservatism”? I admit that I am thinking about this because it’s also one of the questions I’m going to tackle with my colleagues David Frum, Helen Lewis, and Rebecca Rosen on Thursday in Washington, D.C., at The Atlantic Festival, our annual two-day gathering where we explore ideas and cultural trends with a roster of stellar guests.

Slightly more than a year ago, I tried to think through what being a conservative means in the current era of American politics. I have not been a Republican for several years, but I still describe myself as a conservative: I believe in public order as a prerequisite for politics; I respect tradition, and I am reluctant to acquiesce to change too precipitously; I think human nature is fixed rather than malleable; I am suspicious of centralized government power; I distrust mass movements. To contrast these with progressivism, I think most folks on the left, for example, would weigh social justice over abstract commitments to order, be more inclined to see traditions as obstacles to progress, and regard mass protests as generally positive forces.

This is hardly an exhaustive list of conservative views, and some on the right have taken issue with my approach. A young writer at National Review named Nate Hochman took me to task last year for fundamentally misunderstanding modern conservatism. Mr. Hochman, however, was apparently fired this summer from the Ron DeSantis campaign after he produced a campaign video that used Nazi symbolism, which suggests to me that I do, in fact, understand the modern conservative movement better than at least some of my critics might admit.

In any case, the immediate problem America faces is that it no longer has a center-right party that represents traditional conservatism, or even respects basic constitutional principles such as the rule of law. The pressing question for American democracy, then, is not so much the future of conservatism but the future of the Republican Party, another question our panel will discuss—and one that continually depresses me.

The United States, like any other nation, needs political parties that can represent views on the left and the right. The role of the state, the reach of the law, the allocation of social and economic resources—these are all inevitable areas of disagreement, and every functioning democracy needs parties that can contest these issues within the circumscribed limits of a democratic and rights-respecting constitution. Today’s Republican Party rarely exhibits such commitments to the rule of law, constitutionalism, or democracy itself.

The current GOP is not so much conservative as it is reactionary: Today’s right-wing voters are a loose movement of various groups, but especially of white men, obsessed with a supposedly better past in which they were not the aggrieved minority they see themselves as today. These reactionary voters, as I have written recently, are reflexively countercultural: They reject almost everything in the current social and political order because everything around them is the product of the hated now that has displaced the sacred then.

(Although many of my colleagues in academia and in the media see Trumpism as fascism, I remain reluctant to use that word … for now. I think it’s inaccurate at the present time, but I also believe the word has been overused for years and people tend to tune it out. I grant, however, that much of the current GOP has become an anti-constitutional leader cult built around Trump—perhaps one of the weakest and unlikeliest men in history to have such a following—and could become a genuinely fascist threat soon.)

America needs an actual conservative party, but it is unlikely to produce one in the near future. The movement around Trump will come to an end one way or another; as the writer Peter Sagal noted in The Atlantic after interviewing former members of various cults, “the icy hand of death” will end the Trump cult because it is primarily a movement of older people, and when they die out, “there will be no one, eventually, to replace them.” Although the cult around Trump will someday dissolve, the authoritarians his movement spawned will still be with us, and they will prevent the formation of a sensible center-right party in the United States.

Too many Americans remain complacent, believing that defeating Trump means defeating the entire threat to American democracy. As the Atlantic contributor Brian Klaas wrote yesterday, Trump’s threats on social media against Milley should have been the biggest story in the nation: “Instead, the post barely made the news.” Nor did Gosar’s obscene pile-on get more than a shrug.

Meanwhile, the New York Times opinion writer Michelle Cottle today profiled Ohio Senator J. D. Vance, a man who has called his opponents “degenerate liberals” and who is so empty of character that even Mitt Romney can’t stand him. Cottle, however, noted Vance’s cute socks, and ended with this flourish: “Mr. Trump’s Republican Party is something of a chaotic mess. Until it figures out where it is headed, a shape-shifting MAGA brawler who quietly works across the aisle on particular issues may be the best this party has to offer.”

Something of a mess? That’s one way to put it.

And what about Fox News, the source of continual toxic dumping into the American political ecosystem? “Fox News,” the Washington Post columnist Megan McArdle said yesterday, “does not have nearly as much power over viewers’ minds as progressives think. I am not cutting Fox any slack for amplifying Trump’s election lie nonsense. But I also doubt that it made that much of a difference.” Having traveled the country for years giving talks about misinformation and democracy, and having heard so many times the same story of people who now find it impossible to talk to their own parents, I have no such doubts.

If Trump wins in 2024, worries about Fox’s influence or reflections on Vance’s adorable socks will seem trivial when Trump unleashes his narcissistic and lawless revenge on the American people. But even if he does not win, America cannot sustain itself without a functional and sane center-right party. So far, the apathy of the public, the fecklessness of the media, and the cynicism of Republican leaders mean that no such party is on the horizon.

Related:

The end will come for the cult of MAGA. Trump floats the idea of executing Joint Chiefs Chairman Milley.

Today’s News

The Supreme Court ruled against an attempt by Alabama Republicans to retain a congressional map with only one majority-Black district.
The Federal Trade Commission and 17 states are suing Amazon in a broad antitrust lawsuit that accuses it of monopolistic practices.
An increasing number of Senate Democrats are calling for Senator Bob Menendez to resign from Congress following his federal indictment.

Evening Read

Franco Pagetti / VII / Redux

How We Got ‘Democracy Dies in Darkness’

By Martin Baron

I should not have been surprised, but I still marveled at just how little it took to get under the skin of President Donald Trump and his allies. By February 2019, I had been the executive editor of The Washington Post for six years. That month, the newspaper aired a one-minute Super Bowl ad, with a voice-over by Tom Hanks, championing the role of a free press, commemorating journalists killed and captured, and concluding with the Post’s logo and the message “Democracy dies in darkness.” The ad highlighted the strong and often courageous work done by journalists at the Post and elsewhere—including by Fox News’s Bret Baier—because we were striving to signal that this wasn’t just about us and wasn’t a political statement …

Even that simple, foundational idea of democracy was a step too far for the Trump clan. The president’s son Donald Trump Jr. couldn’t contain himself. “You know how MSM journalists could avoid having to spend millions on a #superbowl commercial to gain some undeserved credibility?” he tweeted with typical two-bit belligerence. “How about report the news and not their leftist BS for a change.”

Read the full article.

More From The Atlantic

A new Coca-Cola flavor at the end of the world
The Supreme Court needs to make a call on Trump’s eligibility.
The next supercontinent could be a terrible, terrible place.

Culture Break

Wilford Harwood / Hulu

Read. In Orphan Bachelors, Fae Myenne Ng explores the true cost of the Chinese Exclusion era through an aching account of her own family.

Watch. The Hulu series The Other Black Girl dramatizes the pains of managing Afro-textured hair—and other people’s perceptions of it.

Play our daily crossword.

P.S.

I’m off to The Atlantic Festival, so I’ll be brief today. But I’ll be back on Friday to talk about Barry Manilow, whom I saw this past week in Las Vegas as he broke Elvis Presley’s record for performances at the venerable Westgate Las Vegas Resort & Casino. If you’re, ah, ready to take a chance again, you might enjoy it, even now, especially as we’ll be talking about the old songs. All the time, until daybreak.

I’m sorry. I promise: no more Manilow puns. See you in a few days.

— Tom

Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.

The Origins of the Socialist Slur

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 09 › american-socialism-racist-origins › 675453

For years after World War II, the “liberal consensus”—the New Deal idea that the federal government had a role to play in regulating business, providing a basic social safety net, and promoting infrastructure—was a true consensus. It was so widely popular that in 1950, the critic Lionel Trilling wrote of the United States that “liberalism is not only the dominant but even the sole intellectual tradition.”

But the Supreme Court’s 1954 Brown v. Board of Education decision declaring segregation in public schools unconstitutional tied the federal government to ensuring not just economic equality, but also civil rights. Opponents of the liberal consensus argued that the newly active federal government was misusing tax dollars taken from hardworking white men to promote civil rights for “undeserving” Black people. The troops President Dwight Eisenhower sent to Little Rock Central High School in 1957, for example, didn’t come cheap. The government’s defense of civil rights redistributed wealth, they said, and so was virtually socialism.

[Read: An attempt to resegregate Little Rock, of all places]

This intersection of race and economics was not new to the second half of the 20th century. It reached back into the past to resurrect an argument made by former Confederates during the Reconstruction years to overturn federal protection of Black rights after the Civil War.

Some of today’s Republicans are in the process of making that argument reality. Their insistence that all their opponents are socialists goes hand in hand with their effort to suppress Black and brown voting. When former President Donald Trump insists that the country has fallen to communism and “Marxists,” what he’s really saying is that a government in which racial minorities have a say is illegitimate.

The accusation of “socialism” had sharp teeth in the 1950s, as Americans recoiled from the growing influence of the Soviet Union and the rise of Communist China. But Republicans’ use of the word typically had little to do with actual, Bolshevik-style socialism. The theory that the people would rise up and take control of the means of production has never been popular in the United States. The best a Socialist Party candidate has ever done in an American presidential election was when Eugene V. Debs won about 6 percent of the popular vote in 1912.

Rather, in the United States, the political charge of socialism tended to carry a peculiar meaning, one forged in the white-supremacist backlash to Black civil rights in the 1870s.

During the Civil War, the Republicans in charge of the government both created national taxation and abolished legal slavery (except as punishment for crime). For the first time in U.S. history, voting in federal elections had a direct impact on people’s pocketbooks. Then, in 1867, Congress passed the Military Reconstruction Act, extending the vote to Black men in the South. White southerners who hated the idea of Black people using the vote to protect themselves started to terrorize their Black neighbors. Pretending to be the ghosts of dead Confederate soldiers, they dressed in white robes with hoods to cover their faces and warned formerly enslaved people not to show up at the polls.

But in 1870, Congress created the Department of Justice to enable the federal government to protect the right of Black men to vote. Attorney General Amos Akerman oversaw the prosecution of more than 3,000 members of the Ku Klux Klan, winning more than 1,000 convictions. Meanwhile, Congress passed laws to protect Black voting.

Suddenly, it was harder for white southerners to object to Black rights on racial grounds. So they turned to a new argument, one based in economics.

They did not want Black men voting, they said, because formerly enslaved people were poor, and they would vote for leaders who promised to build things such as roads and hospitals. Those public investments could be paid for only with tax levies, and most of the people in the South with property after the war were white. Thus, although the infrastructure in which the southern legislatures were investing would help everyone, reactionaries claimed that Black voting amounted to a redistribution of wealth from white men to Black people, who wanted something for nothing.

Black voting was, one magazine insisted, “socialism in South Carolina.”

This argument that poor Black workers were dangerous socialists offered justification for former Confederates to block their Black neighbors from the polls, to read them out of American society, and ultimately to lynch them. It’s a peculiarly American version of “socialism,” and it might have been a historical anomaly had a small group of business leaders and southern racists not resurrected it in the 20th century as part of a deliberate effort to destroy the liberal consensus.

After World War II, most Republicans joined Democrats in believing that the federal government had to oversee business regulation, welfare programs, and infrastructure. They knew what businessmen would do to the economy unless they were checked; they had seen people homeless and hungry during the Depression.

And they scoffed at the notion that the New Deal system was a bad idea. They looked around at their homes, at their candy-colored cars that they drove on the new interstate highways built under what was then the biggest public-works project in U.S. history, and at their union-boosted paychecks in a nation with its highest gross domestic product ever, and they dismissed as a radical fringe the people trying to undermine this wildly successful system.

But the federal protection of civil rights added a new element to the liberal consensus that would threaten to tear it apart. Between 1967 and 1977, a North Carolina billboard urged people in “Klan Country” to “help fight Communism & Integration.”

The stagflation of the ’70s pushed middle-class Americans into higher tax brackets just when they needed their income most, and helped spread the sense that white tax dollars were being siphoned off to help racial minorities. As towns and governments tried to make up their declining funds with higher property taxes, angry property owners turned against the government. Republicans courted white workers by painting the Democrats as a party of grievance and special interests who simply wanted to pay off lazy Black supporters, rather than being interested in the good of America as a whole.

In 1976, former California Governor Ronald Reagan ran for president with the story of a “welfare queen” from the South Side of Chicago—code words for “Black”—who lived large on government benefits she stole. “She has 80 names, 30 addresses, 12 Social Security cards and is collecting veteran’s benefits on four non-existing deceased husbands,” Reagan claimed. “And she is collecting Social Security on her cards. She’s got Medicaid, getting food stamps, and she is collecting welfare under each of her names.” There was such a woman, but she was a dangerous criminal rather than a representative welfare recipient. Nonetheless, the story illustrated perfectly the idea that government involvement in the economy handed tax dollars to allegedly undeserving Black Americans.

Reagan suggested a solution to such corruption. In August 1980, he spoke to voters in Philadelphia, Mississippi, 16 years after and just a few miles from where the civil-rights workers James Chaney, Andrew Goodman, and Michael Schwerner had been found murdered by members of the Ku Klux Klan as they registered Black voters during 1964’s Freedom Summer. There, Reagan echoed the former Confederates of Reconstruction: “I believe in states’ rights,” he said.

Reagan’s campaign invited voters to remember a time before Black and brown voices and women began to claim equal rights. His campaign passed out buttons and posters urging voters to “make America great again.”

Voters put Reagan in the White House, where his administration cut taxes and slashed spending on public welfare programs (while pouring money into defense spending, and tripling the national debt). In the name of preventing socialism, those cuts began the process of hollowing out the middle class.

In the years since 1981, wealth has moved dramatically upward. And yet, the language that linked socialism and minority voting never ceased to escalate.

Talk hosts such as Rush Limbaugh insisted that socialism was creeping through America at the hands of Black Americans, “feminazis,” and liberals. After its founding in 1996, the Fox News Channel joined the chorus of those who insisted that their political opponents were socialists trying to wreck the country. Republicans insisted that Barack Obama was a full-fledged socialist, and in 2018, Trump’s White House Council of Economic Advisers used the word socialism 144 times in a 72-page report attacking Democratic politicians. Trump’s press release for the report read: “Congressional Democrats Want to Take Money From Hardworking Americans to Fund Failed Socialist Policies.”

There is a long-standing fight over whether support for the modern-day right is about taxes or race. The key is that it is about taxes and race at the same time: Since Reconstruction, white supremacists have argued that minority voting means socialism, and that true Americans stand against both. In recent history, that argument has led Republican-dominated state legislatures to make voting harder for people of color, and to rig the system through gerrymandering. Three years ago it led Trump and his supporters to try to overturn the results of a presidential election to keep their opponents out of power. They believed, and insist they still believe, that they had to destroy the government in order to save it.

This article is adapted from Democracy Awakening: Notes on the State of America.

The Painful Afterlife of a Cruel Policy

The Atlantic

www.theatlantic.com › books › archive › 2023 › 09 › orphan-bachelors-bone-fae-myenne-ng-chinese-exclusion › 675385

In an age of democratized self-expression, you need not be Serena Williams or Prince Harry to write a memoir—or for people to want to read about your life. Not all of these first-person works are good, but more of them means that some will be good, even fascinating. Take an ever-swelling corner of the memoir market: those written about the Asian American experience. Identity, in these books, is a constant theme, but refreshingly, it plays out in all sorts of different registers—say, racial politics (Cathy Park Hong’s Minor Feelings) or grief (Michelle Zauner’s Crying in H Mart) or friendship (Hua Hsu’s Pulitzer Prize–winning Stay True). The most compelling of these create space for bigger questions—about the historical legacy of marginalization, or the nature of belonging—through the details of a particular set of lives.

A recent entrant into this arena reassures me that the proliferation of first-person storytelling is yielding outstanding works. Fae Myenne Ng’s Orphan Bachelors, an aching account of the author’s family in San Francisco’s Chinatown at the tail end of the Chinese Exclusion era, is an exemplar of the historical memoir.

Exclusion, which lasted from the late-19th century to World War II, was the United States’ official policy of forbidding immigration and citizenship to Chinese people. The orphan bachelors were the men who, during that period, came to work in America’s goldfields, on its railroads, or in its restaurants and laundries. Most came as “paper sons” who circumvented the law by falsely claiming to be the sons of Chinese American citizens. Trading their identities for fake ones, they toiled alone in America. Some had wives and children in China who could not legally come over, and those who were single suffered from a double exclusion—the law forbade not only immigration but also interracial marriage. These men are known in Cantonese as the lo wah que, the “old sojourners.”

Ng’s father called Exclusion a brilliant crime because it was bloodless: “four generations of the unborn.” Ng and her siblings were part of the first generation that repopulated their neighborhood after the lifting of Exclusion but before the immigration reforms of the 1960s. Beyond telling her family’s story, Ng memorializes an enclave stuck in time, its demographics twisted by cruel constraints. She shows that Exclusion has a reverberating and painful afterlife that dictates the limits of inclusion: One does not simply lead to the other.

Orphan bachelor is not a translation from Chinese, but a phrase that Ng’s father came up with. To her, it signals the tragedy and romance of the sojourners: their labor and loneliness, and also their hope. By the time Ng is coming up, these men are wizened and gray-haired; the generational shift is clear. Still, though the memoir plays out from Ng’s perspective, it is full of color from the old timers’ lives. As young girls, Ng and her sister respectfully address these men, who while away the time in Portsmouth Square, as “grandfather.” When she introduces them to us, she uses names that bespeak their individuality: Gung-fu Bachelor, Newspaper Bachelor, Hakka Bachelor, Scholar Bachelor. In the park, they argue politics and play chess. Some have jobs; others do not. They shuffle off, Ng writes, “their steps a Chinese American song of everlasting sorrow.”

From an early age, Ng seems to have an inclination toward history, and toward storytelling—tendencies that help her observe the bigger-picture currents at the edges of her family’s tale. She spends time with Scholar Bachelor in particular, who lives in an SRO hotel, works in a restaurant, and teaches in the Chinese school where the immigrants’ kids go in the afternoon after “English school.” A sincere, tyrannical teacher who recites Chinese poetry from the Tang dynasty, he encourages Ng, a budding writer, to look “to the old country for inspiration.”

Another orphan bachelor who influences Ng is her father, a merchant seaman and raconteur who can “take one fact and clothe it in lore.” He lived in San Francisco’s Chinatown for almost a decade before he went back to his ancestral village and found a wife, with whom he returned to California, after Exclusion lifted, to start a family. Like many who’ve faced unjust barriers and ongoing precarity, he tells tall tales filled with warlord violence, famine, and adversity. These stories are the currency traded among the orphan bachelors in the park, necessary in order to believe that their present misfortunes are not the worst. It may be bad in America, but not as bad as it was in China.

The impulse to narrate hardship—and, in so doing, lay claim to it—is evident in the relationship between Ng’s parents, who are full of pity, both for themselves and for each other. They have little in common other than their suffering, but even in that, they are competitive. Ng’s dad rails about the racism he has faced in the United States. Her mom retorts that “nothing compared to the brutality of Japan’s imperial army,” which she experienced growing up in pre-Communist China. Seeking relief from all of the fighting, Ng’s father ships out and leaves his wife and children for a month or more at a time. Her mom works as a seamstress, during the day at the sewing factory and at night at home; Ng and her sister go to sleep and wake up to the sound of the sewing machine.

Theirs is not a story of upward mobility or assimilation. Going to sea and sewing, the arguments and resentments—they all continue, even after the parents buy a small grocery store and a house on the outskirts of the city. In the 1960s, Ng’s father signs up for the U.S. government’s Chinese Confession Program, in which paper sons could “confess” their fake identities in exchange for the possibility of legalized status. The program is controversial: A single confession implicates an entire lineage, and there is no guarantee of being granted legal status (indeed, some are deported). Ng’s mom pressures Ng’s dad to confess; she wants to be able to bring her mother, whom she has not seen for decades, to the States. But confessing invalidates his legal status, and his citizenship isn’t restored until many years later.

[Read: Racism has always been a part of the Asian American experience]

Confession ruins the marriage. Still, there are small acts of devotion. When Ng’s mother is diagnosed with cancer, her father travels to Hong Kong and smuggles back an expensive traditional Chinese treatment: a jar of snake’s gallbladders, which he tenderly spoon-feeds her at her bedside. This ongoing tension is one of the memoir’s remarkable qualities. The story it tells is, in one sense, simply about the aches and dramas of a single family. But in another, its scope is more deeply existential. It considers the unjust constraints that can make unhappiness feel like fate, and the role that stubborn fealty can play in helping a family, somehow, stay together.

One of the things Ng’s dad, ever the weaver of yarns, teaches her is that stories always contain secrets; the important thing is to find the truth in them, however hidden they might be. That makes Orphan Bachelors something of an excavation—one that seems to build on a previous effort. Thirty years ago, Ng’s evocative debut novel, Bone, told a version of this story.

That novel was similarly focused on a family in San Francisco’s Chinatown during the Confession era: The mother is a seamstress and the stepfather is a merchant seaman; the marriage is fraught, buffeted by adversity; the first-person protagonist is, like Ng herself, the eldest daughter. In the novel, the middle daughter has jumped to her death from the rooftop of the Chinatown projects. The sister’s death is the plot device that forces a reckoning with the lies that fester in the family’s troubled relationships—and the bigger lies that have structured the lives of the paper sons.

Bone is full of minimalist but distinctive place-setting details—a chicken being plucked “till it was completely bald,” the culottes the mother must sew to meet popular demand in the flower-power ’60s. In Orphan Bachelors, Ng has enriched the environment further by attending to linguistic subtleties. She understands what language can reveal about identity formation—what it creates and enables, what it denies and obscures. Of the subdialect of Cantonese that she hears crisscrossing the neighborhood while growing up, Ng writes, “Our Toishan was a thug’s dialect, the Tong Man’s hatchetspeak. Every curse was a plunging dagger. Kill. Kill. You.” (It’s written in English, and although I can hear the Chinese, non-Chinese speakers will have no trouble getting it.) The second-generation children live in between languages, “obedient, polite, and respectful” in English school, yet like “firecrackers” in Chinese school. “We talked back. We never shut up,” Ng writes. “Our teachers grimaced at our twisty English-laced Chinese. We were Americans and we made trouble.”

In a way, the secret that Ng reveals about this era—across fiction and memoir—is how the trauma of Exclusion is transferred from one generation to the next: the complications of true and fake family histories, the desire of the younger generation to unburden themselves of that difficult inheritance, the impossibility of actually escaping it. In Bone, we see the dissonance between familial duty and selfhood playing out from a young woman’s point of view. Orphan Bachelors captures the longer arc of Ng’s life as a Chinatown daughter, including her parents’ deaths. The struggle to balance devotion to your elders with living your own life, it suggests, does not necessarily end when those elders have passed away.

As a historian who has written three books on aspects of Chinese Exclusion, I have explained how Exclusion separated families and how Confession separated them further still. I hope I have told the story well enough. I am grateful to Ng for lending her voice to this history and crafting a narrative that reckons with this period’s devastating psychic costs. The storyteller’s delusion, as Ng puts it in Orphan Bachelors, is the belief that if you tell the story right, you will be understood. It may be an impossible task, but with this latest endeavor, she is getting closer.