Why Driverless Cars Are a Tough Sell

The Atlantic

www.theatlantic.com/newsletters/archive/2023/09/why-driverless-cars-are-a-tough-sell/675468

Welcome to Up for Debate. Each week, Conor Friedersdorf rounds up timely conversations and solicits reader responses to one thought-provoking question. Later, he publishes some thoughtful replies. Sign up for the newsletter here.

Last week, I asked for your thoughts on self-driving cars.

Replies have been edited for length and clarity.

Kathryn is bullish and looks forward to shedding the responsibility of driving:

Yes, driverless cars are the future, at least for people alive today. I’m sure there will be some later innovation in transportation we can’t even imagine yet. Cities should allow them to be tested on the street now, assuming the vehicle has passed something analogous to the driving test humans take to receive a driver’s license. I’d love to have these vehicles in my neighborhood. I live in an urban area and use our sidewalks and crosswalks as my primary mode of transportation. Given all the close calls I’ve had over the years with human drivers, I’d welcome anything that is safer. Driverless cars don’t need to be even close to perfect to be an improvement on the status quo.

I don’t like driving, although I have a license and access to a car and I drive a few times a month. What I like least about driving is that I could make one wrong move and possibly kill someone, including myself, or destroy my family’s financial stability. I appreciate that I can commute by bus and avoid facing that responsibility on a daily basis. I also enjoy relaxing on the bus. I can read and write for work or fun, listen to music and podcasts, and watch the world go by. I wonder if today’s drivers will start to value the decrease in legal liability and the increase in downtime that driverless cars can provide.

Mike is bearish:

Driverless vehicles of any type should be banned in any setting where they will have to interact with vehicles controlled by humans. They shouldn’t be allowed to be tested, and the technology should not be pursued. Where every vehicle is autonomous, there is in theory nothing wrong with them, but on the public roads, that environment will never exist, at least not in our lifetimes. I, for one, will never get in a vehicle that does not have a human driver.

Chris points out that innovation is rarely, if ever, stopped by its critics or regulators acting on their behalf:

Driverless cars will flood the streets regardless of whether they are ready for prime time. There will be accidents, injuries, and deaths. Victims and their lawyers will haggle with and sue multiple parties—and compensation will be slow in coming while everyone is still figuring out who is actually responsible for an accident (the vehicle maker, the software company, third-party technology, the passenger somehow, etc.).

Due to the intricacies of multiple-party liability, insurance companies covering driverless vehicles may be very slow to compensate victims, leaving the latter in medical and financial purgatory. The business of America is business, and driverless cars and trucks represent a business opportunity. The consumer and his or her safety are always an afterthought.

Leigh anticipates safer roads:

I live in New Orleans, where just in the past five days of driving my kids to school, I have experienced drivers blowing through stoplights; drivers who were clearly watching their phones and not the road; and drivers doing 50 in a 20-mph school zone, unworried about getting caught because they have no tag on their car. Driverless cars will end all this. They will be programmed to obey traffic laws. This will make life easier and more predictable for pedestrians, cyclists, and other drivers. Plus, Baby Boomers are getting older, and it would be great to have a driverless car take them to their doctor appointments instead of them continuing to drive past the point of safety. And how great would it be to sleep in a driverless car on your way to your travel destination? You could wake up refreshed, having traveled at night when the roads aren’t busy.

Cameron is skeptical of anything tech companies touch:

I think the discussion around autonomous vehicles—and their viability as a mode of transportation in the not-so-distant future—encapsulates bigger questions that The Atlantic has covered surrounding different elements of American social and political life. The oft-discussed degradation of civil society in the U.S., combined with a legacy of automobile-centric infrastructure development and noxious residential zoning regulations, has resulted in starved public transportation networks (where they even exist) and urban and suburban layouts that aren’t terribly traversable on foot to begin with.

Now, I realize that tackling these issues will require a significant amount of effort, investment, and political willpower, but I find myself increasingly disheartened whenever Silicon Valley—which is not accountable to the public outside of “market forces”—gets the opportunity to treat the U.S. as a playground while our civic institutions convulse.

Maureen is betting on old-fashioned car culture:

Our century-long love affair with all things automotive dooms the driverless concept to a niche market: people whose physical condition precludes driving and those who prefer to be driven.

The vast majority of us regard driving as a birthright, obtaining a license as a rite of passage, and operating a vehicle as an expression of control—and as much fun as you can have while fully dressed. In short, we love control and speed. The less mature among us love road games like Cut You Off and Tailgate. An astonishing number love to work on cars, restore cars, race cars, and watch others race. Driverless cars will have their place, once the kinks are ironed out, but it is not on the roads of this vast and beautiful nation.

Alan makes the case for human drivers:

A couple years ago, I was heading west on I-66 in the late afternoon. The road was under construction, recently repaved and with no lines yet painted. It had just finished raining, so the new asphalt was wet, but the sun was coming out, very low on the horizon and reflecting off the wet asphalt. I was effectively blinded, as were all the other drivers. Visor pulled low, I could focus on the car in front and make judgments on where to be. Stay in my unmarked “lane,” keep a reasonable distance, and just drive. But what would a computer do? Would it have the “intuition” to adopt a defensive driving mode and guesstimate where the lanes should be? Or would it just freak out and stop? And how would a computer drive a car on the snow-covered roadways of Buffalo or Bismarck in February? No lane markings to follow, just human understanding of how we navigate difficult conditions.

Richard offers additional examples of nonstandard road conditions:

Construction reduces a two-lane road to one lane. A flagman in a high-visibility vest is holding a pole with a small sign that says “SLOW,” which means you can enter the opposite lane. Then he flips it to the other side, which says “STOP.” Will self-driving cars figure this out?

Stuck behind a postal delivery vehicle stopping at every house on a two-lane road with double yellow lines. Does the self-driving vehicle know it can pass the mailman? Ditto for the garbage truck, UPS, FedEx.

A car accident requires a police officer to take control of an intersection. Humans recognize the presence of a police officer and know to obey his hand signals and ignore the traffic light. Will a self-driving vehicle recognize the man as a police officer and understand his hand signals for “go” and “stop”?

A power outage causes nonworking traffic lights at intersections. Humans know to treat this intersection as a four-way stop. How does a self-driving vehicle interpret this situation? Does it even recognize that there is a nonworking traffic light?

A school bus is stopped on the other side of the road. In Ohio, if the road has three lanes or fewer, traffic must stop on both sides. If it has four lanes or more, traffic on the opposite side can keep moving. Does a self-driving vehicle know this? Does a self-driving vehicle recognize a school bus?

I could go on in that vein, but you get the drift.

Steve contends with one of the Northeast’s most hazardous road conditions: Massachusetts drivers. He writes:

Living near Boston, I can tell you that, within 10 minutes of every drive, I run into a scenario that would be nearly impossible for a driverless car to navigate. Beyond the nearly unnavigable cow paths we call roads, there are so many times when eye contact with the driver, pedestrian, or pet is the only real way to avoid calamity. Not to mention that the average Boston driver seems to find new ways every day to do something irrational. It will take decades for software to master that, and even longer for Bostonians to trust it. Instead, car companies should focus on two things. First, make driver-assist technologies amazing. Imagine a windshield that enhances everything (especially at night) and highlights potential risks; 360-degree cameras that help spot issues; and accident-avoidance technologies that give 80-year-olds the reflexes of a teenager. Said simply, keep the person at the wheel, but make them an awesome driver.

Second, deploy situational autopilot: designated areas where driverless cars move on preprogrammed routes (think shuttles in airport lots, or parts of Rome filled with driverless vehicles). Also, why not special lanes on highways (repurpose HOV lanes) that allow cars to link up, form a dance line of sorts, and speed down the highway? Instead of focusing on an unreasonable goal that won’t be reachable for 20 years or more, why not take the remarkable technologies we’ve developed and get us to a much safer place than we are now?

Leo suggests that we shouldn’t count on driverless cars winning the day politically, even if they perform better than humans:

This debate may play out differently in other countries and cultures, but in America, freedom will trump safety in the end. There are, of course, any number of laws and regulations in our society, but the underlying ethos, the dominant paradigm, is that we live in the land of the free. Laws, regulations, and limitations are not prized as arbiters of a functional society so much as endured as necessary evils. And any person, community, or movement that pushes too hard for too many restrictions will pay a heavy price.

What politician or political party is going to sign their own death warrant by limiting or, god forbid, outlawing our right to drive our own cars? Even the limitations that already exist (such as speed limits) are flouted so regularly that in many cases it’s unclear why they exist at all (other than to raise money for local governments through traffic citations). The decades-long effort to stigmatize and heavily fine drunk drivers has indeed yielded some results, but there are still drunk drivers, and there always will be. For better and for worse, Americans will only tolerate so many infringements on their individual liberty.

So driverless cars will likely be deployed to some extent. They will penetrate our society at some level. But they are not the future. At best, they are one aspect of the future.

Karen won’t be buying a self-driving car:

I enjoy driving a manual transmission. I have found that it forces me to pay attention only to driving. I feel engaged with the car. Power steering, power brakes, fine; but I still don’t mind winding my car windows up and down. Power seats? Entirely unnecessary. I don’t even like the whole touchscreen thing. Some cars force you to use the touchscreen to open the glovebox! Why? What’s the deal with putting the heating and AC controls on the touchscreen? Or the radio, for that matter. And I will decide what music I want to hear. No music apps! You now have to buy a used car to get a CD player. Yes, I’m old. But I like to drive—not be driven, even if the systems get better and more accurate.

Tanner writes that “autonomous cars are still cars,” which he sees as a bad thing:

We would be better off investing in low-tech, less car-centric ideas: designing our communities so that car trips are less necessary, supporting robust public transportation, making streets safer for all users, and dedicating less space to cars and more to community-oriented uses. Some will argue that driverless cars will solve the issues above by reducing the need to own a car, reducing crashes (with cars, at least), reducing congestion, etc.

Even if true (I have my doubts), do we really want to be even more dependent on giant tech companies than we are? How do I, as a pedestrian or a bicyclist, communicate with a machine about my intentions at an intersection (no more gadgets, please)? Is the future one where everyone and everything requires sensors and gadgets to work safely?

There is a place for driverless cars. But the future, for me, is a 30-year-old bicycle that I can take anywhere.

The Man Who Created America’s Most Controversial Gun

The Atlantic

www.theatlantic.com/ideas/archive/2023/09/ar-15-rifle-gun-history/675449

Eugene Stoner was an unassuming family man in postwar America. He wore glasses and had a fondness for bow ties. His figure was slightly round; his colleagues called him a teddy bear. He refused to swear or spank his children. “Boy, that frosts me,” he’d say when he was upset. He liked to tweak self-important people with a dry sense of humor. He hated attention.

A lifelong tinkerer and a Marine veteran, he was also fascinated by the question of how to make guns shoot better. When an idea came to him, he scribbled it down on anything he could find—a pad of paper, a napkin, the tablecloth at a restaurant. He had no formal training in engineering or in firearms design. Yet it was inside Stoner’s detached garage in Los Angeles, during the 1950s, that the amateur gunsmith, surrounded by piles of sketches and prototypes, came up with the idea for a rifle that would change American history.

Today, this weapon is the most popular rifle in America—and the most hated. The AR-15 is a symbol of Second Amendment rights to millions of Americans and an emblem of a violent gun culture run amok to millions more. With a lightweight frame and an internal gas system, the military version can be fired as an automatic, unleashing a stream of bullets from a single pull of the trigger, or as a semiautomatic, allowing for one shot per trigger pull. The civilian semiautomatic version is now the best-selling rifle in the country; more than 20 million such guns are in civilian hands. And it is a weapon of choice for mass shooters—including the white supremacist who killed three Black people last month at a store in Jacksonville, Florida, armed with a handgun and an AR-15-style rifle emblazoned with a swastika.

[Juliette Kayyem: The Jacksonville killer wanted everyone to know his message of hate]

The consequences of the AR-15’s creation have coursed through our society and politics for generations in ways that Stoner never foresaw. He created the gun with a simple goal: to build a better rifle for the U.S. military and its allies during the Cold War. He wanted to protect the country he loved. Now his invention is fused in Americans’ minds with the horror of people going about their daily tasks—at school, the movies, the store, a concert—and suddenly finding themselves running for their lives. Few of the participants in America’s perpetual gun debate know the true, complicated history of this consequential creation—or of the man behind it. The saga of the AR-15 is a story of how quickly an invention can leave the control of the inventor, how it can be used in ways the creator never imagined.

We interviewed Stoner’s family members and close colleagues about his views of his gun. They gave us insight into what the inventor might have thought about the way the AR-15 is being used today, though we’ll never know for sure; Stoner died before mass shootings with AR-15s were common. Later in life, after years of working in the gun industry, he was asked about his career in an interview for the Smithsonian Institution. “It was kind of a hobby that got out of hand,” he said.

As a boy growing up in the Coachella Valley, in Southern California, in the 1920s and ’30s, Stoner was fascinated by explosions. Before the age of 10, he had designed rockets and rudimentary weapons. On one occasion, he begged a friend’s father for a metal pipe and the local drugstore owner for magnesium. Stoner built a primitive cannon and pointed it at a house across the street, but before he could open fire, his father ran to stop him. “I told you to do this at the city dump,” scolded Lloyd Stoner, a veteran of the Great War who had moved the family to California from the farmlands of Indiana in search of a better life.

Eugene Stoner never went to college. He joined the Marines during World War II and was tasked with repairing weapons on aircraft in the Philippines. When he came home, he brought his wife, Jean, an adventurous woman who idolized Amelia Earhart, a special present: gun parts from Asia that he assembled into a rifle. She loved it. The couple often went hunting and shooting together. “He was a very quiet person,” Jean said in an unpublished interview that the Stoner family shared with us. “But if you talked about guns, cars, or planes, he’d talk all night.”

After the war, Stoner got a job as a machinist making aircraft parts. Every day after he came home, he would eat the dinner that Jean had prepared (beef Stroganoff was his favorite), take a quick nap, and then walk to the garage to work on his gun designs. Like other hobbyist inventors of the era, he believed he could move the country forward by the power of his ingenuity. “We were like the 1950s family. It was California. It was booming after the war,” his daughter Susan told us. “I knew from my dad—I felt from him—the future was wide open.”

[Conor Friedersdorf: The California dream is dying]

Stoner had the ability, common among inventors, to imagine engineering solutions that others stuck in the dogmas of the field could not. For centuries, gunmakers had built their rifles out of wood and steel, which made them very heavy. At the time, the U.S. military was searching for a lighter rifle, and Stoner wondered if he could build one using modern materials. If humans were soaring into the atmosphere in airplanes made of aluminum, he figured, couldn’t the lightweight metal tolerate the pressures of a gun firing? By the early 1950s, he had figured out how to replace one of the heaviest steel components of a rifle with aluminum. Then he devised a way of using the force of the gas from the exploding gunpowder to move parts inside the gun so that they ejected spent casings and loaded new rounds. This allowed him to eliminate other, cumbersome metal parts that had been used in the past. The first time he tried firing a gun using this new system, it blew hot gas into his face. But he perfected the design and eventually received a patent for it.

In 1954, Stoner got the opportunity to bring his radical gun concepts to life. That year, as Stoner later recalled, he had a chance encounter at a local gun range with George Sullivan. A relentless pitchman, Sullivan was then the head of a Hollywood start-up called ArmaLite, a subsidiary of Fairchild Engine and Aircraft Corporation whose mission was to design futuristic weapons. Impressed with the homemade guns Stoner was shooting, Sullivan hired him as ArmaLite’s chief engineer.

The small yet brilliant ArmaLite team worked at a fevered pace, designing a series of lightweight guns made of aluminum and plastic. Most went nowhere. Nevertheless, the ambitious Sullivan set the firm’s sights on an improbable target: the U.S. Army’s standard-issue rifle. The Eisenhower administration’s “New Look”—an effort to rein in Pentagon spending and shift it toward newer technologies—opened the door for private companies to get big military contracts. The outsiders from Hollywood decided to take on Springfield Armory, the military’s citadel of gun making in western Massachusetts that had equipped American soldiers since the Revolutionary War. Springfield’s own efforts to develop a new rifle had resulted in a heavy wood-and-steel model that wasn’t much more advanced than the M1 Garand used by GIs in World War II.

Eugene Stoner, wearing his trademark bow tie, holds his creation, the AR-10. The AR-15 was a scaled-down version of this gun. (Photograph courtesy of Susan Kleinpell via Farrar, Straus and Giroux)

ArmaLite’s first serious attempt at a rapid-fire rifle made of plastic and aluminum was the AR-10—AR for ArmaLite or ArmaLite Research (accounts differ), and 10 because the weapon was the company’s tenth creation. The rifle combined the efficient internal gas system Stoner had devised in his garage and lightweight modern materials with a design that made the gun easy to shoot and keep on target. In December 1956, Time heralded the AR-10 as a potential savior for the bumbling U.S. military and listed Sullivan as the gun’s inventor, a claim that infuriated Stoner’s wife. Sullivan had also meddled with the design, insisting that more aluminum be used in making the gun’s barrel, a move Stoner resisted. During military trials, the AR-10 fared poorly. At one point, a bullet erupted from the side of the gun’s barrel, just missing the hand of the soldier firing the weapon—and seemingly dooming ArmaLite’s chances of landing a military contract.

But within the Pentagon, a cabal of high-ranking officers led by General Willard Wyman launched a back-channel effort to save Stoner’s gun. Wyman was a legendary military leader who, at age 46, had joined the D-Day invasion at Omaha Beach as an assistant commander of the First Infantry Division. He knew that the United States needed better firepower as the Cold War flashed hot. America’s enemies around the globe were being armed by the Soviet Union with millions of rugged AK-47s that could spray bullets in automatic mode and were highly effective in guerrilla warfare. Wyman was certain that modern wars would be won not by long-range marksmen but by soldiers firing lots of bullets in close combat. They needed a rifle that used small-caliber bullets so they could carry more ammo. And he was worried that the tradition-bound gun designers at Springfield Armory weren’t innovative enough to meet the challenge. When Wyman’s superiors brushed him off, he secretly flew to Los Angeles and stunned Stoner and his team by striding into the ArmaLite office unannounced. Wyman told Stoner that he wanted ArmaLite to build a new version of the AR-10 that fired a smaller bullet.

[James Fallows: Why the AR-15 is so lethal]

Stoner and an ArmaLite draftsman named Jim Sullivan (no relation to George) set about designing the gun. It was simple, efficient, and easy to use. Early versions of the AR-15 weighed just more than five pounds unloaded, less than the hedge trimmers and handheld vacuums of the era. With all of Stoner’s innovations—lighter material, fewer parts, and the gas system, as well as an in-line stock and a pistol grip—Jim Sullivan found shooting the prototype AR-15 to be easy, even after he flipped the selector switch to automatic. “That made it so well handling,” he told us. “If you’re firing full auto, you don’t want a gun that lifts.” Sullivan found the rifle’s recoil to be minimal. As a result, follow-up shots were quick when he switched it to semiautomatic. “It looked a little far-out for that time in history,” Stoner later said in the Smithsonian interview.

As Stoner and his backers sought to persuade the military to adopt the AR-15 in place of Springfield’s rifle, they were often met with skepticism about the gun’s small bullets. During secret military hearings about the rifle in the winter of 1958, Stoner explained to a panel of generals that the AR-15 had “a better killing cartridge with a higher velocity” than the Soviet AK-47. The generals asked Stoner how a smaller bullet fired from his rifle could do so much damage. “The wound capability is extremely high,” Stoner answered. “It blows up on contact rather than drilling a nice neat hole.” A slower .30 caliber round, similar to the one used by Springfield’s wood-and-steel rifles, “will go right through flesh,” but the faster, smaller bullet from the AR-15 “will tumble and tear,” he said.

Those in the military who wanted Springfield’s rifle to prevail tried to sabotage Stoner’s gun, rigging tests and shading reports so that it would seem like it wasn’t ready for the battlefield. During official trials in Alaska, Stoner arrived to find that the aiming sights on his guns had been replaced with bits of metal that were badly misaligned, causing soldiers to miss their targets. The guileless inventor was caught up in the murky world of Pentagon intrigue.

[From June 1981: James Fallows’s ‘M-16: A Bureaucratic Horror Story’]

Eventually, through persistence and luck, and with the help of a cast of lobbyists, spies, and analytics-driven military leaders, Stoner’s rifle would be adopted. At a key moment when it seemed that the AR-15 would be killed off by military bureaucrats, the powerful, cigar-chomping Air Force General Curtis LeMay, the architect of the U.S. bombing campaign in Japan during World War II, was asked if he wanted to shoot the gun. On July 4, 1960, at a birthday party for Richard Boutelle, the onetime head of Fairchild, the gun’s backers set up ripe watermelons as targets at Boutelle’s estate in western Maryland. LeMay fired, causing a red-and-green explosion. The general marched into the Pentagon soon after and demanded that the military purchase the weapon. It would become the standard-issue rifle—renamed the M16, for the prosaic “Model 16”—just in time for the rise of U.S. involvement in Vietnam.   

A U.S. Marine holds his M16 rifle at the ready after being fired on by North Vietnamese soldiers in the jungle southwest of Da Nang on April 22, 1969. (Yvon Cornu / AP)

In Eugene Stoner’s and Jim Sullivan’s minds, their work was not just intellectually engaging but also noble, a way to help America defeat the Communists. At school, in the 1950s, the Stoner children learned what to do in the event of a Soviet nuclear attack. Sirens and bells went off regularly, and teachers ordered kids to hide under their desks and cover their heads, Stoner’s daughter Susan recalled. For her father, the task of making the best rifle for the U.S. military wasn’t burdened with moral quandaries. Many weapons inventors at the time thought about the technical challenges of their weapons first, and wrestled with the consequences of their creations only afterward. “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success,” J. Robert Oppenheimer, the lead developer of the atomic bomb, said almost a decade after bombs were dropped on Hiroshima and Nagasaki.

[From February 1949: J. Robert Oppenheimer’s ‘The Open Mind’]

After Stoner created the AR-15, he continued designing guns and artillery for a variety of gunmakers. Through a company he co-founded, he worked on antiaircraft weapons for the Shah of Iran, before the 1979 revolution scuttled the deal. He helped design a handgun for the venerable gunmaker Colt that the company tried to sell on the civilian market, without much success. But none of his creations came close to the prominence of the AR-15. By the 1990s, he’d become a superstar in the gun world. Royalties from the M16 made him wealthy; Colt, which purchased the rights to the gun from ArmaLite, sold millions of the weapons to the military. Stoner was “a Second Amendment guy,” his daughter said, but he didn’t talk much about the messy world of politics, either privately or publicly. He preferred thinking about mechanisms.

Throughout his life, Stoner was troubled by losing control over the production of his most famous gun. In the 1960s, as the U.S. ramped up production of the rifle for the war in Vietnam, a Pentagon committee made changes to the gun and its ammunition without proper testing. The results on the battlefields in Vietnam were disastrous. Stories of GIs dying with jammed M16s in their hands horrified the public and led to congressional hearings. The shy inventor was called to testify and found himself thrust into an uncomfortable spotlight. Declassified military documents that we reviewed show that Stoner tried in vain to warn Pentagon officials against the changes.

Stoner paid far less attention to the semiautomatic version of his rifle that Colt began marketing to the public in the 1960s as “a superb hunting partner.” Even after Stoner’s patent expired, in 1977, the rifle was a niche product made by a handful of companies and was despised by many traditional hunters, who tended to prefer polished wood stocks and prided themselves on felling game with a single shot. But the rifle’s status shifted after 9/11. Many Americans wanted to own the gun that soldiers were carrying in the War on Terror. When the 1994 federal assault-weapons ban expired after a decade, the AR-15 became palatable for mainstream American gunmakers to sell. Soon, it was a symbol of Second Amendment rights and survivalist chic, and gun owners rushed to buy AR-15s, fearful that the government would ban them again. By the late 2000s, the gun was enjoying astounding commercial success.

AR-15-style weapons are displayed for sale at the 2022 Rod of Iron Freedom Festival, an open-carry event to celebrate the Second Amendment, in Greeley, Pennsylvania. (Jabin Botsford / The Washington Post / Getty)

When Stoner died from cancer, in 1997, obituaries hailed him as the inventor of the long-serving military rifle; they made no mention of the civilian version of the weapon. Stoner left clues to his thoughts about the gun in a long letter, sent to a Marine general, in which he outlined his wishes for his funeral and burial at Quantico National Cemetery, in Virginia. He saw the creation of a rifle for the U.S. military as his greatest triumph. He didn’t mention the civilian version. The government had wanted a “small caliber/high velocity, lightweight, select fire rifle which engaged targets with salvos of rounds from one trigger pull,” Stoner wrote. “That is what I achieved for our servicemen.”

[Ryan Busse: The rifle that ruined America]

The inventor wouldn’t get to control how his proudest achievement would be used after his death, or the fraught, outsize role it would come to play in American society and politics. Since 2012, some of the deadliest mass shootings in the nation’s history—Sandy Hook, Las Vegas, Sutherland Springs, Uvalde—have been carried out by men armed with AR-15s. Now children practice drills to avoid being gunned down by attackers with AR-15s at their school.

The last surviving member of that ArmaLite team, the draftsman Jim Sullivan, was at times haunted by the invention’s later impact. When we visited him at his workshop in Arizona in 2019, Sullivan pulled out the original drawings for the AR-15 and smiled broadly as he described how he and Stoner had designed the gun. He picked up parts to demonstrate how it worked, explaining its functions like an excited professor. He was proud of the weapon and loved Stoner. He said that his years working at ArmaLite were the best of his life. After hours of talking about barrels, bolts, receivers, and Stoner’s gas system, he paused and looked down at the floor. He said he’d grown deeply disturbed by the violence being wrought with the invention he had helped create. He said that mass shooters wouldn’t be able to do what they do without weapons such as the AR-15.

“Every gun designer has a responsibility to …” he said, pausing before finishing his thought, “to think about what the hell they’re creating.”

This article has been adapted from Zusha Elinson and Cameron McWhirter’s book, American Gun: The True Story of the AR-15.

Judicial Ethics in a Populist Age

The Atlantic

www.theatlantic.com/ideas/archive/2023/09/supreme-court-ethics-oversight-criticism/675460

The contemporary ethical standards that many Americans want to see the Supreme Court adhere to are exactly that—contemporary. Throughout the Court’s long history, justices have had conflicts of interest that we would find unacceptable today. And in the past, people didn’t seem to mind quite so much.

In 1803, Chief Justice John Marshall, who wrote the Court’s landmark opinion in Marbury v. Madison, should have recused himself by contemporary standards. The case concerned the validity of judicial commissions that he had himself signed and sealed, and that his brother James Marshall had been charged with delivering. But Chief Justice Marshall didn’t recuse himself—and nobody objected at the time. In 1972, Chief Justice Warren Burger spoke by telephone with President Richard Nixon about cases and issues that were before, or could come before, the Court, including school busing and obscenity. The news became public in 1981, while Burger was still chief justice—and was met with a relative shrug.

Nor are potential financial conflicts anything new. The justices have long benefited from the generosity of rich friends, which until recently generated little concern. Justice William J. Brennan’s acceptance of $140,000 in gifts and forgiven debts from a wealthy businessperson in the 1990s, far from making front-page news, showed up in a tiny article on the bottom of page A9 of The New York Times. In 1995, reports that seven different justices enjoyed luxurious trips over a 13-year period, courtesy of a major legal publisher (and Supreme Court litigant), generated little interest from Congress. More recent instances when millionaires and billionaires bankrolled trips taken by Justices Antonin Scalia, Ruth Bader Ginsburg, and Stephen Breyer spurred generally mild media coverage with hardly any outrage. (Although Justice Abe Fortas’s financial entanglements with the financier Louis Wolfson ultimately caused Fortas to resign, the allegations against the justice—who had agreed to accept large cash payments from Wolfson for the rest of Justice and Mrs. Fortas’s lives in exchange for providing unspecified “services” to this subsequently convicted felon—were far more serious than any made recently.)

[Bob Bauer: The Supreme Court needs an ethics code]

The current climate is very different. Last year, critics lambasted Justice Clarence Thomas for not recusing in cases involving the January 6 attack on the Capitol after text messages from his wife, Virginia, revealed her involvement in the effort to overturn the 2020 presidential election. Then, in April, ProPublica reported on the relationship between Justice Thomas and the real-estate tycoon Harlan Crow, which was followed by more reports of other financial entanglements between justices and wealthy benefactors. These reports stoked public anger; politicians of both parties, newspaper editorial boards, and numerous commentators called for a formal code of ethics at the Supreme Court, possibly including limits on the gifts the justices can accept and more robust disclosure requirements.

So the question is not why today’s Court has so many potential conflicts and controversies, some of them problematic (the Ginni Thomas texts), some of them less so (Venmogate). The question is why they have generated so much attention and outrage compared with decades past.

Part of this is undoubtedly partisan opportunism, with critics on the Court’s left and right seeking additional reason to delegitimize the decisions of their disfavored justices, amplified through a hyper-politicized media environment. But a more fundamental, albeit interrelated, reason is at play: the rise in recent years of a strong anti-elitism in American politics, what David Brooks has dubbed a “distrustful populism.”

One principal feature of this form of populism is a rejection of an earlier narrative that the powerful attained their posts because of “merit.” Instead, on both the left and right, an increasing suspicion has emerged that meritocracy is toxic, a system that rewards power and privilege with yet more power and privilege.

Attitudes toward Supreme Court justices reflect this shift. Back when Justices Clarence Thomas and Sonia Sotomayor were nominated, their paths—from childhood poverty to Ivy League law schools to the highest court in the land—were celebrated as American success stories. But these days, when commentators note that eight of the nine justices graduated from Harvard or Yale Law School, it’s almost always the subject of complaint rather than acclaim.

This anti-elitist turn extends even to the hiring of the justices’ law clerks. Earlier this year, when a study found that going to an elite college greatly enhanced one’s chances of landing a Supreme Court clerkship, an author of the study complained that it reflected “some of the worst pathologies in American society.” When it became public in July 2021 that Justice Elena Kagan had offered a clerkship to Jessica Garland, the daughter of former D.C. Circuit Chief Judge and current Attorney General Merrick Garland, the news was condemned as a glaring example of “nepotism” and “another justice not caring about conflicts of interest.” (Jessica Garland’s clerkship has been postponed until her father is no longer attorney general.)

Which is not to say that all distrust and calling out of elites is a bad thing; much of it represents a belated and worthy recognition of deep unfairness in many parts of American society. But recognizing the relative recency of such concerns should also affect the approach to ethics reforms for the Court.

[Glenn Fine: The Supreme Court needs real oversight]

First, although greater scrutiny of the justices is salutary, blaming them for conduct based on standards developed after the actions at issue may be counterproductive. Hyperbolic condemnation of the justices, including calls for impeachment, has the potential to backfire. It makes the justices more defensive—as reflected in a recent Wall Street Journal interview of Justice Samuel Alito, in which he asserted that “no provision in the Constitution gives [Congress] the authority to regulate the Supreme Court”—and less likely to voluntarily adopt an ethics code. And given questions surrounding Congress’s ability to impose ethics requirements on the Court, both constitutionally (because of separation of powers) and politically (because of Republican opposition), getting the justices to adopt a code on their own is still the most likely path to reform.

Second, as history has made clear, as long as the justices are real people underneath their robes, they will have potential conflicts of interest. The justices are human (and Americans want them that way—research shows that Americans trust human judges more than artificial-intelligence judges). The justices will have friends—who might be inclined to entertain or help them, as friends do. The justices will have spouses—who might have lucrative careers and outside clients. The justices will have human desires—perhaps for the finer things of life, perhaps for fame.

Given this, strengthening disclosure requirements—and imposing real consequences for violations, such as serious financial penalties—may be more productive than trying to police the friendships of the justices or the gifts they can receive. An ethics regime that gives the justices broad leeway in their own and their spouses’ outside relationships, tied to greater disclosure of those relationships, could be a reasonable compromise acceptable to both Congress and the Court.

Despite its issues past and present, the federal judiciary is one of the world’s best in terms of independence and integrity. We know this firsthand, having clerked for three federal judges between the two of us and having appeared as lawyers before many more. We have also followed and written about the Supreme Court for years, for both scholarly and general-interest publications (separately and together, as a married couple).

Yes, the Supreme Court should adopt an ethics code, at the very least to convey to the public that it is, in Kagan’s words, “adhering to the highest standards of conduct.” But Americans should also proceed with caution and humility when advocating for what such a code should contain, tempering today’s populist sympathies with an understanding of history and a recognition that if the public wants justices to be humans, not Platonic Guardians or AI creations, it must accept the burdens as well as the benefits of that bargain.