
Outdoor Dining Is Doomed

The Atlantic

www.theatlantic.com/health/archive/2023/01/restaurants-outdoor-dining-winter-covid/672904

These days, strolling through downtown New York City, where I live, is like picking your way through the aftermath of a party. In many ways, it is exactly that: The limp string lights, trash-strewn puddles, and splintering plywood are all relics of the raucous celebration known as outdoor dining.

These wooden “streeteries” and the makeshift tables lining sidewalks first popped up during the depths of the coronavirus pandemic in 2020, when restaurants needed to get diners back in their seats. It was novel, creative, spontaneous—and fun during a time when there wasn’t much fun to be had. For a while, outdoor dining really seemed as though it could outlast the pandemic. Just last October, New York Magazine wrote that it would stick around, “probably permanently.”

But now someone has switched on the lights and cut the music. Across the country, something about outdoor dining has changed in recent months. With fears about COVID subsiding, people are losing their appetite for eating among the elements. This winter, many streeteries are empty, save for the few COVID-cautious holdouts willing to put up with the cold. Hannah Cutting-Jones, the director of food studies at the University of Oregon, told me that, in Eugene, where she lives, outdoor dining is “absolutely not happening” right now. In recent weeks, cities such as New York and Philadelphia have started tearing down unused streeteries. Outdoor dining’s sheen of novelty has faded; what once evoked the grands boulevards of Paris has turned out to be a janky table next to a parked car. Even a pandemic, it turns out, couldn’t overcome the reasons Americans never liked eating outdoors in the first place.

For a while, the allure of outdoor dining was clear. COVID safety aside, it kept struggling restaurants afloat, boosted some low-income communities, and cultivated joie de vivre in bleak times. At one point, more than 12,700 New York restaurants had taken to the streets, and the city—along with others, including Boston, Los Angeles, Chicago, and Philadelphia—proposed making dining sheds permanent. But so far, few cities have actually adopted any official rules. At this point, whether they ever will is unclear. Without official sanction, mounting pressure from outdoor-dining opponents will likely lead to the destruction of existing sheds; already, people keep tweeting disapproving photos at sanitation departments. Part of the issue is that as most Americans’ COVID concerns retreat, the potential downsides have gotten harder to overlook: less parking, more trash, tacky aesthetics, and, oh God, the rats. Many top New York restaurants have voluntarily gotten rid of their sheds this winter.

The economics of outdoor dining may no longer make sense for restaurants, either. Although it was lauded as a boon to struggling restaurants during the height of the pandemic, the practice may make less sense now that indoor dining is back. For one thing, dining sheds tend to take up parking spaces needed to attract customers, Cutting-Jones said. The fact that most restaurants are chains doesn’t help: “If whatever conglomerate owns Longhorn Steakhouse doesn’t want to invest in outdoor dining, it will not become the norm,” Rebecca Spang, a food historian at Indiana University Bloomington, told me. Besides, she added, many restaurants are already short-staffed, even without the extra seats.

In a sense, outdoor dining was doomed to fail. It always ran counter to the physical makeup of most of the country, as anyone who ate outside during the pandemic inevitably noticed. The most obvious constraint is the weather, which is sometimes pleasant but is more often not. “Who wants to eat on the sidewalk in Phoenix in July?” Spang said.

The other is the uncomfortable proximity to vehicles. Dining sheds spilled into the streets like patrons after too many drinks. The problem was that U.S. roads were built for cars, not people. This tends not to be true in places renowned for outdoor dining, such as Europe, the Middle East, and Southeast Asia, which urbanized before cars, Megan Elias, a historian and the director of the gastronomy program at Boston University, told me. At best, this means that outdoor meals in America are typically enjoyed with a side of traffic. At worst, they end in dangerous collisions.

Cars and bad weather were easier to put up with when eating indoors seemed like a more serious health hazard than breathing in fumes and trembling with cold. It had a certain romance—camaraderie born of discomfort. You have to admit, there was a time when cozying up under a heat lamp with a hot drink was downright charming. But now outdoor dining has gone back to what it always was: something that most Americans would like to avoid in all but the most ideal of conditions. This sort of relapse could lead to fewer opportunities to eat outdoors even when the weather does cooperate.

But outdoor dining is also affected by more existential issues that have persisted through nearly three years of COVID life. Eating at restaurants is expensive, and Americans like to get their money’s worth. When safety isn’t a concern, shelling out for a streetside meal may simply not seem worthwhile for most diners. “There’s got to be a point to being outdoors, either because the climate is so beautiful or there’s a view,” Paul Freedman, a Yale history professor specializing in cuisine, told me. For some diners, outdoor seating may feel too casual: Historically, Americans associated eating at restaurants with special occasions, like celebrating a milestone at Delmonico’s, the legendary fine-dining establishment that opened in the 1800s, Cutting-Jones said.

Eating outdoors, in contrast, was linked to more casual experiences, like having a hot dog at Coney Island. “We have high expectations for what dining out should be like,” she said, noting that American diners are especially fussy about comfort. Even the most opulent COVID cabin may be unable to override these associations. “If the restaurant is going to be fancy and charge $200 a person,” said Freedman, most people can’t escape the feeling of having spent that much for “a picnic on the street.”

Outdoor dining isn’t disappearing entirely. In the coming years, there’s a good chance that more Americans will have the opportunity to eat outside in the nicer months than they did before the pandemic—even if it never becomes the widespread practice many once anticipated. Where it continues, it will almost certainly be different: more buttoned-up, less lawless—probably less exciting. Santa Barbara, for example, made dining sheds permanent last year but specified that they must be painted an approved “iron color.” It may also be less popular among restaurant owners: If outdoor-dining regulations are too far-reaching or costly, cautioned Hayrettin Günç, an architect with Global Designing Cities Initiative, that will “create barriers for businesses.”

For now, outdoor dining is yet another COVID-related convention that hasn’t quite stuck—like avoiding handshakes and universal remote work. As the pandemic subsides, the tendency is to default to the ways things used to be. Doing so is easier, certainly, than coming up with policies to accommodate new habits. In the case of outdoor dining, it’s most comfortable, too. If this continues to be the case, then outdoor dining in the U.S. may return to what it was before the pandemic: dining “al fresco” along the streetlamp-lined terraces of the Venetian Las Vegas, and beneath the verdant canopy of the Rainforest Cafe.

Did George Washington Burn New York?

The Atlantic

www.theatlantic.com/magazine/archive/2023/03/george-washington-burn-new-york-great-fire-1776/672780


On July 9, 1776, General George Washington amassed his soldiers in New York City. They would soon face one of the largest amphibious invasions yet seen. If the British took the city, they’d secure a strategic harbor on the Atlantic Coast from which they could disrupt the rebels’ seaborne trade. Washington thus judged New York “a Post of infinite importance” and believed the coming days could “determine the fate of America.” To prepare, he wanted his men to hear the just-issued Declaration of Independence read aloud. This, he hoped, might “serve as a fresh incentive.”

But stirring principles weren’t enough. By the end of August, the British had routed Washington’s forces on Long Island and were preparing to storm Manhattan. The outlook was “truly distressing,” he confessed. Unable to hold the city—unable even to beat back disorder and desertion among his own dispirited men—Washington abandoned it. One of his officers ruefully wished that the retreat could be “blotted out of the annals of America.”

As if to underscore the loss, a terrible fire broke out a little past midnight, five days after the redcoats took New York on September 15. It consumed somewhere between a sixth and a third of the city, leaving about a fifth of its residents homeless. The conflagration could be seen from New Haven, 70 miles away.

New York’s double tragedy—first invaded, then incinerated—meant a stumbling start for the new republic. Yet Washington wasn’t wholly displeased. “Had I been left to the dictates of my own judgment,” he confided to his cousin, “New York should have been laid in Ashes before I quitted it.” Indeed, he’d sought permission to burn it. But Congress refused, which Washington regarded as a grievous error. Happily, he noted, God or “some good honest Fellow” had torched the city anyway, spoiling the redcoats’ valuable war prize.

For more than 15 years, the historian Benjamin L. Carp of Brooklyn College has wondered who that “honest fellow” might have been. Now, in The Great New York Fire of 1776: A Lost Story of the American Revolution, he cogently lays out his findings. Revolutionaries almost certainly set New York aflame intentionally, Carp argues, and they quite possibly acted on instructions. Sifting through the evidence, he asks a disturbing question: Did George Washington order New York to be burned to the ground?

The idea of Washington as an arsonist may seem far-fetched. Popular histories of the American Revolution treat the “glorious cause” as different from other revolutions. Whereas the French, Haitian, Russian, and Chinese revolutions involved mass violence against civilians, this one—the story goes—was fought with restraint and honor.

But a revolution is not a dinner party, as Mao Zedong observed. Alongside the parade-ground battles ran a “grim civil war,” the historian Alan Taylor writes, in which “a plundered farm was a more common experience than a glorious and victorious charge.” Yankees harassed, tortured, and summarily executed the enemies of their cause. The term lynch appears to have entered the language from Colonel Charles Lynch of Virginia, who served rough justice to Loyalists.

Burning towns was, of course, a more serious transgression. “It is a Method of conducting War long since become disreputable among civilized Nations,” John Adams wrote. The Dutch jurist Hugo Grotius, whose writings influenced European warfare, forbade killing women and children, and judged unnecessary violence in seizing towns to be “totally repugnant to every principle of Christianity and justice.”

Still, in the thick of war, the torch was hard to resist, and in North America, it was nearly impossible. Although Britain, facing a timber famine, had long since replaced its wooden buildings with brick and stone ones, the new United States was awash in wood. Its immense forests were, to British visitors, astonishing. And its ramshackle wooden towns were tinderboxes, needing only sparks to ignite.

On the eve of the Revolution, the rebel Joseph Warren gave a speech in a Boston church condemning the British military. Vexed British officers cried out “Oh! fie! Oh! fie!” That sounded enough like “fire” to send the crowd of 5,000 sprinting for the doors, leaping out windows, and fleeing down the streets. They knew all too well how combustible their city was.

The British knew it too, which raised the tantalizing possibility of quashing the rebellion by burning rebel towns. Although some officers considered such tactics criminal, others didn’t share their compunctions. At the 1775 Battle of Bunker Hill, they burned Charlestown, outside Boston, so thoroughly that “scarcely one stone remaineth upon another,” Abigail Adams wrote. The Royal Navy then set fire to more than 400 buildings in Portland, Maine (known then as Falmouth). On the first day of 1776, it set fires in Norfolk, Virginia; the city burned for three days and lost nearly 900 buildings.

Thomas Paine’s Common Sense appeared just days after Norfolk’s immolation. In it, Paine noted the “precariousness with which all American property is possessed” and railed against Britain’s reckless use of fire. As Paine appreciated, torched towns made the case for revolution pointedly. “A few more of such flaming Arguments as were exhibited at Falmouth and Norfolk” and that case would be undeniable, Washington agreed. The Declaration of Independence condemned the King for having “burnt our towns.”

In Norfolk, however, the King had help. After the British lit the fires, rebel Virginia soldiers kept them going, first targeting Loyalist homes but ultimately kindling a general inferno. “Keep up the Jigg,” they cried as the buildings burned. From a certain angle, this made sense: The fire would deny the Royal Navy a port, and the British would take the blame. In early February a revolutionary commander, Colonel Robert Howe, finished the job by burning 416 remaining structures. The city is “entirely destroyed,” he wrote privately. “Thank God for that.”

A year later, the Virginia legislature commissioned an investigation, which found that “very few of the houses were destroyed by the enemy”—only 19 in the New Year’s Day fire—whereas the rebels, including Howe, had burned more than 1,000. That investigation’s report went unpublished for six decades, though, and even then, in 1836, it was tucked quietly into the appendix of a legislative journal. Historians didn’t understand who torched Norfolk until the 20th century.

This was presumably by design: The Revolution required seeing the British as incendiaries and the colonists as their victims. Washington hoped that Norfolk’s ashes would “unite the whole Country in one indissoluble Band.”

Carp believes that what happened in Norfolk happened in New York. But how to square that with Washington’s renowned sense of propriety? The general detested marauding indiscipline among his men. Toward enemy prisoners, he advocated “Gentleness even to Forbearance,” in line with the “Duties of Humanity & Kindness.” And he deemed British-set fires “Savage Cruelties” perpetrated “in Contempt of every Principle of Humanity.” Is it thinkable that he disobeyed orders and set a city full of civilians aflame?

It becomes more thinkable if you look at another side of the war, Carp notes. In popular memory, the Revolutionary War was between colonists and redcoats, with some French and Hessians pitching in. But this version leaves out the many Native nations that also fought, mostly alongside the British. The Declaration of Independence, after charging the King with arson, indicted him for unleashing “merciless Indian Savages, whose known rule of warfare is an undistinguished destruction of all ages, sexes and conditions.”

[From the May 2022 issue: Daniel Immerwahr reviews a new history of World War II]

This accusation—that Indigenous people fought unfairly—haunted discussions of war tactics. Redcoat attacks on American towns fed the revolutionary spirit precisely because they delegitimized the British empire, whose methods, John Adams wrote, were “more abominable than those which are practiced by the Savage Indians.”

Perhaps, but Adams’s compatriots, at least when fighting Indians, weren’t exactly paragons of enlightened warfare. A month after the Declaration of Independence complained about burned towns and merciless savages, the revolutionaries launched a 5,500-man incendiary expedition against the British-allied Cherokees, targeting not warriors but homes and food. “I have now burnt down every town and destroyed all the corn,” one commander reported.

This was, to that point, the “largest military operation ever conducted in the Lower South,” according to the historian John Grenier. Yet it’s easily overshadowed in popular accounts by more famous encounters. The Pulitzer Prize–winning writer Rick Atkinson, in his painstakingly detailed, 800-page military history of the war’s first two years, The British Are Coming, spends just a paragraph on it. The Cherokee campaign was, Atkinson writes, a mere “postscript” to Britain’s short and unsuccessful siege of Charleston (even though, by Atkinson’s own numbers, it killed roughly 10 times as many as the Charleston siege did).

But the Cherokee campaign was important, not only for what it did to the Cherokees but for what it revealed about the revolutionaries. Washington brandished it as proof of how far his men were willing to go. The Cherokees had been “foolish” to support the British, he wrote to the Wolastoqiyik and Passamaquoddy peoples, and the result was that “our Warriors went into their Country, burnt their Houses, destroyed their corn and obliged them to sue for peace.” Other tribes should take heed, Washington warned, and “never let the King’s wicked Counselors turn your hearts against me.”

Indigenous people did turn their hearts against him, however, and the fighting that followed scorched the frontier. In one of the war’s most consequential campaigns, Washington ordered General John Sullivan in 1779 to “lay waste all the settlements” of the British-aligned Haudenosaunees in New York, ensuring that their lands were “not merely overrun but destroyed.” Sullivan complied. “Forty of their towns have been reduced to ashes—some of them large and commodious,” Washington observed. He commended Sullivan’s troops for a “perseverance and valor that do them the highest honor.”

It’s hard, looking from Indian Country, to see Washington—or any of the revolutionaries—as particularly restrained. In the 1750s, the Senecas had given him the name “Conotocarious,” meaning “town taker” or “town destroyer,” after the title they’d bestowed on his Indian-fighting great-grandfather. Washington had occasionally signed his name “Conotocarious” as a young man, but he fully earned it destroying towns during the Revolutionary War. “To this day,” the Seneca chief Cornplanter told him in 1790, “when that name is heard, our women look behind them and turn pale, and our children cling close to the neck of their mothers.”

Carp acknowledges but doesn’t linger over what the revolutionaries did on the frontier. As he shows, there’s enough evidence from Manhattan itself to conclude that the New York conflagration was intentional.

To start, this was perhaps the least surprising fire in American history. Rumors swirled through the streets that it would happen, and Washington’s generals talked openly of the possibility. The president pro tempore of New York’s legislature obligingly informed Washington that his colleagues would “chearfully submit to the fatal Necessity” of destroying New York if required. The fire chief buried his valuables in anticipation.

When the expected fire broke out, it seemed to do so everywhere simultaneously. Those watching from afar “saw the fire ignite in three, four, five, or six places at once,” Carp notes. He includes a map showing 15 distinct “ignition points,” where observers saw fires start or found suspicious caches of combustibles. The fire could have begun in just one place and spread by wind-borne embers, but to those on the scene it appeared to be the work of many hands.

As the fire raged, witnesses saw rebels carrying torches, transporting combustibles, and cutting the handles of fire buckets. Some offenders allegedly confessed on the spot. But, as often happens with arson, the evidence vanished in the smoke. The British summarily executed some suspects during the fire, others fled, and those taken into custody all denied involvement.

Months elapsed before the British secured their first major confession. They caught a Yankee spy, Abraham Patten, who’d been plotting to torch British-held New Brunswick. On the gallows, Patten confessed, not only to the New Brunswick scheme but also to having been a principal in the conspiracy to burn New York. “I die for liberty,” he declared, “and do it gladly, because my cause is just.”

[Amy Zegart: George Washington was a master of deception]

After Patten’s execution, Washington wrote to John Hancock, the president of the Continental Congress. Patten had “conducted himself with great fidelity to our cause rendering Services,” Washington felt, and his family “well deserves” compensation. But, Washington added, considering the nature of Patten’s work, a “private donation” would be preferable to a “public act of generosity.” He’d made a similar suggestion when proposing burning New York. Washington had clarified that, if Congress agreed to pursue arson, its assent should be kept a “profound secret.”

It’s possible, given Carp’s circumstantial evidence, that New York radicals conspired to incinerate the city without telling the rebel command. Or perhaps Washington knew they would and feigned ignorance. Yet, for Carp, Patten’s confession and Washington’s insistence on paying Patten’s widow under the table amount to “a compelling suggestion that Washington and Congress secretly endorsed the burning of New York.”

Whoever burned the city, the act set the tone for what followed. As the war progressed, the British incinerated towns around New York and in the southern countryside. The rebels, for their part, fought fire with fire—or tried to. In 1778, Commodore John Paul Jones attacked an English port hoping to set it aflame, but he managed to burn only a single ship. Other attempts to send incendiaries to Great Britain were similarly ineffectual. British cities were too fireproof and too far for the revolutionaries to reach with their torches.

Vengeful Yankees had to settle for targets closer at hand: Native towns. In theory they were attacking Britain’s allies, but lines blurred. Pennsylvania militiamen searching for hostile Lenapes in 1782 instead fell on a village of pacifist Christian Indians, slaughtering 96 and burning it to the ground. If against the British the war was fought at least ostensibly by conventional means, against Indigenous people it was “total war,” the historian Colin G. Calloway has written.

That war continued well past the peace treaty signed in Paris—with no American Indians present—on September 3, 1783. Andrew Jackson’s arson-heavy campaigns against Native adversaries helped propel him to the presidency. Burning Indigenous lands was also key to William Henry Harrison’s election, in 1840. He won the White House on the slogan “Tippecanoe and Tyler Too”: Tyler was his running mate; “Tippecanoe” referred to the time in 1811 when Harrison’s troops had attacked an Indigenous confederacy and incinerated its capital.

Native Americans deserved such treatment, settlers insisted, because they always fought mercilessly, whereas white Americans did so only when provoked. Crucial to this understanding was a vision of the Revolution as a decorous affair, with Washington, venerated for his rectitude and restraint, at its head.

The legend of the pristine Revolution, however, is hard to sustain. The rebels lived in a combustible land, and they burned it readily, torching towns and targeting civilians. Like all revolutions, theirs rested on big ideas and bold deeds. But, like all revolutions, it also rested on furtive acts—and a thick bed of ashes.

This article appears in the March 2023 print edition with the headline “Did George Washington Burn New York?”

Why Police Officers Almost Never Get Punished

The Atlantic

www.theatlantic.com/ideas/archive/2023/01/police-misconduct-consequences-qualified-immunity/672899

On the afternoon of February 8, 2018, more than two dozen law-enforcement officers crowded into a conference room in the Henry County Sheriff’s Office, on the outskirts of Atlanta. They were preparing to execute a no-knock warrant at 305 English Road, the home of a suspected drug dealer who had been under investigation for almost two years. The special agent leading the briefing told the team that 305 English Road was a small house with off-white siding and several broken-down cars out front, showed them an aerial photograph of the house, and gave them turn-by-turn directions to get there.

When the officers arrived at their destination, the house described in the warrant—305 English Road, run-down, off-white, with cars strewn across the yard—was right in front of them. But they walked past it to a different house, a tidy yellow one, 40 yards away. The house at 303 English Road looked nothing like the house described in the briefing and in the warrant. Yet, less than a minute after getting out of their cars, the officers set off flash grenades and used battering rams to smash open all three doors of the home.

Inside, they found Onree Norris, a 78-year-old Black man who had lived there for more than 50 years, raising his three children while he worked at a nearby rock quarry. Norris was no drug dealer. He had never been in any trouble with the law; he’d never even received a traffic ticket.

Onree Norris was watching the evening news in an armchair in his bedroom when he heard a thunderous sound, as if a bomb had gone off in his house. He got up to see what the commotion was and found a crowd of men in military gear in his hallway. Norris was more than twice as old as the target of the search warrant, but the officers pointed assault rifles at him anyway and yelled at him to raise his hands and get on the ground. When Norris told the officers that his knees were in bad shape, an officer grabbed Norris, pushed him down, and twisted his arm behind his back. Norris’s chest hurt, and he had trouble breathing. He told the officers that he had a heart condition—he’d had bypass surgery and had a pacemaker put in—but they kept him on the ground for several minutes. Norris was eventually led outside in handcuffs. When the officers realized they had blasted their way into the wrong house, they turned their cameras off one by one.

Whatever one believes about the job of policing—whether it’s that well-intentioned officers often must make split-second decisions that are easy to criticize in hindsight, or that the profession is inherently corrupt—there is no doubt that police officers sometimes egregiously abuse their authority. The videos that have filled our screens in recent years—most recently the surveillance footage of officers in Memphis fatally beating Tyre Nichols—offer horrifying evidence of this reality.

People who have lost loved ones or have themselves been harmed by the police often say that they want the officers involved to be punished, and they want assurance that something similar won’t happen in the future. Yet justice for victims of police misconduct is extremely difficult to achieve.

What happened in Memphis last week—the swift firing and arrest of the five officers who beat Nichols, and the murder charges they face—is highly unusual, a result of immediate public attention to an inconceivably barbaric attack. Although officers can be criminally prosecuted and sent to prison, they seldom are: Police are charged in less than 2 percent of fatal shootings and convicted in less than a third of those cases. Police departments rarely discipline or fire their officers.

Typically, victims’ only recourse is a civil lawsuit seeking money or court-ordered reforms. In 1961, the Supreme Court ruled that people could sue officers who violated their constitutional rights under a federal statute enacted 90 years earlier, during the bloody years of Reconstruction. That statute, known then as the Ku Klux Klan Act and referred to as Section 1983 today, was meant to provide a remedy to Black people across the South who were being tortured and killed by white supremacists while local law enforcement either participated in the violence or stood idly by.

After that 1961 decision, the number of police-misconduct suits filed shot up. But so did concerns about the suits’ potentially ruinous effects. Settlements and judgments would bankrupt officers and cities; no one in their right mind would agree to become a police officer; the very fabric of our society would unravel. These claims were exaggerated, if not simply false. But they have nevertheless been relied upon by courts, legislatures, and government officials over the past 60 years to justify the creation of multiple overlapping protections for officers and police departments that regularly deny justice to people whose rights have been violated.

The best-known of these protections is “qualified immunity.” When the Supreme Court created qualified immunity, in 1967, it was meant to shield officers from liability only if they were acting in “good faith” when they violated the Constitution. Yet the Court has repeatedly strengthened the doctrine. In 1982, the Court ruled that requiring officers to prove good faith was too much of a burden. Instead, they would be entitled to qualified immunity so long as they did not violate “clearly established law.” Over the years, what constitutes “clearly established law” has constricted. The Roberts Court, invoking the importance of qualified immunity to “society as a whole,” has emphasized that the law is “clearly established” only if a court has previously found nearly identical conduct to be unconstitutional. What began as a protection for officers acting in good faith has turned into a protection for officers with the good fortune to have violated the Constitution in a novel way.

It was qualified immunity that dashed Onree Norris’s hopes of getting justice. In 2018, Norris sued the officers who had raided his home, seeking money to compensate him for his physical and emotional injuries. But in 2020, a federal judge in the Northern District of Georgia granted the officers qualified immunity and dismissed the case; in 2021, a panel of three judges on the Eleventh Circuit Court of Appeals affirmed the ruling.

The three appeals judges recognized that officers who execute a search warrant on the wrong home violate the Fourth Amendment to the U.S. Constitution unless they have made “a reasonable effort to ascertain and identify the place intended to be searched.” In fact, the very same court of appeals that heard Norris’s case in 2021 had ruled five years earlier that it was unconstitutional for an officer who executed a warrant on the wrong house to detain its residents at gunpoint—almost exactly what had happened to Norris. But that earlier court decision was not enough to defeat qualified immunity in Norris’s case, because it was “unpublished”—meaning that it was available online but had not been selected to be printed in the books of decisions that are issued each year—and the Eleventh Circuit is of the view that such unpublished decisions cannot “clearly establish” the law.

Just as George Floyd’s murder has come to represent all that is wrong with police violence and overreach, qualified immunity has come to represent all that is wrong with our system of police accountability. But, over the past 60 years, the Supreme Court has created multiple other barriers to holding police to account.

Take, for example, the standard that a plaintiff must meet to file a complaint. For decades, a complaint needed to include only a “short and plain” statement of the facts and why those facts entitled the plaintiff to relief. But in 2007, the Supreme Court did an about-face, requiring that plaintiffs include enough factual detail in their initial complaints to establish a “plausible” entitlement to relief.

This standard does not always pose a problem: Norris and his lawyer knew enough about what had happened during the raid of his home to write a detailed complaint. But sometimes a person whose rights have been violated doesn’t know the crucial details of their case.

Vicki Timpa searched for months for information about how her son, Tony, had died while handcuffed in Dallas police officers’ custody in August 2016. Department officials had body-camera videos that captured Tony’s last moments, but they refused to tell Timpa what had happened to her son or the names of the officers who were on the scene when he died. Timpa sued the city, but the case was dismissed because her complaint did not include enough factual detail about those last moments to establish a “plausible” claim.

When the Court set out the “plausibility” standard, it explained that, if filing a case were too easy, plaintiffs with “a largely groundless claim” could “take up the time” of defendants, and expensive discovery could “push cost-conscious defendants to settle even anemic cases.” But this rule puts people like Timpa in a bind: They are allowed discovery only if their complaints include evidence supporting their claims, but they can’t access that evidence without the tools of discovery.

(Timpa did eventually get the information she sought after she filed a public-records request and sued the city for not complying with it. Only with that information in hand could she defeat the motion to dismiss. But then her case was dismissed on qualified-immunity grounds because she could not point to a prior case with similar facts. That decision was overturned on appeal in December 2021, and Timpa’s case is set to go to trial in March, almost seven years after Tony was killed.)

The Supreme Court has also interpreted the Constitution in ways that deny relief to victims of police violence and overreach. The Fourth Amendment protects against “unreasonable searches and seizures.” But in a series of decisions beginning in the 1960s, the Court has interpreted the “reasonableness” standard in a manner so deferential to police that officers can stop, arrest, search, beat, shoot, or kill people who have done nothing wrong without violating their rights.

On a July night in 2016, David Collie was walking down the street in Fort Worth, Texas, headed to a friend’s house, when two officers jumped out of their patrol car and yelled for Collie to raise his hands. The officers were on the lookout for two Black men who had robbed someone at a gas station. Collie was at least 10 years older, six inches shorter, and 30 pounds lighter than the smaller of the two robbery suspects. But he, like the suspects, was Black and was not wearing a shirt on that warm summer evening. Collie raised his hands. Just seconds later, and while standing more than 30 feet away, one of the officers shot Collie in the back. The hollow-point bullet entered Collie’s lung and punctured his spine. He survived, but was left paralyzed from the waist down.

When Collie sued, his case was dismissed by a district-court judge in Texas, and the decision was affirmed on appeal. The Fifth Circuit Court of Appeals called the case “tragic,” and a prime example of “an individual’s being in the wrong place at the wrong time,” but concluded that the officer had not violated Collie’s Fourth Amendment rights, because he reasonably—though mistakenly—thought he had seen a gun in Collie’s raised hand.

The Supreme Court has undermined the power and potential of civil-rights lawsuits in other ways: It has limited, for example, plaintiffs’ ability to sue local governments for their officers’ conduct and to win court orders requiring that departments change their behavior. Any one of the barriers, in isolation, would limit the power of civil-rights suits. In combination, they have made the police all but untouchable.

Even when people are able to secure a settlement or verdict to compensate them for their losses, police officers and departments rarely suffer any consequences for their wrongdoing.

The Supreme Court has long assumed that officers personally pay settlements and judgments entered against them. That is one of the justifications for qualified immunity. But officers’ bank accounts are protected by a wholly separate set of state laws and local policies requiring or allowing most governments to indemnify their officers when they are sued (meaning that they must pay for the officers’ defense and any award against them). As a result, vanishingly few police officers pay a penny in these cases.

Police departments typically don’t feel the financial sting of settlements or judgments either. Instead, the money is taken from local-government funds. And when money is tight, it tends to get pulled from the crevices of budgets earmarked for the least powerful: the marginalized people whose objections will carry the least political weight—the same people disproportionately likely to be abused by police.

Officers and officials could still learn from lawsuits, even without paying for them. But most make little effort to do so when a lawsuit doesn’t inspire front-page news or meetings with an angry mayor. Instead, government attorneys defend the officers in court, any settlement or judgment is paid out of the government’s budget or by the government’s insurer, and the law-enforcement agency moves on. In many cases, it does not even track the names of the officers, the alleged claims, the evidence revealed, the eventual resolution, or the amount paid.

Fundamental questions remain about what we should empower the police to do, and how to restore trust between law enforcement and the communities it serves. But no matter how governments ultimately answer these questions, they will almost certainly continue to authorize people to protect public safety. And some of those people will almost certainly abuse that authority. We need to get our system of governmental accountability working better than it does, no matter what our system of public safety looks like.

The fact that so many barriers to justice exist means that there is something for officials at every level of government to do.

The Supreme Court should reconsider its standards for qualified immunity, pleading rules, the Fourth Amendment, and municipal liability. But this seems unlikely, because a majority of the justices have demonstrated a durable hostility to plaintiffs in civil-rights cases.

Congress could remove many of the obstacles the Supreme Court has devised. And at least some members of Congress have shown an appetite for doing so. A bill to end qualified immunity, among other reforms, was passed in the House soon after the murder of George Floyd. But following 15 months of negotiations in the Senate, the George Floyd Justice in Policing Act was abandoned. Republican Senator Tim Scott described the bill’s provision ending qualified immunity as a “poison pill” for Republican lawmakers.  

In the face of intransigence at the federal level, states have stepped in. Since May 2020, lawmakers in more than half of the states have proposed bills that would effectively do away with qualified immunity; these bills would allow people to bypass Section 1983 claims altogether and, instead, bring state-law claims for constitutional violations where qualified immunity could not be raised as a defense. State legislatures have additionally proposed bills that would limit police officers’ power to use force—prohibiting choke holds and no-knock warrants.

A bill enacted by Colorado in June 2020 is, in many ways, the gold standard. It allows people to sue law-enforcement officers for violations of the state constitution and prohibits officers from raising qualified immunity as a defense. The law also requires local governments to indemnify their officers unless they have been convicted of a crime, but allows cities to make officers contribute up to $25,000 or 5 percent of a settlement or judgment if the city concludes that the officer acted in bad faith. And the law bans officers from using choke holds, creating a bright-line limit on police power. Similar bills have passed in New Mexico and New York City, and are on the legislative agenda in other states. But other police-reform bills have failed in California, Washington, Virginia, and elsewhere.

I’ve testified in legislative hearings for bills in several states, and each has been frustratingly familiar. The people speaking against the bills threaten that if police officers cannot raise qualified immunity as a defense, they will be bankrupted for reasonable mistakes, and frivolous lawsuits will flood the courts. These assertions are just not true. Nevertheless, they have led lawmakers to vote against legislation that would take tentative but important steps toward a better system. Their inaction has left us with a world in which Onree Norris could receive nothing more than a few repairs to his doors after officers busted into his home and forced him to the floor; a world in which the Dallas Police Department could hide information about Tony Timpa’s death and then argue that his mother’s complaint should be dismissed because she did not have that information; a world in which David Collie could be shot and paralyzed from the waist down by a police officer, and require medical care for those injuries for the remainder of his life, but receive nothing, because the officer mistakenly thought Collie had a gun.

We need to stop being scared of unfounded claims about the dangers of too much justice, and start worrying about the people who have their lives shattered by the police—and then again by the courts.

This essay was adapted from the forthcoming Shielded: How the Police Became Untouchable.