Whatever Happened to Carpal Tunnel Syndrome?

The Atlantic

Diana Henriques was first stricken in late 1996. A business reporter for The New York Times, she was in the midst of a punishing effort to bring a reporting project to fruition. Then one morning she awoke to find herself incapable of pinching her contact lens between her thumb and forefinger.

Henriques’s hands were soon cursed with numbness, frailty, and a gnawing ache she found similar to menstrual cramps. These maladies destroyed her ability to type—the lifeblood of her profession—without experiencing debilitating pain.

“It was terrifying,” she recalls.

Henriques would join the legions of Americans considered to have a repetitive strain injury (RSI), which from the late 1980s through the 1990s seized the popular imagination as the plague of the modern American workplace. Characterized at the time as a source of sudden, widespread suffering and disability, the RSI crisis reportedly began in slaughterhouses, auto plants, and other venues for repetitive manual labor, before spreading to work environments where people hammered keyboards and clicked computer mice. Pain in the shoulders, neck, arms, and hands, office drones would learn, was the collateral damage of the desktop-computer revolution. As Representative Tom Lantos of California put it at a congressional hearing in 1989, these were symptoms of what could be “the industrial disease of the information age.”

By 1993, the Bureau of Labor Statistics was reporting that the number of RSI cases had increased more than tenfold over the previous decade. Henriques believed her workplace injury might have had a more specific diagnosis, though: carpal tunnel syndrome. Characterized by pain, tingling, and numbness that results from nerve compression at the wrist, this was just one of many conditions (including tendonitis and tennis elbow) that were included in the government’s tally, but it came to stand in for the larger threat. Everyone who worked in front of a monitor was suddenly at risk, it seemed, of coming down with carpal tunnel. “There was this ghost of a destroyed career wandering through the newsroom,” Henriques told me. “You never knew whose shoulder was going to feel the dead hand next.”

But the epidemic waned in the years that followed. The number of workplace-related RSIs recorded per year had already started on a long decline, and in the early 2000s, news reports on the modern plague all but disappeared. Two decades later, professionals are ensconced more deeply in the trappings of the information age than they’ve ever been before, and post-COVID, computer use has spread from offices to living rooms and kitchens. Yet if this work is causing widespread injury, the evidence remains obscure. The whole carpal tunnel crisis, and the millions it affected, now reads like a strange and temporary problem of the ancient past.

[Read: Yes, the pandemic is ruining your body]

So what happened? Was the plague defeated by an ergonomic revolution, with white-collar workers’ bodies saved by thinner, light-touch keyboards, adjustable-height desks and monitors, and Aeron chairs? Or could it be that the office-dweller spike in RSIs was never quite as bad as it seemed, and that the hype around the numbers might have even served to make a modest problem worse, by spreading fear and faulty diagnoses?

Or maybe there’s another, more disturbing possibility. What if the scourge of RSIs receded, but only for a time? Could these injuries have resurged in the age of home-office work, at a time when their prevalence might be concealed in part by indifference and neglect? If that’s the case—if a real and pervasive epidemic that once dominated headlines never really went away—then the central story of this crisis has less to do with occupational health than with how we come to understand it. It’s a story of how statistics and reality twist around and change each other’s shape. At times they even separate.

The workplace epidemic was visible only after specific actions by government agencies, employers, and others set the stage for its illumination. This happened first in settings far removed from office life. In response to labor groups’ complaints, the Occupational Safety and Health Administration began to look for evidence of RSIs within the strike-prone meatpacking industry—and found that they were rampant.

Surveillance efforts spread from there, and so did the known scope of the problem. By 1988, OSHA had proposed multimillion-dollar fines against large auto manufacturers and meatpacking plants for underreporting employees’ RSIs; other businesses, perhaps spooked by the enforcement, started documenting such injuries more assiduously. Newspaper reporters (and their unions) took up the story, too, noting that similar maladies could now be produced by endless hours spent typing at the by-then ubiquitous computer keyboard. In that way, what had started playing out in government enforcement actions and statistics morphed into a full-blown news event. The white-collar carpal tunnel crisis had arrived.

In the late 1980s, David Rempel, an expert in occupational medicine and ergonomics at UC San Francisco, conducted an investigation on behalf of California’s OSHA in the newsroom of The Fresno Bee. Its union had complained that more than a quarter of the paper’s staff was afflicted with RSIs, and Rempel was there to find out what was wrong.

The problem, he discovered, was that employees had been given new, poorly designed computer workstations, and were suddenly compelled to spend a lot of time in front of them. In the citation that he wrote up for the state, Rempel ordered the Bee to install adjustable office furniture and provide workers with hourly breaks from their consoles.

A computer workstation at The Fresno Bee in 1989 (Courtesy of David Rempel)

Similar injury clusters were occurring at many other publications, too, and reporters cranked out stories on the chronic pain within their ranks. More than 200 editorial employees of the Los Angeles Times sought medical help for RSIs over a four-year stretch, according to a 1989 article in that newspaper. In 1990, The New York Times published a major RSI story—“Hazards at the Keyboard: A Special Report”—on its front page; in 1992, Time magazine ran a major story claiming that professionals were being “Crippled by Computers.”

But ergonomics researchers like Rempel would later form some doubts about the nature of this epidemic. Research showed that people whose work involves repetitive and forceful hand exertions for long periods are more prone to developing carpal tunnel syndrome, Rempel told me—but that association is not as strong for computer-based jobs. “If there is an elevated risk to white-collar workers, it’s not large,” he said.

[Read: Chronic pain is an impossible problem]

Computer use is clearly linked to RSIs in general, however. A 2019 meta-analysis in Occupational & Environmental Medicine found an increased risk of musculoskeletal symptoms with more screen work (though it does acknowledge that the evidence is “heterogeneous” and doesn’t account for screen use after 2005). Ergonomics experts and occupational-health specialists told me they are certain that many journalists and other professionals did sustain serious RSIs while using 1980s-to-mid-’90s computer workstations, with their fixed desks and chunky keyboards. But the total number of such injuries may have been distorted at the time, and many computer-related “carpal tunnel” cases in particular were spurious, with misdiagnoses caused in part by an unreliable but widely used nerve-conduction test. “It seems pretty clear that there wasn’t a sudden explosion of carpal tunnel cases when the reported numbers started to go up,” Leslie Boden, an environmental-health professor at the Boston University School of Public Health, told me.

Such mistakes were probably driven by the “crippled by computers” narrative. White-collar workers with hand pain and numbness might have naturally presumed they had carpal tunnel, thanks to news reports and the chatter at the water cooler; then, as they told their colleagues—and reporters—about their disabilities, they helped fuel a false-diagnosis feedback loop.

It’s possible that well-intentioned shifts in workplace culture further exaggerated the scale of the epidemic. According to Fredric Gerr, a professor emeritus of occupational and environmental health at the University of Iowa, white-collar employees were encouraged during the 1990s to report even minor aches and pains, so they could be diagnosed—and treated—earlier. But Gerr told me that such awareness-raising efforts may have backfired, causing workers to view those minor aches as harbingers of a disabling, chronic disease. Clinicians and ergonomists, too, he said, began to lump any pain-addled worker into the same bin, regardless of their symptoms’ severity—a practice that may have artificially inflated the reported rates of RSIs and caused unnecessary anxiety.

Henriques, whose symptoms were consistent and severe, underwent a nerve-conduction test not long after her pain and disability began; the result was inconclusive. She continues to believe that she came down with carpal tunnel syndrome as opposed to another form of RSI, but chose not to receive surgery given the diagnostic uncertainty. New York Times reporters with RSIs were not at risk of getting fired, as she saw it, but of ending up in different roles. She didn’t want that for herself, so she adapted to her physical limitations, mastering the voice-to-text software that she has since used to dictate four books. The most recent came out in September.

As it happens, a very similar story had played out on the other side of the world more than a decade earlier.

Reporters in Australia began sounding the alarm about the booming rates of RSIs among computer users in 1983, right at the advent of the computer revolution. Some academic observers dismissed the epidemic as the product of a mass hysteria. Other experts figured that Australian offices might be more damaging to people’s bodies than those in other nations, with some colorfully dubbing the symptoms “kangaroo paw.” Andrew Hopkins, a sociologist at the Australian National University, backed a third hypothesis: that his nation’s institutions had merely facilitated acknowledgement—or stopped suppressing evidence—of what was a genuine and widespread crisis.

“It is well known to sociologists that statistics often tell us more about collection procedures than they do about the phenomenon they are supposed to reflect,” Hopkins wrote in a 1990 paper that compared the raging RSI epidemic in Australia to the relative quiet in the United States. He doubted that any meaningful differences in work conditions between the two nations could explain the staggered timing of the outbreaks. Rather, he suspected that different worker-compensation systems made ongoing epidemics more visible, or less, to public-health authorities. In Australia, the approach was far more labor-friendly on the whole, with fewer administrative hurdles for claimants to overcome, and better payouts to those who were successful. Provided with this greater incentive to report their RSIs, Hopkins argued, Australian workers began doing so in greater numbers than before.

Then conditions changed. In 1987, Australia’s High Court decided a landmark worker-compensation case involving an RSI in favor of the employer. By the late 1980s, the government had discontinued its quarterly surveillance report of such cases, and worker-comp systems became more hostile to them, Hopkins said. With fewer workers speaking out about their chronic ailments, and Australian journalists bereft of data to illustrate the problem’s scope, a continuing pain crisis might very well have been pushed into the shadows.

Now it was the United States’ turn. Here, too, attention to a workplace-injury epidemic swelled in response to institutional behaviors and incentives. And then here, too, that attention ebbed for multiple reasons. Improvements in workplace ergonomics and computer design may indeed have lessened the actual injury rate among desk workers during the 1990s. At the same time, the growing availability of high-quality scanners reduced the need for injury-prone data-entry typists, and improved diagnostic practices by physicians reduced the rate of false carpal tunnel diagnoses. In the blue-collar sector, tapering union membership and the expansion of the immigrant workforce may have pushed down the national number of recorded injuries, by making employees less inclined to file complaints and advocate for their own well-being.

But America’s legal and political climate was shifting too. Thousands of workers would file lawsuits against computer manufacturers during this period, claiming that their products had caused injury and disability. More than 20 major cases went to jury trials—and all of them failed. In 2002, the Supreme Court ruled against an employee of Toyota who said she’d become disabled by carpal tunnel as a result of working on the assembly line. (The car company was represented by John Roberts, then in private appellate-law practice.) Meanwhile, Republicans in Congress managed to jettison a new set of OSHA ergonomics standards before they could go into effect, and the George W. Bush administration ended the requirement that employers separate out RSI-like conditions in their workplace-injury reports to the government. Unsurprisingly, recorded cases dropped off even more sharply in the years that followed.

[Read: When the computer mouse was new]

Blue-collar workers in particular would be left in the lurch. According to M. K. Fletcher, a safety and health specialist at the AFL-CIO, many laborers, in particular those in food processing, health care, warehousing, and construction, continue to suffer substantial rates of musculoskeletal disorders, the term that’s now preferred over RSIs. Nationally, such conditions account for one-fifth to one-third of the estimated 8.4 million annual workplace injuries across the private sector, according to the union’s analysis of Bureau of Labor Statistics reports.

From what experts can determine, carpal tunnel syndrome in particular remains prevalent, affecting 1 to 5 percent of the overall population. The condition is associated with multiple risk factors unrelated to the workplace, including diabetes, age, hypothyroidism, obesity, arthritis, and pregnancy. In general, keyboards are no longer thought to be a major threat, but the hazards of repetitive work were always very real. In the end, the “crippled by computers” panic among white-collar workers of the 1980s and ’90s would reap outsize attention and perhaps distract from the far more serious concerns of other workers. “We engage in a disease-du-jour mentality that is based on idiosyncratic factors, such as journalists being worried about computer users, rather than prioritization by the actual rate and the impact on employment and life quality,” Gerr, the occupational- and environmental-health expert at the University of Iowa, told me.

As for today’s potential “hazards at the keyboard,” we know precious little. Almost all of the research described above was done prior to 2006, before tablets and smartphones were invented. Workplace ergonomics used to be a thriving academic field, but its ranks have dwindled. The majority of the academic experts I spoke with for this story are either in the twilight of their careers or they’ve already retired. A number of the researchers whose scholarship I’ve reviewed are dead. “The public and also scientists have lost interest in the topic,” Pieter Coenen, an assistant professor at Amsterdam UMC and the lead author of the meta-analysis from 2019, told me. “I don’t think the problem has actually resolved.”

So is there substantial risk to workers in the 2020s from using Slack all day, or checking email on their iPhones, or spending countless hours hunched at their kitchen tables, typing while they talk on Zoom? Few are trying to find out. Professionals in the post-COVID, work-from-home era may be experiencing a persistent or resurgent rash of pain and injury. “The industrial disease of the information age” could still be raging.

Who Made the Oxford English Dictionary?

The Atlantic

The Oxford English Dictionary always seemed to me like the Rules from on high—near biblical, laid down long ago by a distant academic elite. But back in 1857, when the idea of the dictionary was born, its three founders proposed something more democratic than authoritative: a reference book that didn’t prescribe but instead described English, tracking the meaning of every word in the language across time and laying out how people were actually using each one.

As Sarah Ogilvie writes in her new book, The Dictionary People, the OED’s founders realized that such a titanic task could never be accomplished by a small circle of men in London and Oxford, so they sought out volunteers. That search expanded when the eccentric philologist James Murray took the helm in 1879 as the Dictionary’s third editor. Murray cast a far wider net than his predecessors had, circulating a call for contributors to newspapers, universities, and clubs around the globe. He instructed people to read the books they had on hand, fill 4-by-6-inch slips of paper with quotations that showed how words were used therein, and send them to his “Scriptorium” (the iron shed behind his house where he and a devoted crew worked on the Dictionary). The wave of submissions was so overwhelming that the Royal Mail installed a red post box in front of his home in Oxford, which remains there today.

One of the greatest crowdsourcing efforts in history—“the Wikipedia of the nineteenth century,” as Ogilvie puts it—the OED would not have been possible without this army of volunteers. And yet, for years, most have remained unknown. In his exuberant 2003 history of the OED, The Meaning of Everything, Simon Winchester devoted a chapter to the Dictionary’s contributors—not just the readers who sent in slips, but the subeditors who sorted submissions chronologically and by meaning, and the specialists who advised on specific terminology or etymologies. Winchester served up small biographies of a few key figures but lamented of the group that “their legacy … remains essentially unwritten.” In The Dictionary People, Ogilvie sets out to correct the record. A former editor at the Oxford English Dictionary, Ogilvie stumbled upon Murray’s address books while passing time in the Dictionary’s archives. Upon learning that the number of volunteers wasn’t merely hundreds (as scholars long believed) but some 3,000, she became determined to track each of them down.

The resulting book is, like the Dictionary itself, a clear labor of love, both playful and doggedly researched. Ogilvie spent eight years trawling through libraries and dusty archives across the globe. She pored over the editors’ correspondence, mapped how news of the project spread across social clubs in Britain and beyond, and even recruited a handwriting expert to help determine who was behind scores of the raciest slips. She orders her history alphabetically, categorizing the keenest and quirkiest contributors into different groups—“I for Inventors,” “S for Suffragists,” “M for Murderers”—and offering bite-size biographies of dozens of figures.

[Read: I can’t stand these words anymore]

Under “Q for Queers,” we meet Katharine Bradley and Edith Cooper, an aunt and niece who, in the late 1800s, became lovers and literary collaborators, publishing plays and poetry under the pen name Michael Field. (Critics gushed about Field, comparing “him” to Shakespeare.) In her spare time, Katharine sent in quotations from John Ruskin and The Iliad. We meet the owner of the world’s largest collection of erotica at the time, who is thought to have supplied sentences for words related to genitalia, bondage, and flagellation—along with spicier quotations for otherwise-innocuous entries. We encounter Karl Marx’s daughter Eleanor, whose half-baked efforts exasperated Murray, and the much more devoted William Chester Minor, a former American Army surgeon who submitted 62,720 slips from the Broadmoor Criminal Lunatic Asylum, where he was sent after murdering a man. (Dr. Minor was allowed to keep a separate cell for his books.)

What starts out as a detective story quickly evolves into an ode to the outsider. Some famous figures make appearances in The Dictionary People—the photographer Eadweard Muybridge, known for his studies of animal motion, advised on entries, including the one for gallop; and a young J. R. R. Tolkien was an editorial assistant for a year, during which time he worked on the letter W, puzzling over possible etymologies of the word walrus. But Ogilvie marvels that many of the Dictionary’s key contributors were “on the edges of academia.” They were inventors and pioneers with radical ideas; women (at a time when many were denied higher education) and other autodidacts; asylum patients and recluses. This motley crew shared a hunger to be associated with the prestigious Oxford University, to be part of a project of national importance. Perhaps this desire for belonging powered their obsessive (often unpaid) devotion to the undertaking? Perhaps, for those cast aside by society, like Dr. Minor, their involvement was redemptive? Ogilvie doesn’t linger long on their motives, preferring instead to assemble surprising bits of trivia about each figure.

The most compelling portrait is that of Murray, the Dictionary’s longest-serving editor, who emerges as the book’s protagonist. The son of a village tailor in Scotland, Murray left school at 14, eventually becoming a bank clerk and then a teacher at Mill Hill School in London. Over the years, he taught himself to read some 25 languages, including Tongan and Russian, and developed an interest in philology, writing books on Scottish dialects. In the late 1860s, he was invited to join the London Philological Society, where the idea for the OED had been born in 1857. But as a teetotaling Scot with little formal education, Murray was continually excluded from the indulgent academic establishment of Oxford. He was never made a fellow of a university college, and he wasn’t granted an honorary doctorate until 1914, the year before he died.

The OED’s progress had stalled under Murray’s predecessor, Frederick Furnivall, whose involvement with various academic clubs left him little time to actually edit (but had the benefit, Ogilvie points out, of bringing in a steady stream of contributors). Murray revived the project. For 36 years, he devoted himself to an undertaking that, he noted late in life, “should have been the work of a celibate and ascetic.” He rose by five each morning and spent the day writing letters to volunteers, sorting words into their shades of meaning, and drafting definitions. He was often spotted delivering copy to the publisher by tricycle, his long white beard trailing behind him as he pedaled wildly about town. Murray’s wife, Ada, was instrumental, managing his finances and acting as his personal secretary. Even his kids were involved: Murray brought slips to the table to discuss over lunch and recruited each of his 11 children to sort submissions. For all this, he was paid a pitiful sum, which had to cover not just his wages but those of the Scriptorium staff and the Dictionary’s expenses.

Over the years, Murray resisted calls from the publisher and reviewers to make the Dictionary what they would consider a proper British product. He was pressured to use quotations from only the “great authors,” eschew slang, and omit words deemed too scientific or vulgar or foreign. Murray refused, believing that all of the English language had a valid place in the Dictionary, just as all contributors who put in the work were welcome. As Ogilvie shows in her earlier, wonkier history of the OED, Words of the World, as an editor, Murray was particularly devoted to including foreign words that had entered into English—a stance that can be read as either inclusive or colonizing, though Ogilvie seems to lean toward the former.

[Read: The small island where 500 people speak nine different languages]

Murray died in 1915, shortly after finishing the entry for twilight, and 13 years before the OED’s monumental first edition was completed. The Dictionary has continued to evolve with the world; its third edition, which Ogilvie worked on, has been in progress since 1993, and uses the editing process devised by Murray. (Recent additions include deepfake, teen idol, and textspeak.) In her final chapter, Ogilvie visits a man named Chris Collier from her hometown of Brisbane, Australia, who sent in 100,000 slips from 1975 to 2010. Collier cut quotations out of his local newspaper and pasted them directly onto slips, which arrived at the OED offices wrapped in old cornflakes packaging. “I thought to myself, imagine if I could help get one word into the dictionary,” he told Ogilvie. To his neighbors, he was the local nudist (he was known to take naked evening walks), but in certain Oxford circles he was practically famous, having supplied thousands of new words.