I Bought a CO2 Monitor, and It Broke Me

The Atlantic

A few weeks ago, a three-inch square of plastic and metal began, slowly and steadily, to upend my life.

The culprit was my new portable carbon-dioxide monitor, a device that had been sitting in my Amazon cart for months. I’d first eyed the product around the height of the coronavirus pandemic, figuring it could help me identify unventilated public spaces where exhaled breath was left to linger and the risk for virus transmission was high. But I didn’t shell out the $250 until January 2023, when a different set of worries, over the health risks of gas stoves and indoor air pollution, reached a boiling point. It was as good a time as any to get savvy to the air in my home.

I knew from the get-go that the small, stuffy apartment in which I work remotely was bound to be an air-quality disaster. But with the help of my shiny Aranet4, the brand most indoor-air experts seem to swear by, I was sure to fix the place up. When carbon-dioxide levels increased, I’d crack a window; when I cooked on my gas stove, I’d run the range fan. What could be easier? It would basically be like living outside, with better Wi-Fi. This year, spring cleaning would be a literal breeze!

The illusion was shattered minutes after I popped the batteries into my new device. At baseline, the levels in my apartment were already dancing around 1,200 parts per million (ppm)—a concentration that, as the device’s user manual informed me, was cutting my brain’s cognitive function by 15 percent. Aghast, I flung open a window, letting in a blast of frigid New England air. Two hours later, as I shivered in my 48-degree-Fahrenheit apartment in a coat, ski pants, and wool socks, typing numbly on my icy keyboard, the Aranet still hadn’t budged below 1,000 ppm, a safety threshold commonly cited by experts. By the evening, I’d given up on trying to hypothermia my way to clean air. But as I tried to sleep in the suffocating trap of noxious gas that I had once called my home, next to the reeking sack of respiring flesh I had once called my spouse, the Aranet let loose an ominous beep: The ppm had climbed back up, this time to above 1,400. My cognitive capacity was now down 50 percent, per the user manual, on account of self-poisoning with stagnant air.

By the next morning, I was in despair. This was not the reality I had imagined when I decided to invite the Aranet4 into my home. I had envisioned the device and myself as a team with a shared goal: clean, clean air for all! But it was becoming clear that I didn’t have the power to make the device happy. And that was making me miserable.

[Read: Kill your gas stove]

CO2 monitors are not designed to dictate behavior; the information they dole out is not a perfect read on air quality, indoors or out. And although carbon dioxide can pose some health risks at high levels, it’s just one of many pollutants in the air, and by no means the worst. Others, such as nitrogen dioxide, carbon monoxide, and ozone, can cause more direct harm. Some CO2-tracking devices, including the Aranet4, don’t account for particulate matter—which means that they can’t tell when air’s been cleaned up by, say, a HEPA filter. “It gives you an indicator; it’s not the whole story,” says Linsey Marr, an environmental engineer at Virginia Tech.

Still, because CO2 builds up alongside other pollutants, the levels are “a pretty good proxy for how fresh or stale your air is,” and how badly it needs to be turned over, says Paula Olsiewski, a biochemist and an indoor-air-quality expert at the Johns Hopkins Center for Health Security. The Aranet4 isn’t as accurate as, say, the $20,000 research-grade carbon-dioxide sensor in Marr’s lab, but it can get surprisingly close. When Jose-Luis Jimenez, an atmospheric chemist at the University of Colorado at Boulder, first picked one up three years ago, he was shocked that it could hold its own against the machines he used professionally. And in his personal life, “it allows you to find the terrible places and avoid them,” he told me, or to mask up when you can’t.

That rule of thumb starts to break down, though, when the terrible place turns out to be your home—or, at the very least, mine. To be fair, my apartment’s air quality has a lot working against it: two humans and two cats, all of us with an annoying penchant for breathing, crammed into 1,000 square feet; a gas stove with no outside-venting hood; a kitchen window that opens directly above a parking lot. Even so, I was flabbergasted by just how difficult it was to bring down the CO2 levels around me. Over several weeks, the best indoor reading I sustained, after keeping my window open for six hours, abstaining from cooking, and running my range fan nonstop, was in the 800s. I wondered, briefly, if my neighborhood just had terrible outdoor air quality—or if my device was broken. Within minutes of my bringing the meter outside, however, it displayed a chill 480.

[Read: The plan to stop every respiratory virus at once]

The meter’s cruel readings began to haunt me. Each upward tick raised my anxiety; I started to dread what I’d learn each morning when I woke up. After watching the Aranet4 flash figures in the high 2,000s when I briefly ignited my gas stove, I miserably deleted 10 wok-stir-fry recipes I’d bookmarked the month before. At least once, I told my husband to cool it with the whole “needing oxygen” thing, lest I upgrade to a more climate-friendly Plant Spouse. (I’m pretty sure I was joking, but I lacked the cognitive capacity to tell.) In more lucid moments, I understood the deeper meaning of the monitor: It was a symbol of my helplessness. I’d known I couldn’t personally clean the air at my favorite restaurant, or the post office, or my local Trader Joe’s. Now I realized that the issues in my home weren’t much more fixable. The device offered evidence of a problem, but not the means to solve it.

Upon hearing my predicament, Sally Ng, an aerosol chemist at Georgia Tech, suggested that I share my concerns with building management. Marr recommended constructing a Corsi-Rosenthal box, a DIY contraption made up of a fan lashed to filters, to suck the schmutz out of my crummy air. But they and other experts acknowledged that the most sustainable, efficient solutions to my carbon conundrum were mostly out of reach. If you don’t own your home, or have the means to outfit it with more air-quality-friendly appliances, you can only do so much. “And I mean, yeah, that is a problem,” said Jimenez, who’s currently renovating his home to include a new energy-efficient ventilation device, a make-up-air system, and multiple heat pumps.

Many Americans face much greater challenges than mine. I am not among the millions living in a city with dangerous levels of particulate matter in the air, spewed out by industrial plants, gas-powered vehicles, and wildfires, for whom an open window could risk additional peril; I don’t have to be in a crowded office or a school with poor ventilation. Since the first year of the pandemic—and even before—experts have been calling for policy changes and infrastructural overhauls that would slash indoor air pollution for large sectors of the population at once. But as concern over COVID has faded, “people have moved on,” Marr told me. Individuals are left on their own in the largely futile fight against stale air.

[Read: Put your face in airplane mode]

Though a CO2 monitor won’t score anyone victories on its own, it can still be informative: “It’s nice to have an objective measure, because all of this is stuff you can’t really see with the naked eye,” says Abraar Karan, an infectious-disease physician at Stanford, who’s planning to use the Aranet4 in an upcoming study on viral transmission. But he told me that he doesn’t let himself get too worked up over the readings from his monitor at home. Even Olsiewski puts hers away when she’s cooking on the gas range in her Manhattan apartment. She already knows that the levels will spike; she already knows what she needs to do to mitigate the harms. “I use the tools I have and don’t make myself crazy,” she told me. (Admittedly, she has a lot of tools, especially in her second home in Texas—among them, an induction stove and an HVAC system with ultra-high-quality filters and a continuously running fan. When we spoke on the phone, her Aranet4 read 570 ppm; mine, 1,200.)

I’m now aiming for my own middle ground. Earlier this week, I dreamed of trying and failing to open a stuck window, and woke up in a cold sweat. I spent that day working with my (real-life) kitchen window cracked, but I shut it when the apartment got too chilly. More important, I placed my Aranet4 in a drawer, and didn’t pull it out again until nightfall. When my spouse came home, he marveled that our apartment, once again, felt warm.

The Supreme Court Considers the Algorithm

The Atlantic

When the Ninth Circuit Court of Appeals considered a lawsuit against Google in 2020, Judge Ronald M. Gould stated his view of the tech giant’s most significant asset bluntly: “So-called ‘neutral’ algorithms,” he wrote, can be “transformed into deadly missiles of destruction by ISIS.”

According to Gould, it was time to challenge the boundaries of a little snippet of the 1996 Communications Decency Act known as Section 230, which protects online platforms from liability for the things their users post. The plaintiffs in this case, the family of a young woman who was killed during a 2015 Islamic State attack in Paris, alleged that Google had violated the Anti-terrorism Act by allowing YouTube’s recommendation system to promote terrorist content. The algorithms that amplified ISIS videos were a danger in and of themselves, they argued.

Gould was in the minority, and the case was decided in Google’s favor. But even the majority cautioned that the drafters of Section 230—people whose conception of the World Wide Web might have been limited to the likes of email and the Yahoo homepage—never imagined “the level of sophistication algorithms have achieved.” The majority wrote that Section 230’s “sweeping immunity” was “likely premised on an antiquated understanding” of platform moderation, and that Congress should reconsider it. The case then headed to the Supreme Court.

This month, the country’s highest court will consider Section 230 for the first time as it weighs a pair of cases—Gonzalez v. Google, and another against Twitter—that invoke the Anti-terrorism Act. The justices will seek to determine whether online platforms should be held accountable when their recommendation systems, operating in ways that users can’t see or understand, aid terrorists by promoting their content and connecting them to a broader audience. They’ll consider the question of whether algorithms, as creations of a platform like YouTube, are something distinct from any other aspect of what makes a website a platform that can host and present third-party content. And, depending on how they answer that question, they could transform the internet as we currently know it, and as some people have known it for their entire lives.

The Supreme Court’s choice of these two cases is surprising, because the core issue seems so obviously settled. In the case against Google, the appellate court referenced a similar case against Facebook from 2019, regarding content created by Hamas that had allegedly encouraged terrorist attacks. The Second Circuit Court of Appeals decided in Facebook’s favor, although, in a partial dissent, then–Chief Judge Robert Katzmann admonished Facebook for its use of algorithms, writing that the company should consider not using them at all. “Or, short of that, Facebook could modify its algorithms to stop them introducing terrorists to one another,” he suggested.

[Read: Is this the beginning of the end for the internet?]

In both the Facebook and Google cases, the courts also reference a landmark Section 230 case from 2008, filed against the website Roommates.com. The site was found liable for encouraging users to violate the Fair Housing Act by giving them a survey that asked them whether they preferred roommates of certain races or sexual orientations. By prompting users in this way, Roommates.com “developed” the information and thus directly caused the illegal activity. Now the Supreme Court will evaluate whether an algorithm develops information in a similarly meaningful way.

The broad immunity outlined by Section 230 has been contentious for decades, but has attracted special attention and increased debate in the past several years for various reasons, including the Big Tech backlash. For both Republicans and Democrats seeking a way to check the power of internet companies, Section 230 has become an appealing target. Donald Trump wanted to get rid of it, and so does Joe Biden.

Meanwhile, Americans are expressing harsher feelings about social-media platforms and have become more articulate in the language of the attention economy; they’re aware of the possible radicalizing and polarizing effects of websites they used to consider fun. Personal-injury lawsuits have cited the power of algorithms, while Congress has considered efforts to regulate “amplification” and compel algorithmic “transparency.” When Frances Haugen, the Facebook whistleblower, appeared before a Senate subcommittee in October 2021, the Democrat Richard Blumenthal remarked in his opening comments that there was a question “as to whether there is such a thing as a safe algorithm.”

Though ranking algorithms, such as those used by search engines, have historically been protected, Jeff Kosseff, the author of a book about Section 230 called The Twenty-Six Words That Created the Internet, told me he understands why there is “some temptation” to say that not all algorithms should be covered. Sometimes algorithmically generated recommendations do serve harmful content to people, and platforms haven’t always done enough to prevent that. So it might feel helpful to say something like You’re not liable for the content itself, but you are liable if you help it go viral. “But if you say that, then what’s the alternative?” Kosseff asked.

Maybe you should get Section 230 immunity only if you put every single piece of content on your website in precise chronological order and never let any algorithm touch it, sort it, organize it, or block it for any reason. “I think that would be a pretty bad outcome,” Kosseff said. A site like YouTube—which hosts millions upon millions of videos—would probably become functionally useless if touching any of that content with a recommendation algorithm could mean risking legal liability. In an amicus brief filed in support of Google, Microsoft called the idea of removing Section 230 protection from algorithms “illogical,” and said it would have “devastating and destabilizing” effects. (Microsoft owns Bing and LinkedIn, both of which make extensive use of algorithms.)

Robin Burke, the director of That Recommender Systems Lab at the University of Colorado at Boulder, has a similar issue with the case. (Burke was part of an expert group, organized by the Center for Democracy and Technology, that filed another amicus brief for Google.) Last year, he co-authored a paper on “algorithmic hate,” which dug into possible causes for widespread loathing of recommendations and ranking. He provided, as an example, Elon Musk’s 2022 declaration about Twitter’s feed: “You are being manipulated by the algorithm in ways you don’t realize.” Burke and his co-authors concluded that user frustration, fear, and algorithmic hate may stem in part from “the lack of knowledge that users have about these complex systems, evidenced by the monolithic term ‘the algorithm,’ for what are in fact collections of algorithms, policies, and procedures.”

When we spoke recently, Burke emphasized that he doesn’t deny the harmful effects that algorithms can have. But the approach suggested in the lawsuit against Google doesn’t make sense to him. For one thing, it suggests that there is something uniquely bad about “targeted” algorithms. “Part of the problem is that that term’s not really defined in the lawsuit,” he told me. “What does it mean for something to be targeted?” There are a lot of things that most people do want to be targeted. Typing locksmith into a search engine wouldn’t be practical without targeting. Your friend recommendations wouldn’t make sense. You would probably end up listening to a lot of music you hate. “There’s not really a good place to say, ‘Okay, this is on one side of the line, and these other systems are on the other side of the line,’” Burke said. More importantly, platforms also use algorithms to find, hide, and minimize harmful content. (Child-sex-abuse material, for instance, is often detected through automated processes that involve complex algorithms.) Without them, Kosseff said, the internet would be “a disaster.”

“I was really surprised that the Supreme Court took this case,” he told me. If the justices wanted an opportunity to reconsider Section 230 in some way, they’ve had plenty of those. “There have been other cases they denied that would have been better candidates.” For instance, he named a case filed against the dating app Grindr for allegedly enabling stalking and harassment, which argued that platforms should be liable for fundamentally bad product features. “This is a real Section 230 dispute that the courts are not consistent on,” Kosseff said. The Grindr case was unsuccessful, but the Ninth Circuit was convinced by a similar argument made by plaintiffs against Snap regarding the deaths of two 17-year-olds and a 20-year-old, who were killed in a car crash while using a Snapchat filter that shows how fast a vehicle is moving. Another case alleging that the “talk to strangers” app Omegle facilitated the sex trafficking of an 11-year-old girl is in the discovery phase.

Many cases arguing that a connection exists between social media and specific acts of terrorism are also dismissed, because it’s hard to prove a direct link, Kosseff told me. “That makes me think this is kind of an odd case,” he said. “It almost makes me think that there were some justices who really, really wanted to hear a Section 230 case this term.” And for one reason or another, the ones they were most interested in were the ones about the culpability of that mysterious, misunderstood modern villain, the all-powerful algorithm.

So the algorithm will soon have its day in court. Then we’ll see whether the future of the web will be messy and confusing and sometimes dangerous, like its present, or totally absurd and honestly kind of unimaginable. “It would take an average user approximately 181 million years to download all data from the web today,” Twitter wrote in its amicus brief supporting Google. A person may think she wants to see everything, in order, untouched, but she really, really doesn’t.