The Supreme Court Actually Understands the Internet

The Atlantic

For the first time, the Supreme Court is weighing in on the brief but powerful “26 words that created the internet.”

Enacted in 1996, Section 230 of the Communications Decency Act immunizes online platforms from liability for anything posted on their sites by third parties, a protection that allowed the web to bloom by encouraging experimentation and interactivity in its early years. More recently, Section 230 has come under scrutiny from bipartisan critics who argue that it gives powerful tech companies too much cover and too little accountability.

The Supreme Court’s perspective on the issue was a mystery until this week, when the justices heard oral arguments in two cases involving Section 230. On Tuesday, the Court was asked to consider whether Google is liable for YouTube’s recommendation algorithms showing Islamic State videos to users. Wednesday’s case was similar but dealt with Twitter’s alleged responsibility for ISIS members using its platform to recruit and fundraise. Whatever the justices decide will be a major moment in web history: Affirming Section 230 would put greater pressure on Congress or regulatory agencies to come up with their own ideas for modernizing the legal guardrails of the internet, while reinterpreting it would force tech companies of all sizes to mutate to avoid liability.

The direction and tone of the questioning suggest that the justices lean more toward the former, though the Court’s opinions aren’t likely to be published for at least a few months. “There doesn’t seem to be any appetite on the Supreme Court’s part to deliberately open the floodgates for lawsuits against tech companies,” James Grimmelmann, a professor of digital and information law at Cornell Law School, told me. This is notable in part because the Court has not said much of anything about platforms before, he observed: “We haven’t known anything for years. We’ve finally found out something about where their thoughts are.” It looks, maybe, like they lean toward leaving the internet alone.

The Court briefly discussed whether algorithms might lose Section 230 immunity if they’re intentionally discriminatory; the example the justices entertained was a dating-app algorithm written to prohibit interracial matches. They seemed to be thinking through the role of intentionality: Would it matter if YouTube had written an algorithm that favored ISIS or other extremists over more benign material, or would any algorithm still be protected by 230? But these questions weren’t resolved; the justices hinted that they would prefer that Congress finesse Section 230 if it needs finessing, and were sometimes self-deprecating about their own ability to understand the issues. “We really don’t know much about these things,” Justice Elena Kagan joked on Tuesday. “You know, these are not, like, the nine greatest experts on the internet.”

They mostly came off as understanding the internet pretty well, though. During oral arguments in the case against Google, Eric Schnapper, representing the family of the ISIS victim Nohemi Gonzalez, spoke extensively about YouTube’s choice to display video suggestions using thumbnail imagery, saying that this constitutes the creation of new content by the platform. “Is there any other way they could organize themselves without using thumbnails?” Justice Samuel Alito asked, apparently rhetorically. (He then joked that he supposed the site could go with “ISIS video one, ISIS video two, and so forth.”) Justice Clarence Thomas asked Schnapper whether YouTube’s recommendation algorithm works differently for videos about, say, rice pilaf than it does for videos from ISIS. Schnapper said he didn’t think so, and Justice Kagan interjected, “I think what was lying underneath Justice Thomas’s question was a suggestion that algorithms are endemic to the internet, that every time anybody looks at anything on the internet, there is an algorithm involved.” She wondered whether this algorithm-centered approach would send the Court “down the road such that 230 really can’t mean anything at all.”

None of the justices appeared satisfied with Schnapper’s reasoning. Justice Brett Kavanaugh summed it up as paradoxical, pointing out that an “interactive computer service,” as referred to in Section 230, has been understood to mean a service “that filters, screens, picks, chooses, organizes content.” If algorithms aren’t subject to Section 230 immunity, then that “would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of 230. And just as a textual and structural matter, we don’t usually read a statute to, in essence, defeat itself.”

On the second day of arguments, the Court barely discussed Section 230, focusing instead almost entirely on the merits of the case against Twitter under the Justice Against Sponsors of Terrorism Act. This amounted to a lengthy discussion of what may or may not constitute “aiding and abetting.” Would a platform be liable, for example, if it failed to enforce its own policies prohibiting terrorists from using its services? Edwin Kneedler, arguing on behalf of the Department of Justice, took Twitter’s side in the case, saying that the law “requires more than allegations that a terrorist organization availed itself of interactive computer services that were remote from the act of terrorism; were widely and routinely available to hundreds of millions, if not billions, of persons through the automatic features of those services; and did not single out ISIS for favorable treatment.”

The Court then walked through a series of hypotheticals involving pager sales, gun sales, the notion of Osama bin Laden using personalized-banking services, and the imagined scenario of J. Edgar Hoover telling Bell Telephone that Dutch Schultz was a gangster and was using his phone to carry out mob activities. “The discussion this morning has really taken on a very academic tone,” Chief Justice John Roberts observed.

In fact, both mornings were heavy on abstract arguments. The Court has to deal with the larger issues before anybody gets into whether, as reported in the case documents, the 1,348 ISIS videos that received a total of 163,391 views on YouTube (an average of about 121 views per video) constitute algorithmic amplification of terrorist content. A few weeks ago, I argued that the Supreme Court’s ruling on these two cases could change the web as we know it, particularly if it decides that algorithms of all sorts are not subject to Section 230 immunity. Such a ruling would make search engines unworkable and invite a flood of lawsuits against any company that organizes content through any kind of automated process.

In taking these cases, the Court was evidently curious about whether singling out algorithmic recommendations might offer a way to reinterpret, and thereby modernize, Section 230. “I can see why it looked appealing,” Grimmelmann said. “But what happened when the cases actually got to oral argument is the justices saw how complex it actually is, and why that line’s not a very good one to draw.”