The internet’s Supreme Court showdown is here, and the stakes couldn’t be higher

By Issie Lapowsky

The Supreme Court has been signaling for a while now that it wants to take up a case about online content moderation. This week, it’ll get two.

On Tuesday and Wednesday, the court will hear arguments in two cases that could radically upend the way companies sort, filter, and remove content on the internet, and potentially make those companies liable for the worst content on their platforms.

The cases, Gonzalez v. Google and Twitter v. Taamneh, both arise from similarly tragic circumstances: the 2015 terrorist attack in Paris, which killed, among others, an American student named Nohemi Gonzalez; and the 2017 terrorist attack in an Istanbul nightclub, which killed a Jordanian citizen named Nawras Alassaf. In the wake of the Paris attack, Gonzalez’s father sued Google, arguing that it had aided and abetted a terrorist group by allowing members of ISIS to post videos on YouTube and by algorithmically recommending those videos. In Taamneh, Alassaf’s relatives made similar accusations against Twitter, Facebook, and Google, alleging that the companies aided and abetted the attack in Istanbul by enabling ISIS propaganda to spread online.

For all of their similarities, the two cases now before the Supreme Court raise interrelated but distinct questions. In Gonzalez, the court will have to decide whether Section 230 protects platforms from liability not just for what other people post but also for the platforms’ recommendation algorithms. In Taamneh, the court will set aside Section 230 and assess whether an internet platform can really be charged with aiding and abetting terrorism if its service wasn’t directly used in an attack.

The cases have drawn dozens of amicus briefs from tech companies, civil liberties groups, and even the authors of Section 230 themselves, who have written to the court about the perils of gutting the internet’s core protections. Others, including conservative lawmakers, law enforcement advocates, children’s rights groups, and Facebook whistleblower Frances Haugen, have taken opposing positions, calling for a limited interpretation of Section 230. Still others have written to the court, not taking sides but detailing the ramifications of reforming or repealing the law. 

Here’s a look at the stakes of each case, the questions they raise—and the ones they will inevitably leave unanswered.

Gonzalez v. Google: Content recommendations get put to the test

Gonzalez has drawn by far the most attention from the tech sector because it takes direct aim at Section 230. The nearly 30-year-old law protects online platforms from being held liable for the content that third parties post, and also empowers those platforms to remove and moderate content as they see fit. It allows platforms to “filter” or “screen” content, among other things, but doesn’t explicitly reference algorithmic content recommendations. 

As a result, the plaintiffs are arguing that Section 230 shouldn’t shield companies against liability for the decisions their algorithms make. “These Internet companies are constantly adjusting their recommendation systems to improve their effectiveness in inducing viewers to spend more time on the site looking at materials there, what YouTube refers to as ‘watch time,’” the plaintiffs wrote in one brief to the court. 

But neither the U.S. District Court nor the 9th U.S. Circuit Court of Appeals bought that argument when they granted Google’s motion to dismiss. One key issue, experts say, is that it’s almost impossible to distinguish between content moderation—which is clearly encouraged by Section 230—and content recommendations, without which finding useful content on the internet would be all but impossible.

“The whole point of 230 is to encourage platforms to engage in content moderation, and ranking algorithms are the tools for so much of the important content moderation,” Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center, said during a Brookings Institution panel on the cases last week. “What we’re looking at here is a case that is really about the very content moderation that [Section 230] is supposed to immunize. And the idea that the immunity goes away because it is achieved using algorithms, for example, is sort of a denial of how the internet works.”

On the other side of the fight, conservative lawmakers including Missouri Senator Josh Hawley say that Section 230 has been interpreted too broadly by the courts over the years. “Far from making the internet safer for children and families, Section 230 now allows platforms to escape any real accountability for their decision-making—as the tragic facts, and procedural history, of this case make clear,” Hawley wrote in a brief. “Congress never intended that result, and the text Congress wrote does not compel it.” (Hawley has been a vocal opponent of Section 230 protections for years. His former deputy counsel, Josh Divine, also clerked for Supreme Court Justice Clarence Thomas shortly before Thomas began publicly calling for Section 230 to be reined in.)

And yet, Hawley’s interpretation of Congress’s original intention conflicts directly with what Section 230’s coauthors, Democratic Senator Ron Wyden and former Republican Congressman Chris Cox, wrote in their own amicus brief to the court. According to Wyden and Cox, Section 230 was written in an intentionally “technology neutral” way to allow for companies to develop new approaches to content moderation. “Recommending systems that rely on such algorithms are the direct descendants of the early content curation efforts that Congress had in mind when enacting Section 230,” they wrote.

If Section 230 protections for recommendations were to disappear, Google and its supporters argue, the internet would devolve into an unranked, impenetrable mess, or companies would preemptively censor any remotely controversial topics so as to avoid legal action. There’s some precedent for that latter outcome: After the anti-sex-trafficking law known as SESTA-FOSTA was adopted, Tumblr banned all adult content from its platform.

Section 230’s defenders also argue that conservative lawmakers should be careful what they wish for. If the Supreme Court were to make companies liable for their recommendations, that could result in platforms simply leaving content online but not promoting it, which sounds a lot like “shadow banning.” 

“The thing that the people on the right are asking the court to do here in siding with Gonzalez would actually produce the outcome that they have been so angry about,” Berin Szóka, president of TechFreedom, said during a press briefing last week.

Twitter v. Taamneh: The “big sleeper” case

The fate of Gonzalez actually depends heavily on the outcome of Twitter v. Taamneh. That case sets aside the question of Section 230 protections altogether and instead looks at the underlying allegations against both Twitter and Google: If Section 230 didn’t exist, could Twitter or Google be liable for aiding and abetting terrorism in such an indirect manner? 

Unlike in Gonzalez, the 9th Circuit ruled against the tech companies in Taamneh, finding that platforms could be liable for aiding and abetting terrorism in this way. That wasn’t a great outcome for Twitter and other tech companies, but as long as Section 230 continued to shield them from such lawsuits, it wasn’t a disaster either. 

But in anticipation of the possibility that the Supreme Court would take up Gonzalez and potentially slice away at Section 230, Twitter asked the Court to consider Taamneh as well. The thinking goes that if the court weakened Section 230 protections in the Gonzalez case, tech companies would suddenly need immediate protection against the aiding and abetting allegations that underpin both Gonzalez and Taamneh.

“Taamneh is the big sleeper component of this case,” Ben Wittes, a senior fellow in governance studies at the Brookings Institution, said during the panel. “While I generally agree that [Gonzalez] could be the most important internet law case ever, it’s all contingent on the way that the Supreme Court resolves the Taamneh case.”

Twitter called the 9th Circuit’s interpretation of the Anti-Terrorism Act “misguided,” and argued that the case offers no evidence of any direct link between the nightclub attack and the platforms implicated in the lawsuit. The 9th Circuit’s interpretation, Twitter argued, “threatens harmful consequences for ordinary businesses that provide generally available services or engage in arm’s length transactions with large numbers of consumers.”

If Twitter wins the Taamneh case, “the entire Gonzalez case dissolves like an Alka-Seltzer cube in water,” Wittes said. But if it doesn’t, it will have much broader implications, beyond the internet or Section 230. “If the Court were to rule that the plaintiffs have a cause of action in Taamneh, we have a world of hurt,” Wittes said. “Everybody associated with any institution that has ever been said to engage with terrorist groups is suddenly liable for every act of terrorism that has ever happened.”

The unanswered questions

For all of the big, hairy questions Gonzalez and Taamneh raise, they completely sidestep the question of what sorts of speech restrictions state governments can place on online platforms. Both Texas and Florida have recently passed laws to limit platforms’ ability to moderate user posts based on different criteria. The Texas law seeks to prevent platforms from moderating content on the basis of “viewpoint,” while the Florida law would prohibit deplatforming politicians, deprioritizing posts by or about politicians, or removing content from any “journalistic enterprise.” 

NetChoice and the Computer & Communications Industry Association, two tech industry groups, have fought back, arguing that platforms have First Amendment rights to make editorial choices. There are petitions pending for the Supreme Court to decide the fate of both laws, but the Court decided not to take them up this term. (Instead, it asked the Biden administration to chime in on the cases first. President Joe Biden recently expressed interest in “fundamentally reform[ing]” Section 230.)

And just because these state laws aren’t considered in either Gonzalez or Taamneh doesn’t mean they won’t make things extra messy. At their most basic, they require platforms to leave certain content up—but Gonzalez and Taamneh could make platforms liable for the content they leave up or recommend. Those two ideas are in conflict, Keller argues.

“If the outcome of Gonzalez and Taamneh is that platforms face liability for leaving content up or leaving it up in recommendation features but [that] simultaneously, by leaving it up they are violating the must-carry obligations in Texas and Florida, what does that even mean?” Keller asked.

While it’s unclear where all the justices stand on each of these questions, three conservative justices, Thomas, Samuel Alito, and Neil Gorsuch, have already indicated their interest in taking a fresh look at Section 230. Last year, after the majority of justices blocked Texas’s social media law from going into effect, Alito, Thomas, and Gorsuch joined in dissent. In a line that signaled what was to come, Alito wrote that the case “concerns issues of great importance that will plainly merit this Court’s review.”

Fast Company