Courts are slowly chipping away at a law the internet was built on

By Issie Lapowsky

January 12, 2024

Carrie Goldberg has been waiting a while for this moment. A New York-based victims’ rights attorney, Goldberg has spent years taking tech firms to court over a range of alleged abuses of their platforms—from stalking and harassment to revenge porn and other online privacy violations. It’s been a tough gig: For nearly three decades, the law known as Section 230 has shielded online platforms from lawsuits that try to hold them liable for the content that their users post. That’s made it relatively simple—too simple, Goldberg would argue—for the companies she takes on to get their cases dismissed quickly, no matter how horrifying the underlying accusations may be.

“We don’t even get to pass the starting point,” Goldberg says. But lately, that’s begun to change. 

Just this month, Snap lost its motion to dismiss a case in which Goldberg is representing families who say their children died after overdosing on pills laced with fentanyl that they purchased via Snapchat. In October, a California state court rejected an attempt by Snap, Meta, Google, and TikTok to dismiss another slew of cases that charge the companies with negligence and addicting kids to their platforms in a way that causes harm. A month later, a federal judge allowed a similar multidistrict litigation in federal court to proceed. And, after failing to get a sex trafficking case also filed by Goldberg dismissed in Oregon, the online chat service Omegle shut down completely in November.

What separates these cases from most others in the past is that the plaintiffs are all trying to push forward a novel legal workaround to Section 230. Rather than faulting these platforms for other people’s posts—the kind of claims Section 230 protects them from—these cases accuse the companies of essentially building faulty products—an area of law Section 230 doesn’t cover. According to an analysis by Bloomberg Law last year, the number of these product liability claims against major social media companies has spiked recently: just five such lawsuits were filed from 2016 through 2021, compared with 181 filed from January 2022 through February 2023.

Until recently, though, it was anyone’s guess whether courts would actually buy this new argument and allow these cases to proceed. Now, the spate of rulings over just the last few months suggests the strategy may, in fact, work.

These rulings have emerged far from the halls of Washington, where U.S. political leaders from the White House on down have threatened for years to limit the reach of Section 230. The law’s critics argue that it has been interpreted too broadly by the courts, insulating massive companies from being held responsible for even the gravest harms carried out via their platforms. But these threats have mostly been empty ones. Even the Supreme Court, which took up what was poised to be a monumental Section 230 case last term, ultimately punted on the issue.

Instead, it’s these early rulings on Section 230 winding through the lower courts that are steadily whittling away at the tech industry’s favorite legal shield. It’s a trend that Section 230 critics like Goldberg view as a breakthrough—and Section 230 champions fear could weaken beyond repair a law that has been foundational to the internet as we know it.

“These types of cases make me wonder what services could be next,” says Jess Miers, senior counsel at the tech trade association Chamber of Progress, who, it should be noted, has “Section 230” tattooed on her body. “Anything that’s awful on the internet, you can trace back to: ‘Well, why didn’t they design their platform in a way that would have prevented that?’”

Using product liability claims to circumvent Section 230 hasn’t always been a winning strategy. Just seven years ago, in a now-infamous case called Herrick v. Grindr, Goldberg represented a man named Matthew Herrick whose ex-boyfriend impersonated him on the gay dating app and sent more than 1,400 men seeking sex to Herrick’s home and his job in less than a year. Herrick sued Grindr, alleging negligence and defective product design, but the case was dismissed under Section 230, a decision that was upheld on appeal. “Even some of my closest allies in the victims’ rights movement just thought I was really barking up the wrong tree by advancing this product liability thing,” Goldberg says.

But that was 2017. Since then, the so-called “techlash” has grown, public opinion on major tech firms has soured, and Section 230 has emerged as a political punching bag for Democrats and Republicans alike. Meanwhile, the range of product liability cases across the legal system has continued to grow. “All these things happened in society that I think changed the public perception of these companies,” Goldberg says. “That influences court decisions.”

Then, in 2021, came a major development in a case called Lemmon v. Snap, which was brought by the parents of two young men who died after speeding into a tree while using a Snapchat filter that recorded how fast they were going—113 mph at the time of the crash. The case was initially dismissed by a district court under Section 230, but the Ninth Circuit Court of Appeals reversed the ruling, finding that what was at issue was Snapchat’s own feature—the speed filter—not content provided by its users. “The duty to design a reasonably safe product is fully independent of Snap’s role in monitoring or publishing third-party content,” the three-judge panel wrote in their opinion.

The Lemmon case eventually settled before going to trial, but the appeals court’s decision regarding Section 230 “opened up the floodgates” to more product liability claims. “Once you have a successful pleading around Section 230, plaintiffs will just run with that,” says Miers.

Since that time, the number and scope of these claims have expanded. While the speed filter case against Snap concerned one discrete feature on the app, Goldberg’s case against Snap regarding fentanyl overdoses deals with what she calls “very core functions of Snap” that may have made it more attractive to drug dealers, including the fact that messages on Snapchat disappear. Goldberg argues—and the Los Angeles Superior Court agreed—that because the complaint focuses on the design features of Snap, rather than any individual messages exchanged between users, Section 230 shouldn’t prevent the case from proceeding.

In a statement to Fast Company, Snap spokesperson Ashley Adams said, “We are working diligently to stop drug dealers from abusing our platform, and deploy technologies to proactively identify and shut down dealers, support law enforcement efforts to help bring dealers to justice, and educate our community and the general public about the dangers of fentanyl.” Adams called the plaintiffs’ allegations “legally and factually flawed,” and said the company would “continue to defend that position in court.” Snap has filed a motion to sanction the plaintiffs’ attorneys, including Goldberg—an attempt to formally punish them for alleged misconduct. That motion will be heard later this month.

The social media addiction suits—of which there are hundreds of separate claims that have been merged together at both the state and federal level in California—similarly take issue with the basic function of social media platforms, including Facebook, Instagram, YouTube, Snapchat, and TikTok. The plaintiffs argue that the very design of these platforms is meant to foster addiction in young people and causes more harm than if those platforms were designed differently. 

The judges’ rulings in both the state and federal cases did limit the plaintiffs’ claims in key ways. The state judge, for instance, rejected the idea that these platforms can legally be classified as tangible products, tossing out the plaintiffs’ product liability claims, but allowing other claims of negligence to stand. The judge in the federal case, meanwhile, dismissed claims that took issue with, among other things, the way platforms’ algorithms are designed, but allowed claims regarding, for example, platforms’ alleged lack of robust age verification and parental controls to go forward. 

“The takeaway from the recent rulings is that Big Tech can no longer stretch Section 230 to provide itself complete immunity for the serious harm it causes to its young users, particularly harms to kids from its intentional design choices,” says Lexi Hazam, a partner at the law firm Lieff Cabraser Heimann & Bernstein and co-lead counsel for the plaintiffs in the case.

The tech companies involved in both cases are in the process of trying to get them reheard on appeal. They have argued that the perceived harms the plaintiffs have raised are the result not of the companies’ design choices, but of the content that users communicate. Fast Company reached out to all of the companies involved in the social media addiction cases. In a statement, Google spokesperson José Castañeda called the allegations in the cases “simply not true” and said that the company has worked with youth, mental health, and parenting experts to build “services and policies to provide young people with age-appropriate experiences, and parents with robust controls.” Meta and TikTok declined to comment.

None of these rulings answer the underlying question of whether these companies are actually liable for the harms alleged; they address only whether Section 230 should prevent the cases from going forward at all, and it’s unclear which, if any, will make it to a jury. The Supreme Court is also slated to hear two Section 230 cases this term, which may well shape how courts consider these claims going forward.

Even so, Miers of Chamber of Progress believes these recent rulings are likely to be influential and could, on their own, do damage to companies that rely on Section 230. After all, one of the core benefits of the law is that it keeps companies large and small from being drawn into lengthy and costly legal battles. “Without that guarantee, it really puts a lot of risk back onto the startup,” she says.

Miers also warns that the focus on design defects could ensnare encrypted platforms that make it possible for people to communicate privately. “Is it dangerous design to have an encrypted app where the service can’t see the chat?” she says. “Is a design a dangerous design if it doesn’t monitor everybody’s instant messages or private messages?” 

Goldberg, for one, isn’t willing to entertain the hypothetical. But she says she believes any platform whose design “caused a life-changing injury to somebody should be scrutinized.” These rulings undoubtedly open the door to more of that kind of scrutiny.
