The courts are opening the floodgates to the worst of social media
It will soon be illegal in the state of Texas for YouTube to ban videos from white supremacists or ISIS. The same will be true for Twitter, Facebook, TikTok, and Instagram.
In fact, with a few limited exceptions, any content moderation these sites do—including “de-boosting” or “de-monetizing” videos or posts—will open them up to being sued in Texas. In effect, they’re now legally compelled in that state to not curate their sites.
That may sound hyperbolic. It’s not. Last week, a three-judge panel for the 5th U.S. Circuit Court of Appeals ruled 2 to 1 that HB20—a Texas law passed last summer that bans large social media platforms from doing anything to discriminate against the vast majority of content posted to their sites—is constitutional.
The social media companies that sued to block HB20 contend that the First Amendment gives them the right to decide what content they want to host and what content they prefer to ban, and that the government cannot compel them to publish or leave up content they find offensive or objectionable.
But Judge Andy Oldham, who wrote the opinion for the majority, rejected this idea. He held that social media companies are “common carriers,” closer to telephone companies or railroads than publishers, and that they have no First Amendment right to decide what they will or won’t publish, or even what they will or won’t boost. If you have lots of users and rely on user-generated content, Oldham held, you pretty much have to let your site be an unmoderated free-for-all.
This was a very odd decision (which will likely lead to a showdown at the Supreme Court). First, Supreme Court precedent has held that the First Amendment doesn’t just prohibit the government from banning speech, but also prohibits the government from compelling anyone to speak or to publish.
Yet that’s exactly what HB20 does: It explicitly requires social media companies to publish content they would otherwise ban. (The bill emerged out of conservatives’ anger over beliefs that social media companies were deleting or de-boosting right-wing content, including claims of election fraud.)
Oldham gets around this problem by arguing that social media platforms are not analogous to publishers because they “exercise virtually no editorial control or judgment.”
It’s a bizarre claim, given how much time and money sites like Facebook and YouTube spend making curatorial decisions about what content to promote, what content to de-boost, what content to monetize, what content to ban, and so on. Users may generate the content, but the platforms constantly exercise editorial discretion over what happens to that content, with an eye to ensuring their sites don’t alienate their users or descend into chaos. HB20 effectively prevents them from doing that.
On top of the First Amendment, platforms’ right to curate content is also protected by the most famous legal provision in the history of the internet, namely Section 230 of the Communications Decency Act of 1996. Section 230 is best known for exempting websites from legal liability for content their users post. But it also explicitly allows websites to remove any content they consider “objectionable.”
In other words, it says websites can host problematic content their users post without having to worry about getting sued, and that sites can take down, or refuse to post, content if they so choose.
The 5th Circuit opinion quickly dismisses the claim that the law gives platforms the right to moderate content by saying that the “objectionable” content websites are allowed to ban doesn’t include “political” content. But it provides no evidence for that argument. And Senator Ron Wyden of Oregon, who coauthored Section 230, disagrees. As he put it in 2019, “Section 230 is not about neutrality. Period. 230 is all about letting private companies make their own decisions to leave up some content and take other content down.”
Precisely because it gives websites a lot of leeway, Section 230 has become a favorite target of criticism in recent years from both conservatives and liberals, albeit for completely opposite reasons. Conservatives don’t like it because they think it lets websites get away with censoring right-wingers, while liberals think it allows these sites to take a hands-off approach to controlling misinformation, hate speech, incitement to violence, and the like.
Donald Trump tried to repeal it when he was in office, and just last week President Joe Biden called on Congress to get rid of “special immunity for tech companies.” But at the moment, Section 230 is still the law. And Friday’s decision shows why, for all the criticism it’s faced, it remains essential to a robust internet.
After all, without legal immunity for things users post, social media companies (and websites of all stripes) would be far more cautious and risk-averse, which would make them more aggressive about censoring anything that might expose them to a lawsuit. And without the legal right to moderate content, social media platforms would be deluged with even more hate speech and misinformation than they already are.
Section 230 is an imperfect provision. But repealing it would wreck the internet as we know it.
That doesn’t mean there’s nothing to be done to deal with the problems social media has created. The fundamental issue when it comes to the toxicity of social media is the way these sites’ algorithms can end up promoting and amplifying things like hate speech and misinformation, and sending people down rabbit holes they would otherwise never have gotten lost in.
So getting social media companies to do more to control and limit the reach and power of those algorithms is key. Both conservatives and liberals want platforms to disclose far more information to consumers about how their algorithms work and how they shape the user experience, and to provide in-depth information to regulators about the impact of algorithms in amplifying content. That would be a place to start.
Whatever we do, though, it’s essential to strike the balance that’s at the heart of Section 230: allowing websites to be places where users can share their opinions and other content while also recognizing the sites’ right to moderate that content to keep them from becoming toxic cesspools.
What makes HB20 such a disastrous law is that in the pursuit of the first goal, it completely ignores the second. Under the guise of attacking censorship, it tramples on platforms’ First Amendment rights, and totally ignores the protections of Section 230.
In upholding the law, the 5th Circuit got it wrong. Now we’ll have to wait and see if the Supreme Court gets it right.