The following op-ed was recently featured in the Washington Examiner:
Section 230 is back in the spotlight. After two months of Congress slinging regulation after regulation at the World Wide Web, Twitter and Facebook are once again feeling the heat. In a widely criticized move, the two tech giants restricted users from sharing an explosive and salacious New York Post story about Hunter Biden.
The ensuing maelstrom forced Federal Communications Commission Chairman Ajit Pai’s hand, prompting him to announce a proposed rule-making intended to clarify the meaning of Section 230. Meanwhile, Republican Sen. Josh Hawley and plenty of other politicians on both sides of the aisle see this as the opportunity to finally overhaul Section 230.
All of this could end very badly for the rest of us who value an internet that’s actually useful. For a law so deeply misunderstood by politicians, Section 230’s premise is quite simple: Websites are not liable for third-party content. The idea isn’t new. Offline laws typically impose liability on the party responsible for the harm. Newspaper publishers, for example, are liable for the articles they choose to disseminate to the world, and telephone companies aren’t liable for the conversations their customers have over the phone.
But Republicans see Section 230 as merely an excuse for tech companies to actively censor conservative content. And Democrats think Section 230 gives tech companies a “free pass” to leave pro-Trump content untouched. Of course, Section 230 fits neither description.
The internet, a relatively new medium of worldwide communication, can’t be regulated effectively with the same models as a newspaper or a telephone. It’s too complicated for that. On the one hand, websites might avoid liability entirely by taking a totally hands-off approach to moderating the conversations on their services, just like the telephone companies. On the other, websites might choose to retain some control over their users’ content in order to keep their online environments clean and their platforms worth using.
Of course, without Section 230, such efforts could invite plenty of seven-figure lawsuits: pre-230 court decisions treated a website that moderated its users’ posts as the “publisher” of all of them, exposing it to liability even for content it never reviewed. And that is exactly why Congress enacted Section 230: to resolve this so-called “moderator’s dilemma” and give websites the breathing room necessary to facilitate massive amounts of user-created content.
At the same time, Section 230 empowers websites to experiment with ways to provide safe, healthy, and socially productive spaces for diverse online communities to thrive. Indeed, instead of awkwardly fitting the internet to an offline model, Section 230 created a brand new online model, perfectly tailored for an ever-changing online world.
Today, websites regularly rely on Section 230 to improve their services. For example, instead of banning or removing content, Nextdoor created dedicated spaces for its users to discuss highly sensitive topics such as local and national politics. To curb the spread of dangerous election disinformation, Facebook and YouTube recently cracked down on accounts and groups tied to the notorious conspiracy cult QAnon.
In August, Twitter announced a new labeling protocol for government and state-affiliated media accounts. Earlier this year, YouTube announced its COVID-19 Medical Misinformation Policy, removing or demonetizing videos that spread false medical claims about the virus.
Meanwhile, Congress is working around the clock to undermine these efforts. Some believe that websites go too far with their moderation, “censoring” political speech in favor of dominant viewpoints. Contrary to this popular belief, Section 230 has nothing to do with Twitter’s right to pick and choose the content that appears on its service. The right to discriminate against political viewpoints has always been protected by the First Amendment. Maybe the grumblers should raise a stink with the founders instead.
Amending or repealing Section 230 would come at a great cost to consumers. For starters, websites would be less inclined to host user-generated content, effectively shutting down the very marketplace of ideas the politicians claim they’re trying to save. In turn, the internet might look a lot like Netflix: paid-for, curated, and prepackaged content completely controlled by the major internet providers. It would mean a return to the AOL, Prodigy, and CompuServe walled gardens of the ’90s.
Each proposal to amend Section 230 signals Congress’s intent to foreclose the internet’s future. For tech giants, the legal risks of hosting user-generated content will soon outweigh the value of preserving any modicum of free expression. That’s very bad news for the rest of us.
Congress is just mad at Big Tech, and it has been for years. That doesn’t justify nuking the whole system to make a political point. Considering the alarming fact that members of Congress are, on average, 20 years older than the average American, perhaps they shouldn’t be making mistakes whose consequences future generations will feel long after they are gone.
Jess Miers is a legal policy specialist at Google. All opinions shared are her own and do not represent her previous or current employers. Follow her on Twitter @jess_miers.
James Czerniawski is the tech and innovation policy analyst at the Libertas Institute, a free-market think tank in Utah. His work has been featured in the National Interest, RealClear Policy, the Salt Lake Tribune, and others. Follow him on Twitter @JamesCz19.