Publication | Legaltech News
Nervous System: How Trolls Built Big Tech
A law designed to encourage content moderation now gives tech companies broad legal protections for questionable content.
A heated and consequential debate revolves around the ethical and legal responsibilities governing social media platforms like Twitter, Facebook, and YouTube. These and other tech giants have risen to enormous power and influence while serving as broadcast media for a range of controversial and disputed content. Counterintuitively, the broad legal protections that tech companies enjoy derive from a law that was intended to encourage them to exercise significant content moderation. In many ways, the present moment is the exact opposite of what the law set out to achieve.
Traditionally, defamation law recognized a distinction between publishers and distributors. Because publishers exercised editorial control over the material they released, they could be held liable for defamatory or otherwise actionable content, whereas mere distributors, unable to police everything they sold, generally were shielded from liability unless they knew or had reason to know of the offending content. On the internet, however, this distinction led to bizarre and unwelcome results.
In 1994, an anonymous person using a stolen identity posted several messages to Prodigy’s “Money Talk” online bulletin board claiming that the Long Island securities investment banking firm Stratton Oakmont was a “cult of brokers who either lie for a living or get fired.” Although this comment is mild compared to the things said about the “wolf of Wall Street” today, Stratton Oakmont sued Prodigy for defamation.
Prodigy had no way to pursue claims of its own against the poster responsible for the offending message, since their identity was unknown. More problematically for the tech company, it had taken steps to distinguish its status in the marketplace as the “family-friendly” internet provider. Because Prodigy had systems in place to attempt to filter out pornography or offensive language from the material it offered, the New York Supreme Court ruled that it was enough like a publisher to be liable for the defamatory content posted by a third party. The immediate effect of this precedent was to terrify internet-based companies into abandoning any kind of content moderation, for fear of being sued for anything they failed to catch.
This ruling alarmed Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR). The lawmakers wanted to encourage technology companies to perform self-censorship, to nurture a better-curated internet without resorting to government regulation or triggering First Amendment concerns. This goal was untenable if companies believed their best protection from lawsuits was to avoid any hint of editorial influence. Cox and Wyden’s law, eventually codified at 47 U.S.C. § 230, established that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The first test of this new law arose from another instance of cruel online trolling, this one in 1995.
One morning, a Seattle artist named Ken Zeran found himself the bewildered target of an onslaught of furious telephone calls denouncing him and accusing him of the most appalling insensitivity. Zeran pieced together that someone using the alias “Ken ZZ03” had been publishing ads on America Online, listing Zeran’s name and phone number as the place to order t-shirts with slogans like “Visit Oklahoma . . . . It’s a BLAST” and “Finally, a day care center that keeps the kids quiet – Oklahoma City 1995.” These ads appeared just six days after domestic terrorists killed 168 people by bombing the Murrah Federal Building in Oklahoma City.
Zeran never discovered the identity of Ken ZZ03, nor the motive for the attack. Zeran did, however, hope to hold someone accountable for the situation, and he filed a federal lawsuit against AOL for defamation.
As it happened, in the time between Ken ZZ03’s cruel prank and the filing of Zeran’s complaint, Section 230 had come into effect. The court found that the new law effectively shielded AOL from any liability, the Fourth Circuit affirmed on appeal, and the Supreme Court declined to hear the case.
In the wake of that ruling, the dominant forces of today’s internet—YouTube, Facebook, Twitter, Amazon, and so on—have relied on Section 230 to avoid liability for the content created by their users.
Section 230 is why the internet is such a robust and fertile place. It provides the legal infrastructure that lets readers post comments on news articles, upload videos to YouTube, write reviews for Amazon and Yelp, rate Uber drivers and AirBnB hosts, become bloggers or influencers, and otherwise post their ideas freely with minimal censorship or legal repercussions. An entire economy has grown up around online businesses that thrive on public creation of content, public commentary and engagement, and public reviews and feedback for service providers. The boons of the law are abundant.
At the same time, Section 230 is also why the internet is overrun by hate speech, conspiracy theories, and fake news. Those who feel victimized by trolling, misinformation, and even the posting of nonconsensual sexual imagery (so-called “revenge porn”) have found their lawsuits dismissed on the grounds of Section 230 immunity. Although its authors may have intended otherwise, the law prioritized and protected the Ken ZZ03s of the world over the Ken Zerans they trolled.
In the last several years, politicians, commentators, and legal minds on all sides have conducted an increasingly heated debate over the value of Section 230. Many internet companies are assailed on one side by accusations that they are too aggressive in moderating content by conservative voices, and on the other by accusations that they do too little to stop the spread of extremism. Policymakers are increasingly losing patience with this law, but it is an integral part of why the modern internet economy exists the way that it does. It is unclear what changes could be made to ameliorate the problems caused by the law without unraveling its benefits.
The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions, position, or policy of Berkeley Research Group, LLC or its other employees and affiliates.