Section 230 is a critical pillar of the Internet, but Gonzalez v. Google could neuter this cornerstone of online speech.
Section 230 of the Communications Decency Act is a critical legal component of the free speech we enjoy on the internet today. The text is short and easy to understand. The operative provision reads as follows:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Some have referred to this section as “the 26 words that created the Internet”, and with good reason. If you have ever used social media, commented on a news article, posted a drawing, uploaded a video, streamed content, held an audio or video conference over the internet, participated in an online chat, sent an e-mail, or engaged in a host of other internet activities, then, legally speaking, all of this and more was made possible thanks to Section 230.
Simply put, Section 230 is responsible for much of what users enjoy on the internet today. If you post something, you are responsible for the legal repercussions of what you post. An entity, be it an organization or an individual, can’t simply sue Reddit just because someone posted something illegal there. Without that shield, legal liability would effectively kneecap innovation across the United States, and any dynamic service would either be greatly curtailed, left unmoderated, or moved offshore as a result. Section 230 is the legal pillar that makes all of this possible.
Unfortunately, in the last few years, some have chosen to greatly warp and alter the meaning of Section 230. Some have taken to calling the law a gift to “Big Tech” and called for so-called “clarification” in a short-sighted and ill-advised attempt to “rein in” those companies. Others have called it an outdated piece of law that needs to be greatly reformed or axed altogether for the simple reason that it is old. All of the above is extremely short-sighted and, if such voices were permitted to succeed in their ambitions, decades of online innovation would be wiped out in the US, effectively taking us back to the Web 1.0 of the ’90s, where static HTML pages reigned supreme.
Terrifyingly enough, the conspiracy theorists who have warped and completely changed the meaning of Section 230, even though the law itself has remained unchanged, are now closer than ever to punching a massive hole in the free and open internet. It revolves around a legal case known as Gonzalez v. Google. Nohemi Gonzalez was tragically killed by ISIS during the 2015 Paris attacks. Her family members responded by suing Google because ISIS material appeared on YouTube, Google’s video sharing site. They blamed Google for the proliferation of ISIS content and argued that the company is responsible for Gonzalez’s death. Ordinarily, Section 230 would plainly apply here, yet the case has made it all the way to the US Supreme Court.
At the centre of the case is whether Section 230 really has any validity any more. If something bad happens, or inflammatory content is posted on a service, should Section 230 even apply? Such questions defeat the purpose of Section 230 entirely, because they effectively rewrite it to say that sites aren’t responsible for content posted by their users unless that content is bad. Little wonder, then, that digital rights advocates say such a change would mean the end of Section 230. If the case goes against Google, does Section 230 retain any practical legal meaning? Not really.
So, a considerable amount is certainly at stake. As a result, it’s no surprise that a number of amicus briefs defending the free and open internet have been filed. The Electronic Frontier Foundation (EFF) filed theirs and explained in a post afterwards just what is at stake:
If the plaintiffs’ arguments are accepted, and Section 230 is narrowed, the internet as we know it could change dramatically.
First, online platforms would engage in severe censorship. As of April 2022, there were more than 5 billion people online, including 4.7 billion using social media platforms. Last year, YouTube users uploaded 500 hours of video each minute. Requiring pre-publication human review is not feasible for platforms of even moderate size. Automated tools, meanwhile, often result in censorship of legal and valuable content created by journalists, human rights activists, and artists. Many smaller platforms, unable to even access these flawed automated tools, would shut down.
The Gonzalez case deals with accusations that Google recommended content that was related to terrorism. If websites and apps can face severe punishments for recommending such content, they’re very likely to limit all speech related to terrorism, including anti-terrorism counter-speech, and critical analysis by journalists and intelligence analysts. The automated tools used to flag content can’t tell whether the subject is being discussed, commented on, critiqued, or promoted. That censorship could also make it more difficult for people to access basic information about real-world events, including terrorist attacks.
Second, online intermediaries are likely to stop offering recommendations of new content. To avoid liability for recommendations that others later claim resulted in harm, services are likely to return to presenting content in blunt chronological order, a system that is far less helpful for navigating the vast seas of online information (notably, a newspaper or magazine would never use such a system).
Third, the plaintiffs want to create a legal distinction under Section 230 related to URLs (the internet’s Uniform Resource Locators, the addresses that begin with “http://”). They argue that Section 230 protects the service from liability for hosting the user-generated content, but it should not protect the service for providing a URL so that others can access the content. The Supreme Court should reject the idea that URLs can be exempted from Section 230 protection. The argument is wrong as both a legal and technical matter. Users direct the creation of URLs when they upload content to a service. Further, Section 230 does not contain any language that indicates Congress wanted to create such a hair-splitting distinction. To rule as the plaintiffs argue would cripple online services up and down the internet “stack,” not just social media companies. The primary means by which everyone accesses content online—the URL—would become a legal liability if the link led to objectionable content.
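To make the technical half of that URL argument concrete, here is a minimal Python sketch (all names, including video.example and handle_upload, are hypothetical, not any real service’s API) of how a URL is typically minted as a mechanical byproduct of a user’s upload:

```python
import uuid

# Hypothetical sketch: the service stores what the user uploaded and
# mints an address for it. It hosts the content; it does not author it.
def handle_upload(user_id: str, content: bytes, store: dict) -> str:
    video_id = uuid.uuid4().hex[:11]      # opaque, YouTube-style ID
    store[video_id] = (user_id, content)  # third-party content, as uploaded
    # The URL exists only because the user chose to upload;
    # it is simply how others are able to reach that content.
    return f"https://video.example/watch?v={video_id}"

store: dict = {}
print(handle_upload("user42", b"<video bytes>", store))
```

Under that entirely ordinary design, treating the URL as legally separate from the hosted content means treating the address of a thing as distinct from the thing itself.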
Public Knowledge also filed an amicus brief and had this to say:
“The question before the Court is not whether YouTube did the right thing by publishing terrorist videos. It did not, and this case—as well as many others—shows that policy responses are needed to address the spread of harmful, extremist, and hateful content online.
“But any such responses (and there is no one silver bullet) must come from Congress, not the courts. Section 230 of the Communications Act says that platforms like YouTube cannot be held liable for publishing user-uploaded material. As our brief explains, content recommendations that present videos to users meet even the narrow, common law conception of ‘publishing,’ and are squarely shielded by Section 230. Creative lawyering that describes the same set of facts in different terms does not get around this.
“Section 230 is a pro-competition, pro-free expression statute. A judicial rewrite of the statute to exclude some kinds of publishing, or some kinds of platforms, would undermine those goals. Any loophole created by the Court would be cited and expanded on by any one of hundreds of district court judges, and platforms would have to adjust: either by curtailing what (and if) users can post, or turning their platforms into unmoderated free-for-alls where the worst users drown out everyone else. The largest platforms like YouTube, however, can probably invest enough to deal with changing legal exposure. Smaller platforms would not be able to.”
Reason also filed a brief and had this to say about it:
Section 230’s text should decide this case. Section 230(c)(1) immunizes the user or provider of an “interactive computer service” from being “treated as the publisher or speaker” of information “provided by another information content provider.” And, as Section 230(f)’s definitions make clear, Congress understood the term “interactive computer service” to include services that “filter,” “screen,” “pick, choose, analyze,” “display, search, subset, organize,” or “reorganize” third-party content. Automated recommendations perform exactly those functions, and are therefore within the express scope of Section 230’s text.
Section 230(c)(1)’s use of the phrase “treated as the publisher or speaker” further confirms that Congress immunized distributors of third-party information from liability. At common law, a distributor of third-party information could be held liable only when the doctrine permitted the distributor to be treated as the publisher. As Petitioners and the United States agree, Congress understood and incorporated that common-law meaning of “treated as the publisher” into Section 230(c)(1). Given that a distributor cannot be “treated as the publisher” of certain third-party information, however, there is no alternative mechanism for holding the distributor liable based on the improper character of the information. Indeed, Congress enacted Section 230(c)(1) specifically to avoid the sweeping consequences that the common-law regime of knowledge-based distributor liability would inflict on the developing internet.
Section 230(c)(1)’s surrounding and subsequent statutory context bolsters this conclusion. Section 230(c)(1) provides the same protection to “user[s]” as to “provider[s]” of interactive computer services. Petitioners do not defend the position that users who like, retweet, or otherwise amplify third-party content should be held liable for the character of that content, but Section 230(c)(1)’s text renders that an inescapable consequence of their argument. The better inference is that Congress chose to protect a wide range of speech and speech-promoting conduct for providers and users of interactive computer services alike. In addition, other statutory enactments illustrate that Congress knew how to impose liability on distributors when it wanted to—such as in the Digital Millennium Copyright Act, for example, where Congress also wrote a detailed notice-and-takedown framework into the statute to ensure that distributors received adequate procedural protections as well.
Petitioners’ and the United States’ attempts to distinguish between mere automated recommendations (for which distributors purportedly could be liable) and the recommended content (for which they could not) find no support in the text. To the contrary, the text makes clear that even a bare automated recommendation constitutes “pick[ing]” or “choos[ing]” content, an activity expressly contemplated by Section 230. Moreover, to hold a distributor liable based in part upon the improper content of information created by a third party would conflict with the common-law meaning of the terms Congress chose.
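For what it’s worth, that textual argument maps cleanly onto how even the simplest recommender works in practice. Here is a minimal, hypothetical Python sketch (a real ranking signal would be far more complex, but the structure is the same): the service sorts and selects among third-party posts, and creates none of the content it surfaces:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str       # always a third party, never the service itself
    text: str
    engagement: int   # stand-in for whatever signal a ranker uses

def recommend(posts: list[Post], k: int = 3) -> list[Post]:
    # The service "pick[s]" and "choos[es]" among third-party items by
    # sorting on a signal; the items themselves pass through unmodified.
    return sorted(posts, key=lambda p: p.engagement, reverse=True)[:k]

feed = recommend([
    Post("alice", "clip A", 120),
    Post("bob", "clip B", 45),
    Post("carol", "clip C", 300),
])
print([p.text for p in feed])  # ['clip C', 'clip A', 'clip B']
```

Nothing in that pipeline creates information; it only organizes information “provided by another information content provider.”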
Cathy Gellis of Techdirt commented on the Copia Institute’s brief in this case:
Every amicus brief the Copia Institute has filed has been important. But the brief filed today is one where all the marbles are at stake. Up before the Supreme Court is Gonzalez v. Google, a case that puts Section 230 squarely in the sights of the Court, including its justices who have previously expressed serious misunderstandings about the operation and merit of the law.
As we wrote in this brief, the Internet depends on Section 230 remaining the intentionally broad law it was drafted to be, applying to all sorts of platforms and services that make the Internet work. On this brief the Copia Institute was joined by Engine Advocacy, speaking on behalf of the startup community, which depends on Section 230 to build companies able to provide online services, and Chris Riley, an individual person running a Mastodon server who most definitely needs Section 230 to make it possible for him to provide that Twitter alternative to other people. There seems to be this pervasive misconception that the Internet begins and ends with the platforms and services provided by “big tech” companies like Google. In reality, the provision of platform services is a profoundly human endeavor that needs protecting in order to be sustained, and we wrote this brief to highlight how personal Section 230’s protection really is.
Because ultimately without Section 230 every provider would be in jeopardy every time they helped facilitate online speech and every time they moderated it, even though both activities are what the Internet-using public needs platforms and services to do, even though they are what Congress intended to encourage platforms and services to do, and even though the First Amendment gives them the right to do them. Section 230 is what makes it possible at a practical level for them to do them by taking away the risk of liability arising from how they do.
This case risks curtailing that critical statutory protection by inventing the notion pressed by the plaintiffs that if a platform uses an algorithmic tool to serve curated content, it somehow amounts to having created that content, which would put the activity beyond the protection of Section 230 as it only applies to when platforms intermediate content created by others and not content created by themselves. But this argument reflects a dubious read of the statute, and one that would largely obviate Section 230’s protection altogether by allowing liability to accrue as a result of some quality in the content created by another, which is exactly what Section 230 is designed to forestall. As we explained to the Court in detail, the idea that algorithmic serving of third party content could somehow void a platform’s Section 230 protection is an argument that had been cogently rejected by the Second Circuit and should similarly be rejected here.
All of this is grounded in what Section 230 is and what it does and doesn’t do. Unfortunately, it seems that conspiracy theorists are trying to discredit all of the above by dismissing everyone who supports sound legal reasoning as nothing more than “Big Tech” shills:
Nearly 40 nonprofits, legal organizations, and trade associations with financial and personnel ties to Google have formally submitted amicus briefs before the Court in Gonzalez v. Google, accounting for a third of the briefs submitted for the case.
Other suits could follow, potentially drying up funds for Google’s affiliated nonprofits. Mike Davis, president and founder of the Internet Accountability Project, told the Washington Free Beacon it’s “not surprising” to see that the same groups submitting amicus briefs are “on Google’s payroll.”
“These Big Tech shills are bought and paid for and should be in no way considered independent,” he said. “The key to Big Tech’s strategy to fend off legislation, regulation, and damaging court rulings is their willingness to reach into their deep pockets and buy off critics.”
So, they are really going all-in on attacking the people rather than the legal arguments. I’ve lost track of how many times that tactic has been the hallmark of a weak argument, and this is definitely no exception.
While such a case might be more of a slam dunk in normal times, the thing to keep in mind is that this is a Republican-controlled US Supreme Court we are talking about. The court is now no stranger to making rulings based on personal beliefs rather than well-established case law. As a result, the US has been living in very uncertain times for the better part of a year or more at this point. The highest court in the land now treats the law and the US Constitution as mere suggestions rather than the be-all and end-all. So, there’s little wonder why so many who care about free speech are nervous about this case. Reports indicate that oral arguments begin on February 21st.
Drew Wilson on Twitter: @icecube85 and Facebook.