The famed Section 230 received a boost to its protective powers after a US appeals court ruled that platforms can be shielded from terrorism lawsuits.
Section 230 of the Communications Decency Act is probably one of the best known laws in the tech sector. Many argue that this is the law that allows online platforms to flourish. The law essentially says that if a third party publishes illegal content on a platform, that platform, acting in good faith, cannot be held liable for that third party’s actions.
So, for instance, if someone uses a cloud storage service to post a pirated movie, there are legal protections afforded to that cloud storage service. If the service operated in good faith and didn’t encourage that kind of activity, then an argument can be made that the cloud storage service isn’t liable for the spread of that pirated movie. There are other nuances as well: if the storage service becomes aware of the pirated movie on its servers after receiving notice, then it needs to act in a timely manner to remove that file. Obviously, this also more or less hinges on the service being based in the US, as this is US law we are talking about.
One caveat is that copyright is actually the one major area Section 230 itself does not cover; intellectual property claims are instead handled by the similar notice-and-takedown safe harbor of the DMCA. Still, that is the general gist of how these platform liability protections work in the copyright context. Section 230 applies to other kinds of content, such as terrorism-related material or other forms of illegal material.
Many argue that Section 230 is the reason why there are online platforms operating in the US in the first place. If YouTube were liable for every single video third party users upload onto its platform, we would likely not even be talking about how YouTube is a large platform today. The same can be said for Facebook, MySpace, or any other US-based platform. The liability would be too great and would only grow as a platform gains in popularity.
Of course, in recent years, Section 230 has come under attack quite a bit. There is the rise of “fake news”, doctored video footage, and conspiracy theories. That, of course, compounded the already great pressure on platforms to somehow magically “do something” about terrorism. Often, these demands are couched in the idea that various Internet platforms are innovative, so therefore, they should be able to somehow figure all of this out. All that without any real specifics on what they should be doing.
Some have taken to attempting to litigate these companies into a solution. In fact, we saw one such incident earlier this month when a law firm sued GitHub because data from the Capital One data breach happened to be posted on that platform.
Now, it seems that results are coming in from some of this litigation. The U.S. Court of Appeals for the Second Circuit issued a ruling in the Force v. Facebook case. The ruling says that Facebook is not liable for being the platform people used to coordinate a terrorist attack. The Electronic Frontier Foundation (EFF) welcomed the ruling:
The U.S. Court of Appeals for the Second Circuit last week became the first federal appellate court to rule that Section 230 bars civil terrorism claims against a social media company. The plaintiffs, who were victims of Hamas terrorist attacks in Israel, argued that Facebook should be liable for hosting content posted by Hamas members, which allegedly inspired the attackers who ultimately harmed the plaintiffs.
EFF filed an amicus brief in the case, Force v. Facebook, arguing that both Section 230 and the First Amendment prevent lawsuits under the Anti-Terrorism Act that seek to hold online platforms liable for content posted by their users—even if some of those users are pro-terrorism or terrorists themselves. We’ve been concerned that without definitive rulings that these types of cases cannot stand under existing law, they would continue to threaten the availability of open online forums and Internet users’ ability to access information.
The Second Circuit’s decision is in contrast to that of the Ninth Circuit in Fields v. Twitter and the Sixth Circuit in Crosby v. Twitter, where both courts held only that the plaintiffs in those cases—victims of an ISIS attack in Jordan and the Pulse nightclub shooting in Florida, respectively—could not show a sufficient causal link between the social media companies and the harm suffered by the plaintiffs. Thus, the Ninth and Sixth Circuit rulings are concerning because they tacitly suggest that better pleaded complaints against social media companies for hosting pro-terrorism content might survive judicial scrutiny in the future.
The facts underlying all of these cases are tragic and we have the utmost sympathy for the plight of the victims and their families. The law appropriately allows victims to seek compensation from the perpetrators of terrorism themselves. But holding online platforms liable for what terrorists and their supporters post online—and the violence they ultimately perpetrate—would have dire repercussions: if online platforms no longer have Section 230 immunity in this context, those forums and services will take aggressive action to screen their users, review and censor content, and potentially prohibit anonymous speech. The end result would be sanitized online platforms that would not permit discussion and research about terrorism, a prominent and vexing political and social issue. As we have chronicled, existing efforts by companies to filter extremist online speech have exacted collateral damage by silencing human rights defenders.
So, obviously, the battle to keep Section 230 alive in practice is far from over. Still, this is definitely a positive development for those who support free speech in the US. What will be interesting is whether this type of ruling holds up in later developments. That, of course, will take time.
Drew Wilson on Twitter: @icecube85 and Facebook.