A trend among some outlets is to effectively sell the case that Section 230 should be dismantled. There still isn’t a case.
Over the last few years, we’ve been seeing a constant stream of articles from larger outlets trying to sell the idea that dismantling Section 230 isn’t just going to happen, but is some kind of moral imperative. It’s led lawmakers to walk in lockstep, introducing bills that effectively repeal the law. This includes repealing it if a site contains misinformation or if anyone makes money at all. Of course, from whatever objective angle you look at this issue, gutting Section 230 is a terrible idea. In fact, the Internet Archive highlights this nicely with the Wayforward Machine.
Still, no matter how many times we, and others, point out why repealing Section 230 is a terrible idea, there is a continual drumbeat of misleading or false information pushed by large outlets suggesting that repealing Section 230 isn’t just something that should happen, but something that is inevitable. To us, that’s much like arguing that driving off a cliff is inevitable because we are all going to die sooner or later and the car is going to break down eventually anyway. It’s incredibly stupid, but this is the kind of thinking we are facing today.
Recently, an article on Yahoo! News suggested that dismantling Section 230 piece by piece is not just the way to go, but what tech critics have always wanted. You really can’t facepalm hard enough at that line. From Yahoo! News:
Section 230 of the Communications Decency Act helped create the modern Internet by allowing companies like Facebook (FB), Twitter (TWTR), and Google (GOOG) to operate without being held liable for content posted on their platform by third parties.
But what if that changed? Not all at once as some like President Joe Biden and former President Donald Trump have suggested – but one brick at a time.
Going at the law piece by piece is what many tech critics want, and some think health misinformation is the way to start.
Reading this, you’d think that there was some magical consensus that something really needs to be done to repeal Section 230. In fact, almost everything you see above is misleading. The only thing that holds up is literally the first 12 words.
First of all, Section 230 helped create the modern Internet by offering any website a certain degree of legal immunity so that it can survive. This is not exclusive to Facebook, Twitter, and Google. All Section 230 does is not hold sites liable for what users post to their platforms. So, if someone publishes a comment that is considered illegal, the website owner isn’t automatically treated as the publisher and automatically held liable for the actions of the user. If you host a website that allows comments or user generated content, this kind of legal immunity is essential. Further, it puts the liability squarely where it belongs: on the person who posted that comment in the first place.
The article then basically asks what would happen if Section 230 were dismantled little by little. The simple answer is that the whole Internet would get dismantled in large chunks over what some might consider small changes to the law. If a site is liable for user comments that post health misinformation, then comments on US-based sites would pretty much disappear overnight. Why would that be? Well, no platform would want that liability.
First of all, anyone posting information about health at all would be a massive red flag for any site. So, the starting point would be to ban all health advice or comments outright because the site doesn’t want the liability. Of course, that is problematic. How are you going to monitor for that at all? So, that leads to the next step: banning all comments.
Of course, it’s not just website owners who have plenty to fear: it’s also hosting providers and domain name registrars. What if they feel potentially implicated in all of this? Some might simply step out of the hosting business altogether because they can’t monitor (insert number of domains or sites hosted here) 24/7. Even if some survive, anything health related would immediately be red-flagged and taken down.
The bottom line is that the damage would be unthinkable. Huge swaths of the Internet would be lost whether or not they are related to health misinformation. So, when you ask, ‘what if we dismantled Section 230 brick by brick?’, you might as well be asking, ‘what if we dismantled the entire Internet huge chunk by huge chunk?’
It also makes you wonder who this journalist has been listening to in all of this. From my perspective, dismantling Section 230 is most assuredly not “what many tech critics want”. In fact, most of those I know of would be more than happy to write long essays on why dismantling Section 230 is a really bad idea. And health misinformation is a terrible place to start, as I detailed above.
Of course, misleading information on Section 230 isn’t just exclusive to one site. Here’s another piece from Bloomberg:
Social media user terms could soften the potential impact of the latest push to curb the tech liability shield known as Section 230 of the Communications Decency Act. A proposed bill would allow allegations of injuries caused by “malicious algorithms” to pierce Section 230. But common protections in user terms may bolster platforms’ legal defenses.
The proposed Justice Against Malicious Algorithms Act is one of several recent efforts to reform Section 230, which generally bars claims arising from third-party online content. If it passes, as unlikely as that appears to be, social media platforms would largely be prohibited from raising Section 230 to defend against claims that a personalized recommendation “materially contributed” to a physical or severe emotional injury. In other words, if a platform’s algorithm relies on a user’s personal data to promote particular third-party content, that user could sue the platform for resulting injuries in spite of Section 230.
We recently looked at how contract claims that sneak past Section 230’s protections could be mitigated by user terms. We also examined how user terms are often a major obstacle to copyright infringement claims. But could user terms also help protect platforms from the tort-based claims of “physical or severe emotional” damage envisioned by this proposal? Based on an analysis of standard provisions, the short answer is yes—especially in cases where the link to a physical injury is highly speculative.
If you know your way around tech, your reaction should be somewhere in the ballpark of, “Wow, where do you even begin with something that bad?”
Generally speaking, there are limits to contract law. When a user uses a website, they often unknowingly enter into a contract with that website. Most of the time, the contract amounts largely to, “don’t be a douche and don’t do illegal stuff, OK?”
Still, contract law doesn’t overrule the law itself. If you sign a contract saying you are going to murder someone and that you bear no legal liability for that murder, you are not free from liability by any means. You are still guilty of homicide and the contract is not legal. Contracts cannot require a breach of the law.
Similarly, you cannot write a legally binding contract saying that you are above the law. Signing a contract saying that you cannot be held liable for drinking and driving, or for anything that results from that action, doesn’t legally work. The contract isn’t going to be valid.
Likewise, a contract saying that users are still wholly liable for the speech they post in a post-Section 230 world is not likely to stand up in a reasonable court. The obvious response is that while the user signed an agreement saying they are liable for the comments they make, that does not supersede a law that now clearly says that you, as the owner, are liable for what is published. It is your responsibility, and you can’t sign away your responsibility. It’s stupid, yes, but this is the reality being pushed by those who think repealing Section 230 is the way to go. This is on par with the ridiculousness of Donald Trump claiming “absolute immunity”.
Also, in the event that you missed that nugget of hilarity, it’s this: “We also examined how user terms are often a major obstacle to copyright infringement claims.” We’ll wait for you to stop laughing.
As anyone who follows copyright law and technology knows, copyright has long been a bludgeon for censorship. Thanks to the DMCA (Digital Millennium Copyright Act), websites are generally forced to operate on a shoot first, ask questions later basis as it is. When a complaint is lodged against a website, the content gets taken down regardless of any argument for Fair Use. Just ask anyone who has to deal with YouTube or any other large platform. In fact, we highlighted just how insane things can get back in 2019 when we reported on the story of people’s content being taken down for publishing content that is in the public domain. User terms are as big an obstacle to copyright claims as a sheet of generic tissue paper is to a semi truck.
Then there is this from the National Review:
In fact, Section 230 already does recognize social-media platforms as a kind of common carrier. In shielding them from designation as “the publisher or speaker of any information provided by another information content provider” — and thus immunizing them from the liabilities that publishers are vulnerable to — the statute effectively recognizes social-media platforms’ roles as de facto public utilities, without securing any of the corresponding antidiscrimination protections that normally accompany the classification. Hamburger is not the only conservative to question that arrangement: In a concurring opinion to this April’s Joseph Biden v. Knight First Amendment Institute at Columbia University, Justice Clarence Thomas argued that “digital platforms that hold themselves out to the public resemble traditional common carriers,” particularly in the context of “digital platforms that have dominant market share.” And “if the analogy between common carriers and digital platforms is correct,” Thomas wrote, “then an answer may arise for dissatisfied platform users who would appreciate not being blocked: laws that restrict the platform’s right to exclude.”
Libertarians like Senator Paul well know the dangers of collusion between big government and big business. Paul himself seemed to acknowledge as much in his August statement on YouTube’s censorship of his videos, writing that “YouTube is acting as an arm of government and censoring their users for contradicting the government.”
But if YouTube is acting as an arm of the government, then surely it should not have the unlimited “right to ban me if they want to,” as Paul argued in that same statement. There is no conservative principle that dictates that Big Tech is entitled to the special treatment it enjoys under Section 230. No clause in the Constitution dictates that Congress owes tech companies a liability shield without asking for anything in return. And there is nothing coercive or authoritarian about requiring beneficiaries of such immunities to observe a basic commitment to free and open debate; if such requirements are too onerous or objectionable, platforms can, and should be welcome to, opt out of Section 230–style protections.
A true free-market solution to Big Tech, consistent with the hands-off approach that conservatives like McCarthy and Paul desire, would involve narrowing Section 230’s liability protections or even repealing the provision altogether. Those are measures that libertarian organizations such as the Mises Institute have endorsed, reasoning that such reforms “would actually reduce government intervention” rather than expand it. But if one disagrees with that approach, as many do — Niall Ferguson writes in The Spectator, “without some kind of First Amendment for the internet, repeal [of 230] would probably just have restricted free speech further” — the next-best option is the kind of common-carrier designation that has been applied to communications mediums in the past. Far from expanding government power, such a reform would secure a wider sphere of political liberty against the censorious encroachment of a state-sanctioned actor. And it would be entirely consistent with first principles. Republicans should act accordingly.
The first sentence in this snippet is wrong, and it only goes downhill from there. There’s a big difference between an ISP offering you Internet access and a website that happens to be large. One is required to access the Internet at large and the other is, well, just one website you are free to use or not. It’s depressing that we even have to point this out. A large website is not a “common carrier” in much the same way that a kid operating a lemonade stand is not a major grocery store for the whole city. Saying that the kid happens to be popular at school doesn’t change this fact either.
There is no context to the comments surrounding the YouTube video removal. It could have been health misinformation, a copyright claim, harassment, or a host of other reasons not being disclosed. As is so often the case, we’ve seen right wing commentators have their content removed for violations of community guidelines, then turn around and scream that big government is censoring them even though that is not at all what happened. Without context of any kind, this is not just hearsay, but also worthless to bring up in the first place.
Also, the line, “There is no conservative principle that dictates that Big Tech is entitled to the special treatment it enjoys under Section 230”, is just ridiculous, particularly the latter portion of the sentence. Every website, from a small blog or photo gallery all the way up to the large tech giants, enjoys Section 230 equally. To suggest that big tech shouldn’t enjoy a particular law like everyone else is to advocate for the very big government market intervention that conservatives reportedly detest so much in the first place. It’s ultimately an anti-free market call.
Further, suggesting that repealing Section 230 is a “true free-market solution” is the equivalent of saying that a true free market solution for restaurants is to remove all health and safety laws across the board. If a diner served you a sandwich with listeria, oh well. That’s the free market for you. Who knows where that came from? How is the place supposed to know how you got sick in the first place? Suing the diner? Sorry, but you can’t prove it came from there. Case dismissed.
What’s more, turning the debate into a political one is just plain bad reasoning. Arguing that a law should be repealed because right wing politics says so is atrocious policy-making.
The thing is, if right wingers are so concerned about censorship now, repealing Section 230 would make the problem orders of magnitude worse. Just imagine what life would be like if Facebook went from possibly taking down a racist comment to removing all of your comments altogether. Repeal encourages Facebook to hand-review every comment and discard anything that could bring it liability of any kind. Yes, that means 99.9% are going to get binned. You might as well say that you support using the free market to silence conservative voices, because repealing Section 230 would have precisely that effect.
At the end of the day, Section 230 is not only a non-partisan issue, but also a non-problematic one. To this day, we have yet to see a case for repealing Section 230. Every “solution” we’ve seen so far is terrible. As the above examples show, despite quite a concerted effort to find any reason to do so, there still isn’t a reason to specifically remove or reform Section 230. And if there is no case for repealing or reforming Section 230, then there is no reason to do either.
Drew Wilson on Twitter: @icecube85 and Facebook.