A new study has found that if far right users are subject to more social media moderation, it’s because they tend to break the rules more often.
With the election season in full swing in the US, far right extremists are, once again, pushing the myth that, among other things, the Biden administration is somehow in cahoots with social media to “censor conservative voices”. In fact, the evidence has shown all along that, if anything, platforms have engaged in a pro conservative bias where right wing accounts are more likely to see leniency while left leaning accounts tend to experience sanctions, reprimands, or full blown bans for similar activity.
Obviously, nothing is going to stop far right wing extremists from proclaiming that social media and the Biden administration are somehow out to get them personally. This despite the fact that Elon Musk bought Twitter and is doing everything he can to turn it into yet another failed right wing echo chamber. This as users continue to leave the platform in droves and activity quickly gets replaced by porn bots, endless harassment, toxicity, rage farming, and crypto scams.
The reality is that far right wing extremists want to not only be able to post whatever they want, but also be free of the consequences and get an audience on top of it all. That… is never going to happen. It’s, after all, why platforms like Parler and Gab failed. There’s not a huge appetite for racism, bigotry, and conspiracy theories. What’s more, right wingers found themselves nicely separated from the more general platforms and quickly discovered that there were no “libs to own”.
This also partly explains why X/Twitter has experienced rapid decline ever since Musk took over. Moderation teams were subject to repeated firings, interference, and more, all while the rules were relaxed for right wing extremists and neo-nazis were reinstated. Advertisers fled in droves, users quit in large volumes, and the overall value of X/Twitter plummeted. Probably the only reason that platform has any relevance left is thanks to journalists and politicians persistently using it anyway for reasons I can’t really figure out.
That leaves other platforms and how they continue to conduct business. X/Twitter has decidedly become a shining example of what not to do in running a social media platform. As a result, other platforms either see even more reason to continue what they are doing or have adjusted to better serve the interests of their users. That includes moderating content – even though that will always be a tall task – and warning or banning accounts that actively break the rules.
Indeed, this activity makes for a convenient narrative for the far right. If one of them is, say, threatening to murder a fellow user, and the platform steps in and issues a temporary or permanent ban, then that user can immediately proclaim that this is part of the overall conspiracy between the Biden administration and the platform to silence conservative voices for their deeply held beliefs.
Today, we are learning that there is additional research to back up this perspective. The research was conducted by Oxford and MIT and was reported on in the Washington Post. From the Washington Post:
Conservatives and Trump supporters are indeed more likely to have their posts on major social media platforms taken down or their accounts suspended than are liberals and Joe Biden supporters, researchers from Oxford University, MIT and other institutions found. But that doesn’t necessarily mean content moderation is biased.
Rather, the study finds that conservative accounts may be more often sanctioned because they post more misinformation.
That might sound either obvious or disingenuous, depending on your point of view. But the study, whose lead author is Oxford Internet Institute professor Mohsen Mosleh, is actually neither.
The Nature paper is not the first to find that conservatives are more likely to share stories that have been debunked, or that originate from fake news sites or other sources deemed “low-quality.” One common objection to such studies is that defining what counts as misinformation can be subjective. For instance, if the fact-checkers skew liberal or the list of fake news sites skews conservative, that in itself could explain the discrepancy in sharing behavior.
But study co-author David G. Rand, an MIT computational social science professor, said his team found that conservatives share more falsehoods and low-quality information online even when you let groups of Republicans define what counts as false or low-quality.
This, honestly, seems to line up nicely with the other evidence floating around on this subject. From the moderation perspective, moderators don’t care whether you lean left or right. They don’t care if you voted Trump or Biden. They may have their own personal opinions on the state of politics, but those tend to get left at the door when it comes to moderation decisions. All they care about is whether the activity is within the stated rules and whether certain activities warrant a sanction of some sort. That’s it. I know this because I run this website and, from time to time, people leave comments I fully disagree with. I leave them up because they aren’t breaking the rules, and I only step in when those comments move from strong disagreement to something more harassing in nature. That, honestly, is how things should be handled.
Further, especially on larger sites, you can’t expect a moderation team to be perfect in every single decision. Moderation is hard, and you are going to run into grey areas sooner or later. The thing is, you have to curate your audience because, without curation of any kind, you end up as just another unusable site like X/Twitter, where loads of great users leave and the toxic users and spam bots are left to do as they please.
At any rate, you can have disagreements. You can have strong disagreements. What’s more, you can have disagreements that lead to users blocking each other. That happens, and it’s what the blocking tools are there for. When users are actively breaking the rules, that’s when moderators need to step in and make decisions. Sometimes, a certain group ends up breaking the rules more than others, and that is very likely what is happening here. There’s nothing surprising about these findings given my own personal experience on other platforms. It’s why, whenever people complain about being silenced on a platform, the first question should be, “well, what did you do?” Chances are, if you get an honest answer, you’ll find out very quickly why that user got the boot.