The EFF's Red Flag Machine Initiative is an Excellent Reason to Be Wary of Online Harms Laws

As countries start experimenting with Online Harms laws, a recent initiative by the EFF beautifully demonstrates why we should be wary.

To say that debating laws surrounding so-called “harmful” content is an uncomfortable endeavour is a huge understatement. People like us with actual working knowledge of the internet who try to inject some of that knowledge into the debate get accused of all manner of awful things. From being accused of spreading disinformation and misinformation to being accused of supporting awful things like child abuse and CSAM, tech advocates tend to get the blunt end of the defamation hammer for the crime of trying to offer sensible insights into how technology works in the real world and warning of the risks associated with various policies.

In terms of where we are in these debates, the UK is pretty far ahead in taking these delicate issues in the wrong direction. The Online Safety Bill became the Online Safety Act after receiving royal assent, and the UK regulator, Ofcom, is moving ahead with implementing its regulations. In Canada, the Online Harms Bill is still in development, but the media continues to push for this likely draconian law. The EU is making moves to use the Digital Services Act for censorship purposes. In the US, the debate continues with KOSA, which is widely expected to be used to crack down on LGBTQ+ content and health information.

One of the common themes throughout is the persistent myth that “harmful” content exists because platforms refuse to moderate or remove it. Yet the reality is that determining what is “harmful” and what is not is far from a black and white exercise. For instance, governments have a tendency to label comments they disagree with as “misinformation”. The Canadian government inadvertently provided a perfect example of this when it earlier dismissed criticisms of the Online Streaming Act as “misinformation”. More often than not, the “misinformation” label ends up being a censorship tool to suppress legitimate speech. When the government has its hands on such a powerful hammer, any activism or commentary it doesn’t like ends up being a nail – regardless of legitimacy.

Trying to handle content with so much nuance was always going to be extremely difficult. There’s a big difference between demanding the removal of obviously illegal content (a la CSAM) and trying to take down what is sometimes referred to as “awful, but lawful” content. Countries have constitutionally protected speech for a reason, and when we chip away at such speech rights, we witness yet another sign of decline in democratic society. Is it really a free society if you are not free to speak your mind? Probably not.

Yet, for supporters of such initiatives, silly things like “nuance” and the practicality of moderating content go out the window. For them, platforms just need to “nerd harder” and automatically take down “harmful” content; the details are for someone else to sort out. The hard truth is that those “details” matter and, to this day, there’s no mechanism (let alone an automatic one) that accurately removes “misinformation” or “harmful” content 100% of the time. It doesn’t exist and likely never will. One person’s “misinformation” can easily be another person’s “inconvenient truth”, to borrow from Al Gore.

The other week, the Electronic Frontier Foundation (EFF) offered a spectacular example of why technology to automatically remove “harmful” content doesn’t exist. Looking at a piece of software known as GoGuardian, the organization assessed how it flags material on the internet as dangerous or harmful. The results, well, were not good:

The Electronic Frontier Foundation (EFF) today unveiled the Red Flag Machine: an interactive quiz and report demonstrating the absurd inefficiency—and potential dangers—of student surveillance software that schools across the country use and that routinely invades the privacy of millions of children.

The Red Flag Machine is the result of EFF’s investigation of GoGuardian, a tool used to surveil about 27 million students—mostly in middle and high school—in about 11,500 schools across the United States, according to the company. Like similar tools such as Gaggle and Bark, GoGuardian gives schools and the company access to an enormous amount of sensitive student data, while at the same time mis-flagging massive amounts of useful material as harmful.

The investigation identified categories of non-explicit content that are regularly marked as harmful or dangerous, including college application sites and college websites; counseling and therapy sites; sites with information about drug abuse; sites with information about LGBTQ+ issues; sexual health sites; sites with information about gun violence; sites about historical topics; sites about political parties and figures; medical and health sites; news sites; and general educational sites. Interfering with students’ access to such sites deprives them of information they need to excel in their studies, remain healthy, or improve their lives.

To illustrate the shocking absurdity of GoGuardian’s flagging algorithm, EFF built the Red Flag Machine. Derived from real GoGuardian data, users are presented with websites that were flagged and are asked to guess what keywords triggered the alert.

“This project reveals the very real and very specific failures of student monitoring technology,” said EFF Director of Investigations Dave Maass. “It’s one thing to be concerned broadly about student surveillance, but it’s an entirely different experience to see that students are ‘flagged’ for researching Black authors, the Holocaust, and the LGBTQ+ rights movement. It’s shocking, but it’s also absurd: Students have even been flagged for visiting the official Marine Corps’ fitness guide and looking at the bios of the cast of Shark Tank.”

If you are curious, the quiz in question is here, and pretty much any innocuous thing can theoretically be flagged as “harmful”. Judging by what we saw, the software basically scours every internet page looking for specific words in the text. If one of those words appears, the page gets flagged as dangerous. So, if a website contains the word “anal”, for instance, the page gets flagged regardless of context. The page in question could be an adult or pornographic website, or it could be a page containing genuinely helpful health information. In one example we saw, a webpage republishing passages from the Bible ended up being flagged. In another, the page in question was about thermometers.
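To make that failure mode concrete, here is a minimal sketch of that kind of context-free keyword matching. To be clear, this is not GoGuardian’s actual code or blocklist; the word list, function name, and the assumption of simple substring matching are all purely illustrative.

```python
# Hypothetical illustration of context-free keyword flagging.
# The blocklist, function name, and substring matching are assumptions
# for demonstration purposes, not any real product's implementation.

FLAG_WORDS = {"anal", "drug", "gun"}  # invented blocklist entries


def flag_page(page_text: str) -> list[str]:
    """Return every blocklist word found anywhere in the page text,
    with no regard for context or word boundaries."""
    text = page_text.lower()
    return [word for word in sorted(FLAG_WORDS) if word in text]


# A clinical health page trips the same wire as adult content...
print(flag_page("How to take a rectal or anal temperature reading"))
# ['anal']

# ...and so does a history essay, because "anal" is a substring of "Analysis".
print(flag_page("Analysis of primary sources on the Holocaust"))
# ['anal']
```

However the real products are implemented, any approach that boils down to “does this word appear anywhere on the page?” is going to produce exactly the kind of false positives the EFF documented.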

Supporters of online harms initiatives love to insist that the technology to combat harmful content exists. The problem is that such technology is really, REALLY bad and does far more harm than good. Scouring the internet for specific words is a highly ineffective method for flagging so-called “harmful” content, but, as the EFF demonstrated, that is where the technology is today.

Some out there will probably insist with something along the lines of, “yeah, but such laws could spur innovation!” The sad reality is that what is being asked for isn’t some technological barrier to be overcome. What is being asked for is a miracle that isn’t going to happen without the complete removal of all speech from the internet. You could pass a law demanding that cancer be cured; it doesn’t mean cancer will be cured because of the law. If anything, such an endeavour is little more than wishful thinking.

The incentives to filter out so-called “harmful” content already exist today. There are a number of child safety filtering solutions out there, and there is a capitalist incentive to produce a really good one. Yet, here we are today with no technological solution that removes “harmful” speech. Filtering technology can screen out a portion of adult-oriented and gambling websites, but that isn’t anywhere near close enough to take on what is being asked for here.

It’s for reasons like these that I have always been personally against “online harms” laws. Such endeavours only end up causing far more harm than they prevent. As a result, everyone loses out.

Drew Wilson on Twitter: @icecube85 and Facebook.
