While Bill C-11 and Bill C-18 have been major areas of focus, the online harms bill has been bubbling in the background.
Yesterday, we reported on how the Canadian government is accusing experts, the music industry, the CRTC chair, and digital-first online creators of spreading misinformation when they accurately point out that Bill C-11 regulates user generated content. The hostility towards anyone defending the Internet has been palpable for months now, and this recent salvo against Canadian creators and users was a fresh reminder of just how much the Canadian government is at war with innovation and the Internet.
Of course, as the debate perfectly highlights, sometimes, “misinformation” is in the eye of the beholder. For experts and those who are knowledgeable about the Internet, pointing to sections 4.1(2) and 4.2 is just pointing out the obvious. For the government, pointing out flaws in its bill is “misinformation”. This alone is a fantastic example of why it is so problematic to tackle “misinformation” in the first place.
That is not to say misinformation doesn’t exist. There is, in fact, misinformation that floats around. Some of it centres around COVID-19. This includes messaging that says COVID-19 doesn’t exist, Ivermectin is a cure for COVID-19, unvaccinated people aren’t dying from COVID-19, COVID-19 was made in a lab, masks don’t add a layer of protection against the spread of COVID-19, and several other obviously false claims. Few with any sense of, well, sensibility would disagree that this sort of misinformation needs to be stopped.
The problem lies in the fact that “misinformation” can (and will) be weaponized to shut down valid criticisms of the government. If misinformation were somehow magically outlawed tomorrow, there is a good chance that criticisms of Bill C-11 would be labelled illegal and subject to some kind of law enforcement. It’s not that these critics are spreading information that is false, but rather, that the government is politically motivated to silence those criticisms in the first place.
This is not just theoretical thinking, either. In fact, the media openly reported on a very clear example of how the term “misinformation” can be used to silence government criticism. Towards the beginning of Russia’s war on Ukraine, the Russian government passed a “fake news” law. Anyone who said that Vladimir Putin’s war was anything other than a “special military operation” could be sentenced to a lengthy prison term. It was part of Putin’s tightening grip on what people could and could not hear about the war. At one point, Facebook was blocked in the process. So, there are very real world examples of this.
It is for these reasons that our ears perked up when we saw the story earlier this month of Elections Canada calling for the banning of misinformation:
OTTAWA — Canada’s chief electoral officer is calling for changes to the country’s election law to combat foreign interference in elections and the spread of misinformation.
Stéphane Perrault has suggested creating a new offence of making false statements to undermine an election — for example, claiming that the results have been manipulated.
The recommendation is one of many made by Perrault in a report for MPs on issues arising from the last two general elections in Canada in 2019 and 2021.
The report released Tuesday said the changes are needed to “protect against inaccurate information that is intended to disrupt the conduct of an election or undermine its legitimacy.”
The new offence in the Canada Elections Act would stop people or bodies from knowingly making false statements about the voting process to disrupt an election or undermine its legitimacy.
The problem with this is where you draw the line. What would be in and what would be out of such a scope? If it’s just saying that the election was “stolen” (as Trump falsely claimed in the US) or falsely telling voters that their voting location has changed when it has not (which has happened in the past), then it is not that big of a deal. If it’s saying that the Liberals supported a certain bill that people disagree with, or one candidate questioning the track record of another in an effort to sway voters, that is where such an idea becomes problematic. So, at best, you would need surgical precision to differentiate what is within the scope of such a concept.
Unfortunately, it is looking like this surgical precision isn’t in the cards for the Canadian government. The Globe and Mail is noting that a recommendation was made to add “misinformation” to the online harms bill:
Disinformation, including “deepfake” videos and bots spreading deception, should come within the scope of a future online harms bill, say a panel of experts appointed by Heritage Minister Pablo Rodriguez to help him shape a future law.
Members of the expert panel, including Bernie Farber of the Canada Anti-Hate Network and Lianna McDonald of the Canadian Centre for Child Protection, have advised that the act impose a duty on tech giants to tackle the spread of fake news and videos.
Public Safety Minister Marco Mendicino said in an interview that technology was now so sophisticated that some fake images and content were “virtually indistinguishable” from genuine content, making it very difficult for people to tell the difference.
He said a “whole-government approach” spanning several departments was needed to tackle the spread of disinformation in Canada.
“We are at a crucial juncture in our public discourse. We are seeing an increasing amount of misinformation and disinformation informed by extremist ideology,” he said.
An analysis by academics of over six million tweets and retweets – and their origins – found that Canada is being targeted by Russia to influence public opinion here.
None of this is a good sign. Already, the debate around the online harms proposal showed many signs that the government was, at least at one point, trying to take a sledgehammer to the Internet. When the government posted its “consultation” (in that it was merely a consultation by notice), there were many concerns surrounding the government’s approach. One issue was the 24 hour takedown requirement that would result in millions in fines should a website not comply. The initial reaction to what was proposed was universal condemnation. As a result, the consultation garnered a massive response from individuals and organizations alike.
The problems were extensive. This included how problematic the consultation process was in and of itself, the 24 hour takedown requirements, and mass site blocking, to name three. While the government said that it heard the feedback and noted that there were criticisms, the results of an ATIP request showed that the Canadian government was understating just how much the public pushed back against its approach.
So far, the process the online harms proposal has taken has been hugely problematic, and it took universal backlash to get the Canadian government to step back a little on this legislation. With that, and with the history of how badly Bill C-11 was handled, there is cause for concern when there is a suggestion to rope “misinformation” into the online harms bill. After all, just look at how experts were treated during the hearings on Bill C-11.
The debate around misinformation is, at best, a massive minefield where one misstep can blow up the entire process. Most proposals around the world end up going down in flames because they utterly fail to strike a proper balance. It’s a very fine line to walk, and getting anything right requires very detailed and granular knowledge about how websites work, how the Internet works, and the nature of human behaviour. Given that this government already has a long history of outright rejecting expertise and, ironically, labelling everything it doesn’t like as “misinformation”, it’s really hard to see the government doing anything other than tossing out the scalpel and grabbing the sledgehammer should this issue get tackled in the legislation. Basically, it’s a sure thing that the government will get this wrong.
What’s worse is that, should this play out, there is a very real possibility that anyone who disagrees with the government will be portrayed as relying on spreading misinformation. Given that this is already a tactic the government is employing, this raises the very real worry that the government will take an approach similar to countries like Russia, which are only interested in controlling the discourse rather than actually tackling misinformation. That would represent a very real risk to Canadian democracy.
Drew Wilson on Twitter: @icecube85 and Facebook.