It has been a major source of anxiety, but the Online Harms Bill has now been tabled as Bill C-63. We analyze the text of the bill.
Update: Thanks to Matt Hatfield of Open Media for pointing out that the definition of an operator is actually in reference to a social media service. The definitions are a bit confusing, but references to “operator” are, in fact, references to social media platforms, not regular websites.
A few days ago, we posted an article noting that the Online Harms Bill appeared on the notice paper. On the day that there were rumours that the bill was going to get tabled, we published an article highlighting some of the things we are looking out for.
Truth be told, this is a day I have long been dreading (not to mention how much anxiety experts have been expressing about the bill as well). I’m basically going into this wondering whether my career as a journalist, in written form at least, is coming to an end soon. The government has been selling the story that this bill is about addressing actual harmful material and has nothing to do with online censorship. Of course, this comes with the government’s long-running reputation of saying one thing while the bills it pushes say something completely different. The media has been anything but critical of the government’s approach, actively cheering it on while shunning anyone who opposes it.
As a result, a lot of the talk right beforehand really meant nothing to me in the end. I had every reason to basically say, “show me the text” before I come to a conclusion.
Well, the legislation has been tabled. The text of the legislation can also be found here. My initial reaction to this bill is, “that… is a lot of text.” This thing is absolutely massive. I knew I had a lot of reading and analysis ahead of me.
Still, I’m going to approach this like any other bill I have ever analyzed: I flag points of interest and talk about them. A point may turn out to be something that is actually decent, or it may be something I highlight as a concern. So, with that said, let’s jump into the text of the legislation.
Definitions
As always, definitions can provide the foundation for better understanding what the legislation is talking about. For instance, there’s this:
Commission means the Digital Safety Commission of Canada established by section 10. (Commission)
In other bills, whenever there is a mention of a “Commission”, it is usually in reference to the CRTC (à la Bill C-11 or Bill C-18). In this case, a whole new commission is being established, and references to the “Commission” refer to this new governmental body.
The next definition is this:
content that foments hatred means content that expresses detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination, within the meaning of the Canadian Human Rights Act, and that, given the context in which it is communicated, is likely to foment detestation or vilification of an individual or group of individuals on the basis of such a prohibited ground. (contenu fomentant la haine)
OK, so far so good. When the bill discusses content that foments hatred, it means hatred directed at individuals or groups on the basis of a prohibited ground of discrimination under the Canadian Human Rights Act.
The next definition, well, it starts getting a little sketchy:
content that incites violence means content that actively encourages a person to commit — or that actively threatens the commission of — an act of physical violence against a person or an act that causes property damage, and that, given the context in which it is communicated, could cause a person to commit an act that could cause
(a) serious bodily harm to a person;
(b) a person’s life to be endangered; or
(c) serious interference with or serious disruption of an essential service, facility or system. (contenu incitant à la violence)
Already, we can see the bill getting a little wobbly. This is because there are idiots out there who suggest committing an act of violence without actually meaning it. Maybe they were drinking, or maybe they were just caught up in the heat of the moment and not actually serious about harming the other person. I mean, I’ve seen comments about “taking a long walk off of a short plank”, for instance. Now, this is just the definition. We haven’t yet gotten to how this bill treats such content or how it considers enforcing rules against it. Still, this definition alone gives me some hesitation.
The next definition is this:
content that incites violent extremism or terrorism means content that actively encourages a person to commit — or that actively threatens the commission of — for a political, religious or ideological purpose, an act of physical violence against a person or an act that causes property damage, with the intention of intimidating or denouncing the public or any section of the public or of compelling a person, government or domestic or international organization to do or to refrain from doing any act, and that, given the context in which it is communicated, could cause a person to commit an act that could cause
(a) serious bodily harm to a person;
(b) a person’s life to be endangered; or
(c) a serious risk to the health or safety of the public or any section of the public. (contenu incitant à l’extrémisme violent ou au terrorisme)
This really walks the line. Essentially, as long as the content encourages violence against a person or property for a political, religious or ideological purpose, it falls into this definition. Again, people post stupid things, so the question is whether a comment made in jest gets treated differently from an actual credible threat. How the bill handles that distinction remains to be seen (and we are just getting started with this legislation).
The bill also offers a definition of harmful content:
harmful content means
(a) intimate content communicated without consent;
(b) content that sexually victimizes a child or revictimizes a survivor;
(c) content that induces a child to harm themselves;
(d) content used to bully a child;
(e) content that foments hatred;
(f) content that incites violence; and
(g) content that incites violent extremism or terrorism. (contenu préjudiciable)
If the legislation is strictly about those categories, it is a major improvement over what we saw in 2021. Previously, “harmful content” was an open-ended definition: if some anonymous user decided that certain content was harmful because they felt it was harmful, it was classified as harmful content, end of story. At least as far as this definition is concerned, harmful content has been tightened to something that is at least halfway reasonable.
Another two definitions of interest are these:
Office means the Digital Safety Office of Canada established by section 39. (Bureau)
Ombudsperson means the Digital Safety Ombudsperson of Canada appointed under section 29. (ombudsman)
So, now, we have a Digital Safety Commission of Canada, a Digital Safety Office of Canada, and a Digital Safety Ombudsperson of Canada. I don’t know whether there would technically be any overlap, but that is already a lot of new positions.
Then there’s the definition of social media. In this case, there are actually two sections:
social media service means a website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content. (service de média social)
For greater certainty — social media service
(2) For greater certainty, a social media service includes
(a) an adult content service, namely a social media service that is focused on enabling its users to access and share pornographic content; and
(b) a live streaming service, namely a social media service that is focused on enabling its users to access and share content by live stream.
So, this is a very wide and encompassing definition of social media. In short, if it sounds like social media, it probably is social media as far as this bill is concerned.
The next definition is something I had to read multiple times because it didn’t quite sink in on the first reading:
For greater certainty — content that foments hatred
(3) For greater certainty and for the purposes of the definition content that foments hatred, content does not express detestation or vilification solely because it expresses disdain or dislike or it discredits, humiliates, hurts or offends.
I think the reason it’s hard to understand the first time is the phrasing, but on a closer read it is saying that content does not rise to detestation or vilification merely because it expresses disdain or dislike, or because it discredits, humiliates, hurts or offends. If that reading is right, then this tightens up the definition of content that foments hatred. So, if someone says something stupid like, “I hate this person so much, I wish that person would just disappear and never come back”, then that would not actually trigger the definition because you are expressing general distaste for someone and not actually suggesting something more nefarious.
Then, there’s another definition about social media and… just reading this makes my head explode:
Regulated service
3 (1) For the purposes of this Act, a regulated service is a social media service that
(a) has a number of users that is equal to or greater than the significant number of users provided for by regulations made under subsection (2); or
(b) has a number of users that is less than the number of users provided for by regulations made under subsection (2) and is designated by regulations made under subsection (3).
… huh?
OK, let’s just continue reading here:
Regulations — number of users
(2) For the purposes of subsection (1), the Governor in Council may make regulations
(a) establishing types of social media services;
(b) respecting the number of users referred to in that subsection, for each type of social media service; and
(c) respecting the manner of determining the number of users of a social media service.
Regulations — paragraph (1)(b)
(3) For the purposes of paragraph (1)(b), the Governor in Council may make regulations designating a particular social media service if the Governor in Council is satisfied that there is a significant risk that harmful content is accessible on the service.
If I’m reading this right, that is a lot of words to say that the Governor in Council gets to decide, through regulation, what size of social media service ends up being regulated.
This section also suggests that which social media services end up regulated is somewhat fluid:
Regulations — paragraph (1)(b)
(3) For the purposes of paragraph (1)(b), the Governor in Council may make regulations designating a particular social media service if the Governor in Council is satisfied that there is a significant risk that harmful content is accessible on the service.
So, really, if the government is satisfied that your service poses a significant risk of harmful content being accessible on it, it can simply designate you as a regulated service regardless of size.
Then there is this provision:
5 (1) For the purposes of this Act, a service is not a social media service if it does not enable a user to communicate content to the public.
Let’s be clear here: I am grateful that there are other definitions of social media in this legislation, because if it were just this definition, then any website with a comments section would qualify as “social media”.
The next section doesn’t even really clear any of this up:
Interpretation
(2) For the purposes of subsection (1), a service does not enable a user to communicate content to the public if it does not enable the user to communicate content to a potentially unlimited number of users not determined by the user.
In the practical sense, that is pretty much meaningless because a random comment on a news website can be communicated “to a potentially unlimited number of users not determined by the user”. Seriously, what the heck is this section even attempting to do here?
There is also the definition of private messaging which is… actually reasonable:
Definition of private messaging feature
(2) For the purposes of subsection (1), private messaging feature means a feature that
(a) enables a user to communicate content to a limited number of users determined by the user; and
(b) does not enable a user to communicate content to a potentially unlimited number of users not determined by the user.
OK, fair enough.
This is followed up by this:
Proactive search of content not required
7 (1) Nothing in this Act requires an operator to proactively search content on a regulated service that it operates in order to identify harmful content.
This, at least, sounds good out of context. I will, of course, be looking deeper into the bill before I cast judgment, because this wouldn’t be the first time I’ve seen contradictory language in a bill that basically undoes an otherwise decent provision (à la the exception to the exception in Bill C-11).
So, just over 2,000 words in and we are finally through… the definitions. Yeah, this thing is going to be a marathon to get through, not a sprint.
Part 1
Digital Safety Commission of Canada
The bill launches straight into establishing a whole new governmental body:
Commission
10 The Digital Safety Commission of Canada is established.
Mandate
11 The Commission’s mandate is to promote online safety in Canada and contribute to the reduction of harms caused to persons in Canada as a result of harmful content online by, among other things,
(a) ensuring the administration and enforcement of this Act;
(b) ensuring that operators are transparent and accountable with respect to their duties under this Act;
(c) investigating complaints relating to content that sexually victimizes a child or revictimizes a survivor and intimate content communicated without consent;
(d) contributing to the development of standards with respect to online safety through research and educational activities;
(e) facilitating the participation of Indigenous peoples of Canada and interested persons in the Commission’s activities; and
(f) collaborating with interested persons, including operators, the Commission’s international counterparts and other persons having professional, technical or specialized knowledge.
So, in theory, I could partake in such a commission thanks to my specialized or technical knowledge. I mean, hey, there’s a first time for everything, right? Seriously, though, I’m skeptical there would be any interest in hearing from me, mostly because I’m not a famous celebrity. It’s been my experience that actual working knowledge is kind of irrelevant with things of this nature.
From there, we get this:
Requirements
27 When making regulations and issuing guidelines, codes of conduct and other documents, the Commission must take into account
(a) freedom of expression;
(b) equality rights;
(c) privacy rights;
(d) the needs and perspectives of the Indigenous peoples of Canada; and
(e) any other factor that the Commission considers relevant.
This is all well and good, though the thought that crosses my mind when reading this is whether this is enforceable or just a mere suggestion. After all, in Bill S-210, there was a provision that asks companies to destroy personal information, but that provision was completely unenforceable. Similarly here, is this simply asking the Commission to pretty please take these factors into consideration, or does the Commission, by default, have to set up guard rails to protect these things before going about enforcing its mandate? Those are two totally different situations.
Part 2
Digital Safety Ombudsperson
The next big item is the establishment of the Digital Safety Ombudsperson of Canada:
Appointment
29 The Governor in Council is to appoint a Digital Safety Ombudsperson of Canada.
The bill goes into what this person is supposed to do:
Mandate
31 The Ombudsperson’s mandate is to provide support to users of regulated services and advocate for the public interest with respect to systemic issues related to online safety.
The bill then goes into detail about what this office must do (as it’s apparently not just a single person):
Powers, duties and functions
37 In fulfilling their mandate under section 31, the Ombudsperson may
(a) gather information with respect to issues related to online safety, including with respect to harmful content, such as by obtaining the perspective of users of regulated services and victims of harmful content;
(b) highlight issues related to online safety, including by making publicly available any information gathered under paragraph (a), other than personal information; and
(c) direct users to resources, including those provided for under this Act, that may address their concerns regarding harmful content.
Delegation
38 After consultation with the Chief Executive Officer of the Office, the Ombudsperson may delegate to employees of the Office, subject to any terms and conditions that the Ombudsperson may specify, the power to exercise any of the Ombudsperson’s powers or perform any of their duties and functions, except the power to submit any report to the Minister or the power to delegate under this section.
So, basically, this office is supposed to represent users, at least in the government’s mind.
Part 3
The Digital Safety Office
The third piece of the puzzle, it seems, is the establishment of the Digital Safety Office of Canada.
Office
39 The Digital Safety Office of Canada is established.
Mandate
40 The Office’s mandate is to support the Commission and the Ombudsperson in the fulfillment of their mandates, the exercise of their powers and the performance of their duties and functions.
So, in short, it provides support for the other two bodies. Also note that, again, this isn’t a one-person operation:
Employees
52 (1) The Chief Executive Officer may employ any employees that are necessary to conduct the work of the Office.
So, in short, this legislation creates three huge new bureaucracies to administer it. It’s quite the contrast to, say, the Online Streaming Act, where lawmakers absolutely insisted that regulating the entire internet could be handled by a handful of people at the CRTC, yet regulating harmful content requires three whole government bodies. I don’t know if the realization has finally set in that regulating the entire internet is not a small task or what, but the difference is certainly notable.
Part 4
Duty of Operators
Moving forward, the legislation then gets into what website operators are expected to do.
This section starts off on a promising note with this:
Duty to implement measures
55 (1) The operator of a regulated service must implement measures that are adequate to mitigate the risk that users of the service will be exposed to harmful content on the service.
This is definitely far removed from the sledgehammer attitude in 2021, where website owners would get financially whacked the moment anything harmful appeared on their services. Things continue to be reassuring with this:
Factors
(2) In order to determine whether the measures implemented by the operator are adequate to mitigate the risk that users of the regulated service will be exposed to harmful content on the service, the Commission must take into account the following factors:
(a) the effectiveness of the measures in mitigating the risk;
(b) the size of the service, including the number of users;
(c) the technical and financial capacity of the operator;
(d) whether the measures are designed or implemented in a manner that is discriminatory on the basis of a prohibited ground of discrimination within the meaning of the Canadian Human Rights Act; and
(e) any factor provided for by regulations.
So, section 55(2)(b) and (c) are a huge relief for me. In the 2021 version, any website operator that somehow left “harmful” content on their website for too long would be subject to a $10 million fine, full stop, regardless of the size of their operations or their financial situation. At least as far as this section is concerned, that major concern has been addressed, and it offers a hint that things will scale to the size of the operation. Truth be told, this sort of thing should’ve been thought of clear back in 2021, given that any industry will have operations of varying size, but better late than never.
Curiously, this section was followed up with this:
No unreasonable or disproportionate limit on expression
(3) Subsection (1) does not require the operator to implement measures that unreasonably or disproportionately limit users’ expression on the regulated service.
Legally speaking, I’m not sure this section really means all that much. What’s more, there is going to be a financial incentive for website operators to allow users to comment on their content – especially when they already have a website with a feature that allows this. So, having a provision more or less recommending that operators not curb freedom of expression seems to me to be legal fluff more than anything else. What’s more, even if an operator like Elon Musk starts banning users for saying mean things about him on X/Twitter, I’m not sure this provision would really change matters anyway.
However, just because things seem reasonable doesn’t necessarily mean that websites aren’t expected to do anything. As the next section makes clear, there are expectations to follow:
Measures in regulations
56 The operator of a regulated service must implement any measures that are provided for by regulations to mitigate the risk that users of the service will be exposed to harmful content on the service.
This is followed up by this:
57 The operator of a regulated service must make user guidelines publicly available on the service. The user guidelines must be accessible and easy to use and must include
(a) a standard of conduct that applies to users with respect to harmful content; and
(b) a description of the measures that the operator implements with respect to harmful content on the service.
Generally speaking, this is actually pretty common, at least with some content management systems. phpBB, for instance, includes this as part of the package when you install a web forum. Many website operators also publish guidelines; in fact, Freezenet has had something like this since 2015, shortly after we opened the site. One thing I did notice is that there is nothing here saying that the Commission sets the rules or that some common, widely used acceptable use policy must be implemented, just that user guidelines have to be available. So, this does avert some of my fears that website owners would wind up bogged down in legalese trying to implement something like this just to get their foot in the door of website ownership.
The next requirement is this:
Tools to block users
58 The operator of a regulated service must make available to users who have an account or are otherwise registered with the service tools that enable those users to block other users who have an account or are otherwise registered with the service from finding or communicating with them on the service.
For logged-in users, this is certainly available. I also happen to know that many web forum content management system (CMS) packages already have this feature. What’s more, pretty much all major social media platforms already have features like this. So, as far as I’m aware, this is a very standard practice that is widely available today.
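For what it’s worth, the mechanics behind section 58 are not complicated. Here is a minimal sketch of a per-user block list in TypeScript (strictly my own illustration with made-up names, not anything pulled from a real CMS or prescribed by the bill):

// Hypothetical sketch of a per-user block list; names are my own, not from any real CMS.
type UserId = string;

class BlockList {
  // Maps each user to the set of users they have blocked.
  private blocks = new Map<UserId, Set<UserId>>();

  block(blocker: UserId, target: UserId): void {
    if (!this.blocks.has(blocker)) {
      this.blocks.set(blocker, new Set());
    }
    this.blocks.get(blocker)!.add(target);
  }

  unblock(blocker: UserId, target: UserId): void {
    this.blocks.get(blocker)?.delete(target);
  }

  // Checked before delivering a message or surfacing a profile in search results.
  isBlocked(recipient: UserId, sender: UserId): boolean {
    return this.blocks.get(recipient)?.has(sender) ?? false;
  }
}

// Example: alice blocks bob, so bob should no longer be able to find or message her.
const blockList = new BlockList();
blockList.block("alice", "bob");
console.log(blockList.isBlocked("alice", "bob")); // true

The point being: any platform that already tracks user accounts can bolt something like this on without much trouble, which is why I don’t see this particular duty as a heavy lift.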
From there, we get to this section:
Tools and processes to flag harmful content
59 (1) The operator of a regulated service must implement tools and processes to
(a) enable a user to easily flag to the operator content that is accessible on the service as being a particular type of harmful content;
(b) notify a user who flagged content as being a particular type of harmful content of the operator’s receipt of the flag as well as of any measures taken by the operator with respect to the content or of the fact that no measures were taken; and
(c) notify a user who communicated content that was flagged as being a particular type of harmful content of the fact that the content was flagged as well as of any measures taken by the operator with respect to the content or of the fact that no measures were taken.
WordPress generally allows a web administrator or a moderator to see whatever e-mail address a user supplied when they decided to comment, so 59(1)(b) and (c) are easily doable with many CMS solutions. Web forums generally have an internal messaging system as well. What’s more, the ability to flag content is a widely available feature on many systems commonly deployed today.
This aspect did give me cause for concern in 2021, because there was the suggestion that the government was going to come up with a system and it was up to the operator to implement it. Understandably, this gave me considerable apprehension because I wondered what kind of new system I would have to implement just to appease the government. Here, it seems more like a reasonable system will do and, honestly, that is good enough for me, because 99 times out of 100, web administrators already have such tools available.
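To illustrate why the technical burden here doesn’t scare me, a rough sketch of the flag-and-notify flow described in section 59(1) could be as simple as the following. Again, this is just my own TypeScript illustration under my own assumptions about how a small operator might wire it into whatever notification mechanism (e-mail, internal messaging) the site already has:

// Hypothetical sketch of a content-flagging flow along the lines of section 59(1); not any specific CMS's API.
type HarmfulContentType =
  | "intimate content communicated without consent"
  | "content that sexually victimizes a child or revictimizes a survivor"
  | "content that induces a child to harm themselves"
  | "content used to bully a child"
  | "content that foments hatred"
  | "content that incites violence"
  | "content that incites violent extremism or terrorism";

interface Flag {
  contentId: string;
  flaggedBy: string;      // the user who flagged the content
  authorId: string;       // the user who posted the content
  type: HarmfulContentType;
  resolution?: string;    // e.g. "content removed" or "no measures taken"
}

// Stand-in for whatever e-mail or internal messaging system the site already has.
function notifyUser(userId: string, message: string): void {
  console.log(`[notify ${userId}] ${message}`);
}

const flags: Flag[] = [];

// 59(1)(a) and (b): record the flag and acknowledge receipt to the person who flagged it.
function submitFlag(flag: Flag): void {
  flags.push(flag);
  notifyUser(flag.flaggedBy, `We received your flag on content ${flag.contentId}.`);
}

// 59(1)(b) and (c): tell both the flagger and the author what measure, if any, was taken.
function resolveFlag(flag: Flag, resolution: string): void {
  flag.resolution = resolution;
  notifyUser(flag.flaggedBy, `Outcome for content ${flag.contentId}: ${resolution}`);
  notifyUser(flag.authorId, `Your content ${flag.contentId} was flagged. Outcome: ${resolution}`);
}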
Where things, in my view, get concerning is this:
Prohibition – notification of measures
(2) In notifying a user of any measures taken with respect to content in accordance with paragraphs (1)(b) and (c), the operator must not notify a user of any report that the operator has made to a law enforcement agency in relation to the content.
This, to me, is concerning because choosing whether to tell users that authorities were notified should be left up to the operator. The idea that website operators are barred completely from disclosing this to users seems pretty concerning to me – especially when a website publishes transparency reports about how it moderates users.
The next section is a bit confusing to me:
Multiple instances of automated communication by computer program
60 The operator of a regulated service must label, as being content that is described in this section, harmful content — other than content that is referred to in subsections 67(1) and 68(1) — that is accessible on the service if the operator has reasonable grounds to believe that the content
(a) is the subject of multiple instances of automated communication on the service by a computer program, other than a computer program that is implemented by the operator to facilitate the proper functioning of the service; and
(b) is more prominent on the service than it would have been had it not been the subject of those multiple instances of automated communication by a computer program.
At first, I thought this was related to spam somehow, or to labelling automated messaging from the operator, but I don’t think it is either. On a closer read, it seems to be aimed at bot amplification: if harmful content has been artificially boosted by automated accounts, the operator has to label it as such. How that would actually work in practice, truth be told, I’m still not entirely sure.
The next section, however, is a little disheartening:
Resource person
61 (1) The operator of a regulated service must make a resource person available to users of the service to(a) hear users’ concerns with respect to harmful content on the service or with respect to the measures that the operator implements to comply with this Act;
(b) direct users to internal and external resources to address their concerns, such as an internal complaints mechanism, the Commission or a law enforcement agency; and
(c) provide guidance to users with respect to those internal resources.
This might be doable for a number of larger social media platforms, but for smaller websites, it might prove to be challenging. It runs the risk of website operators having to spend their days chasing down different users who keep asking questions about compliance, eating away time the operator could be spending on making the website better. Why would this be a problem? There very easily could be instances where a smaller operation is just one or two people who are barely keeping up with things as it is. Adding this on top of it could prove problematic, especially when users could flood the system with questions with the intent of wasting the website owner’s time.
Making things even more confusing is this:
Contact information accessible
(2) The operator must ensure that the resource person is easily identifiable and that the resource person’s contact information is easily accessible to users of the service.
This is problematic because it means that someone is going to have their personal details made available to the wide-open public. What’s more, what counts as contact information for the purposes of compliance? Does this mean I have to leave a personal phone number on my website? That’s about the last thing I would want to do, personally. Either way, this raises a huge privacy red flag for me.
Things really fly off the rails when website operators have to submit a laundry list of information to the government:
Digital safety plan
62 (1) The operator of a regulated service must submit a digital safety plan to the Commission in respect of each regulated service that it operates. The digital safety plan must include the following information in respect of the period provided for in the regulations:
(a) information respecting the manner in which the operator complies with sections 55 and 56, including
(i) the operator’s assessment of the risk that users of the service will be exposed to harmful content on the service,
(ii) a description of the measures that the operator implements to mitigate the risk,
(iii) the operator’s assessment of the effectiveness of the measures — both individually and collectively — in mitigating the risk,
(iv) a description of the indicators that the operator uses to assess the effectiveness of the measures, and
(v) information respecting the factors referred to in subsection 55(2);
(b) information respecting the manner in which the operator complies with sections 57 to 61, including a description of the measures that the operator implements under those sections;
(c) information respecting the manner in which the operator complies with section 65, including a description of the design features that the operator integrates into the service under that section;
(d) information respecting any measures that the operator implements to protect children, other than those that it implements under section 65;
(e) information respecting the resources, including human resources, that the operator allocates in order to comply with sections 55 to 61 and 65, including information respecting the resources that the operator allocates to automated decision-making;
(f) information respecting the volume and type of harmful content that was accessible on the service, including the volume and type of harmful content that was moderated, and the volume and type of harmful content that would have been accessible on the service had it not been moderated, as well as the manner in which and the time within which harmful content was moderated;
(g) information respecting
(i) the number of times that content that was accessible on the service was flagged to the operator by users of the service as being harmful content, including the number of flags relating to each type of harmful content,
(ii) the manner in which the operator triaged and assessed the flags,
(iii) measures taken by the operator with respect to content that was flagged as being harmful content, and
(iv) the time within which the operator took measures with respect to content that was flagged as being harmful content;
(h) information respecting the content, other than harmful content, that was moderated by the operator and that the operator had reasonable grounds to believe posed a risk of significant psychological or physical harm, including
(i) a description of the content,
(ii) the volume of the content that was accessible on the service or that would have been accessible had it not been moderated, and
(iii) the manner in which and the time within which the content was moderated;
(i) information respecting the concerns heard by the resource person referred to in subsection 61(1) and the internal and external resources to which the resource person directed users for the purposes of that subsection;
(j) information respecting the topics, and a summary of the findings, conclusions or recommendations, of any research conducted by or on behalf of the operator with respect to
(i) harmful content on the service,
(ii) content on the service that poses a risk of significant psychological or physical harm, other than harmful content, or
(iii) design features of the service that pose a risk of significant psychological or physical harm;
(k) information respecting the measures implemented by the operator for the purposes of complying with the operator’s duties with respect to the service under An Act respecting the mandatory reporting of Internet child pornography by persons who provide an Internet service;
(l) an inventory of all electronic data, other than content that was communicated by users on the service, that was used to prepare the information referred to in paragraphs (a) to (g), (i) and (m); and
(m) any other information provided for by regulations.
I’m personally not even sure how I could possibly hand over some of this information to the government even if I wanted to. This list of requirements is going to be more than enough to deter people from starting or continuing to maintain a website. This is something I had serious concerns with back in 2021, and it has not been fixed in this version of the bill. I would honestly have to admit defeat on how to even begin figuring any of this out.
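Just to put in perspective the kind of ongoing record keeping section 62(1) implies for even the smallest operator, here is a rough sketch of the data you would need to be collecting. The field names are entirely my own invention for illustration; the actual reporting format would come from regulations that don’t exist yet:

// Hypothetical shape of the record keeping a digital safety plan under 62(1) would demand.
// Field names are illustrative only, not prescribed by the bill or any regulation.
interface DigitalSafetyPlan {
  riskAssessment: string;            // 62(1)(a)(i): assessed risk of users being exposed to harmful content
  mitigationMeasures: string[];      // 62(1)(a)(ii): measures implemented to mitigate that risk
  effectivenessIndicators: string[]; // 62(1)(a)(iv): indicators used to judge how well the measures work
  complianceNotes: string;           // 62(1)(b) and (c): how sections 57 to 61 and 65 are complied with
  childProtectionMeasures: string[]; // 62(1)(d): other measures implemented to protect children
  allocatedResources: {              // 62(1)(e): resources, human and automated, devoted to compliance
    staff: number;
    usesAutomatedDecisionMaking: boolean;
  };
  harmfulContentStats: Array<{       // 62(1)(f) and (g): volumes, flags and response times by type
    type: string;
    volumeAccessible: number;
    volumeModerated: number;
    flagsReceived: number;
    medianTimeToActionHours: number;
  }>;
  resourcePersonSummary: string;     // 62(1)(i): concerns heard and referrals made by the resource person
  researchSummaries: string[];       // 62(1)(j): findings of any research on harmful content or risky design
}

Even boiled down like that, a one- or two-person site would be hard pressed to keep those numbers on an ongoing basis, which is exactly the problem.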
What’s more, this sort of thing, with some minor exceptions, has to be publicly posted:
Publication of plan
(4) The operator must make the digital safety plan publicly available on the service to which the plan relates in an accessible and easy-to-read format.
Information not required
(5) The operator is not required to include any of the following information in the digital safety plan that the operator makes publicly available:
(a) the inventory of electronic data referred to in paragraph (1)(l);
(b) information that is a trade secret; or
(c) financial, commercial, scientific or technical information that is confidential and that is treated consistently in a confidential manner by the person to whose business or affairs it relates.
I’m sorry, but this is completely insane. It puts up a massive barrier to new entrants who want to run their own website.
Conclusions
I’m not entirely surprised that this is going to be a multi-part series. There is a lot of material still left to go through. Generally, some of my concerns have, so far, been alleviated (we’ll see if that holds true with later parts of the text). Other concerns, such as disclosing huge amounts of data, are still present. What’s more, having to put personal contact information on the web for the purpose of being reachable is ridiculous. Not everyone wants to be at any anonymous person’s beck and call, and this law seemingly requires it. It is concerning from a privacy standpoint as well as from a general operational standpoint.
I’ll continue working on the next part as I delve deeper into this legislation, but I hope you enjoy part one in the meantime. The next part will get published whenever I get to it.
Drew Wilson on Twitter: @icecube85 and Facebook.