The Women’s Legal Education & Action Fund (LEAF), Public Interest Advocacy Centre (PIAC), and Ranking Digital Rights have responded to the online harms consultation.
The overwhelmingly large list of organizations and individuals responding to the online harms “consultation” just keeps getting bigger. It started with my submission speaking out against the damage the online harms proposal would cause. This seemingly ushered in a wave of submissions, almost all of them opposed to the current proposal. We’ve been documenting as many as we can along the way. This includes the Internet Society Canada Chapter, the Independent Press Gallery, Michael Geist, Open Media, CIPPIC, Citizen Lab, the Internet Archive, CARL, the Canadian Civil Liberties Association, the CCRC, the International Civil Liberties Monitoring Group, Access Now, the CPE, several additional civil rights and anti-racism organizations, the Global Network Initiative, and News Media Canada. With the exception of the CCRC, which just likes the ideas on the surface, and News Media Canada (which seemingly didn’t understand what is being proposed), everyone is generally opposed to the online harms proposal as it stands now.
Today, we are learning that the Women’s Legal Education & Action Fund (LEAF) has also published their submission in the online harms consultation. From their posting:
In the submission, LEAF supported the development of a federal regulatory framework to address the growing issue of technology-facilitated gender based violence (TFGBV), which disproportionately impacts historically marginalized communities, including women, girls, and gender-diverse people.
However, LEAF did not support the federal government’s proposed “online harms” framework as drafted. The framework posed serious concerns from a substantive equality and human rights perspective. It also risked exacerbating existing inequalities, particularly because it purports to deal with five very different “online harms” with a single approach.
LEAF stated that, in order to deal effectively with the growing issue of TFGBV, the government needed to allocate resources to create a regulatory framework dealing exclusively with it as a particular harm.
LEAF urged the government to:
1. Revise the regulatory framework to explicitly recognize substantive equality and human rights as guiding principles;
2. Provide more immediate and direct support to victims experiencing TFGBV;
3. Provide alternative remedies to those provided through law enforcement and the criminal justice process;
4. Recognize forms of TFGBV that are not currently captured by the criminal law; and
5. Ensure responses are tailored to and account for the specific harms of TFGBV.
The posting also contains a highly detailed submission. It goes into the lack of a substantive equality framework; the lack of consultation (especially with civil liberties groups, experts, and victims and survivors of TFGBV); the failure to tailor the content removal windows to the specific kinds of “harmful content”; transparency of large platforms; the issues surrounding the Digital Safety Commissioner; the need to focus on education and prevention; and the dangers inherent in algorithmic moderation. You can read the full submission in PDF format as well; it goes into a pretty high degree of detail.
The next submission we are aware of comes from the Public Interest Advocacy Centre (PIAC). In their submission (PDF), they specifically note why site blocking is not an appropriate response to “harmful content” outside of child sexual exploitation content. From their submission:
With the exception of child sexual exploitation content, which is already de facto censored by the Cybertip.ca Cleanfeed project, PIAC does not believe that site-blocking is an appropriate mechanism to address the online harms identified in the Proposal. If the Proposal is to create an avenue for site-blocking we suggest that the CRTC be the decision-maker, so as to ensure that site-blocking does not undermine Canada’s telecommunications system nor impair Canadians’ rights to telecommunications services. If the government decides to make the Federal Court the site-blocking adjudicator we suggest that it create explicit requirements that the court consider s. 36 and s. 27(2) rulings and jurisprudence and issue orders that apply narrowly to the conduct of the specific parties before them in order to safeguard Canada’s telecommunications system and the CRTC’s role in regulating it.
PIAC submits that it is likely not appropriate to create a regime in which ISPs are required by court order to block user access to non-compliant OCSPs because mandatory site-blocking: 1) is incompatible with Canada’s net neutrality framework rooted in ss. 36 and 27(2) of the Telecommunications Act as articulated by the CRTC; and 2) could result in excessive infringement of Canadians’ rights to freedom of expression on the Internet.
Section 7(i) of the Telecommunications Act requires telecommunications policy to “contribute to the protection of the privacy of persons”. Further, the CRTC has stated that it
“recognizes that [Virtual Private Networks] VPNs are a legitimate tool to protect sensitive information, as recommended by security firms. While the Commission does not find differential pricing practices to have a direct negative impact on privacy per se, it is concerned that their adoption could discourage the use of VPNs and thus compromise the privacy and/or security of consumers.”
Upholding individuals’ ability to protect their privacy through VPNs and other encryption methods may make site-blocking an ineffective tool for preventing access to non-compliant OCSPs and these tools may, under the CRTC’s approach to privacy under subs. 7(i), be held to be an important aspect of telecommunications’ users’ privacy.

There are a variety of ways users, even technically unsophisticated ones, may easily circumvent blocked access to websites. One method of blocking websites is to program the Domain Name System (DNS) server to refuse to translate the URL into an IP address. When a person looks up a website, they enter a URL including a domain name (ex. Google.ca). A DNS server translates domain names into an IP address which can be used to communicate directly with the websites. Most ISPs have their own DNS servers, which customers may, and most do use (although a technically sophisticated user can specify their preferred DNS server to be one other than that of their ISP). DNS-based blocking can be easily circumvented by entering the IP address directly, using a proxy, using another DNS server or following a link to the IP address. Another method is to block the IP address. This can be easily circumvented by users by using a VPN, which hides the destination of web traffic from the internet service provider. IP blocking is also easy for the site operator to circumvent by changing their IP addresses. A third method is to inspect the packets of data to determine their destination and block packets destined for the infringing website. Deep-packet inspection can be easily circumvented by encrypting web-traffic. End users do not have to understand these circumvention measures to use them. Through software, users can establish an encrypted private network connection with a non-compliant OCSP which an internet service provider cannot block.
The Proposal’s indication that ISPs may be required to block access to only a part of a non-compliant OCSP leads PIAC to presume that deep-packet inspection would be a necessary blocking method. Deep-packet inspection would require ISPs to examine aspects of packets which they would not otherwise examine and use that information to make a decision about whether the packet should be permitted to pass. These additional steps may impose undue burden on ISPs, potentially impacting network performance and competition among telecommunications companies. Deep-packet inspections may also constitute an unreasonable search if they reveal private information about users, for example, their financial, medical, or personal information, which is at the heart of the “biographical core” protected by s. 8 of the Charter.
Finally, the CRTC has forbidden, on the basis of users’ confidentiality interests, ISPs’ use of deep packet inspection for any purpose except traffic management:
103. In light of the above, the Commission finds it appropriate to establish privacy provisions in order to protect personal information. The Commission therefore directs all primary ISPs, as a condition of providing retail Internet services, not to use for other purposes personal information collected for the purposes of traffic management and not to disclose such information.
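To put PIAC’s point about DNS-level blocking in concrete terms, here’s a minimal sketch of why switching resolvers defeats it: the very same lookup can simply be sent to a different DNS server. This is only an illustration of my own; the resolver addresses and hostname below are placeholders and don’t come from the submission.

```python
import socket
import struct


def query_resolver(resolver_ip, hostname, timeout=3.0):
    """Send a minimal DNS "A" query over UDP and return (rcode, answer_count)."""
    # Header: ID, flags with recursion desired, QDCOUNT=1, other counts 0.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: hostname encoded as length-prefixed labels, QTYPE=A, QCLASS=IN.
    qname = b"".join(
        bytes([len(part)]) + part.encode("ascii") for part in hostname.split(".")
    ) + b"\x00"
    query = header + qname + struct.pack(">HH", 1, 1)

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(query, (resolver_ip, 53))
        response, _ = sock.recvfrom(512)

    # First 8 bytes of the reply: ID, flags, question count, answer count.
    _ident, flags, _qdcount, answer_count = struct.unpack(">HHHH", response[:8])
    rcode = flags & 0x000F  # 0 = NOERROR, 3 = NXDOMAIN ("no such name")
    return rcode, answer_count


if __name__ == "__main__":
    # A hypothetical ISP resolver (placeholder documentation address) vs. a public one.
    for label, resolver in [("ISP resolver (placeholder)", "192.0.2.53"),
                            ("Public resolver (1.1.1.1)", "1.1.1.1")]:
        try:
            rcode, answers = query_resolver(resolver, "example.com")
            print(f"{label}: rcode={rcode}, answers={answers}")
        except OSError as err:
            print(f"{label}: no usable reply ({err})")
```

A user who points their machine at a different resolver still gets an answer even when the ISP’s resolver is configured to refuse one, which is exactly the circumvention PIAC describes.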
Beyond site blocking, the organization also raises questions about the potential implications such a proposal has for freedom of expression, and goes into great detail about why this is a concern.
Finally, there is Ranking Digital Rights (RDR). In their submission (PDF), they echo the sentiments raised by so many other organizations and individuals. From their submission:
RDR broadly supports efforts to combat human rights harms that are associated with digital platforms and their products, including the censorship of user speech, incitement to violence, campaigns to undermine free and fair elections, privacy-infringing surveillance activities, and discriminatory advertising practices. But efforts to address these harms need not undermine freedom of expression and information or privacy. We have long advocated for the creation of legislation to make online communication services (OCSs) more accountable and transparent in their content moderation practices and for comprehensive, strictly enforced privacy and data protection legislation.
We commend the Canadian government’s objective to create a “safe, inclusive, and open” internet. The harms associated with the operation of online social media platforms are varied, and Canada’s leadership in this domain can help advance global conversations about how best to promote international human rights and protect users from harm. As drafted, however, the proposed approach fails to meet its stated goals and raises a set of issues that jeopardize freedom of expression and user privacy online. We also note that the framework contradicts commitments Canada has made to the Freedom Online Coalition (FOC) and Global Conference for Media Freedom, as well as previous work initiating the U.N. Human Rights Council’s first resolution on internet freedom in 2012. As Canada prepares to assume the chairmanship of the FOC next year, it is especially important for its government to lead by example. Online freedom begins at home. As RDR’s founder Rebecca MacKinnon emphasized in her 2013 FOC keynote speech in Tunis, “We are not going to have a free and open global Internet if citizens of democracies continue to allow their governments to get away with pervasive surveillance that lacks sufficient transparency and public accountability.”
Issues of Concern and Recommendations
1. Proposed regulatory bodies have expansive powers and limited oversight. RDR is concerned with the sweeping authority vested in the new regulators of online content moderation (Module 1(C): Establishment of the new regulators; Module 1(D): Regulatory powers and enforcement). Particularly troubling are the provisions that empower regulators to define new categories of harmful content for future inclusion under the framework (Module 1(A) #9) and the rule that enables the government to order country-wide ISP blocking of non-compliant OCSPs (Module 1(D) #120). Such broad regulatory powers are inconsistent with the principles of necessity and proportionality that must underlie restrictions on fundamental human rights. While Canadians can take comfort in the strength of their democratic institutions, all countries are but one election away from democratic decline and a slide into authoritarianism. Our recent experience in the United States has been a sobering one, reinforcing the importance of balanced institutional powers, good governance, and oversight mechanisms.
2. Little attention given to human rights considerations.
Despite a stated desire to safeguard “fundamental freedoms and human rights” (Module 1(A) #1(h)), the Technical paper does not enumerate the specific values being protected, the mechanisms by which this might occur, nor the tradeoffs involved in securing some rights at the expense of others (i.e., protecting users from online harm versus limiting online expression).
3. 24-hour takedown requirements for content will lead to unnecessary censorship.
The obligation that OCSs must take action on content flagged as infringing (Module 1(B), #10-12) within 24 hours is particularly onerous and harmful to freedom of expression. This provision is similar to those found in other efforts to regulate online speech, most notably, Germany’s Network Enforcement Act (NetzDG). NetzDG has become a model for internet regulations in more authoritarian states, inspiring laws and proposals in places such as Russia, Venezuela, Vietnam, and Turkey. Timed takedown mandates have received broad criticism from academic experts and civil society groups for their likelihood to censor lawful speech.
4. Proactive content monitoring threatens user privacy.
The current structure of the proposal all but ensures that OCSs will implement proactive monitoring tools (i.e., algorithmic filtering software) to moderate illegal content (Module 1(B) #10). Proactive filtering regimes of this kind have been identified by the United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression as “inconsistent with the right to privacy and likely to amount to pre-publication censorship.” Moreover, automated content moderation systems have been found to disproportionately burden marginalized communities. Belief in the magic of artificial intelligence (AI) to solve harmful content problems at scale is deeply problematic. Algorithmic moderation approaches are subject to significant limitations due to their inability to comprehend contextual elements of speech, biased datasets that discriminate against users and their content, and inaccuracies associated with predictive models.
5. Regulating specific content overlooks how business models facilitate online harms.
Content restrictions, without substantive consideration of the economic and technical systems that facilitate content delivery, are inadequate solutions to combat online harms. Instead, legislative attention must center on business models based on the mass collection and monetization of user data for targeted advertising. These industry practices facilitate a range of human rights abuses, most immediately those related to privacy, freedom of expression and information, and protection from discrimination.
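RDR’s point about automated moderation being blind to context is easy to make concrete. Here is a deliberately naive sketch of my own (the word list and example posts are invented for illustration) showing a keyword filter flagging a victim’s report and counter-speech just as readily as the abuse itself:

```python
"""Toy illustration of context-blind keyword moderation: the filter cannot
tell abusive use of a term apart from reporting it or pushing back on it."""

BLOCKLIST = {"idiot", "trash"}  # hypothetical "harmful" terms for the demo


def naive_filter(post: str) -> bool:
    """Flag a post if any blocklisted word appears, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & BLOCKLIST)


posts = [
    "You are an idiot and nobody wants you here.",           # abusive: flagged
    "He called me an idiot in front of everyone. Report it?",# victim's report: also flagged
    "Calling women 'trash' for speaking up is harassment.",  # counter-speech: also flagged
]

for post in posts:
    print(naive_filter(post), "->", post)
```

All three posts come back flagged, which is the kind of over-removal that, under a 24-hour takedown clock, platforms have every incentive to leave in place.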
The submission also contains numerous recommendations and ideas for moving the proposal in a better direction.
So, we are once again seeing many recurring themes. In this batch, there appears to be unanimous agreement that the proposal, as it stands now, does not work. These organizations list many reasons why, and are all too happy to go into specific detail and offer examples of why these ideas, as laid out in the proposal, are unworkable.
We’ll continue to document these submissions as we find them.
(Hat tip: Michael Geist)
Drew Wilson on Twitter: @icecube85 and Facebook.