Regulating Online Harms in Canada

November 2021

Panelists

  • Cara Zwibel, Canadian Civil Liberties Association
  • Diana Gheorghiu, Child Rights International Network
  • Jeanette Patell, YouTube
  • Ron Bodkin, Vector Institute
  • David Fewer, Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (Moderator)

Key Issues

  • Definitional issues with the illegal and harmful content targeted by the government
  • The prospect of proactive monitoring and the over-removal of legitimate content
  • The administrative and technical feasibility of enforcement, including the sophistication of automated decision-making
  • Protecting fundamental rights and freedoms, including the right to privacy and freedom of expression, in an online context

Discussion Overview

During the summer of 2021, the Government of Canada launched a consultation on its proposal for online harms legislation. The ambitious proposal addresses five categories of illegal and harmful content: hate speech, terrorist content, content that incites violence, child sexual exploitation material, and the non-consensual sharing of intimate images. It would establish new regulatory bodies for oversight, enforcement, and appeals, and would impose requirements on “online communication service providers,” i.e., major platforms like Facebook and YouTube, including 24-hour takedowns; “robust” flagging, notice, and appeal systems; and mandatory reporting to law enforcement. The nature of the consultation raised flags about how much the proposal’s intended beneficiaries were able to voice their opinions and concerns.

The panelists agreed that the scope of the legislation is too broad. The five types of illegal and harmful content are quite different, with some having much clearer legal boundaries than others. Moreover, the provision that the definitions of these types of content will be adapted for the regulatory context raises questions about the government’s role in governing speech and about the scale of content that will be subject to regulation. It was suggested that the government focus only on content that is already illegal. The panelists argued that the broad scope of content under regulation, combined with mandatory 24-hour takedowns and high fines for non-compliance, could lead to the over-removal of legitimate content online.

The proposed approach includes requirements to identify illegal and harmful content and make it inaccessible to Canadians, including through the use of automated systems. The panelists stressed that today’s artificial intelligence is not sophisticated enough to understand the context of content that could be flagged, and that some users would likely flag content in bad faith or for nefarious purposes. Using automated systems to remove content could result in legitimate users being silenced online. Nor should platforms be obliged to proactively monitor all online communications simply to identify illegal and harmful content; a trusted-flagger system would be preferable to automated systems.

One major concern for panelists was the potential limitation of freedom of expression through the over-removal of legitimate content and the proactive monitoring of online communications. Panelists also expressed concern about implications for the right to privacy arising from the mandatory reporting obligations to law enforcement and the Canadian Security Intelligence Service, a concern heightened by discussion of the potential for automating information sharing between online communication service providers and these bodies. The panelists agreed that with some tweaks to the legislation the government could avoid a Charter challenge; as drafted, however, it risks not reaching its goals, since the legislation could be caught up in court before it is implemented.

The panelists discussed how, if the government wants to regulate harms online, there are other avenues to explore before resorting to regulating online speech. If online speech is to remain in the government’s purview, it should stick to a public oversight role and impose obligations only for truly illegal speech. One panelist noted that giving parents more algorithmic choice could provide a better online experience for kids. Education and digital literacy should be prioritized, as legislation and more technology are not silver bullets for online harms. Likewise, an exploration of competition law, platform monetization, and advertising would likely improve the online experience and reduce the hate speech and extremism the government seeks to curb.

Key Insights

  • The pending legislation needs narrower, clearer definitions of the illegal and harmful content it addresses, as well as a clearer definition of which intermediaries will be implicated in the bill’s application.
  • Different categories of content can be dealt with differently; however, some consistency is necessary to avoid over-burdening companies.
  • Some content will be straightforward to manage, while other categories will require a nuanced legal analysis of context, intent, and impact.
  • Automated decision-making for the flagging and removal of content will be ineffective and lead to the over-removal of legitimate content; it should be avoided. Automated systems that send user data to law enforcement agencies should also be avoided.
  • There are other avenues for dealing with harms online that the government could explore before resorting to removing content, such as competition law and algorithmic choice for users.
  • The current proposal’s implications for freedom of expression and the right to privacy could result in constitutional battles. These Charter questions could be avoided if the scope of the legislation were narrower than that of the proposal.