Intellectual Property, Information Technology & Cybersecurity

Upcoming Canadian Regulation of Artificial Intelligence Systems

Author: William Musani

The Artificial Intelligence and Data Act (Canada) ("AIDA") was tabled in June 2022 as part of Bill C-27, the Digital Charter Implementation Act, 2022, which passed its second reading in the House of Commons on April 24, 2023 and is currently being considered by the Standing Committee on Industry and Technology.

AIDA seeks to:

  • regulate international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems; and
  • prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.

What is Artificial Intelligence?

Artificial intelligence ("AI") enables computers to learn to complete complex tasks, such as generating content or making decisions and recommendations, by recognizing and replicating patterns identified in data. These tasks would otherwise require human intelligence to be completed.

There are many examples of AI in our daily lives, including speech-to-text and text-to-speech programs (e.g., Google Translate), image recognition services (e.g., Google Images search), natural language processing models (e.g., predictive text services, Apple's Siri, Amazon's Alexa), and, more recently, notable large language models (e.g., OpenAI's ChatGPT, Google Bard).

What are the Risks of AI?

Almost all innovation carries with it inherent risks. Notable concerns with respect to the proliferation of AI include privacy, harm, and bias.

Some examples of high-profile incidents of harmful or discriminatory outcomes are noted in the companion document to Bill C-27 ("Companion Paper") and include:

  • discrimination against women by an AI resume-screening tool used by Amazon for recruiting;
  • facial recognition systems exhibiting a bias against women and people of colour; and
  • the rise of deepfake images, audio, and video that could potentially cause harm.

How Does AIDA Mitigate Risks?

AIDA imposes significant identification, assessment, record keeping, mitigation and compliance obligations on persons responsible for certain AI systems.

It is noteworthy that the definition of "artificial intelligence system" in AIDA is quite broad and could capture technological systems that may otherwise not qualify as artificial intelligence under widely accepted definitions.

The following are certain key obligations prescribed by AIDA:

1. Anonymized Data. Certain persons must establish measures as to the anonymization and use of anonymized data.

2. High-impact Systems: Mitigation Measures. The establishment and documentation of measures to identify, assess, and mitigate the risks of harm or biased output that could result from the system's use.

  • Note: While AIDA does not yet define a "high-impact system" or its characteristics, the Companion Paper lists the following as key factors in determining whether an AI system is a "high-impact system":
    • evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
    • the severity of potential harms;
    • the scale of use;
    • the nature of harms or adverse impacts that have already taken place;
    • the extent to which, for practical or legal reasons, it is not reasonably possible to opt out of the system;
    • imbalances of economic or social circumstances, or the age of impacted persons; and
    • the degree to which the risks are adequately regulated under another law.

3. High-impact Systems: Monitoring. The establishment and documentation of measures to monitor compliance with the established risk mitigation measures.

4. Harm Reporting. Mandatory reporting if the use of a high-impact system results, or is likely to result, in material harm.

5. Public Disclosure. The publication of a plain-language description of a high-impact system which includes an explanation of:

  • how the system is or is intended to be used;
  • the types of content that it generates, or is intended to generate, and the decisions, recommendations, or predictions that it makes or is intended to make; and
  • the mitigation measures established in respect of the system.

Closing Thoughts

While AIDA is not expected to come into force prior to 2025, companies or persons that deal in or intend to purchase, sell, or deploy artificial intelligence systems, including software developers building systems for clients, should ensure that they understand the obligations imposed upon them by AIDA and take proactive steps to meet them.
