10 Key Takeaways: Navigating The Future Of AI Law: Understanding The EU AI Act And AIDA

McCarthy Tétrault LLP

On June 20, 2024, McCarthy Tétrault held an event focused on helping businesses in Canada and beyond navigate the implications of the European Union's Artificial Intelligence Act ("EU AI Act") and Canada's proposed Artificial Intelligence and Data Act ("AIDA"). Complementing John Buyers' in-depth discussion of the EU AI Act, a panel of McCarthy Tétrault experts contributed their expertise on topics including privacy and AI, and AI in commercial transactions. The event featured presentations from:

  • Charles Morgan, Partner (Montreal), Cyber/Data Group co-lead, McCarthy Tétrault LLP
  • Daniel G.C. Glover, Partner (Toronto), Cyber/Data Group co-lead, McCarthy Tétrault LLP
  • Barry Sookman, Senior Counsel (Toronto), McCarthy Tétrault LLP
  • Alexandra Cocks, Partner (Vancouver), Litigation Group, McCarthy Tétrault LLP
  • David Crane, Partner (Vancouver), Business and Technology Groups, McCarthy Tétrault LLP
  • Karine Joizil, Partner (Montreal), Litigation Group, McCarthy Tétrault LLP
  • John Buyers, Partner, Osborne Clarke
  1. EU AI Act's Global Reach: Implications for Canadian Businesses

The AI regulatory landscape is rapidly evolving, both in North America and Europe, with the adoption of the EU AI Act being a watershed recent development.

The EU AI Act will impose material new compliance obligations even on businesses operating outside the confines of the European Union, including Canadian businesses targeting the EU market. Indeed, Article 2 of the Act extends its applicability to any entity that makes, uses, imports, or distributes AI systems in the EU, regardless of location. It also applies to AI systems used within the EU, even if produced elsewhere.

Canadian businesses developing, using, or selling AI products in the EU, including general-purpose AI systems and AI applications for mobile devices, should commence their EU AI Act compliance efforts now to ensure they are ready when the EU AI Act starts to take effect in Q1 2025.

  2. The Brussels Effect: Will the EU AI Act Set Global Standards for AI Regulation Like the GDPR?

The EU AI Act's extensive global reach has the potential to set an international benchmark for AI governance, similar to the effect that the EU General Data Protection Regulation ("GDPR") has had on global data protection standards. Even for businesses outside the EU, the Act could indirectly influence compliance obligations if local legislation mirrors the EU AI Act or if industry standards, shaped by contractual terms with business partners, start to reflect its provisions. Designed to align seamlessly with GDPR, the EU AI Act creates a compliance hierarchy affecting many foreign parent companies, their local subsidiaries, as well as the participants within their supply chain, all of which must adhere to specific compliance obligations.

  3. Significant Fines under the EU AI Act: A Warning for Canadian Businesses

Canadian businesses should also take note of the EU AI Act due to its severe penalties for non-compliance. Violations involving prohibited AI systems can result in fines up to 35 million EUR or 7% of total annual turnover for the preceding financial year, whichever is greater. Non-compliance with high-risk AI regulations can lead to fines up to 15 million EUR or 3% of total annual turnover for the preceding financial year, whichever is greater. Additionally, companies may face non-financial sanctions, such as the suspension or removal of AI systems from the EU market.

These prohibitions could have significant implications depending on the EU's level of enforcement. The deployment of AI systems using subliminal techniques—colours, patterns, or flashing images—may be scrutinized for their potential to distort behaviour in a harmful manner. Businesses must consider the wide-reaching impact of these practices, as violations could lead to substantial physical, psychological, or economic harm. Economic harm may result, for example, from discriminatory functionality that disadvantages individuals who lack financial means. Understanding and mitigating these risks is crucial to ensuring compliance and avoiding severe penalties.

  4. Obligations for Providers and Deployers of High-Risk AI Systems

In the context of high-risk AI systems, compliance responsibilities vary across the supply chain.

Providers of AI systems must first determine whether a system should be classified as "high-risk" before market entry and, if so, comply with the corresponding legal requirements through a conformity assessment under Article 43 of the EU AI Act. This assessment must be completed before the system is placed on the market or put into service, in accordance with the mandated standards. Providers must weigh the effort of bringing a high-risk system to market against the consequences of withholding it from users. Providers are also responsible for implementing a robust quality and life-cycle management system to prevent malfunctions, maintain system integrity, and ensure transparency, including clear communication to users about how the system operates.

Deployers are required to ensure the data input into the system is relevant and suitable for its intended use. They must inform the provider of any system malfunctions and maintain performance logs.

All parties involved with the AI system must comply with regulatory obligations throughout the value chain. Effective mechanisms for transmitting information upstream are essential to ensure compliance.

  5. Practical Steps for AI Corporate Compliance

Businesses can take several measures to ensure compliance with the upcoming legislative framework. It is crucial to begin with a comprehensive inventory of all AI systems in use. This structured evaluation helps determine the category of each AI system and assess any potential exposure the company may face. While this step may be substantial and costly, it is essential to ensure compliance with upcoming legal requirements. Additionally, conducting an AI Act readiness assessment is recommended to verify alignment with the EU AI Act.

The Act mandates that providers of general-purpose AI models put in place a policy to comply with European copyright laws. Under Article 53 of the EU AI Act, transparency is reinforced by requiring providers of general-purpose AI models to provide a summary of the training data used for these systems.

  6. Comparative Overview: EU AI Act vs AIDA

Understanding the differences and similarities between the emerging EU and Canadian AI regulatory frameworks is important for Canadian businesses.

  • Canada: The initial effort to create a comprehensive AI regulatory framework in Canada is set out in the draft Artificial Intelligence and Data Act ("AIDA"), which forms part of Bill C-27. AIDA aims to regulate the development and deployment of AI systems in the private sector. Like the EU AI Act, AIDA purports to regulate both "high-impact" and general-purpose AI systems, and it also seeks to cover the entire lifecycle of AI systems.
  • European Union (EU): The EU AI Act will be horizontal, risk-based, involve the entire lifecycle of AI systems, and prioritize product safety and the protection of fundamental human rights.

Key Comparison Points:

  • Purpose: AIDA's scope is narrower than the EU AI Act's, focusing on trade and harm, while the EU AI Act emphasizes product safety and the fundamental rights of individuals.
  • Regulatory Approach: The EU AI Act is more comprehensive and prescriptive, adopting a multi-tiered risk-based approach, whereas AIDA limits its risk-based regulation to a single tier of "high-impact" AI systems.
  • Extraterritorial Application: The EU AI Act will have a more significant extra-territorial scope and impact than AIDA.
  • Value Chain Application: Both Acts address the AI value chain, but the EU AI Act assigns more nuanced obligations to different roles, especially for high-risk AI systems.
  • Definition of AI: AIDA has a broader definition of an AI system, insofar as it is not limited to AI Systems that operate with a degree of "autonomy."
  • Prohibited AI Systems: Unlike AIDA, the EU AI Act prohibits the use of certain types of AI systems.
  • General Purpose AI: The EU AI Act adopts a more risk-based approach to regulate general-purpose AI, establishing a specific category of general-purpose AI systems that pose "systemic risk".
  • Complaint Rights: A complaint right is granted exclusively under the EU AI Act.
  • AI System Modifications: AIDA imposes more comprehensive requirements for changes to AI systems.
  7. Professionals Misusing AI in Legal Proceedings – A Case Study from the BC Supreme Court

The use of AI in professional contexts requires caution.

In Zhang v. Chen, 2024 BCSC 285, a family law case before the BC Supreme Court, counsel faced scrutiny for citing ChatGPT-generated cases that turned out to be hallucinated, i.e., fabricated by the AI. After extensive research failed to locate the cited cases, the court declined to order special costs but required counsel to cover some of the costs of the wasted time. The ruling underscored the need for human oversight of the outputs of AI systems (especially when used by professionals), emphasizing that generative AI cannot replace human expertise in the justice system. The case highlights the critical role of responsible AI governance and vigilance in professional practice in ensuring the integrity of legal proceedings.

  8. Privacy Implications of AI

Canadian regulators are closely monitoring AI developments to ensure alignment with existing privacy laws. However, privacy regulators acknowledge that the rapid pace of AI development poses challenges for achieving full compliance.

A particularly challenging area is obtaining consent. Under current Canadian law, consent serves as the primary legal basis for processing personal information in AI systems. Moreover, Canadian law requires that personal information processing align with "appropriate purposes." This requirement means that even robust consent mechanisms cannot justify processing if the purpose is deemed inappropriate. Businesses must also respect strict prohibitions, particularly those concerning discrimination and human rights violations (referred to as "no-go zones").

  9. Current State of Quebec's Civil Law AI Regulation Landscape

The Civil Code of Quebec ("CCQ") provides two specific regimes that may apply when assessing the risk of civil liability related to the development and deployment of AI systems in Quebec.

The first liability regime focuses on the "Autonomous Act of a Thing." Article 1465 of the CCQ states that the custodian of a thing is bound to make reparation for injury resulting from its autonomous act unless they prove they are not at fault. This implies that the custodian of an AI system, even if not the owner, could be held liable for any injury caused by the autonomous actions of the AI. However, identifying the custodian in the context of AI can be problematic.

The second liability regime concerns product liability under Article 1468 of the CCQ, potentially viewing AI systems as products. This regime applies to a broad range of actors, including manufacturers, distributors, and suppliers, holding them responsible for safety defects within AI systems.

Given these legal frameworks, businesses must exercise caution in the deployment and management of AI systems to mitigate potential liabilities and ensure compliance with evolving regulations.

  10. AI in Commercial Transactions

Contracting in the AI realm is becoming increasingly complex as new regulatory frameworks enter into force. For example, Canadian contracting lawyers must consider the extraterritorial impact of the EU AI Act, which can be seen as a best-practice guide influencing norms and contractual terms in the AI space. Issues of terminology and compliance are prevalent in tech contracts, which often involve multiple vendors and an extended supply chain, and the lack of clarity and standardization complicates transactions.

The EU AI Act adds layers of complexity with requirements for risk management, data quality, and framework integration. Negotiations now involve new frameworks and discussions about the allocation of risk, further complicating the process across the entire supply chain. These factors highlight the need for a thorough understanding of AI regulations and meticulous contract drafting to navigate the complexities of commercial transactions involving AI.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
