6 March 2025

The Demise Of The Artificial Intelligence And Data Act (AIDA): 5 Key Lessons

McInnes Cooper

McInnes Cooper is a solutions-driven Canadian law firm and member of Lex Mundi, the world’s leading network of independent law firms. Providing strategic counsel to industry-leading clients from Canada and abroad, the firm has continued to thrive for over 160 years through its relentless focus on client success, talent engagement and innovation.

On January 5, 2025, the prorogation of the Canadian Parliament effectively terminated all bills pending in the House of Commons – including Bill C-27 and the proposed and controversial Artificial Intelligence and Data Act (AIDA) contained in it. The inclusion of AIDA in Bill C-27, a bill principally focused on privacy law reform, came as a surprise to many, and the absence of any significant prior consultation with industry or civil society groups was widely viewed as a serious flaw. But despite AIDA's demise, it's inevitable that a future Canadian government will seek to pass legislation regulating artificial intelligence. There are lessons such a government can learn from the, at times heated, debate over AIDA; here are five key ones.

1. Find Real Harmony

The rhetoric around AIDA was that it sought to harmonize Canadian AI regulation with that of our international trading partners. But AIDA did not do that. Its definitions of high-risk/high-impact systems, and its treatment of them, didn't align with those partners' approaches. For example, deeming generative, general-purpose AI products automatically "high-risk" was radically out of step with the rest of the world, pulling Canada further away from developing international norms. Similar (but different) terminology and definitions don't make laws compatible.

Future AI legislation should be compatible with legislation such as Europe's Artificial Intelligence Act. Ideally, it should also allow mutual recognition of compliance so Canadian companies can easily enter the European market and vice-versa.

2. Don't Exclude Government

AIDA's categorical exclusion of the government from AI regulation was dangerous. In stark contrast to relationships between citizens and private sector businesses, the relationships between citizens and their governments are non-voluntary. A citizen, if unsatisfied with the use of AI by their government, can't choose an alternative. The "government" includes law enforcement; it includes immigration; it includes taxation; and it makes decisions about benefits and entitlements. The government has guns. This is inherently much higher risk – life or death in some cases – than any system deployed by a bank or grocery store. Other analogous regulatory schemes, such as privacy and human rights, apply to businesses and governments.

AI legislation should include all organizations, both in the public sector and the private sector.

3. Avoid Exposure to a Constitutional Challenge

AIDA was arguably exposed to constitutional challenge. The Supreme Court of Canada's 2023 decision in Reference re Impact Assessment Act offers fair warning: the federal government simply doesn't have any jurisdiction over, for example, a computer science professor doing basic AI research at a Canadian university. And including the words "to regulate international and interprovincial trade and commerce in artificial intelligence systems ..." in AIDA doesn't make it so.

The next federal attempt at AI legislation should be carefully designed to remain within the guardrails that separate federal from provincial jurisdiction. This could be addressed by the use of "cooperative federalism", through an organization like the Uniform Law Conference of Canada, to create a cohesive federal and provincial scheme.

4. Gain Some Credibility

As proposed under AIDA, the AI Commissioner would have been a civil servant reporting to the Minister of Industry. This model is out of line with Canadian norms and with what is emerging in other countries. Having the Commissioner report to the Minister undermines the independence, credibility and authority the office needs.

Any future Artificial Intelligence Commissioner for Canada should be an independent officer of Parliament, like the Privacy Commissioner, the Competition Commissioner, the Human Rights Commissioner and the Auditor General.

5. Remove R&D

As drafted, AIDA proposed to cover even the most basic research and development related to artificial intelligence. AI laws should be directed at managing risk, and AI reasonably presents a risk of harm only when it's deployed and used. Including pure research and development in AIDA's scope would have created onerous burdens disproportionate to the minimal risk R&D poses to the public.

Future AI legislation should place the burden of onerous risk assessment, documentation and mitigation at the point where AI products and services are deployed. Any inclusion of research activities should be with a light touch, reflecting their low risk.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
