Legal Framework For Artificial Intelligence: What Are The Statutory Protections Against Deepfakes?

Langlois Lawyers, LLP


With more than 150 professionals working in the Montréal and Quebec City metropolitan areas, Langlois Lawyers is one of the largest law firms in Quebec. Our team of over 300 employees offers a complete range of highly regarded legal services in a variety of areas.

Deepfakes can be defined as manipulations using advanced artificial intelligence ("AI") that allow images, voices, videos or text to be digitally altered, or even generated entirely, to create highly realistic sights or sounds of fake events.1 Some of the most widely used deepfake tools can already replace faces and dub or clone voices to mimic real people, usually without their knowledge or consent.2 The consequences of deepfakes can be serious, ranging from non-consensual pornography3 to financial fraud4 and political manipulation,5 to name but a few.

Deepfakes also pose serious psychological, reputational and financial risks. They continue to proliferate because they are easy to produce and their perpetrators are hard to hold to account. Many are therefore calling on governments to introduce appropriate legislation to protect citizens and businesses from this scourge.

Following their earlier overviews of the current AI regulations in Canada and Québec, as well as in the EU, the U.S. and China, the authors highlight in this article certain statutory protections against deepfakes that exist or are under consideration.

Applicable law currently in force in Québec and Canada

While Canada and Québec currently lack legislation specifically governing deepfakes, multiple legal provisions may be used to protect victims. For example, in the Civil Code of Québec,6 articles 3 and 35 guarantee the right to the integrity of the person and to the respect of privacy, while subparagraph 5 of the first paragraph of article 36 provides that the use of a person's name, image, likeness or voice for any purpose other than the legitimate information of the public may be considered an invasion of privacy. Victims may also avail themselves of the general civil liability remedy in article 1457 of the Civil Code of Québec for any injury caused by a deepfake.

Similarly, the Criminal Code of Canada7 ("CCC") contains multiple provisions that can be invoked in the event of an unauthorized deepfake. For example, depending on the nature of the offence, the provisions governing forgery, fraud, defamatory libel, identity theft, criminal harassment and uttering threats may apply.8 However, some have suggested that it would be worthwhile for Parliament to amend the Criminal Code to clarify that these provisions apply to deepfakes.9

Examples of such clarifications can be found in the laws of certain Canadian provinces aimed at protecting and compensating people whose intimate images have been distributed without their consent.10 Some of these laws deal specifically with altered images; Saskatchewan's and British Columbia's statutes, for instance, specify that they apply to "visual recording(s) of a person, whether or not the person is identifiable or whether or not the image has been altered in any way, made by any means".11

In addition, privacy legislation can play a role in protecting individuals from harm caused by deepfakes. Where deepfake videos expose an individual's personal information or intimate details without their consent, they may constitute a violation of privacy and, in some cases, be sanctioned under the Act respecting the protection of personal information in the private sector.12 In the event of any infringement of copyright, the Copyright Act might also be invoked.13

New statutory protections to be enacted in Canada

Canada is considering adopting new statutory protections against deepfakes, and the federal government is currently developing bills to specifically govern certain deepfake scenarios or their consequences.

This includes Bill C-27,14 specifically the part that would enact the Artificial Intelligence and Data Act ("AIDA"). Under section 39 of the AIDA, it would be an offence to make available an AI system that is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual's property, or to make such a system available with intent to defraud the public and cause substantial economic loss to an individual. We invite you to read our previous article15 to learn more about this bill.

In addition, on February 26, 2024, the Government of Canada introduced the Online Harms Act under Bill C-63,16 with a view to establishing new rules to keep Canadians safe online and hold online platforms accountable for the content they host. In particular, Bill C-63 aims to protect children from multiple forms of harmful online content, including content that sexually victimizes a child and intimate content communicated without consent. It defines "intimate content communicated without consent" to include any visual recording, such as a photographic, film or video recording, "that falsely presents in a reasonably convincing manner a person as being nude or exposing their sexual organs or anal region or engaged in explicit sexual activity, including a deepfake that presents a person in that manner, if it is reasonable to suspect that the person does not consent to the recording being communicated."17 [our emphasis]

However, there is growing consensus on the need for specific legislation governing deepfakes in all their forms and the sanctions that may result from any illegal use thereof, particularly to ensure anyone can readily identify AI-generated content.18

New statutory protections emerging worldwide19

While the list below is in no way exhaustive, several jurisdictions around the world have already begun to legislate specifically on deepfakes.

For instance, China was among the first to adopt rules governing deepfakes with its Provisions on the Administration of Deep Synthesis of Internet-based Information Services, which came into force on January 10, 2023.20 They require enterprises providing deepfake services in China to verify the real identity of their users and to display a statement on any deepfake indicating that the content was created by AI, so as to avoid confusing the public.21

With its proposed regulation on artificial intelligence,22 the Artificial Intelligence Act, the European Union also specifically targets deepfakes: article 52(3) obliges users of an AI system that generates or manipulates image, audio or video content to disclose that the content has been artificially generated or manipulated.23

In the United States, on January 10, 2024, members of the U.S. Congress introduced a bill entitled the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act of 2024 (the "No AI FRAUD Act")24 to prevent the unauthorized creation and use of AI-generated content that replicates an individual's likeness, voice or other distinguishing characteristics without their consent. In particular, section 3 of the No AI FRAUD Act provides for substantial fines: US$50,000 for persons or entities that distribute, transmit or otherwise make available to the public a personalized cloning service, defined as "an algorithm, software, tool, or other technology, service, or device the primary purpose or function of which is to produce one or more digital voice replicas or digital depictions of particular, identified individuals", and US$5,000 for persons or entities that distribute, transmit or otherwise make available to the public a digital voice replica or digital depiction.

Along the same lines, on January 30, 2024, members of the U.S. Congress introduced a bill entitled the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 202425 (the "DEFIANCE Act"). It would allow victims to bring a civil action in an appropriate federal district court against the individuals responsible for disclosing deepfake images and videos of a sexual nature, while benefiting from enhanced privacy protections and an extension of the statute of limitations to 10 years.26

Conclusion

In short, while the current legislative framework in Québec and Canada offers some protections against deepfakes, many believe that additional government effort is required to ensure citizens are appropriately protected in an ever-changing digital environment. It is imperative that victims of deepfakes, particularly those involving pornographic content, financial fraud or political manipulation, have prompt access to takedown remedies, regardless of where the deepfake's author is located.

The authors are grateful to law student Daïna Donchi Kana for her contribution to this article.

Footnotes

1 Autorité des marchés financiers, "Deepfakes," online.

2 Canadian Security Intelligence Service, "Implications of Deepfake Technologies on National Security," Government of Canada, 2023, online.

3 See in particular: Jessica Le Masurier, "A global problem: US teen fights deepfake porn targeting schoolgirls", France 24, online; Quentin Le Van, "Les deepfakes pornographiques, une réalité quotidienne pour les Sud-Coréennes", Le Monde, 2024, online; Brian Contrepas, "Tougher AI Policies Could Protect Taylor Swift—And Everyone Else—From Deepfakes", Scientific American, 2024, online.

4 Including CEO and grandparent scams in particular. See in particular: Fraudes financières | Le deepfake débarque au Québec | La Presse, online; IA : une arnaque par deepfake a coûté 26 millions de dollars à une entreprise de Hong Kong (radiofrance.fr), online; Charles-Éric Blais-Poulin, "Berné par la fausse voix de son fils," La Presse, 2024, online.

5 See in particular: "Deepfake de Joe Biden : l'identité du commanditaire dévoilée", Le Monde, 2024, online; "Élections. 'Deepfake', 'cheapfake' : l'IA au service de la campagne présidentielle argentine", Courrier international, 2024, online; Thibaut Le Gal, "Elections européennes 2024 : 'Une menace réelle'... Faut-il craindre une campagne bouleversée par les deepfakes ?", 20 minutes, 2024, online.

6 Civil Code of Québec, CQLR c. CCQ-1991.

7 Criminal Code, RSC 1985, c. C-46.

8 See in particular ss. 366 CCC on forgery, 380 CCC on fraud, 301 CCC on defamatory libel, 402.2 CCC on identity theft, 403 CCC on identity fraud, 264 CCC on criminal harassment, 264.1 CCC on uttering threats, 162.1 CCC on the publication of an intimate image without consent and 163.1 CCC on child pornography.

9 La Presse, "Projet de loi sur les méfaits en ligne : Les libéraux veulent inclure les hypertrucages sexuellement explicites", online.

10 Intimate Images Protection Act, SBC 2023, c. 11 [Act in force in British Columbia]; Protecting Victims of Non-consensual Distribution of Intimate Images Act, RSA 2017, c. P-26.9 [Act in force in Alberta]; The Privacy Act, RSS 1978, c. P-24 [Act in force in Saskatchewan]; The Intimate Image Protection Act, SM 2015, c. 42 [Act in force in Manitoba]; Intimate Images and Cyber-protection Act, SNS 2017, c. 7 [Act in force in Nova Scotia]; Intimate Images Protection Act, RSNL 2018, c. I-22 [Act in force in Newfoundland and Labrador].

11 The Privacy Act, RSS 1978, c. P-24, section 7.1; Intimate Images Protection Act, SBC 2023, c. 11, section 1.

12 CQLR c. P-39.1; See in particular sections 3.3, 17 and 28.1. Note, however, that the ARPPIPS does not apply to a natural person who uses personal information (unless that natural person is found to be operating an enterprise within the meaning of article 1525 CCQ).

13 Copyright Act, RSC 1985, c. C-42; See in particular the definition of infringing and infringement of copyright in section 2 and sections 27 and 42.

14 An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, Bill C-27 (2nd reading and referral to committee – April 24, 2023), 1st sess., 44th Parl. (Can.), online.

15 Danielle Ferron and Alexandra Provost, "Legal framework for artificial intelligence: Where do we stand in Canada and Quebec?," Langlois Lawyers, November 10, 2023, online.

16 An Act to enact the Online Harms Act, to amend the Criminal Code, the Canadian Human Rights Act and An Act respecting the mandatory reporting of Internet child pornography by persons who provide an Internet service and to make consequential and related amendments to other Acts, Bill C-63 (introduction and first reading – February 26, 2024), 1st sess., 44th Parl. (Can.), online.

17 Section 2 of the Online Harms Act.

18 La Presse, "L'urgence de réglementer l'IA a décuplé", 2024, online; Siècle Digital, "L'urgence d'agir face à la prolifération des deepfakes", 2024, online.

19 See Danielle Ferron and Alexandra Provost, "Legal framework for artificial intelligence: What are the approaches of the European Union, the United States and China?," Langlois Lawyers, February 1, 2024, online.

20 "Provisions on the Administration of Deep Synthesis of Internet-based Information Services", Cyberspace Administration of China, online (available in Mandarin only); Laney Zhang, "China: Provisions on Deep Synthesis Technology Enter into Effect", Law Library of Congress, 2023, online.

21 "La Chine serre la vis au 'deepfake', ces trucages numériques hyperréalistes", RTBF, online.

22 Artificial Intelligence Act, European Union, online.

23 Ibid., art. 52(3).

24 "H.R. 6943", Library of Congress, online.

25 "S. 3696," Library of Congress, online.

26 Richard Durbin and Lindsey Graham, "The DEFIANCE Act of 2024," online.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
