The Colorado AI Act: A Primer For Canadian Businesses

McCarthy Tétrault LLP

Introduction

Artificial intelligence ("AI") regulations are steadily emerging in the United States. On May 17, 2024, Colorado became the latest state to join this movement when Governor Jared S. Polis signed Senate Bill 24-205 (the "Colorado AI Act" or the "Act"). The Act joins a growing series of AI regulations proposed by U.S. jurisdictions, including notable measures adopted by the City of New York and the State of Utah to regulate the development and use of artificial intelligence systems ("AI Systems").1

As we will discuss in this blog, the Colorado AI Act is relevant for Canadian businesses, even those with no activities in the State of Colorado, as it highlights several key trends in the regulation of AI and can provide guidance for businesses that wish to implement responsible AI practices while the Artificial Intelligence and Data Act (the "AIDA") continues to move through the federal Parliament.

We discuss here the most important provisions of the Colorado AI Act and provide a high-level comparative analysis with Canadian and European Union ("EU") law.

Purpose and Scope of the Colorado AI Act

Scope of the Colorado AI Act

High-risk AI Systems

The preamble of the Colorado AI Act clearly states its intention to promote "consumer protections in interactions with artificial intelligence systems". Like the EU's Artificial Intelligence Act (the "EU AI Act") and AIDA, however, it does not regulate all AI Systems, only those considered "high-risk". Under the Colorado AI Act, these are systems that, when deployed, can make, or be a "substantial factor in making", a "consequential decision".2

A consequential decision is any decision "that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of" specific types of services:

  • education enrollment or opportunity;
  • employment or employment opportunity;
  • financial or lending services;
  • essential government services;
  • healthcare services;
  • housing;
  • insurance; and
  • legal services.

The Colorado AI Act, although ostensibly a consumer protection law, thus adopts a broad definition of what constitutes a consumer and would notably apply to certain AI-enhanced HR decision-making systems and to systems used to provide (essential) public services (a notable difference from AIDA, which expressly does not apply to the public sector).3 As such, the Colorado AI Act partly overlaps with the Colorado Privacy Act and its provisions on automated decision-making systems ("ADM Systems"). However, the Colorado AI Act has a larger reach, because the Colorado Privacy Act only regulates "profiling", defined as the automated processing of personal data to evaluate, analyze or predict an individual's preferences or behavior. The new AI legislation focuses instead on the effect of a decision on consumers, regardless of whether it involves the processing of personal data.

Several exclusions, however, limit this seemingly broad scope. The Act does not apply to AI Systems intended to perform narrow procedural tasks or to detect decision-making patterns. Unless they make, or are a substantial factor in making, a consequential decision, the Act also does not consider high-risk technologies such as anti-fraud systems (provided they do not use facial recognition technology), anti-virus software, AI-enabled video games, databases, spell-checkers, cybersecurity tools and data storage.

Interestingly, unlike its European and Canadian counterparts, the Colorado AI Act does not include a sui generis category for general-purpose AI systems such as the AI-enhanced chatbots popularized by the release of ChatGPT. On the contrary, the Colorado AI Act presumes that systems using natural language to communicate with consumers are not high-risk if: (1) their purpose is to provide information, make referrals or give recommendations; and (2) the system is subject to a use policy prohibiting the generation of discriminatory or harmful content. However, such AI systems may still be considered high-risk in certain circumstances, for instance if they are used to make, or are a "substantial factor" in making, a "consequential decision".4 Substantial factors notably include AI-generated content, decisions, predictions or recommendations "used as a basis to make a consequential decision" about a consumer.5 For example, this might include the use of an AI chatbot to assist in a hiring decision, where the chatbot is asked to review a number of CVs and provide recommendations that are then acted upon uncritically. As such, although the Colorado AI Act is ostensibly about ADM Systems, it could apply to certain uses of generative AI in unexpected ways.

Regulated Roles in the AI Value Chain

Most current AI regulatory frameworks impose varying compliance obligations depending on the role a regulated enterprise plays in the AI value chain. For example, the EU AI Act includes specific obligations for "providers", "deployers", "importers", "distributors" and "operators" of AI Systems. For its part, AIDA uses the more convoluted but related terminology of "person making available for the first time", "person making available" or "person managing the operations" of AI Systems. The Colorado AI Act adopts a binary approach that divides the AI value chain between "deployers" (persons doing business in Colorado who deploy high-risk AI Systems) and "developers" (persons doing business in Colorado who develop or substantially modify AI Systems).6 As we will discuss in the Requirements section below, the Colorado AI Act imposes different obligations on developers and deployers of AI Systems.

Requirements

Both developers and deployers of high-risk AI Systems must exercise reasonable care to protect consumers against known or foreseeable risks of algorithmic discrimination arising from the intended use of the system. Algorithmic discrimination encompasses situations where the use of an AI system results in unlawful differential treatment of, or negative consequences for, individuals based on protected characteristics such as age, color, disability, ethnicity, race and religion. This notion is similar to the concept of "biased output" found in the current draft of AIDA.7

However, developers and deployers benefit from a rebuttable presumption of reasonable care in the event of an action brought against them if they can demonstrate that they have complied with the obligations of the Act. They can also invoke several affirmative defenses against claims brought by the Attorney General (see the Enforcement section below), such as the discovery and cure of a violation through adversarial testing or red teaming of the system while maintaining compliance with relevant AI standards.8

Compliance requirements vary for each role and will begin to apply on February 1, 2026:

Developers must:9

  • Provide deployers with a general statement describing the reasonably foreseeable uses and the known harmful or inappropriate uses of the high-risk AI System.10
  • Provide documentation disclosing the type of training data, the known and reasonably foreseeable limitations of the system, its purpose, its intended benefits, and all other information needed to allow the deployer to comply with its obligations.11
  • Provide documentation describing how the high-risk AI System was evaluated, the data governance measures used, its intended outputs, what mitigation measures against algorithmic discrimination were put in place and how it should be used, not used and monitored when deployed.12
  • Provide deployers with adequate information and documentation to conduct an impact assessment, including in the form of model cards or dataset cards (see the illustrative sketch following this list).13
  • Publish a publicly available statement detailing the high-risk AI Systems they have developed and how they manage known or foreseeable risks of algorithmic discrimination.14
  • Disclose any known or reasonably foreseeable risk of algorithmic discrimination to the Attorney General and deployers within 90 days of discovery.15
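
The Act does not prescribe a format for this developer documentation, but its reference to model cards and dataset cards suggests a structured record. The following is a minimal, purely illustrative sketch of what such a record could capture; the field names are our own assumptions, not terms defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class DeveloperDisclosure:
    """Hypothetical model-card-style record of the disclosures a developer
    of a high-risk AI System could provide to deployers. Field names are
    illustrative assumptions; the Act mandates content, not format."""
    system_name: str
    purpose: str                                  # s. 6-1-1702(2)(b)
    intended_benefits: list[str]
    reasonably_foreseeable_uses: list[str]        # s. 6-1-1702(2)(a)
    known_harmful_or_inappropriate_uses: list[str]
    training_data_types: list[str]                # s. 6-1-1702(2)(b)
    known_limitations: list[str]
    evaluation_summary: str                       # s. 6-1-1702(2)(c)
    data_governance_measures: list[str]
    discrimination_mitigations: list[str]
    usage_and_monitoring_guidance: str
```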

Deployers must:16

  • Implement a reasonable risk management policy and program. Significantly, the Colorado AI Act refers here to the guidance found in NIST and ISO AI standards. For further information, refer to our previous blog.17
  • Conduct an impact assessment of the high-risk AI System annually, and within 90 days of an intentional or substantial modification.18 This impact assessment must state the purposes, intended use cases and deployment context of the system. It must also include an analysis of whether the system poses any known or reasonably foreseeable risks of algorithmic discrimination, along with mitigation steps. Finally, it must set out the categories of data used as inputs or outputs, the data used to customize the system, the metrics used to evaluate performance, the transparency measures taken and the post-deployment monitoring mechanisms. Records relating to AI impact assessments must be retained for three years following deployment.19 (An illustrative sketch of such a record follows this list.)
  • Notify consumers of the deployment of the high-risk AI System before a decision is made. The deployer must also disclose to the consumer the purpose of the AI System and provide a plain language description of the system.20
  • Publish a publicly available statement summarizing the high-risk AI system being deployed and any known or foreseeable risks of algorithmic discrimination.21
  • Disclose any instances of algorithmic discrimination to the Attorney General within 90 days of discovery.22
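
Again for illustration only, here is a minimal sketch of an impact assessment record reflecting the contents listed above, together with simple helpers for the annual/90-day reassessment cadence and the three-year retention period. All names are our own assumptions; nothing in the Act prescribes this structure.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """Hypothetical record of the contents a deployer's impact assessment
    must cover under s. 6-1-1703(3); field names are illustrative only."""
    completed_on: date
    purpose_and_intended_use: str
    deployment_context: str
    discrimination_risks: list[str]     # known or reasonably foreseeable
    mitigation_steps: list[str]
    input_data_categories: list[str]
    output_data_categories: list[str]
    customization_data: list[str]
    performance_metrics: list[str]
    transparency_measures: list[str]
    post_deployment_monitoring: str

def next_assessment_due(last_completed: date,
                        modified_on: date | None = None) -> date:
    """Assessments recur annually, or within 90 days of an intentional or
    substantial modification, whichever comes first."""
    due = last_completed + timedelta(days=365)
    if modified_on is not None:
        due = min(due, modified_on + timedelta(days=90))
    return due

def retention_deadline(deployed_on: date) -> date:
    """Records must be retained for three years following deployment
    (approximated here as 3 x 365 days)."""
    return deployed_on + timedelta(days=3 * 365)
```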

In addition, deployers of high-risk AI Systems must inform consumers of their right to opt out of the processing of their personal data for purposes of profiling, as provided in the Colorado Privacy Act.23 The Colorado AI Act consequently does not include a separate right to opt out of consequential decisions made by AI Systems that do not process personal data. However, if a high-risk AI System is involved in a consequential decision that is adverse to a consumer, the deployer must provide a statement disclosing the principal reasons for the decision, as well as an opportunity to correct any incorrect personal data involved and to appeal the decision, including through human review where feasible.24

This is reminiscent of section 12.1 of the Quebec Act Respecting the Protection of Personal Information in the Private Sector (the "Quebec Privacy Act"), which provides that businesses making decisions based exclusively on automated processing of personal information must inform the concerned individual of the information used in the decision, as well as of the reasons and principal factors that led to the outcome. Individuals in Quebec also have a right to have the personal information used in the decision corrected. Section 12.1 of the Quebec Privacy Act does not, however, include a minimum risk threshold and in that sense has a broader scope than the Colorado AI Act. This is mitigated by the fact that, like the Colorado Privacy Act, the Quebec Privacy Act is only concerned with systems that process personal information. Moreover, it applies only to decisions made "exclusively" through automated processing. The Colorado AI Act is broader on this dimension: high-risk AI Systems include those that play a substantial part in decisional outcomes, without necessarily being their only factor.

Finally, the Colorado AI Act, similarly to the EU AI Act25 and proposed amendments to AIDA26, includes an obligation that applies to the deployment of any AI System (regardless of risk) intended to interact directly with consumers. In those cases, consumers must be informed that they are interacting with an AI System, unless this would be obvious to them.27

Developers and deployers thus need to implement complex and potentially costly compliance programs to meet the obligations of the Act. The Colorado legislature has been mindful that some businesses deploying AI systems, especially small companies and start-ups, may have limited resources for AI compliance. It has consequently exempted corporations that employ fewer than fifty full-time equivalent employees from the obligations to: (1) implement a reasonable risk management policy and program; (2) conduct an impact assessment of the high-risk AI System annually or within 90 days of an intentional or substantial modification; and (3) disclose any instances of algorithmic discrimination to the Attorney General within 90 days of discovery, all as described above, provided, however, that they do not use their own data to train a high-risk AI System and that the systems are used for their intended purpose.28
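
This carve-out reduces to a short set of conditions. A minimal sketch, purely illustrative and paraphrasing the conditions as described above (the function name and parameters are our own assumptions):

```python
def small_deployer_exemption_applies(full_time_equivalents: int,
                                     trains_with_own_data: bool,
                                     used_for_intended_purpose: bool) -> bool:
    """Sketch of the s. 6-1-1703(6) carve-out described above: deployers
    with fewer than fifty FTE employees are exempt from the risk management,
    impact assessment and Attorney General disclosure obligations, provided
    they do not train the high-risk AI System with their own data and use
    the system for its intended purpose."""
    return (full_time_equivalents < 50
            and not trains_with_own_data
            and used_for_intended_purpose)
```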

Enforcement

The Colorado AI Act falls under the exclusive authority of the Attorney General of Colorado, who is expressly empowered to establish rules for enforcing its requirements.29 Whereas both the EU AI Act and AIDA incorporate substantial, multi-million-dollar (or euro) fines within their sanction frameworks, the Colorado AI Act takes a different route. A violation of the Colorado AI Act is considered an unfair trade practice, sanctionable by fines of up to US$20,000 per violation.30

Furthermore, the Colorado AI Act does not provide consumers with a private right of action for non-compliance, instead granting such authority solely to the Attorney General. This aligns with AIDA and the EU AI Act, which also lack an explicit private right of action provision. However, this differs notably from the prevailing trend in privacy legislation, exemplified by the Quebec Privacy Act and Canada's newly introduced Consumer Privacy Protection Act (the "CPPA"), both of which incorporate a private right of action. Please refer to our blog posts on Law 25 and the CPPA for more information.

Conclusion

Canadian businesses that wish to export and use AI Systems in Colorado should pay close attention to this AI legislation. Even those that do not would benefit from a general understanding of a law that may serve as inspiration for other States (some of which are already studying AI bills31). For those considering implementing a responsible AI governance program, the Colorado AI Act also offers valuable indications of the emerging best practices that, little by little, are finding their way into law.

Footnotes

1. Several other States are also currently studying AI bills, including California, New York, Illinois, Louisiana and Massachusetts. See IAPP, "US State AI Governance Legislation Tracker" as of May 1, 2024, https://iapp.org/resources/article/us-state-ai-governance-legislation-tracker/.

2. Colorado AI Act, s. 6-1-1701(9)(a).

3. Colorado AI Act, s. 6-1-1701(4); AIDA, s. 3(1).

4. Colorado AI Act, s. 6-1-1701(9)(b)(II)(R).

5. Colorado AI Act, s. 6-1-1701(11)(a).

6. Colorado AI Act, s. 6-1-1701(6) and (7).

7. AIDA, s. 5(1): "biased output means content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds. It does not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds."

8. Colorado AI Act, s. 6-1-1706.

9. Colorado AI Act, s. 6-1-1702.

10. Colorado AI Act, s. 6-1-1702(2)(a).

11. Colorado AI Act, s. 6-1-1702(2)(b).

12. Colorado AI Act, s. 6-1-1702(2)(c).

13. Colorado AI Act, s. 6-1-1702(3)(a).

14. Colorado AI Act, s. 6-1-1702(4).

15. Colorado AI Act, s. 6-1-1702(5).

16. Colorado AI Act, s. 6-1-1703.

17. Colorado AI Act, s. 6-1-1703(2)(a).

18. Colorado AI Act, s. 6-1-1703(3)(a).

19. Colorado AI Act, s. 6-1-1703(3)(f).

20. Colorado AI Act, s. 6-1-1703(4)(a).

21. Colorado AI Act, s. 6-1-1703(5)(a).

22. Colorado AI Act, s. 6-1-1703(7).

23. Colorado AI Act, s. 6-1-1703(4)(a)(III).

24. Colorado AI Act, s. 6-1-1703(4)(b).

25. EU AI Act, Article 50(1).

26. Amendments proposed by ISED, s. 6(1): https://www.ourcommons.ca/content/Committee/441/INDU/WebDoc/WD12751351/12751351/MinisterOfInnovationScienceAndIndustry-2023-11-28-Combined-e.pdf.

27. Colorado AI Act, s. 6-1-1704.

28. Colorado AI Act, s. 6-1-1703(6).

29. Colorado AI Act, s. 6-1-1706.

30. Colorado Consumer Protection Act, s. 6-1-112(1)(a).

31. See the useful US AI legislation tracker regularly updated by the IAPP: https://iapp.org/resources/article/us-state-ai-governance-legislation-tracker/.



