Mile-High Risk: Colorado Enacts Risk-Based AI Regulation To Address Algorithmic Discrimination

Colorado's AI Act is the first comprehensive law regulating AI in the United States...

On May 17, 2024, Colorado Governor Jared Polis signed into law the Colorado Artificial Intelligence Act (SB 205) ("CAIA"), a measure passed out of the legislature on May 8 and scheduled to take effect February 1, 2026. The CAIA aims to combat intentional and unintentional algorithmic discrimination through new, broad-based notice, disclosure, risk mitigation, and opt-out requirements for developers and deployers of "high-risk" artificial intelligence ("AI") systems, along with disclosure requirements applicable to AI systems generally.

Governor Polis' signing statement acknowledged his reservations about signing the bill, noting that the measure creates a "complex compliance regime" for AI developers and deployers operating in Colorado and interacting with Colorado residents, and that it could encourage additional states to take similar action, resulting in a patchwork of state laws that could hamper innovation and deter competition. In that regard, the governor called for federal regulation of "nascent AI technologies ... to limit and preempt varied compliance burdens on innovators and ensure a level playing field across states." He also encouraged the legislature to reexamine the scope of discriminatory conduct covered by the CAIA before it takes effect, noting that the CAIA deviates from the norm by prohibiting all discriminatory outcomes from AI system use, regardless of intent.

The law appears to build upon the profiling and automated decision-making technology rules that the Colorado Attorney General finalized for compliance with the Colorado Privacy Act. The Colorado AG will also have enforcement authority and rulemaking authority to implement the extensive requirements of the CAIA, so additional requirements may follow from that process. Developers and deployers may be able to leverage some of their existing Colorado Privacy Act processes to comply with the CAIA.

Brief Summary

The CAIA imposes substantial new restrictions and compliance obligations on developers and deployers of high-risk AI systems that are intended to interact with consumers and make or be a substantial factor in making "consequential decisions" in areas such as employment, insurance, housing, credit, education, and healthcare.

The CAIA requires the following:

  • Developers of high-risk AI systems must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk AI system. A rebuttable presumption that a developer used reasonable care arises if the developer complies with specific provisions of the CAIA, outlined below. Developers must also disclose details about the technology on their websites and provide deployers the documentation needed to complete an impact assessment.
  • Deployers of high-risk AI systems must use reasonable care to protect consumers from any known or reasonably foreseeable risk of algorithmic discrimination. A rebuttable presumption that a deployer used reasonable care attaches if the deployer complies with specific provisions of the CAIA, also outlined below. Deployers must also implement a risk management policy and program and complete impact assessments.
  • Developers and deployers who make any AI system available to consumers (not just high-risk systems) must ensure that the AI system discloses that the consumer is interacting with an AI system unless it would be obvious to a reasonable person.

We highlight these key aspects of the Act and address additional requirements below.

Application and Definitions

The CAIA applies broadly to developers and deployers of high-risk AI systems that are intended to interact with Colorado residents and make consequential decisions. Key definitions include the following:

  • A "deployer" is a person doing business in Colorado who deploys a high-risk AI system.
  • A "developer" is a person doing business in Colorado who develops or intentionally and substantially modifies an AI system (not limited to "high risk").
  • An "artificial intelligence system" is any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations that can influence physical or virtual environments.
  • A "high-risk artificial intelligence system" is any AI system that, when deployed, makes or is a substantial factor in making a consequential decision.
  • A "consequential decision" is any decision that has a material legal or similarly significant effect on the provision or denial to any consumer of or the cost or terms of: education enrollment or opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, healthcare services, housing, insurance, or legal service.
  • "Algorithmic discrimination" is any condition in which the use of an AI system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under Colorado or federal law, with some exceptions.

Requirements for Developers of High-Risk AI Systems

Documentation and Disclosures to Deployers and Attorney General

On or after the CAIA's effective date, developers of high-risk AI systems must make the following documents and information available to the deployers or other developers of the high-risk AI system, as well as to the Colorado AG within 90 days of a request:

  • A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system
  • Documentation disclosing and describing:
    • High-level summaries of the type of data used to train the high-risk AI system
    • Known or reasonably foreseeable limitations of the high-risk AI system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk AI system
    • The purpose of the high-risk AI system
    • The intended benefits and uses of the high-risk AI system
    • All other information necessary to allow the deployer to comply with their requirements under the law
    • How the high-risk AI system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk AI system was offered, sold, leased, licensed, given, or otherwise made available
    • The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation
    • The intended outputs of the high-risk AI system
    • The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk AI system
    • How the high-risk AI system should and should not be used, and how it should be monitored by an individual, when the high-risk AI system is used to make or is a substantial factor in making a consequential decision
    • Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the high-risk AI system for risks of algorithmic discrimination.

To the extent feasible, the above documentation and information should be made available through artifacts currently used in the industry, such as model cards, dataset cards, or other impact assessments, as necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment as required by the law.
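
For illustration only, the statutory documentation elements map naturally onto a model-card-style artifact. Below is a minimal sketch, in Python, of how a developer might structure that record; the class and field names are our own assumptions, not terms drawn from the CAIA or any model-card standard.

```python
from dataclasses import dataclass, field

# Hypothetical model-card-style record for the CAIA's developer documentation
# duties; all names are illustrative assumptions, not statutory terms.
@dataclass
class HighRiskSystemCard:
    purpose: str                        # purpose of the high-risk AI system
    intended_benefits_and_uses: str     # intended benefits and uses
    intended_outputs: str               # intended outputs
    training_data_summary: str          # high-level summary of training data types
    evaluation_summary: str             # how performance and discrimination mitigation were evaluated
    data_governance_measures: str       # data-source suitability, bias examination, mitigation
    usage_and_monitoring_guidance: str  # how the system should (not) be used and monitored
    known_limitations: list[str] = field(default_factory=list)  # incl. foreseeable discrimination risks
    harmful_or_inappropriate_uses: list[str] = field(default_factory=list)
    discrimination_mitigations: list[str] = field(default_factory=list)
    deployer_compliance_notes: str = ""  # anything else a deployer needs to comply
```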

Developers who also serve as deployers for high-risk AI systems are not required to generate the above documentation unless the high-risk AI system is provided to an unaffiliated entity acting as a deployer.

Website Notice

Under the CAIA, developers of high-risk AI systems must make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing:

  • The types of high-risk AI systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer
  • How the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of high-risk AI systems.

Affirmative Duty To Report Algorithmic Discrimination to Attorney General and Deployers

Developers of high-risk AI systems must disclose to the Colorado AG and to all known deployers or other developers of the high-risk AI system any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk AI system without unreasonable delay but no later than 90 days after the date on which:

  • The developer discovers through the developer's ongoing testing and analysis that the developer's high-risk AI system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination
  • The developer receives from a deployer a credible report that the high-risk AI system has been deployed and caused algorithmic discrimination.

Requirements for Deployers of High-Risk AI Systems

Risk Management Policy and Program

Deployers of high-risk AI systems must implement a risk management policy and program to govern their deployment of high-risk AI systems. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of a high-risk AI system.

A risk management policy and program implemented and maintained pursuant to the CAIA must be reasonable considering:

  • The guidance and standards set forth in the latest version of the NIST AI Risk Management Framework, Standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for AI systems, if the standards are substantially equivalent to or more stringent than the NIST AI RMF or ISO/IEC 42001
  • Any risk management framework for AI systems that the attorney general may designate
  • The size and complexity of the deployer
  • The nature and scope of the high-risk AI systems deployed by the deployer, including the intended uses of the high-risk AI systems
  • The sensitivity and volume of data processed in connection with the high-risk AI system deployed

Impact Assessments and Recordkeeping

Deployers of high-risk AI systems must complete an impact assessment for the high-risk AI system at least annually and within 90 days after any intentional and substantial modification to the high-risk AI system. The impact assessment must include, at a minimum and to the extent reasonably known or available:

  • A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by the high-risk AI system
  • An analysis of whether the deployment of the high-risk AI system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks
  • A description of the categories of data the high-risk AI system processes as inputs and the outputs the high-risk AI system produces
  • If the deployer used data to customize the high-risk AI system, an overview of the categories of data the deployer used to customize the high-risk AI system
  • Any metrics used to evaluate the performance and known limitations of the high-risk AI system
  • A description of any transparency measures taken concerning the high-risk AI system, including measures taken to disclose to a consumer that the high-risk AI system is in use when the high-risk AI system is in use
  • A description of the post-deployment monitoring and user safeguards provided concerning the high-risk AI system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk AI system
  • If intentionally and substantially modifying the high-risk AI system, a statement disclosing the extent to which the high-risk AI system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk AI system

Deployers of high-risk AI systems must maintain the most recently completed impact assessment, all records concerning each impact assessment, and all prior impact assessments for at least three years following the final deployment of the high-risk AI system, and annually review the deployment of each high-risk AI system to ensure that the high-risk AI system is not causing algorithmic discrimination.

An impact assessment prepared for the purpose of complying with another applicable law or regulation satisfies the CAIA impact assessment requirement if that impact assessment "is reasonably similar in scope and effect" to the one required by the CAIA. This means that deployers could, for efficiency, complete a single impact assessment that satisfies both the CAIA and Colorado Privacy Act requirements.
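
As a rough illustration of the cadence rules (annual review plus a 90-day window after any intentional and substantial modification) and the three-year retention period, here is a minimal Python sketch; the record fields and helper functions are assumptions for illustration, not statutory language.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative tracking of CAIA impact-assessment cadence and retention;
# field and function names are assumptions, not statutory terms.
@dataclass
class ImpactAssessmentRecord:
    completed_on: date
    final_deployment_on: date | None = None  # set once the system is retired

def next_assessment_due(last_completed: date, modified_on: date | None = None) -> date:
    # At least annually, and within 90 days of any intentional and
    # substantial modification, whichever is earlier.
    due = last_completed + timedelta(days=365)
    if modified_on is not None:
        due = min(due, modified_on + timedelta(days=90))
    return due

def retain_until(record: ImpactAssessmentRecord) -> date | None:
    # Records must be kept for at least three years after final deployment.
    if record.final_deployment_on is None:
        return None  # still deployed; the retention clock has not started
    return record.final_deployment_on + timedelta(days=3 * 365)
```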

Consumer Notice of Consequential Decision

Before the decision is made, deployers of high-risk AI systems must notify the consumer that the deployer has deployed a high-risk AI system to make, or be a substantial factor in making, a consequential decision.

Deployers of high-risk AI systems must also provide the consumer a statement disclosing the purpose of the high-risk AI system and the nature of the consequential decision, the contact information for the deployer, a description, in plain language, of the high-risk AI system (easier said than done), and instructions on how to access the statement.

Deployers must also provide the consumer information, if applicable, regarding the consumer's right under the Colorado Privacy Act to opt out of the processing of personal data concerning the consumer for the purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.

Adverse Consequential Decision Requirements

If a high-risk AI system makes or is a substantial factor in making a consequential decision that is adverse to the consumer, a deployer must provide the consumer with the following (see the sketch after this list):

  • A statement disclosing the principal reason or reasons for the consequential decision, including:
    • The degree to which and manner in which the high-risk AI system contributed to the consequential decision
    • The type of data that was processed by the high-risk AI system in making the consequential decision
    • The source or sources of the data
  • An opportunity to correct any incorrect personal data that the high-risk AI system processed in making or that was a substantial factor in making the consequential decision
  • An opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk AI system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including instances in which any delay might pose a risk to the life or safety of such consumer
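
To make the required elements concrete, the sketch below models the adverse-decision notice as a simple Python record, together with a helper reflecting one plausible reading of when human review must be offered on appeal; all names and the decision logic are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical shape of an adverse-decision notice under the CAIA;
# field names are illustrative, not statutory terms.
@dataclass
class AdverseDecisionNotice:
    principal_reasons: list[str]     # principal reason(s) for the decision
    ai_contribution: str             # degree and manner the system contributed
    data_types_processed: list[str]  # types of data processed in the decision
    data_sources: list[str]          # source(s) of that data
    correction_instructions: str     # how to correct inaccurate personal data
    appeal_instructions: str         # how to appeal the decision

def appeal_offers_human_review(technically_feasible: bool,
                               delay_risks_life_or_safety: bool) -> bool:
    # One plausible reading: human review is offered when technically feasible,
    # unless delay would pose a risk to the consumer's life or safety.
    return technically_feasible and not delay_risks_life_or_safety
```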

Website Notice

Deployers of high-risk AI systems must make available on the deployer's website a statement summarizing:

  • The types of high-risk AI systems that are currently deployed
  • How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk AI system
  • In detail, the nature, source, and extent of the information collected and used by the deployer

Affirmative Duty To Report Algorithmic Discrimination to the AG

If a deployer discovers that the high-risk AI system has caused algorithmic discrimination, the deployer must provide notice disclosing the discovery to the Colorado AG without unreasonable delay but no later than 90 days after the date of discovery.

A deployer must also disclose the risk management policy implemented, the impact assessment completed, or the records maintained to the AG upon request, no later than 90 days after the request.

Transparency Requirement for AI Systems That Interact With Consumers

The CAIA also imposes a basic transparency obligation on developers and deployers of AI systems that interact with consumers. Specifically, a deployer or other developer who deploys or makes available an AI system that is intended to interact with consumers must ensure that the system discloses to each consumer who interacts with it that they are interacting with an AI system. The CAIA provides an exception: this duty does not apply where it would be obvious to a reasonable person that they are interacting with an AI system.
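
By way of illustration only, here is a minimal Python sketch of how a consumer-facing chat flow might satisfy this disclosure duty; the wording of the notice and the "obviousness" judgment are left to the deployer by the statute, so both are modeled as assumptions here.

```python
# Minimal sketch of the CAIA's consumer disclosure duty for AI interactions.
# The notice text and the obviousness determination are illustrative assumptions.
AI_DISCLOSURE = "You are interacting with an artificial intelligence system."

def opening_messages(obvious_to_reasonable_person: bool) -> list[str]:
    # Disclose unless the AI nature of the interaction would be obvious
    # to a reasonable person (the CAIA's stated exception).
    if obvious_to_reasonable_person:
        return []
    return [AI_DISCLOSURE]

print(opening_messages(obvious_to_reasonable_person=False))
# ['You are interacting with an artificial intelligence system.']
```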

Exclusions, Exemptions, and Exceptions

High-risk AI systems, as defined by the CAIA, do not include AI systems intended to perform a narrow procedural task, or to detect decision-making patterns or deviations from prior decision-making patterns, provided the system is not intended to replace or influence a previously completed human assessment without sufficient human review.

High-risk AI systems also do not include the following technologies, unless the technologies, when deployed, make or are a substantial factor in making a consequential decision:

  • Anti-fraud technology that does not use facial recognition technology
  • Anti-malware and anti-virus technology
  • Artificial intelligence-enabled video games
  • Calculators, spreadsheets, and spell-checking
  • Cybersecurity and firewalls
  • Databases and data storage
  • Internet domain registration, internet website loading, networking, web caching, web hosting, or any similar technology
  • Spam and robocall filtering
  • Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions, and that is subject to an acceptable use policy that prohibits generating content that is discriminatory or harmful

Algorithmic discrimination, as defined by the CAIA, does not include the offer, license, or use of a high-risk AI system by a developer or deployer for the sole purpose of (1) the developer's or deployer's self-testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with Colorado and federal law, or (2) expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination. It also does not include an act or omission by or on behalf of a private club or other establishment that is not in fact open to the public, as set forth in Title II of the Civil Rights Act of 1964.

Documentation and disclosure requirements do not require developers or deployers to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.

A deployer's duties to establish a risk management policy and program, complete impact assessments, and post a website statement do not apply if all of the following are true:

  • The deployer employs fewer than 50 full-time equivalent employees and does not use the deployer's own data to train the high-risk AI system
  • The high-risk AI system is used for intended uses that are disclosed to the deployer and continues learning based on data derived from sources other than the deployer's own data
  • The deployer makes available to consumers any impact assessment that the developer of the high-risk AI system has completed and provided to the deployer and that includes information substantially similar to the information required for an impact assessment under the CAIA

Other exceptions include, under certain circumstances, HIPAA-covered entities and banks.

Enforcement and Affirmative Defenses

The attorney general has exclusive authority to enforce the CAIA, and there is no private right of action. Violations of the CAIA are deemed to be per se unfair trade practices under Colorado consumer protection law.

If an action is commenced by the Colorado AG, it is an affirmative defense that the developer or deployer: (a) discovers and cures a violation as a result of feedback that the developer or deployer encourages deployers or users to provide, adversarial testing or red teaming, or an internal review process; and (b) is in compliance with NIST's AI Risk Management Framework and ISO/IEC Standard 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for AI systems, if the standards are substantially equivalent to or more stringent than the NIST AI RMF or ISO/IEC 42001.

Attorney General Rulemaking Authority

The CAIA grants the Colorado AG rulemaking authority to implement and enforce the requirements of the bill, including rules regarding: the documentation and requirements for developers, the contents and requirements for the notices and disclosures, the content and requirements of the risk management and policy program, the content and requirements of the impact assessments, the requirements for the rebuttable presumptions, and the requirements for affirmative defenses under the CAIA.

Looking Ahead

The CAIA represents the first attempt to impose a risk-based regime to regulate AI and algorithmic discrimination in the United States, but it will most certainly not be the last.

In the absence of federal action to regulate artificial intelligence technology (as well as privacy and data protection more broadly), states may follow Colorado's lead in pursuing broad-based regulation of AI systems that make decisions impacting consumers.

DWT's privacy and security team and AI team regularly counsel clients on how their business practices can comply with state privacy and AI laws. We will continue to monitor the rapid development of other state and new federal privacy and AI laws and regulations.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
