Colorado Legislature Approves AI Bill Targeting "High-Risk" Systems And AI Labeling

Holland & Knight

Highlights

  • If signed into law, the Colorado Artificial Intelligence Act (SB 205) will be the first law in the U.S. to impose specific requirements intended to mitigate risk of "algorithmic discrimination" on both developers and deployers of artificial intelligence.
  • SB 205 is one of a number of similar bills under consideration by state legislatures, and it could create a model for such legislation going forward.
  • SB 205 would apply to many businesses that otherwise fall outside the scope of the Colorado Privacy Act (CPA), including many healthcare organizations, financial institutions and businesses processing lower volumes of personal data.

The Colorado legislature passed the Colorado Artificial Intelligence Act (SB 205 or the Act) on May 8, 2024. If signed by Gov. Jared Polis, it will be the first law in the U.S. to impose specific requirements intended to mitigate the risk of "algorithmic discrimination" on both developers and deployers of artificial intelligence (AI), across a variety of use cases.

SB 205 is one of a number of similar bills under consideration by state legislatures, and it could create a model for such legislation going forward. Utah recently enacted the Utah Artificial Intelligence Policy Act, which took effect May 1, 2024, and has limited requirements around identifying generative AI systems when used to respond to consumer requests or when used by certain regulated occupations such as physicians. SB 205 goes further, potentially creating a new duty of care and rules around risk management, documentation and notification in the event a "high-risk" AI system causes algorithmic discrimination.

Key Takeaways

The Act will impose a number of new requirements related to AI and algorithmic discrimination, including:

  • SB 205 would regulate the development and deployment of AI where used as a significant factor in decision-making in an enumerated set of situations considered to be "consequential."
  • Businesses developing or deploying "high-risk" AI systems will need to take reasonable care to prevent algorithmic discrimination.
  • The most significant impact on many businesses will be on the use of AI in employment matters.
  • SB 205 would apply to many businesses that otherwise fall outside the scope of the Colorado Privacy Act (CPA), including many healthcare organizations, financial institutions and businesses processing lower volumes of personal data.
  • A requirement to notify the Colorado attorney general (AG) when algorithmic discrimination is discovered has the potential to increase the risk of investigations and trigger requests for copies of some of the extensive documentation required by the Act.
  • Both SB 205 and the CPA require transparency through notices and impact assessments, but some of the details regarding what must be included vary. SB 205 does not contain an opt-out right, meaning that use cases not covered by the CPA (such as employment) are not subject to consumer opt-out.

What Uses of AI Are Covered?

SB 205 would regulate the development and deployment of AI where used as a significant factor in decision-making in an enumerated set of situations considered to be "consequential." These include employment, education, lending, financial services and healthcare, and such AI systems are referred to as "high-risk" AI systems. The Act also requires labeling of AI systems that interact with individuals, whether or not the system is "high-risk."

What Is Algorithmic Discrimination?

"Algorithmic discrimination" is a use of AI that "results" in "unlawful differential treatment or impact" that disfavors an individual or group based on a protected classification, specifically: age, color, disability, ethnicity, genetic information, English proficiency, national origin, race, religion, reproductive health, sex, veteran status or another classification protected under law.

What Businesses Must Comply with SB 205?

The Act's key requirements apply to "developers" and "deployers" of "high-risk" AI systems. A "developer" is a person doing business in Colorado who develops or intentionally and substantially modifies an AI system. A "deployer" is a person doing business in Colorado who deploys a "high-risk" AI system.

There are no revenue or data volume thresholds, though small businesses acting as deployers may be exempt from certain requirements in limited situations. Unlike some "comprehensive" privacy laws, SB 205 contains no entity-level exemptions for healthcare or financial institutions and no exemptions for employee or business contact data; although certain regulated activities fall outside its scope, those exemptions are not as expansive as those found in typical privacy laws.

What Are the Key Requirements of the Act?

Requirements for "High-Risk" AI Systems

Duty of Care

  • Developers: Exercise reasonable care to protect individuals from known or foreseeable risks of algorithmic discrimination. The duty applies to discrimination arising from intended and contracted uses of the AI system deployed.
  • Deployers: Exercise reasonable care to protect individuals from known or foreseeable risks of algorithmic discrimination.

Public Disclosures

  • Developers: Post a notice on a website or in a public use case inventory disclosing 1) the types of "high-risk" AI systems the developer has developed and makes available and 2) known or reasonably foreseeable risks of algorithmic discrimination.
  • Deployers: Post a notice on a website regarding the types of "high-risk" AI systems the deployer uses, known risks, and details about the information collected and used.

Individual Disclosures

  • Developers: N/A
  • Deployers: Provide individuals with a notice before a system is used to make a "consequential" decision, with information such as its purpose and the existence of the CPA right to opt out. Additional disclosures are required if the system facilitates an adverse decision.

Risk Management

  • Developers: N/A
  • Deployers: Implement a risk management policy and program that is reasonable, taking into account frameworks such as those of the National Institute of Standards and Technology (NIST), the size and complexity of the deployer, the nature of the system, and the volume and sensitivity of the data processed.

Documentation and Impact Assessments

  • Developers: Provide a statement of the reasonably foreseeable uses and known harmful uses of the system, along with documents regarding, among other things: 1) training data, 2) limitations, 3) the system's purpose, 4) intended benefits and uses, 5) how the system was evaluated, 6) data governance, 7) risk mitigation and 8) other information required to complete a deployer's impact assessment.
  • Deployers: Conduct an impact assessment initially, and at least annually thereafter or when there are substantial modifications to the system. The assessment should consider elements such as the use cases, context of deployment, categories of data used, known limitations, transparency and post-deployment monitoring.

Rights

  • Developers: N/A
  • Deployers: Honor individual rights to receive an explanation regarding an adverse consequential decision, to correct inaccurate personal data used and to appeal the decision for human review.

Colorado AG Notice

  • Developers: Notify the Colorado AG and known deployers of known or reasonably foreseeable risks of algorithmic discrimination without unreasonable delay and within 90 days.
  • Deployers: Notify the Colorado AG if the deployer discovers that the system has caused algorithmic discrimination, without unreasonable delay and within 90 days.

Labeling of AI Systems

A developer or deployer of an AI system intended to interact with individuals is required to disclose to the individual that they are interacting with AI unless it would be obvious to a reasonable person. Unlike most of the Act's other terms, these labeling requirements are not restricted to "high-risk" AI systems.

Enforcement

SB 205 does not provide for a private right of action. Instead, the Act would be enforced by the Colorado AG, with offenses constituting unfair trade practices under Colo. Rev. Stat. Section 6-1-105, punishable by fines of up to $20,000 per violation. There is a safe harbor for businesses that discover and cure a violation as a result of their own actions (rather than complaints) and that follow specified AI frameworks such as NIST.

Next Steps

Gov. Polis has not confirmed whether he intends to sign the legislation. Notably, a nearly identical bill in Connecticut failed to pass after Gov. Ned Lamont threatened to veto it over concerns that the legislature was moving too fast. To address those concerns, the Colorado legislature adopted a separate bill creating a working group to study AI and biometric regulation, meaning there is some likelihood that amendments to the Act will be proposed. If enacted, the law will take effect on Feb. 1, 2026.

The Colorado AG is granted rulemaking authority. Although not mandatory, potential areas of rulemaking include: 1) developer documentation, 2) notice, 3) risk management, 4) impact assessments and 5) the Act's rebuttable presumptions and affirmative defenses. See Section 6-1-1607. Given the attention the Colorado AG gave to rulemaking under the CPA, it would not be surprising to again see thorough consideration of potential regulations.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
