Colorado Adopts Comprehensive AI Act Imposing Broad Disclosure Requirements
On May 17, 2024, Colorado Governor Jared Polis signed into law a comprehensive AI bill, SB205, titled “Concerning Consumer Protections In Interactions With Artificial Intelligence Systems” (the “AI Act”). The law imposes extensive regulations and disclosure requirements on those who develop and deploy “high-risk artificial intelligence systems.” The AI Act will go into effect on Feb. 1, 2026.
Applicability.
Like the European Union’s AI Act, the new Colorado law takes a risk-based approach. The main category the AI Act addresses is artificial intelligence systems that fall under the rubric of “high-risk artificial intelligence systems.” That term is defined as an AI system that, “when deployed, makes, or is a substantial factor in making, a consequential decision” (the “High-Risk AI”).
A consequential decision is defined as “a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: education enrollment or an education opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, health-care services, housing, insurance, or a legal service” (a “Consequential Decision”). However, the key notion of a “material legal or similarly significant effect,” likely adopted from the European Union’s General Data Protection Regulation, is not defined in the AI Act. A similar definition appears in the Proposed California Regulations for Automated Decision-Making, which address “a decision that produces legal or similarly significant effects concerning a consumer” and define that term as a “decision that results in access to, or the provision or denial of, financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment or independent contracting opportunities or compensation, healthcare services, or essential goods or services.” If the scope of “material legal or similarly significant effect” proves analogous to California’s definition, Colorado’s AI Act will provide very broad coverage.
Limitations and Carve-Outs.
The AI Act applies only to developers and deployers of the High-Risk AI that do business in Colorado and have more than 50 employees.
The AI Act also provides an explicit carve-out for several types of technologies: anti-fraud technology (excluding facial recognition), anti-malware, anti-virus, AI-enabled video games, calculators, cybersecurity, databases, data storage, firewalls, internet domain registration, website loading, networking, spam and robocall filtering, spell-checking, spreadsheets, web caching, web hosting, and technology for communicating with consumers in natural language to provide information, make referrals, and answer questions, subject to an acceptable use policy. These technologies are excluded “unless the technologies, when deployed, make, or are a substantial factor in making, a [Consequential Decision].” The AI Act further excludes AI systems intended to “perform a narrow procedural task” or to “detect decision-making patterns or deviations from prior decision-making patterns,” provided the system “is not intended to replace or influence a previously completed human assessment without sufficient human review.”
Key Requirements.
Duty of Reasonable Care. The AI Act imposes a duty of “reasonable care” on both developers and deployers “to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination” based on “actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of [Colorado] or federal law.”
Developer’s Disclosures. Developers must disclose to the deployer of the High-Risk AI the following information and documents:
- Summaries of the data used to train the High-Risk AI;
- Known or foreseeable limitations and risks of algorithmic discrimination;
- The purpose of the High-Risk AI;
- The intended benefits and uses of the High-Risk AI;
- Any other information necessary to allow the deployer to comply with the AI Act.
Public Statements. Developers must provide, in a clear and readily accessible manner on their website or in a public use case inventory, a statement summarizing:
- The types of the High-Risk AI systems they have developed, significantly modified, and currently offer;
- How they manage foreseeable risks of algorithmic discrimination from these AI systems.
This statement must be updated as needed to remain accurate and within 90 days of any significant modification to any High-Risk AI.
Discrimination Risk Disclosures. Developers must notify the Attorney General and known deployers of the High-Risk AI about any known or reasonably foreseeable risk of algorithmic discrimination within 90 days of discovering it or of receiving a credible report from a deployer that the High-Risk AI has caused or is likely to cause such discrimination.
Deployers have a similar obligation to make a publicly available statement summarizing the types of High-Risk AI used and disclosing how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination, as well as the nature, source, and extent of the information collected and used by the deployer.
General Disclosure Obligations. While the AI Act mainly addresses the High-Risk AI, it also requires any deployer or developer of an AI system (whether high-risk or not) that is intended to interact with consumers to disclose to each consumer that they are interacting with an AI system, unless that fact would be obvious to a reasonable person.
Deployer Obligations. The deployer shall:
- Implement a “risk management policy and program” for the High-Risk AI;
- Complete an impact assessment of the High-Risk AI. An impact assessment prepared to comply with another law or regulation can suffice for purposes of the Act, as long as it is reasonably similar in scope and effect to the impact assessment the Act requires;
- Annually review the High-Risk AI to ensure that it is not causing algorithmic discrimination;
- Notify the consumer when the High-Risk AI makes a Consequential Decision concerning that consumer;
- Provide a consumer with an opportunity to correct any incorrect personal data that was processed in making a Consequential Decision;
- Provide a consumer with an opportunity to appeal, by human review if technically feasible, an adverse Consequential Decision.
Consumer Rights.
Consumers will have the right to be informed of the purpose and nature of the AI system and the type of decision it influences. They will also have the right to opt out of profiling in furtherance of Consequential Decisions. These rights are similar to those granted by the European Union’s General Data Protection Regulation and other states’ privacy laws. Consumers will also have the right to be informed of the deployer’s contact information and of how to access the public statement on AI use.
Enforcement and Safe Harbor.
Only the Colorado Attorney General has the authority to enforce the AI Act, as the Act expressly denies a private right of action. A violation of the AI Act will be deemed an unfair and deceptive trade practice. The AI Act provides a rebuttable presumption that developers and deployers of AI systems used “reasonable care” if they have complied with the requirements described above.
The scope of the AI Act is very broad and raises a number of concerns about its effect on the market and on the development of innovative technologies. Colorado’s governor himself expressed reservations in a letter to the General Assembly, specifically calling out the new law’s “complex compliance regime” and its broad applicability regardless of intent. The Governor’s letter also argued that the AI Act needs fine-tuning before it takes effect in 2026.
The AI landscape is changing fast, and more states will likely introduce legislation addressing the use of AI, at least in such key areas as employment and consumer protection. While Colorado is the first state to enact a comprehensive AI act, it is not the only state adopting legislation addressing AI risks.[1]
[1] Other states are also taking action to address AI-related risks in employment, consumer protection, publicity rights, elections, and algorithmic bias, among other issues. On May 1, Utah’s generative AI transparency law aimed at consumer protection, the Artificial Intelligence Amendments (SB149), took effect. In October 2023, New York City adopted a comprehensive Artificial Intelligence Action Plan for the responsible use of AI in government. New York has also introduced a comprehensive AI bill of rights addressing algorithmic bias (A8129 and S8209). California has introduced several bills addressing automated decision tools and algorithmic bias (AB-331), the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act (SB-1047), and Consumer Protection: Generative Artificial Intelligence (SB-942). Illinois has introduced two algorithmic bias bills: the Automated Decision Tools Act (HB5116) and the Illinois Commercial Algorithmic Impact Assessments Act (HB5322). Oklahoma has introduced five AI bills: the Ethical Artificial Intelligence Act (HB 3835), the Oklahoma Artificial Intelligence Bill of Rights (HB 3453), the Oklahoma Artificial Intelligence Act of 2024 (HB 3293), the Artificial Intelligence Utilization Review Act (HB 3577), and the Citizen’s Bill of Rights (SB 1975). New Jersey has introduced a bill on automated employment decision tools (S1588), and Massachusetts has two pending AI bills: An Act Preventing a Dystopian Work Environment (H.1873) and An Act Relative to Cybersecurity and Artificial Intelligence (S.2539).