Colorado SB 205: What Developers and Deployers of High-Risk AI Systems Must Do
Colorado SB 205, the Colorado Artificial Intelligence Act, is the first US state AI law imposing substantive obligations on developers and deployers of high-risk AI systems. It takes effect February 1, 2026, and defines high-risk AI as systems that make, or substantially assist in making, consequential decisions in employment, credit, housing, healthcare, insurance, education, essential government services, and legal services. Developers must document their systems, disclose risks to deployers, and publish public statements. Deployers must implement a risk management program, complete annual impact assessments, notify consumers, provide adverse action explanations, and enable human review rights. A NIST AI RMF safe harbor is available to deployers with documented programs.
Who Colorado SB 205 Covers
Colorado SB 205 applies to any developer or deployer doing business in Colorado. Developer means any person who develops or substantially modifies a high-risk AI system and makes it commercially available — including companies that fine-tune foundation models for high-risk applications. Deployer means any person who deploys a high-risk AI system to make or substantially assist in making consequential decisions about Colorado consumers. The law has no revenue threshold or employee count exemption. Any size business using a high-risk AI system that affects Colorado residents is in scope. An entity can be both developer and deployer simultaneously.
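To make the scoping test concrete, here is a minimal Python sketch of how a team might record its own role analysis for a single system. The class, function, and field names are hypothetical conveniences, and the boolean criteria paraphrase the definitions above rather than the statutory text.

```python
from dataclasses import dataclass

@dataclass
class AISystemUse:
    does_business_in_colorado: bool
    develops_or_substantially_modifies: bool   # includes fine-tuning a foundation model
    makes_commercially_available: bool
    deploys_for_consequential_decisions: bool  # about Colorado consumers

def sb205_roles(use: AISystemUse) -> set:
    """Return the SB 205 roles an entity plausibly occupies for one system."""
    roles = set()
    if not use.does_business_in_colorado:
        return roles  # outside the statute's scope entirely
    if use.develops_or_substantially_modifies and use.makes_commercially_available:
        roles.add("developer")
    if use.deploys_for_consequential_decisions:
        roles.add("deployer")  # the same entity can hold both roles simultaneously
    return roles

# A company that fine-tunes a foundation model for hiring and also runs it
# on its own applicants would come back as both developer and deployer.
print(sb205_roles(AISystemUse(True, True, True, True)))
```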
What Counts as a High-Risk AI System Under SB 205
A high-risk AI system is one that makes, or substantially assists in making, a consequential decision. A consequential decision is one with a material legal or similarly significant effect on a consumer's access to: education enrollment, employment, financial or lending services, essential government services, healthcare, housing, insurance, or legal services. The "substantially assists" threshold is significant: an AI that generates a risk score used by a human decision-maker is covered even if the human makes the final determination. The definition is technology-agnostic, covering traditional ML, LLMs, rule-based systems, and hybrid approaches.
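As a rough illustration, an AI inventory tool might triage systems against these criteria as follows. The enum values paraphrase the statutory domains, and the function is a hypothetical sketch of a team's own analysis, not a legal determination.

```python
from enum import Enum
from typing import Optional

class ConsequentialDomain(Enum):
    EDUCATION_ENROLLMENT = "education enrollment"
    EMPLOYMENT = "employment"
    FINANCIAL_OR_LENDING = "financial or lending services"
    ESSENTIAL_GOVERNMENT = "essential government services"
    HEALTHCARE = "healthcare"
    HOUSING = "housing"
    INSURANCE = "insurance"
    LEGAL_SERVICES = "legal services"

def is_high_risk(makes_decision: bool, substantially_assists: bool,
                 domain: Optional[ConsequentialDomain]) -> bool:
    """Technology-agnostic triage: ML models, LLMs, and rule engines alike.

    A risk score consumed by a human decision-maker still satisfies the
    'substantially assists' prong, even though the human decides.
    """
    return domain is not None and (makes_decision or substantially_assists)

# A resume-scoring model whose scores feed a human recruiter:
print(is_high_risk(False, True, ConsequentialDomain.EMPLOYMENT))  # True
```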
Deployer Obligations: Risk Program and Impact Assessments
Deployers must implement and maintain a risk management program using NIST AI RMF, ISO 42001, or a substantially equivalent framework — with documented policies, procedures, and accountable roles. Annually, deployers must complete an impact assessment for each high-risk AI system documenting: the system's purpose and known risks, measures implemented to mitigate identified risks, a description of training data including sources and known limitations, and performance metrics including accuracy and bias evaluation results. Impact assessments must be updated whenever the AI system undergoes a material change — including model updates, significant prompt changes, and changes to the decision domain.
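A minimal sketch of how the assessment contents and update triggers might be captured, assuming illustrative field names (the statute prescribes the substance of the assessment, not any particular schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str
    known_risks: list
    mitigations: list                 # measures implemented against identified risks
    training_data_sources: list
    training_data_limitations: list
    performance_metrics: dict         # e.g. accuracy and bias evaluation results
    completed_on: date

# Changes the paragraph above treats as material, triggering an update
# outside the annual cycle.
MATERIAL_CHANGES = {"model update", "significant prompt change", "decision domain change"}

def needs_reassessment(last: ImpactAssessment, today: date, change=None) -> bool:
    """Annual refresh, plus an update on any material change to the system."""
    annual_due = (today - last.completed_on).days >= 365
    return annual_due or change in MATERIAL_CHANGES

ia = ImpactAssessment("resume-screener", "rank applicants", [], [], [], [], {},
                      completed_on=date(2026, 3, 1))
print(needs_reassessment(ia, date(2026, 6, 1), change="model update"))  # True
```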
Consumer Rights: Notice, Explanation, Correction, and Human Review
Colorado SB 205 grants consumers four rights when subject to a consequential AI decision. Right to Notice: consumers must be informed, at or before the time of the decision, that a high-risk AI system was used in making or substantially assisting in it. Right to Explanation: for adverse decisions, consumers may request the specific reasons the decision was adverse, meaning the factors that influenced this consumer's outcome, not generic AI documentation. Right to Correction: consumers may correct inaccurate personal data that contributed to the decision and have the decision reconsidered. Right to Human Review: consumers may appeal an adverse decision and request review by a human with authority to overturn it.
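One way a deployer might bundle the four rights into an adverse-decision notice, as a hedged sketch; the payload fields, URL, and contact address are hypothetical placeholders, not statutory requirements:

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionNotice:
    ai_system_disclosed: bool      # right to notice: AI involvement stated up front
    principal_reasons: list        # right to explanation: this consumer's factors
    correction_channel: str        # right to correction: how to fix inaccurate data
    human_review_contact: str      # right to human review: an appeal that can overturn

def build_notice(consumer_factors):
    # Reasons must be decision-specific, not generic model documentation.
    return AdverseDecisionNotice(
        ai_system_disclosed=True,
        principal_reasons=consumer_factors,
        correction_channel="https://example.com/data-correction",  # placeholder
        human_review_contact="appeals@example.com",                # placeholder
    )

notice = build_notice(["debt-to-income ratio above threshold", "short credit history"])
print(notice.principal_reasons)
```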
The NIST AI RMF Safe Harbor
Section 6-1-1703 provides a rebuttable presumption of reasonable care for deployers who implement and maintain a risk management program consistent with the NIST AI RMF, ISO 42001, or a substantially equivalent framework, and who complete the required impact assessments and consumer rights mechanisms. The presumption shifts the burden to the Colorado Attorney General to demonstrate that the program was inadequate. The safe harbor requires genuine implementation, not paper compliance: an audit would examine whether behavioral monitoring is actually running, whether impact assessment performance metrics are based on real data, and whether human review mechanisms genuinely allow overriding AI decisions.
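To illustrate the gap between paper compliance and genuine implementation, here is a hypothetical internal evidence check; the checkpoint names are illustrative, drawn from the audit questions above rather than from the statute:

```python
# Illustrative audit checkpoints; the key names are hypothetical,
# not a statutory checklist.
EXPECTED_EVIDENCE = [
    "risk_program_documented",       # NIST AI RMF / ISO 42001-aligned policies exist
    "monitoring_actually_running",   # behavioral monitoring executing in production
    "metrics_from_real_data",        # impact-assessment metrics tied to real outputs
    "human_review_can_override",     # reviewers empowered to reverse AI decisions
]

def safe_harbor_gaps(evidence: dict) -> list:
    """Flag missing artifacts that an audit of genuine implementation would probe."""
    return [item for item in EXPECTED_EVIDENCE if not evidence.get(item, False)]

# A program that documents policies but never wired up monitoring:
print(safe_harbor_gaps({"risk_program_documented": True}))
# -> ['monitoring_actually_running', 'metrics_from_real_data', 'human_review_can_override']
```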