Colorado AI Act compliance starts with a deadline extension that most organizations have already wasted. Governor Polis signed SB 24-205 on May 17, 2024. The original effective date was February 1, 2026. Then the August 2025 special session produced SB 25B-004, pushing enforcement to June 30, 2026. No substantive changes. Every obligation, presumption, and penalty survived intact [Colorado SB 24-205; SB 25B-004]. The delay gave deployers five extra months. Most used zero of them.
The compliance gap is not theoretical. Colorado’s AI Act targets every AI system that makes or substantially influences a “consequential decision” covering employment, lending, insurance, housing, healthcare, education, essential government services, and legal services [Colorado SB 24-205, Section 6-1-1702]. If your hiring platform filters candidates, your underwriting model prices policies, or your CRM scores creditworthiness, you are deploying a high-risk AI system under this law. The Colorado Attorney General holds exclusive enforcement authority with penalties reaching $20,000 per violation under the Colorado Consumer Protection Act. One thousand undisclosed consequential decisions equal $20 million in potential exposure.
Six deployer obligations. One rebuttable presumption. One affirmative defense. The June 30, 2026 enforcement date leaves less than four months. Organizations that miss the deadline face $20,000 per violation under the Colorado Consumer Protection Act, with no grace period.
The Colorado AI Act (SB 24-205) takes effect June 30, 2026, requiring deployers of high-risk AI systems to implement six obligations: risk management policy, annual impact assessment, consumer notification before consequential decisions, public disclosure statement, human appeal process, and data correction opportunity. Meeting all six creates a rebuttable presumption of reasonable care. An affirmative defense under Section 6-1-1703 adds further protection through NIST AI RMF or ISO 42001 compliance [Colorado SB 24-205].
What Does the Colorado AI Act Classify as High-Risk?
The Colorado AI Act uses a two-part classification test that catches more systems than most compliance teams expect. First, the system must make or be a “substantial factor” in making a consequential decision. “Substantial factor” means the system assists in the decision and is capable of altering the outcome [Colorado SB 24-205, Section 6-1-1702]. Second, the decision must carry a “material legal or similarly significant effect” on one of eight protected domains: education enrollment, employment, financial or lending services, essential government services, healthcare, housing, insurance, or legal services. A chatbot answering customer FAQs is not high-risk. A chatbot triaging insurance claims and routing some to denial queues is. The classification turns on the decision’s consequence, not the technology’s sophistication.
What counts as a consequential decision under SB 24-205?
The statute defines consequential decision as any decision with a material legal or similarly significant effect on provision, denial, cost, or terms of the eight protected domains [Colorado SB 24-205, Section 6-1-1702]. This is broader than it first appears. A pricing algorithm that adjusts insurance premiums based on behavioral data is making a consequential decision about the “cost” of insurance. A resume screening tool that filters out 70% of applicants before a human reviews the remaining 30% is making a consequential decision about employment opportunity. The “substantial factor” qualifier means the AI must be capable of altering the outcome. A system providing informational support without influencing the final decision falls outside scope.
The practical test: remove the AI system from the process. Does the outcome change? If yes, the system is a substantial factor. If the same human would reach the same decision with or without the AI output, the system is informational, not consequential.
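The removal test reduces to two boolean checks plus a domain lookup. Here is a minimal Python sketch; the inputs (`assists_decision`, `can_alter_outcome`, `domain`) are hypothetical names your inventory might record, not statutory terms:

```python
# Illustrative two-part high-risk test. Field names are hypothetical,
# not statutory terms.
PROTECTED_DOMAINS = {
    "education", "employment", "financial_services", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

def is_substantial_factor(assists_decision: bool, can_alter_outcome: bool) -> bool:
    """Substantial factor: the system assists the decision AND is capable
    of altering the outcome (the removal test)."""
    return assists_decision and can_alter_outcome

def is_high_risk(assists_decision: bool, can_alter_outcome: bool, domain: str) -> bool:
    """High-risk: substantial factor in a consequential decision within
    one of the eight protected domains."""
    return (is_substantial_factor(assists_decision, can_alter_outcome)
            and domain in PROTECTED_DOMAINS)

# Claims-triage chatbot routing some claims to denial queues: high-risk.
print(is_high_risk(True, True, "insurance"))   # True
# FAQ chatbot that informs but cannot alter any decision: not high-risk.
print(is_high_risk(True, False, "insurance"))  # False
```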
What is algorithmic discrimination under Colorado law?
Colorado defines algorithmic discrimination as any condition in which AI use results in unlawful differential treatment or impact disfavoring an individual or group based on protected classifications [Colorado SB 24-205, Section 6-1-1702]. The protected classes are extensive: age, color, disability, ethnicity, genetic information, limited English proficiency, national origin, race, religion, reproductive health, sex, veteran status, and any other class protected under Colorado or federal law. The word “unlawful” is load-bearing. Differential treatment that does not violate existing antidiscrimination law is not algorithmic discrimination under the Act. The statute creates obligations to prevent and detect discrimination. It does not create new protected classes.
Inventory every AI system in your organization that influences decisions in the eight protected domains. For each system: (1) Document whether the system makes or substantially factors into a consequential decision. Apply the removal test: does removing the AI change the outcome? (2) Identify which protected domain the decision falls within. (3) Classify the system as high-risk or not. This inventory becomes the foundation for every obligation that follows.
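One way to capture steps (1) through (3) is a structured inventory record, so the substantial-factor analysis and the classification travel together as evidence. A sketch with illustrative fields, not a statutory schema:

```python
from dataclasses import dataclass

PROTECTED_DOMAINS = {
    "education", "employment", "financial_services", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

@dataclass
class AISystemRecord:
    """One row of the AI system inventory (illustrative fields only)."""
    name: str
    vendor: str                # developer, if third-party
    domain: str                # protected domain the decision falls within, or "none"
    removal_test_notes: str    # step (1): does removing the AI change the outcome?
    substantial_factor: bool   # conclusion of the substantial-factor analysis
    high_risk: bool = False    # step (3): classification result

    def classify(self) -> None:
        # High-risk only if substantial factor AND protected domain.
        self.high_risk = self.substantial_factor and self.domain in PROTECTED_DOMAINS

record = AISystemRecord(
    name="resume-screener",
    vendor="ExampleVendor",
    domain="employment",
    removal_test_notes="Removing the screener changes which candidates a human reviews.",
    substantial_factor=True,
)
record.classify()
print(record.high_risk)  # True
```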
The Six Deployer Obligations: What the Law Actually Requires
Colorado AI Act compliance obligations form a six-part structure. Meeting all six creates a rebuttable presumption that the deployer used reasonable care to protect consumers from algorithmic discrimination [Colorado SB 24-205, Section 6-1-1702]. “Rebuttable presumption” is not absolute protection. The Attorney General can still prove lack of reasonable care even if all six boxes are checked. The presumption shifts the burden: instead of the deployer proving it acted responsibly, the AG must prove it did not. That shift matters in enforcement proceedings. The six obligations are interdependent. A risk management policy without an impact assessment is incomplete. Consumer notification without an appeal process is hollow. Build them as a system, not a checklist.
What does the risk management policy require?
Deployers must implement a risk management policy and program governing the deployment of high-risk AI systems [Colorado SB 24-205]. The statute does not prescribe a format, a template, or specific contents. It requires that the policy exist, that it govern deployment, and that a program implement it. The policy should address: which AI systems are classified as high-risk, the governance structure for deployment decisions, roles and responsibilities for ongoing monitoring, and the process for identifying and responding to algorithmic discrimination. An AI governance framework provides the structural backbone. The policy operationalizes it for Colorado compliance.
What does the annual impact assessment require?
Deployers must complete an impact assessment annually, plus within 90 days of any significant modification to a high-risk AI system [Colorado SB 24-205]. The assessment evaluates the system’s potential for algorithmic discrimination across all protected classes. The 90-day trigger on significant modifications is the provision most organizations will miss. Retraining a model on new data, expanding the system to a new decision domain, or changing the population the system serves all qualify as significant modifications. The assessment is not a one-time exercise. It is a recurring obligation tied to both a calendar cycle and system changes.
The NIST AI RMF risk assessment methodology maps directly to this requirement. The Map function identifies risks. The Measure function quantifies them. Together they produce the documentation the impact assessment demands.
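The dual trigger (annual cycle plus 90 days after a significant modification) reduces to a due-date computation. A sketch under the assumption that your inventory tracks both dates; the logic is illustrative only:

```python
from datetime import date, timedelta
from typing import Optional

def next_assessment_due(last_assessment: date,
                        last_significant_modification: Optional[date]) -> date:
    """Earlier of: one year after the last assessment, or 90 days after the
    most recent significant modification (retraining, new decision domain,
    new served population). Illustrative scheduling logic."""
    annual_due = last_assessment + timedelta(days=365)
    if last_significant_modification is None:
        return annual_due
    return min(annual_due, last_significant_modification + timedelta(days=90))

# Retraining in March 2027 pulls the due date forward from the annual cycle.
print(next_assessment_due(date(2026, 7, 15), date(2027, 3, 1)))  # 2027-05-30
```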
What consumer notifications are required?
Before making a consequential decision, deployers must notify the consumer of five things: the purpose of the AI system, the nature of the decision being made, the deployer’s contact information, a plain-language description of the system, and instructions to access the deployer’s public statement [Colorado SB 24-205]. “Before” is the operative word. Post-decision notification does not satisfy the statute. The notification must happen while the consumer still has the ability to provide additional information or context before the decision is final.
The public statement is a separate obligation. Deployers must publish on their website: the types of high-risk AI systems deployed, how they manage discrimination risk, and the nature, source, and extent of information collected and used by the system. This is a standing disclosure, not a per-decision notification.
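Because post-decision notification fails the statute, the five disclosure elements are a natural candidate for hard validation in the decision pipeline: the notice cannot fire incomplete. A sketch with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class PreDecisionNotice:
    """The five pre-decision disclosure elements (illustrative field names)."""
    system_purpose: str               # purpose of the AI system
    decision_nature: str              # nature of the consequential decision
    deployer_contact: str             # deployer's contact information
    plain_language_description: str   # plain-language description of the system
    public_statement_url: str         # where the standing website disclosure lives

    def validate(self) -> None:
        # Every element must be present BEFORE the decision executes.
        missing = [k for k, v in vars(self).items() if not v.strip()]
        if missing:
            raise ValueError(f"Notice incomplete; missing: {missing}")

notice = PreDecisionNotice(
    system_purpose="Automated underwriting risk scoring",
    decision_nature="Insurance premium pricing",
    deployer_contact="compliance@example.com",
    plain_language_description="A model estimates claim likelihood from application data.",
    public_statement_url="https://example.com/ai-disclosure",
)
notice.validate()  # raises if any element is blank
```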
What appeal and correction rights must deployers provide?
Two related obligations round out the six. First, deployers must provide consumers the opportunity to correct incorrect personal data used by the AI system [Colorado SB 24-205]. If a consumer’s credit score, employment history, or medical record contains errors that influenced a consequential decision, the deployer must allow correction. Second, deployers must provide an appeal process via human review for adverse consequential decisions, if technically feasible [Colorado SB 24-205]. The “if technically feasible” qualifier limits but does not eliminate this obligation. Organizations claiming technical infeasibility should document the specific technical barriers. A general assertion that human review is too expensive or time-consuming will not satisfy the standard.
| Obligation | Requirement | Frequency | Key Evidence |
|---|---|---|---|
| Risk management policy | Written policy and program governing deployment | Ongoing | Policy document, executive approval, version history |
| Impact assessment | Evaluate algorithmic discrimination potential | Annual + 90 days after significant changes | Assessment report, methodology documentation |
| Consumer notification | Disclose purpose, nature, contact info, description, public statement | Before each consequential decision | Notification templates, delivery records |
| Public statement | Publish AI system types, discrimination risk management, data practices | Standing disclosure | Published webpage, update log |
| Data correction | Allow consumers to correct incorrect personal data | On request | Correction process documentation, request logs |
| Human appeal | Provide human review for adverse decisions (if technically feasible) | On request | Appeal process, reviewer training, decision logs |
Build the six-obligation compliance package in this order: (1) Draft the risk management policy first. It governs everything else. Get executive sign-off. (2) Conduct the initial impact assessment for every high-risk system identified in your inventory. (3) Design the consumer notification workflow. Map every decision point where notification must fire before the decision executes. (4) Publish the public statement on your website. (5) Build the data correction intake process. (6) Design the human appeal workflow, including reviewer qualifications, decision criteria, and turnaround SLAs. Document the technical feasibility analysis for any system where human appeal is not provided.
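Since the presumption requires all six obligations together, any tracking tool should evaluate them as a conjunction, not a score. A minimal sketch:

```python
OBLIGATIONS = (
    "risk_management_policy",
    "impact_assessment",
    "consumer_notification",
    "public_statement",
    "data_correction",
    "human_appeal",
)

def presumption_earned(status: dict) -> bool:
    """The rebuttable presumption requires ALL six obligations;
    five of six earns nothing. Illustrative tracker, not a legal test."""
    return all(status.get(name, False) for name in OBLIGATIONS)

status = {name: True for name in OBLIGATIONS}
status["human_appeal"] = False  # a single gap defeats the presumption
print(presumption_earned(status))  # False
```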
Developer Obligations: What Your Vendors Owe You
Colorado’s AI Act does not stop at deployers. Developers of high-risk AI systems carry four independent obligations that directly affect your compliance posture as a deployer [Colorado SB 24-205]. If your organization deploys AI systems built by third-party vendors, those vendors are developers under the statute. Their obligations include providing you with the technical documentation you need to satisfy your own deployer requirements. An under-documented vendor creates a compliance gap in your program. Understanding developer obligations is not optional for deployers. It determines what you demand in procurement and what you verify in vendor assessments.
What documentation must developers provide to deployers?
Developers must provide deployers with technical documentation covering: foreseeable harmful uses of the system, data governance measures applied during development, evaluation methodology and results, intended outputs, and relevant model cards, dataset cards, or impact assessment artifacts [Colorado SB 24-205]. This documentation feeds directly into your impact assessment. Without it, your assessment is built on assumptions rather than evidence. The statute also requires developers to exercise reasonable care for foreseeable risks, extending to both intended and contracted uses of the system.
Developers must also publish their own disclosure on their website: a list of high-risk systems they have developed or intentionally modified, and how they manage discrimination risk. When a developer discovers algorithmic discrimination, it must notify the AG and all known deployers within 90 days [Colorado SB 24-205]. If your vendor discovers a discrimination issue in a system you deploy, you should learn about it within 90 days. Build that expectation into your vendor contracts now.
How does the small deployer exemption work?
Organizations with fewer than 50 full-time employees receive limited disclosure exemptions, but only under specific conditions [Colorado SB 24-205]. The small deployer must not use its own data to train or fine-tune the system. The system must be used for its disclosed intended purposes. And the system must learn from non-deployer data sources. The critical nuance: customizing a model with proprietary data removes the exemption. A 30-person company deploying an off-the-shelf hiring tool as-is qualifies. The same company fine-tuning that tool on its own historical hiring data does not. The moment you introduce your data into the model’s training pipeline, the full deployer obligations apply regardless of headcount.
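The exemption logic is conjunctive, and headcount alone decides nothing. A sketch with hypothetical parameter names; treat it as an illustration of the statute's structure, not a compliance determination:

```python
def small_deployer_exempt(full_time_employees: int,
                          trains_on_own_data: bool,
                          used_as_disclosed: bool,
                          learns_from_non_deployer_data: bool) -> bool:
    """All conditions must hold. Fine-tuning on your own data removes
    the exemption regardless of headcount. Illustrative check only."""
    return (full_time_employees < 50
            and not trains_on_own_data
            and used_as_disclosed
            and learns_from_non_deployer_data)

# 30-person company, off-the-shelf hiring tool used as intended: exempt.
print(small_deployer_exempt(30, False, True, True))  # True
# Same company after fine-tuning on its own hiring history: not exempt.
print(small_deployer_exempt(30, True, True, True))   # False
```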
For deployers working with third-party AI vendors: (1) Identify every vendor developing AI systems you deploy for consequential decisions. (2) Request the complete documentation package the statute requires: harmful use documentation, data governance measures, evaluation methodology, and model/dataset cards. (3) Add a contractual clause requiring 90-day notification of any discovered algorithmic discrimination. (4) Verify whether the vendor has published their required public disclosure. (5) If you qualify for the small deployer exemption, confirm you are not fine-tuning or training with your own data. Document this determination.
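Step (2) is easy to operationalize as a gap check against the statutory documentation package. A sketch; the artifact names are shorthand for the statutory categories, not official labels:

```python
REQUIRED_VENDOR_DOCS = (
    "harmful_use_documentation",
    "data_governance_measures",
    "evaluation_methodology_and_results",
    "intended_outputs",
    "model_or_dataset_cards",
)

def vendor_doc_gaps(received: set) -> list:
    """Statutory documentation the developer has not yet provided (illustrative)."""
    return [doc for doc in REQUIRED_VENDOR_DOCS if doc not in received]

print(vendor_doc_gaps({"evaluation_methodology_and_results", "intended_outputs"}))
# ['harmful_use_documentation', 'data_governance_measures', 'model_or_dataset_cards']
```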
The Affirmative Defense: Building Legal Protection Beyond the Rebuttable Presumption
The rebuttable presumption from the six deployer obligations is the first layer of legal protection. The affirmative defense under Section 6-1-1703 is the second, and it requires different evidence [Colorado SB 24-205, Section 6-1-1703]. The defense has two prongs, both of which must be satisfied. Prong A requires proof the organization discovered and cured the violation through encouraged feedback, adversarial testing or red teaming per NIST definitions, or an internal review process. Prong B requires compliance with the NIST AI Risk Management Framework, ISO/IEC 42001, or a framework substantially equivalent to or more stringent than those two. The NIST AI RMF affirmative defense article covers the documentation strategy in depth.
How do the rebuttable presumption and affirmative defense work together?
The two protections operate on different legal mechanics. The rebuttable presumption shifts the burden of proof: the AG must prove you lacked reasonable care despite meeting all six obligations. The affirmative defense is an independent legal protection: even if the AG proves a violation occurred, you can defeat the enforcement action by proving you discovered the problem through proper channels and maintained framework compliance [Colorado SB 24-205]. In short, the presumption raises the AG's evidentiary burden, while the defense survives even after the AG establishes a violation.
The practical implication: build both. The six deployer obligations create the presumption. The discovery-and-cure conduct plus framework compliance create the defense. Organizations building only the presumption are one AG investigation away from wishing they had built the defense. Organizations building both hold the strongest available legal position under the statute.
Why does the NIST AI RMF matter for Colorado compliance?
The statute explicitly names the NIST AI RMF as a qualifying framework for Prong B [Colorado SB 24-205, Section 6-1-1703]. The framework is free, flexible, and organized around four core functions: Govern (organizational policies and procedures), Map (risk identification), Measure (risk quantification and monitoring), and Manage (risk treatment and response) [NIST AI 100-1, January 2023]. It has no certification mechanism, which creates a documentation challenge: you must prove compliance with a standard that has no formal compliance determination. The solution is a structured evidence package mapped function by function, with applicability statements for excluded subcategories and a governance trail proving ongoing implementation.
Texas TRAIGA, effective January 1, 2026, includes a similar safe harbor using “substantial compliance” language instead of Colorado’s stricter “compliance” standard [Texas TRAIGA]. Organizations operating AI systems serving both states should build to Colorado’s standard. Meeting the stricter bar automatically satisfies the lower one.
Build the affirmative defense evidence package alongside the six deployer obligations: (1) Choose your Prong B framework. NIST AI RMF is free and specifically named. ISO 42001 offers third-party certification. Both are valid. (2) For Prong A, implement at least two discovery mechanisms: red teaming plus internal review is the strongest combination. (3) Document every discovery cycle with four fields: method, violation found, cure implemented, timeline. (4) Map each high-risk system to the four NIST AI RMF functions. (5) Produce an applicability statement for every subcategory you adopt or exclude. (6) Establish a quarterly review cadence. One-time implementation does not satisfy “compliance.”
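The four-field discovery log in step (3) maps cleanly onto a structured record, so every cure cycle produces evidence in the same shape. A sketch with illustrative fields:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DiscoveryCycle:
    """One Prong A discovery-and-cure record (illustrative fields)."""
    method: str            # e.g. "red team", "internal review", "user feedback"
    violation_found: str   # what was discovered, or "none"
    cure_implemented: str  # remediation taken
    discovered_on: date
    cured_on: date

    def timeline_days(self) -> int:
        # Evidence of how quickly discovery converted to cure.
        return (self.cured_on - self.discovered_on).days

cycle = DiscoveryCycle(
    method="red team",
    violation_found="Score disparity for limited-English-proficiency applicants",
    cure_implemented="Retrained on corrected features; re-ran impact assessment",
    discovered_on=date(2026, 4, 2),
    cured_on=date(2026, 4, 20),
)
print(cycle.timeline_days())  # 18
```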
Enforcement: What the Attorney General Can and Cannot Do
Colorado’s enforcement model is AG-exclusive with no private right of action [Colorado SB 24-205]. Consumers cannot sue deployers directly under the AI Act. Class-action plaintiffs’ attorneys cannot bring claims under this statute. Only the Colorado Attorney General initiates enforcement actions, and violations are treated as unfair trade practices under the Colorado Consumer Protection Act. This concentrates enforcement power in a single office, which means enforcement patterns will be shaped by the AG’s priorities, resources, and rulemaking. The AG’s pre-rulemaking considerations were published September 10, 2024. Formal rulemaking has not started as of March 2026 [Colorado AG, September 2024]. The absence of final rules does not delay the June 30, 2026 effective date.
How are penalties calculated under the Colorado AI Act?
The penalty structure derives from the Colorado Consumer Protection Act: up to $20,000 per violation [Colorado CPA]. The per-violation structure is where exposure multiplies. Each undisclosed consequential decision is potentially a separate violation. An AI system making 500 consequential decisions per month without required consumer notifications creates $10 million in monthly exposure from a single system. The statute also provides a 60-day cure period before enforcement action [Colorado SB 24-205]. The AG must notify the deployer of the alleged violation and allow 60 days to cure. This cure period rewards organizations with rapid-response capabilities. A deployer who receives notice and remediates within 60 days avoids enforcement. A deployer without incident response procedures loses the window.
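The per-violation arithmetic is worth making explicit. A sketch that treats each undisclosed consequential decision as a separate potential violation at the statutory ceiling; this is worst-case exposure, not a predicted award:

```python
MAX_PENALTY_PER_VIOLATION = 20_000  # Colorado CPA ceiling, in USD

def max_exposure(undisclosed_decisions_per_month: int, months: int = 12) -> int:
    """Worst case if every undisclosed decision is a separate violation."""
    return undisclosed_decisions_per_month * months * MAX_PENALTY_PER_VIOLATION

# 500 undisclosed decisions per month from one system:
print(f"${max_exposure(500, 1):,}")   # $10,000,000 per month
print(f"${max_exposure(500, 12):,}")  # $120,000,000 per year
```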
What about federal preemption?
The December 2025 Executive Order on AI (“Ensuring a National Policy Framework for Artificial Intelligence”) signals federal intent to preempt state AI laws [White House EO, December 2025]. The DOJ AI Litigation Task Force, created January 9, 2026, is specifically charged with challenging state laws. Colorado is expected to rank high on the Commerce Department’s referral list. The legal reality as of March 2026: executive orders lack the force of law, Congress has not authorized preemption, and no federal AI regulatory scheme exists [Ropes & Gray, March 2026]. State laws remain enforceable until a court rules otherwise. Build your compliance program as if Colorado’s law will stand. If federal preemption eventually applies, your risk management infrastructure retains value for federal procurement, EU AI Act preparation, insurance positioning, and governance maturity.
Federal preemption of state AI laws is a political signal, not a legal fact. No court has struck down a state AI law on preemption grounds. The organizations that pause compliance waiting for federal action will face the same obligations with less time if preemption fails. Build now. Adapt later.
The Compliance Playbook: What to Do Before June 30, 2026
The compliance architecture has four phases, sequenced to build the evidence package incrementally while prioritizing the obligations with the longest implementation timelines [Colorado SB 24-205]. Phase 1 (Days 1-30) establishes the foundation: inventory, classification, and governance structure. Phase 2 (Days 31-60) builds the deployer obligations. Phase 3 (Days 61-90) constructs the affirmative defense evidence. Phase 4 (Days 91+) tests, validates, and prepares for the enforcement environment. The phases overlap. Start Phase 2 work before Phase 1 is fully complete. The remaining window does not accommodate sequential execution.
Phase 1 (Days 1-30): What foundational work comes first?
- AI system inventory: Catalog every AI system that makes or substantially influences consequential decisions across the eight protected domains. Include third-party embedded AI. Include vendor tools your teams adopted without procurement review. The inventory you miss is the system the AG finds.
- High-risk classification: Apply the two-part test to every inventoried system. Document the “substantial factor” analysis and the consequential decision domain. Systems that inform but do not influence outcomes are out of scope. Systems that alter outcomes are in.
- Governance structure: Designate the AI risk management owner. Establish the governance body (steering committee, board subcommittee, or equivalent). Define reporting lines and escalation protocols. The risk management policy in Phase 2 requires this structure.
- Vendor assessment: Identify every developer whose AI systems you deploy. Inventory the documentation they have provided against the statutory requirements. Flag gaps for immediate vendor engagement.
Phase 2 (Days 31-60): How do you build the six deployer obligations?
- Risk management policy: Draft the policy governing all high-risk AI deployments. Include classification criteria, governance roles, monitoring requirements, and incident response procedures. Get executive approval. Version-control the document.
- Impact assessment: Conduct the initial assessment for every classified system. Evaluate algorithmic discrimination potential across all protected classes Colorado specifies. Document methodology, findings, and remediation plans.
- Consumer notification workflow: Design the notification mechanism for every decision point where a high-risk system influences a consequential decision. Build it into the decision flow, not as a bolt-on. Test the notification with actual consumers for clarity.
- Public statement: Draft and publish the required website disclosure covering system types, discrimination risk management, and data practices. Legal review before publication.
- Data correction and appeal processes: Build the intake mechanism for consumer correction requests and the human review workflow for appeals. Train reviewers on decision criteria and documentation requirements.
Phase 3 (Days 61-90): How do you build the affirmative defense?
- Framework selection: Choose NIST AI RMF, ISO 42001, or both. NIST AI RMF is free, flexible, and specifically named. ISO 42001 certification adds evidentiary weight but takes longer and costs more.
- Discovery mechanisms: Implement adversarial testing (red teaming) and an internal review process. Document the first testing cycle for each high-risk system. Establish the ongoing cadence.
- Framework documentation: Map each system to the four NIST AI RMF functions. Produce the applicability statement. Document implementation evidence by subcategory.
- Incident reporting preparation: The statute requires deployers to report discovered algorithmic discrimination to the AG within 90 days [Colorado SB 24-205]. Build the reporting workflow, assign responsibility, and template the report.
Phase 4 (Days 91+): How do you validate readiness?
- Tabletop exercise: Simulate an AG investigation. Test whether your evidence package answers the questions the statute implies: Did you know this was high-risk? Did you assess discrimination potential? Did you notify the consumer? Can you produce the documentation?
- Gap remediation: Address every gap the tabletop reveals. Prioritize by enforcement exposure: consumer notification gaps create per-violation liability. Policy documentation gaps undermine the rebuttable presumption.
- Vendor confirmation: Verify developers have met their statutory obligations. Confirm you have received the required technical documentation. Confirm vendor public disclosures are published.
- Board reporting: Present the compliance posture to the board or executive leadership. Document the briefing. The governance trail matters for both the rebuttable presumption and the affirmative defense.
Print the timeline. Assign an owner for each phase. Set three checkpoints: Day 30 (inventory and classification complete), Day 60 (six obligations built), Day 90 (affirmative defense documented). June 30 is the enforcement date, not the target. Everything should be operational by Day 90 with remaining days as buffer for remediation. Organizations starting after April 2026 cannot complete all four phases before enforcement begins.
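A quick runway check, assuming the Day 30/60/90 checkpoints above and the June 30, 2026 enforcement date; the checkpoint names are placeholders:

```python
from datetime import date, timedelta

ENFORCEMENT = date(2026, 6, 30)
CHECKPOINTS = {"inventory_complete": 30, "obligations_built": 60, "defense_documented": 90}

def runway(start: date) -> dict:
    """Checkpoint dates plus buffer days remaining after Day 90 (illustrative)."""
    plan = {name: start + timedelta(days=offset) for name, offset in CHECKPOINTS.items()}
    plan["buffer_days"] = (ENFORCEMENT - (start + timedelta(days=90))).days
    return plan

# Starting April 15, 2026 puts Day 90 on July 14, past the enforcement date.
print(runway(date(2026, 4, 15))["buffer_days"])  # -14
```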
Colorado’s AI Act is the template. The statute’s structure (high-risk classification, deployer obligations, rebuttable presumption, affirmative defense, AG enforcement) will replicate across state legislatures for the next three years. Texas already adopted the same pattern. The compliance architecture you build for Colorado transfers to every state that follows. Invest in the framework, not the checkbox. The organizations treating this as a single-state project will rebuild for every new law. The ones building a governance infrastructure will extend it.
Frequently Asked Questions
When does the Colorado AI Act take effect?
The Colorado AI Act (SB 24-205) takes effect June 30, 2026. The original effective date of February 1, 2026 was delayed by SB 25B-004, signed during the August 2025 special session. All substantive obligations remained unchanged [Colorado SB 24-205; SB 25B-004].
What is a high-risk AI system under Colorado law?
A high-risk AI system is any system that makes or is a substantial factor in making a consequential decision. A consequential decision has a material legal or similarly significant effect on education, employment, financial services, government services, healthcare, housing, insurance, or legal services [Colorado SB 24-205, Section 6-1-1702].
What penalties does the Colorado AI Act impose?
Violations are treated as unfair trade practices under the Colorado Consumer Protection Act, carrying penalties up to $20,000 per violation. Each undisclosed consequential decision is potentially a separate violation. The AG must provide a 60-day cure period before enforcement action [Colorado SB 24-205; Colorado CPA].
Can consumers sue under the Colorado AI Act?
No. The Colorado AI Act provides no private right of action. Only the Colorado Attorney General initiates enforcement actions [Colorado SB 24-205]. Consumers cannot bring direct claims or class actions under this statute.
What is the rebuttable presumption of reasonable care?
Deployers who satisfy all six statutory obligations (risk management policy, impact assessment, consumer notification, public statement, data correction, human appeal) earn a rebuttable presumption that they used reasonable care to prevent algorithmic discrimination. The AG can still overcome the presumption with evidence that the deployer nonetheless failed to use reasonable care [Colorado SB 24-205].
How does the affirmative defense differ from the rebuttable presumption?
The rebuttable presumption shifts the burden of proof to the AG. The affirmative defense is an independent legal protection that applies even if a violation occurred. The defense requires proof of violation discovery and cure plus compliance with the NIST AI RMF, ISO 42001, or an equivalent framework [Colorado SB 24-205, Section 6-1-1703].
Does the small deployer exemption apply if I fine-tune a model?
No. The exemption for organizations with fewer than 50 employees requires that you do not use your own data to train or fine-tune the AI system. Customizing a model with proprietary data removes the exemption and triggers full deployer obligations [Colorado SB 24-205].
Will federal preemption override the Colorado AI Act?
Not as of March 2026. The December 2025 Executive Order signals federal intent to preempt state AI laws, but executive orders lack the force of law, Congress has not authorized preemption, and no court has struck down a state AI law on preemption grounds [White House EO December 2025; Ropes & Gray March 2026]. Colorado’s law remains enforceable until a court rules otherwise.