REGULATORY GUIDE
Colorado AI Act
Comprehensive requirements for high-risk AI systems in consequential decisions, including employment.
Effective Date
February 1, 2026
Jurisdiction
State of Colorado
Enforcement
Attorney General
Risk Level
High-Risk AI Focus
Overview
The Colorado Artificial Intelligence Act (SB 24-205) establishes comprehensive requirements for developers and deployers of high-risk artificial intelligence systems. Unlike NYC's Local Law 144, which focuses specifically on automated employment decision tools, Colorado's law covers a broader range of "consequential decisions" made with AI.
The law places obligations on both developers (companies that create AI systems) and deployers (companies that use AI systems to make decisions about consumers). If you are an employer using AI in hiring, you are most likely a deployer under this law.
Scope: What Counts as High-Risk AI?
The Colorado AI Act applies to AI systems that make, or are a substantial factor in making, "consequential decisions." In the employment context, this includes decisions about:
- Hiring and recruitment
- Termination of employment
- Compensation and benefits
- Promotion decisions
- Job assignments or tasks
- Performance monitoring and evaluation
If your organization uses any AI tool that influences these decisions—including resume screeners, candidate scoring systems, interview analysis tools, or performance management AI—you are likely subject to this law.
Key Requirements for Deployers
1. Risk Management Policy
Deployers must implement a risk management policy and program that:
- Identifies and documents the intended uses of each high-risk AI system
- Analyzes potential algorithmic discrimination risks
- Implements reasonable measures to mitigate identified risks
2. Impact Assessment
Before deploying a high-risk AI system, you must conduct an impact assessment that documents:
- The purpose and intended use cases of the system
- Analysis of potential discrimination risks by protected class
- Data used by the system and its sources
- Metrics used to evaluate system performance
- Steps taken to mitigate algorithmic discrimination
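To make the discrimination-risk analysis concrete, the sketch below computes impact ratios (each group's selection rate divided by the highest group's selection rate, the metric used in NYC LL144 bias audits) from hypothetical screening outcomes. This is an illustrative example only, not legal advice; the data, group names, and function are invented for the sketch, and the statute does not mandate this particular metric.

```python
# Illustrative sketch: impact ratios by demographic group from
# hypothetical resume-screener outcomes. The metric mirrors the
# selection-rate ratio used in NYC LL144 bias audits; Colorado's
# statute does not prescribe a specific formula.

def impact_ratios(outcomes):
    """outcomes: dict mapping group -> (selected, total_applicants).

    Returns each group's selection rate divided by the highest
    group's selection rate (an impact ratio of 1.0 is the top group).
    """
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening results by demographic category.
screening = {
    "group_a": (120, 400),  # 30.0% selection rate
    "group_b": (90, 400),   # 22.5% selection rate
    "group_c": (45, 200),   # 22.5% selection rate
}

for group, ratio in sorted(impact_ratios(screening).items()):
    print(f"{group}: impact ratio {ratio:.2f}")
# group_a: impact ratio 1.00
# group_b: impact ratio 0.75
# group_c: impact ratio 0.75
```

An impact assessment would typically record these ratios per protected class alongside the data sources and mitigation steps listed above.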
3. Consumer Disclosure
You must provide clear notice to consumers (including job applicants) that includes:
- That a high-risk AI system is being used in the decision
- A description of what the AI system does and how it affects the decision
- Information about how to request human review (if the decision is adverse)
- How to correct inaccurate data used by the system
Relationship to NYC Local Law 144
If you're already compliant with NYC LL144, you have a strong foundation for Colorado compliance—but there are important differences:
| Aspect | NYC LL144 | Colorado AI Act |
|---|---|---|
| Focus | Automated Employment Decision Tools only | All high-risk AI in consequential decisions |
| Audit Required | Yes, annual independent audit | Impact assessment (can be internal) |
| Public Posting | Yes, summary results | No public posting required |
| Consumer Disclosure | Notice before use | Detailed disclosure + appeal process |
| Risk Management | Not required | Formal policy and program required |
How Paritas Helps
A Paritas bias audit addresses key Colorado AI Act requirements:
- Impact Assessment Documentation: Our audit report provides the statistical analysis of algorithmic discrimination risk required for your impact assessment.
- Risk Identification: Our three-tier classification (PASS/MONITOR/FLAG) identifies which demographic categories show potential discrimination risk.
- Mitigation Recommendations: Our prioritized remediation recommendations provide documented steps you're taking to address identified risks.
- Multi-Jurisdiction Coverage: Every Professional and Enterprise audit maps findings against Colorado requirements alongside NYC LL144, Illinois AIVAA, and EEOC guidelines.
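As a rough illustration of how a three-tier scheme like PASS/MONITOR/FLAG could map onto impact ratios, the sketch below uses the EEOC four-fifths rule of thumb (0.80) as the FLAG boundary and a hypothetical 0.90 MONITOR boundary. Both thresholds are assumptions for the example; they are not Paritas's actual (unpublished) criteria.

```python
# Hypothetical three-tier classification of an impact ratio.
# The 0.80 boundary follows the EEOC four-fifths rule of thumb;
# the 0.90 MONITOR boundary is purely illustrative and is NOT
# Paritas's actual methodology.

def classify(impact_ratio, flag_below=0.80, monitor_below=0.90):
    if impact_ratio < flag_below:
        return "FLAG"     # potential adverse impact; prioritize remediation
    if impact_ratio < monitor_below:
        return "MONITOR"  # borderline; track in the risk management program
    return "PASS"

print(classify(0.75))  # FLAG
print(classify(0.85))  # MONITOR
print(classify(0.95))  # PASS
```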
Ensure your high-risk AI systems are compliant.
Get ahead of the February 2026 deadline with a Paritas audit that covers Colorado AI Act requirements.