# AI Regulation Reference

A structured reference tracking AI regulation obligations across jurisdictions.
## Regulations
| Regulation | Jurisdiction | Status | Effective Date | Tracked Provisions |
|---|---|---|---|---|
| California CCPA ADMT Regulations | California | enacted | Jan 1, 2027 | 2 |
| California SB 1120 (Physicians Make Decisions Act) | California | enforcing | Jan 1, 2025 | 1 |
| California SB 53 (Frontier AI Transparency Act) | California | enforcing | Jan 1, 2026 | 2 |
| CMS Medicare Advantage AI Rule | Federal | enforcing | Jan 1, 2024 | 1 |
| Colorado ADMT (SB 24-205) | Colorado | pending replacement | Jun 30, 2026 | 2 |
| Connecticut SB 1295 | Connecticut | enacted | Jul 1, 2026 | 1 |
| EU AI Act | EU | phased enforcement | Aug 1, 2024 | 5 |
| Illinois HB 3773 (AI in Employment) | Illinois | enforcing | Jan 1, 2026 | 1 |
| New York RAISE Act | New York | enacted | Jan 1, 2027 | 2 |
| NIST AI Risk Management Framework | Federal | voluntary | Jan 26, 2023 | 1 |
| Texas TRAIGA (HB 149) | Texas | enforcing | Jan 1, 2026 | 1 |
| Utah AI Policy Act (SB 149) | Utah | enforcing | May 1, 2024 | 1 |
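Since this is a structured reference, the table above maps naturally onto a typed record. The sketch below is a minimal, hypothetical representation (the `Regulation` class, `by_status` helper, and the subset of rows transcribed are illustrative, not part of any official schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Regulation:
    name: str
    jurisdiction: str
    status: str
    effective: str   # ISO date, from the "Effective" column
    provisions: int  # number of tracked provisions

# A few rows transcribed from the table above.
REGULATIONS = [
    Regulation("EU AI Act", "EU", "phased enforcement", "2024-08-01", 5),
    Regulation("Texas TRAIGA (HB 149)", "Texas", "enforcing", "2026-01-01", 1),
    Regulation("New York RAISE Act", "New York", "enacted", "2027-01-01", 2),
]

def by_status(status: str) -> list[Regulation]:
    """Return regulations whose status matches exactly."""
    return [r for r in REGULATIONS if r.status == status]

print([r.name for r in by_status("enacted")])  # ['New York RAISE Act']
```

A frozen dataclass keeps each row immutable, so the reference data cannot be mutated accidentally once loaded.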
## Obligations

Obligation categories that recur across the regulations above:
- **AI literacy:** ensure sufficient AI literacy among staff who develop, deploy, or oversee AI systems, tailored to their roles and responsibilities.
- **Bias and discrimination:** prevent, detect, and mitigate discriminatory outcomes from AI systems, including testing for disparate impact across protected classes.
- **Conformity assessment:** demonstrate that AI systems conform to applicable requirements through formal assessment procedures, potentially including third-party audits.
- **Data governance:** manage training data, ensure data quality, document data provenance, and maintain appropriate data governance practices for AI systems.
- **Explainability:** provide meaningful explanations of how AI systems reach decisions, particularly when those decisions significantly affect individuals.
- **Human oversight:** maintain meaningful human oversight of AI-assisted decisions, including the ability to understand, override, and intervene in automated outputs.
- **Incident reporting:** report AI-related incidents, safety concerns, or failures to regulatory authorities within specified timeframes.
- **Recordkeeping:** maintain records of AI system development, deployment decisions, and operational logs sufficient for regulatory review.
- **Risk assessment:** assess and document the risks posed by AI systems, including potential harms, bias, and impacts on affected individuals.
- **Transparency:** disclose AI involvement to users, label AI-generated content, and provide adequate information about system capabilities and limitations.
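For compliance tooling, the ten obligation categories above can be given stable machine-readable keys. The enum and keys below are an illustrative sketch (the labels are condensed from the descriptions above, not drawn from any statute or official taxonomy):

```python
from enum import Enum

class Obligation(Enum):
    """Short keys for the obligation categories listed above (labels illustrative)."""
    AI_LITERACY = "ai_literacy"
    BIAS_MITIGATION = "bias_mitigation"
    CONFORMITY_ASSESSMENT = "conformity_assessment"
    DATA_GOVERNANCE = "data_governance"
    EXPLAINABILITY = "explainability"
    HUMAN_OVERSIGHT = "human_oversight"
    INCIDENT_REPORTING = "incident_reporting"
    RECORDKEEPING = "recordkeeping"
    RISK_ASSESSMENT = "risk_assessment"
    TRANSPARENCY = "transparency"

# One-line summaries for two categories, condensed from the list above.
DESCRIPTIONS = {
    Obligation.AI_LITERACY: "Ensure role-appropriate AI literacy among staff.",
    Obligation.HUMAN_OVERSIGHT: "Keep humans able to understand, override, and intervene.",
}

print(DESCRIPTIONS[Obligation.HUMAN_OVERSIGHT])
```

Stable string values (rather than auto-numbered members) keep the keys safe to store in databases or config files even if the enum is later reordered.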