CCPA/CPRA Automated Decision-Making (ADMT): Opt-Out Rights, Access to AI Logic, and Human Review Compliance
The California Privacy Rights Act (CPRA) and the CPPA's Automated Decision-Making Technology (ADMT) regulations grant California consumers new rights regarding AI decisions: the right to opt out of ADMT for significant decisions; the right to access per-decision AI logic in plain language; and the right to request human review of ADMT-based significant decisions. ADMT is defined broadly — it covers AI that "facilitates" human decisions, not just fully automated decisions. Significant decisions covered: credit, employment, healthcare, education, housing, insurance, and legal rights. The CPPA enforces these regulations with fines up to $7,500 per intentional violation; there is no private right of action for ADMT violations (only for data breach security failures). AI teams must implement: opt-out mechanisms honored within 15 business days, per-decision explanation logs in plain language, and a documented human review process.
ADMT Definition: Covers AI That "Facilitates" Human Decisions
CPPA ADMT regulations define automated decision-making technology as any system that processes personal information using computation to make or execute a decision or to facilitate human decision-making. The "facilitates" clause is the expansive provision: AI recommendation engines, AI scoring systems presented to human decision-makers, and AI-ranked lists that influence loan officers, HR managers, or medical professionals are all ADMT — not just fully automated systems. ADMT covers: ML models, rule-based systems, scoring algorithms, profiling systems, and LLMs used in decision workflows. Product recommendations and content personalization without significant individual impact are not ADMT.
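The definition above reduces to three tests: computation, personal information, and a decision role that includes "facilitates." A minimal classification sketch, assuming a hypothetical inventory record (the `SystemRecord` fields and role labels are illustrative, not drawn from the regulation text):

```python
from dataclasses import dataclass

# Hypothetical inventory record; field names are illustrative assumptions.
@dataclass
class SystemRecord:
    name: str
    processes_personal_info: bool
    computational: bool  # ML model, rules engine, scoring algorithm, LLM, etc.
    role: str            # "makes_decision", "facilitates_human_decision", or "no_decision_role"

def is_admt(system: SystemRecord) -> bool:
    """Flag a system as ADMT under the broad reading: computation over
    personal information that makes, executes, or facilitates a decision."""
    return (
        system.processes_personal_info
        and system.computational
        and system.role in {"makes_decision", "facilitates_human_decision"}
    )

# A scoring model whose output is merely shown to a human loan officer still counts.
loan_scorer = SystemRecord("loan-risk-score", True, True, "facilitates_human_decision")
recommender = SystemRecord("product-recs", True, True, "no_decision_role")
print(is_admt(loan_scorer))  # True
print(is_admt(recommender))  # False
```

The key design point is that `role` captures the "facilitates" clause: a recommendation engine influencing a human decision-maker is classified the same way as a fully automated system.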
Significant Decisions: What Categories Trigger ADMT Rights
CPRA ADMT rights apply to decisions with significant effects on consumers: financial decisions (credit, loans, payment terms, insurance); employment (hiring, promotion, performance evaluation, termination); education (admission, financial aid, academic discipline); healthcare (diagnosis, treatment, authorization, coverage); housing (rental applications, mortgage, property services); government services; and other decisions producing legal or similarly significant effects. This covers a wide range of AI deployments. Businesses must conduct an ADMT inventory to identify all AI systems that make or contribute to any of these decision categories for California consumers.
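The ADMT inventory step can be sketched as a simple mapping from systems to the significant-decision categories they touch. The system names and category labels below are hypothetical examples, not regulatory terms of art:

```python
# Categories listed in the section above, as a lookup set.
SIGNIFICANT_DECISION_CATEGORIES = {
    "credit", "employment", "education", "healthcare",
    "housing", "insurance", "government_services", "legal_rights",
}

# Hypothetical inventory: system name -> decision categories it contributes to.
inventory = {
    "resume-ranker": {"employment"},
    "claims-triage-model": {"insurance", "healthcare"},
    "homepage-recommender": set(),  # personalization only; no significant decision
}

def systems_triggering_admt_rights(inv):
    """Return systems that make or contribute to any significant decision."""
    return sorted(
        name for name, cats in inv.items()
        if cats & SIGNIFICANT_DECISION_CATEGORIES  # non-empty intersection
    )

print(systems_triggering_admt_rights(inventory))
# ['claims-triage-model', 'resume-ranker']
```

Systems returned by this check are the ones that need opt-out mechanisms, explanation logging, and a human review path.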
Access to ADMT Logic: Per-Decision Plain-Language Explanations
When a consumer requests access to ADMT logic, the response must be reasonably understandable to an average consumer — not a general model description. The response must be specific to the individual decision: what categories of personal information were used as inputs; how those factors were weighted or evaluated; what the AI output was; and how that output contributed to the decision. Generic model card links, technical feature lists, or "we use machine learning to evaluate applications" responses are insufficient. AI systems must generate per-decision explanation artifacts that can be rendered in plain language. This drives a technical requirement: decision-level explainability, not model-level explainability.
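One way to meet the decision-level explainability requirement is to emit a structured artifact at decision time and render it to plain language on request. A minimal sketch, assuming a hypothetical `DecisionExplanation` schema (field names and the rendering format are illustrative):

```python
from dataclasses import dataclass

# Hypothetical per-decision explanation artifact; schema is an assumption.
@dataclass
class DecisionExplanation:
    decision_id: str
    input_categories: list   # categories of personal information used as inputs
    factor_weights: dict     # factor -> relative weight in THIS decision
    model_output: str        # what the AI produced
    role_in_decision: str    # how the output contributed to the final decision

    def to_plain_language(self) -> str:
        """Render the artifact for an average consumer, most influential factor first."""
        factors = ", ".join(
            f"{name} ({weight:.0%} weight)"
            for name, weight in sorted(self.factor_weights.items(),
                                       key=lambda kv: -kv[1])
        )
        return (
            f"For decision {self.decision_id}, we used these categories of your "
            f"information: {', '.join(self.input_categories)}. "
            f"The most influential factors were: {factors}. "
            f"The system produced: {self.model_output}. "
            f"{self.role_in_decision}"
        )

exp = DecisionExplanation(
    decision_id="APP-1042",
    input_categories=["payment history", "income", "outstanding debt"],
    factor_weights={"payment history": 0.5, "income": 0.3, "outstanding debt": 0.2},
    model_output="risk score 612 of 1000",
    role_in_decision="A loan officer used this score as one input to the final decision.",
)
print(exp.to_plain_language())
```

Because the artifact is captured per decision, the response is specific to the individual — not a generic model card or feature list.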
Opt-Out Right and Human Review Alternative
Consumers may opt out of ADMT for significant decisions. Businesses must honor opt-out requests within 15 business days and offer a genuine alternative (typically, a human-reviewed decision process). The alternative cannot be more burdensome than the default ADMT path — slower timelines, higher costs, or less favorable outcomes for the opt-out path effectively penalize consumers for exercising their rights. Businesses must designate a human reviewer with the authority and access to independently evaluate the decision without relying on the AI score. CPPA regulations (§ 7033) require documenting the human review process and disclosing the appeal path in the privacy notice.
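The 15-business-day window is a concrete engineering input: the compliance deadline for each opt-out request should be computed and tracked. A minimal sketch that counts Monday–Friday only (holiday calendars are omitted in this sketch and would need to be added for production use):

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    """Advance n business days (Mon-Fri), skipping weekends.
    Holidays are intentionally omitted in this sketch."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # 0=Mon .. 4=Fri
            n -= 1
    return d

# An opt-out request received Friday 2025-01-03 must be honored by:
deadline = add_business_days(date(2025, 1, 3), 15)
print(deadline)  # 2025-01-24
```

A request pipeline would store this deadline alongside the request and route the consumer's case to the designated human reviewer's queue before it expires.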
CPPA Enforcement and Interaction with GDPR Article 22
The CPPA enforces ADMT regulations with civil penalties up to $7,500 per intentional violation. There is no private right of action for ADMT violations — enforcement is by the CPPA only, which may investigate based on consumer complaints, routine audits, or proactive monitoring. This contrasts with GDPR Article 22: GDPR restricts solely automated decisions with legal or similarly significant effects, while CPRA covers AI that "facilitates" (not just fully automates) human decisions. CPRA's "facilitates" clause is broader — an AI recommendation that influences a human loan officer is covered by CPRA ADMT but may not be covered by GDPR Art. 22 if a human makes the final decision. AI teams serving both US and EU consumers must implement the more demanding California standard for US operations.
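The coverage gap between the two regimes can be made explicit in a triage check. An illustrative sketch of the comparison described above (the function and flag names are assumptions, and real coverage analysis involves more factors than this):

```python
def coverage(fully_automated: bool, significant_effect: bool,
             facilitates_human: bool) -> dict:
    """Rough coverage comparison per the regime descriptions above:
    GDPR Art. 22 restricts solely automated decisions with legal/significant
    effects; CPRA ADMT also reaches AI that merely facilitates a human decision."""
    gdpr = fully_automated and significant_effect
    cpra = significant_effect and (fully_automated or facilitates_human)
    return {"gdpr_art22": gdpr, "cpra_admt": cpra}

# AI score shown to a human loan officer who makes the final call:
print(coverage(fully_automated=False, significant_effect=True,
               facilitates_human=True))
# {'gdpr_art22': False, 'cpra_admt': True}
```

The asymmetric case — CPRA coverage without GDPR Art. 22 coverage — is exactly the human-in-the-loop recommendation scenario, which is why the California standard governs for US operations.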