GDPR Article 22 and AI Agents: What Automated Decision-Making Compliance Actually Requires
GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significantly affect them. For AI agents making credit, insurance, employment, or healthcare decisions, Article 22 creates three distinct compliance obligations: proactive transparency about decision logic at collection time (Articles 13 and 14), an individual explanation of each specific decision (Article 22(3) and Recital 71), and a genuine human review mechanism. The right to explanation cannot be satisfied by generic model descriptions; it requires capturing the per-decision factors that drove each individual outcome.
What Article 22 Actually Requires
Article 22(1) gives individuals the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects. Such decisions are permitted only under the three exceptions in Article 22(2): necessity for entering into or performing a contract, authorization by EU or member state law, or the data subject's explicit consent. Legitimate interests is not among them, a common compliance mistake. Where an automated decision is permitted, Article 22(3) requires suitable safeguards, including the right to obtain human intervention, to express a point of view, and to contest the decision. Article 35(3)(a) mandates a DPIA for systematic automated processing with legal or similarly significant effects.
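One way to make the lawful-basis rule hard to get wrong is to enforce it in code at decision time. The sketch below is illustrative: the basis labels and the `validate_article_22_basis` function are hypothetical names, not from any library, but the rule they encode (legitimate interests does not qualify under Article 22(2)) is from the text above.

```python
# Hypothetical whitelist of Article 22(2) exceptions; labels are illustrative.
ARTICLE_22_2_BASES = {
    "contract_necessity",       # Art. 22(2)(a)
    "eu_or_member_state_law",   # Art. 22(2)(b)
    "explicit_consent",         # Art. 22(2)(c)
}

def validate_article_22_basis(basis: str) -> None:
    """Raise before an automated decision is executed on a non-permitted basis.
    Note that 'legitimate_interests' (Art. 6(1)(f)) is deliberately absent."""
    if basis not in ARTICLE_22_2_BASES:
        raise ValueError(
            f"{basis!r} is not a permitted Article 22(2) basis for solely "
            "automated decisions (legitimate interests does not qualify)"
        )
```

Gating every decision pipeline on a check like this turns a legal requirement into a failing test rather than an audit finding.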
Which AI Agent Decisions Are In Scope
Article 22 applies when a decision is (1) based solely on automated processing and (2) produces legal effects or similarly significant effects. 'Solely automated' means there is no genuine human involvement: supervisory authorities have clarified that a human who never overrides AI decisions, processes hundreds of cases without meaningful scrutiny, or lacks access to the same information the AI used does not provide genuine human intervention; the reviewer must have the authority and competence to change the outcome. In-scope agent types include credit scoring and loan denial agents, insurance pricing and claims determination, HR candidate screening, prior-authorization AI with auto-approve/deny, and fraud detection with automatic service blocks.
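Whether human review is 'genuine' can be evidenced empirically from review logs. The following is a minimal sketch, assuming a hypothetical review-log schema (`ReviewEvent` and the threshold values are illustrative policy choices, not regulatory figures); it screens for the rubber-stamping patterns the text describes.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical review-log entry; field names are illustrative.
@dataclass
class ReviewEvent:
    reviewer_id: str
    seconds_spent: float
    overrode_ai: bool
    saw_full_input: bool  # reviewer had access to the same data the model used

def human_review_red_flags(events: list[ReviewEvent],
                           min_override_rate: float = 0.01,
                           min_median_seconds: float = 30.0) -> list[str]:
    """Heuristic screen for review patterns that supervisory authorities have
    said do NOT count as genuine human intervention. Thresholds are
    illustrative, not drawn from guidance."""
    if not events:
        return ["no human review recorded"]
    flags = []
    override_rate = sum(e.overrode_ai for e in events) / len(events)
    if override_rate < min_override_rate:
        flags.append(f"override rate {override_rate:.1%} suggests rubber-stamping")
    if median(e.seconds_spent for e in events) < min_median_seconds:
        flags.append("median review time too short for meaningful scrutiny")
    if any(not e.saw_full_input for e in events):
        flags.append("reviewer lacked access to the data the model used")
    return flags
```

Running a check like this periodically gives a controller documented evidence either that its review is meaningful or that a workflow is drifting into Article 22 scope.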
What 'Meaningful Explanation' Means Under GDPR
Generic model descriptions fail Article 22: 'Our AI considers credit history and income' is insufficient. A meaningful explanation (Recital 71) must be specific to the individual's decision: the specific factors evaluated, the applicant's actual observed values, whether each factor contributed positively or negatively, and the relative weight of each factor. This is only possible if the AI's decision record captured structured per-decision factors, with observed values and impact directions, rather than just the final output.
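The difference between a generic description and an individual explanation can be shown concretely. This sketch assumes the per-decision factors were captured in the structured form the text describes (the dict field names and sample values are illustrative) and renders them into subject-facing text.

```python
# Illustrative per-decision factors as the text says they should be captured:
# name, observed value, policy threshold, impact direction, and weight.
factors = [
    {"name": "payment_history", "observed": "no missed payments in 24 months",
     "threshold": "none missed in 12 months", "impact": "POSITIVE", "weight": "SECONDARY"},
    {"name": "debt_to_income_ratio", "observed": "52%", "threshold": "<= 43%",
     "impact": "NEGATIVE", "weight": "PRIMARY"},
]

WEIGHT_ORDER = {"PRIMARY": 0, "SECONDARY": 1, "MINOR": 2}
IMPACT_PHRASE = {"NEGATIVE": "counted against",
                 "POSITIVE": "counted in favour of",
                 "NEUTRAL": "did not affect"}

def render_explanation(factors: list[dict]) -> str:
    """Turn structured per-decision factors into the individual-level
    explanation Recital 71 contemplates, most significant factor first."""
    lines = ["This decision was based on the following factors:"]
    for f in sorted(factors, key=lambda f: WEIGHT_ORDER[f["weight"]]):
        lines.append(
            f"- {f['name']} ({f['weight'].lower()}): observed {f['observed']} "
            f"vs. policy threshold {f['threshold']}; this "
            f"{IMPACT_PHRASE[f['impact']]} your application."
        )
    return "\n".join(lines)
```

Note that nothing here is derived from the model at explanation time: if the factors were not recorded when the decision was made, no amount of after-the-fact templating can produce them.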
Implementation: Generating Explanations from Decision Records
Capture structured factor output in the AI decision record: each factor with a name, observed value, policy threshold, impact direction (POSITIVE/NEGATIVE/NEUTRAL), and weight (PRIMARY/SECONDARY/MINOR). Store the record with tamper-evident signing. When a data subject exercises Article 22 rights, retrieve the record by application ID and generate an individual explanation from its structured factor content. The record_id links the explanation back to the tamper-evident decision record, supporting Article 5(2) accountability.
DPIAs and Article 5(2) Accountability for AI Automated Decisions
Article 35(3)(a) mandates a DPIA for systematic automated decision-making with legal or similarly significant effects, which covers virtually all commercial AI agents making significant decisions. The DPIA must address the logic of the processing, retention periods, and safeguards. Article 22(4) additionally prohibits automated decisions based on special categories of personal data (e.g. health data, racial or ethnic origin, political opinions) unless explicit consent (Article 9(2)(a)) or substantial public interest (Article 9(2)(g)) applies. Article 5(2) accountability requires maintaining records of legal basis, safeguards, explanation requests, and override documentation; tamper-evident decision records and linked override records satisfy this.
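The accountability records the text lists can be tied together through the decision record_id. A minimal sketch, assuming a hypothetical append-only log (the event names and schema are illustrative):

```python
import time

# Illustrative append-only accountability log: each entry links back to a
# decision record_id so explanation requests and human overrides are
# auditable under Article 5(2).
accountability_log: list[dict] = []

def log_event(record_id: str, event_type: str, detail: str) -> None:
    accountability_log.append({
        "record_id": record_id,
        "event": event_type,  # e.g. "explanation_request", "human_override"
        "detail": detail,
        "ts": time.time(),
    })

def audit_trail(record_id: str) -> list[dict]:
    """Everything that happened to one automated decision, in order."""
    return [e for e in accountability_log if e["record_id"] == record_id]
```

Keeping overrides and explanation requests keyed to the same record_id as the signed decision record means a supervisory authority inquiry about one decision can be answered with a single lookup.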