AI Explainability for Regulators: A Practical Guide to the EU AI Act, GDPR, ECOA, HIPAA, and NAIC
Five regulatory frameworks require AI explanations, but each addresses a different audience and demands a different granularity and format. EU AI Act Article 13 requires system-level transparency for deployers and regulators. GDPR Article 22 requires individual explanation for solely automated decisions with legal or similarly significant effects, including credit and insurance decisions. ECOA/Reg B requires adverse action notices with AI-derived reason codes. HIPAA's audit controls standard, enforced by OCR, requires activity records showing what the AI accessed. NAIC Model Bulletin Principle 4 requires factor-level explanation for adverse underwriting outcomes. "We use explainable AI" satisfies none of these obligations. This guide maps each framework to the specific decision records that actually satisfy its requirement.
Why Generic XAI Does Not Satisfy Regulatory Obligations
Explainable AI (XAI) methods — SHAP values, LIME, attention maps — explain model behavior in terms ML engineers can interpret. Regulatory obligations require something different: an explanation the affected person and regulator can understand, tied to the specific decision they are challenging, in the format each framework specifies. A SHAP plot satisfies no regulatory framework by itself. The question is not "can we explain this model" but "can we produce the specific explanation each framework requires for this specific decision." Decision records are the mechanism that bridges XAI output to regulatory compliance.
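To make "decision record" concrete, here is a minimal sketch of what such a record might capture at decision time. The schema is illustrative, not mandated by any framework; the type and field names (DecisionRecord, attributions, acted_on) are assumptions for this guide, chosen so that each field maps to evidence at least one of the frameworks below asks for.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustrative per-decision record. No framework mandates this exact
# schema; each field maps to evidence some framework below asks for.
@dataclass
class DecisionRecord:
    decision_id: str        # stable identifier for the contested decision
    timestamp: datetime     # when the decision was made
    model_version: str      # provenance (EU AI Act system-level evidence)
    inputs: dict            # the feature values the model actually saw
    attributions: dict      # per-factor influence, e.g. SHAP values
    outcome: str            # what was decided or recommended
    acted_on: Optional[bool] = None  # set later: accepted or overridden

def new_record(decision_id: str, model_version: str, inputs: dict,
               attributions: dict, outcome: str) -> DecisionRecord:
    """Create the record at decision time, not reconstructed later."""
    return DecisionRecord(
        decision_id=decision_id,
        timestamp=datetime.now(timezone.utc),
        model_version=model_version,
        inputs=inputs,
        attributions=attributions,
        outcome=outcome,
    )
```

The design point is that the record is written when the decision happens; every explanation in the sections below is then a deterministic rendering of a stored record, never a post-hoc reconstruction.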
EU AI Act Article 13: System-Level Transparency Requirements
EU AI Act Article 13 requires high-risk AI systems to be transparent not to individual decision subjects but to deployers and regulators. Required transparency artifacts include system documentation covering intended purpose, performance metrics, and known limitations; operating conditions; human oversight measures; and the capabilities deployers must understand to use the system appropriately. Article 13 does not require per-decision explanation to end users (Article 86 addresses that separately). The compliance artifact is documentation attached to the system, not explanation generated per decision. Decision records support Article 13 by capturing model version provenance, behavioral baselines, and drift evidence: the system-level evidence auditors request.
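As a sketch of how per-decision records roll up into Article 13 system-level evidence, the following function (building on the hypothetical DecisionRecord above) aggregates records into version provenance and a simple behavioral baseline. The output shape and the adverse-rate drift signal are illustrative assumptions, not an Article 13-mandated format.

```python
import json
from statistics import mean

def summarize_system_evidence(records: list) -> str:
    """Aggregate DecisionRecords into system-level documentation inputs:
    which model versions ran, how many decisions each made, and a simple
    behavioral baseline comparable across versions."""
    by_version = {}
    for r in records:
        by_version.setdefault(r.model_version, []).append(r)
    evidence = {
        "model_versions_in_production": sorted(by_version),
        "decisions_per_version": {v: len(rs) for v, rs in by_version.items()},
        # A shifting adverse-outcome rate between versions is a cheap
        # drift signal an auditor can check against the documented baseline.
        "adverse_rate_per_version": {
            v: round(mean(r.outcome == "deny" for r in rs), 3)
            for v, rs in by_version.items()
        },
    }
    return json.dumps(evidence, indent=2)
```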
GDPR Article 22: Individual Decision Explanation Requirements
GDPR Article 22(3) requires controllers to implement suitable safeguards for solely automated decisions, including at least the right to obtain human intervention, to express one's point of view, and to contest the decision. Articles 13(2)(f), 14(2)(g), and 15(1)(h) require controllers to provide meaningful information about the logic involved, as well as the significance and envisaged consequences of the processing; Recital 71 adds the right to obtain an explanation of the decision reached. Supervisory authorities have interpreted "meaningful information" to require factor-level explanation: the specific factors that influenced this decision for this person, in plain language. Generic model documentation describing how the model generally works does not satisfy the per-decision obligation. Per-decision reasoning records that capture the inputs, their weights, and the reasoning chain enable compliant Article 22 explanations to be generated deterministically.
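A sketch of that deterministic generation, reusing the hypothetical DecisionRecord above. It assumes a convention where positive attributions favor approval, and its template wording is illustrative rather than regulator-approved copy.

```python
def article22_explanation(record: DecisionRecord, top_n: int = 3) -> str:
    """Render a plain-language, per-decision explanation directly from
    the stored record. Deterministic: the same record always yields the
    same text, so the explanation can be regenerated and audited later."""
    ranked = sorted(record.attributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    lines = [f"Decision {record.decision_id}: {record.outcome}.",
             "The factors that most influenced this decision:"]
    for factor, weight in ranked:
        value = record.inputs.get(factor, "n/a")
        direction = "in favor of" if weight > 0 else "against"
        lines.append(f"- {factor} (your value: {value}) weighed {direction} approval.")
    lines.append("You may request human review, express your point of view, "
                 "and contest this decision.")
    return "\n".join(lines)
```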
ECOA/Reg B: Adverse Action Notice Requirements for AI
The Equal Credit Opportunity Act and Regulation B (12 CFR Part 1002) require adverse action notices to specify the principal reasons for adverse credit decisions. When AI makes or influences credit decisions, CFPB guidance (CFPB Circular 2023-03) confirms that lenders cannot satisfy adverse action notice requirements by citing black-box AI decisions; the required content is the specific factors that influenced this applicant's decision. SHAP values must be translated into Reg B Appendix C reason codes, not presented as raw feature importance scores. Per-decision records that capture input features and model output enable this translation; records that capture only the final score or the approval/denial outcome cannot support compliant adverse action notice generation.
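The translation step might look like the following sketch. The feature names and reason texts are hypothetical (the texts echo the style of the Appendix C sample reasons, under the convention that negative attributions push toward denial), and any real mapping must be built and reviewed per model with compliance counsel.

```python
# Hypothetical feature-to-reason mapping, in the style of the Reg B
# Appendix C sample reasons. Not a legally reviewed mapping.
REASON_MAP = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "delinquency_count": "Delinquent past or present credit obligations",
    "credit_history_months": "Length of credit history",
    "recent_inquiries": "Number of recent inquiries on credit report",
}

def adverse_action_reasons(attributions: dict, max_reasons: int = 4) -> list:
    """Select the principal reasons for the adverse decision: the features
    whose attributions pushed hardest toward denial (negative sign, under
    the convention that positive attributions support approval)."""
    adverse = [(f, w) for f, w in attributions.items() if w < 0]
    adverse.sort(key=lambda kv: kv[1])  # most negative (most adverse) first
    # Fail loudly on unmapped features: emitting raw feature names in a
    # notice would not satisfy Reg B.
    return [REASON_MAP[f] for f, _ in adverse[:max_reasons]]
```

The KeyError on an unmapped feature is deliberate: a notice generator should refuse to produce a notice it cannot translate rather than silently emit model internals.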
HIPAA OCR: What AI Activity Records Must Show
HIPAA §164.312(b) requires audit controls: hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information. During OCR investigations, the audit trail question is not "can you explain this model" but "can you show what this system accessed about this patient and what it recommended." Required evidence includes: which patient records the AI accessed as context, what the AI recommended for this patient, when the access and recommendation occurred, and whether the recommendation was acted upon or overridden. Decision records capturing the context snapshot, the intent, and the outcome satisfy these evidence requirements. EHR access logs capture only the first element; clinical AI requires decision-level records for the remainder.
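A minimal sketch of a decision-level clinical AI record covering those four elements. The ClinicalAIAuditRecord type and its field names are assumptions for illustration, not an OCR-specified schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical decision-level clinical AI audit record. Field names
# track the four evidence elements named above.
@dataclass
class ClinicalAIAuditRecord:
    patient_id: str
    records_accessed: list    # which EHR documents the AI read as context
    recommendation: str       # what the AI recommended for this patient
    occurred_at: datetime     # when the access and recommendation happened
    clinician_action: str     # "accepted", "overridden", or "pending"

def log_recommendation(patient_id: str, records_accessed: list,
                       recommendation: str) -> ClinicalAIAuditRecord:
    """Write the record at recommendation time; clinician_action is
    updated later when the clinician accepts or overrides."""
    return ClinicalAIAuditRecord(
        patient_id=patient_id,
        records_accessed=records_accessed,
        recommendation=recommendation,
        occurred_at=datetime.now(timezone.utc),
        clinician_action="pending",
    )
```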
NAIC Bulletin Principle 4: Factor-Level Adverse Underwriting Explanation
NAIC Model Bulletin Principle 4 (Transparency) requires insurers to explain adverse underwriting decisions at the factor level: not as a model description but as a specific explanation for this applicant. Required content: the rating factors that influenced the adverse outcome, their values for this applicant, and how they affected the premium or coverage decision. Adverse action notices under state unfair trade practices acts impose equivalent requirements. Generating these explanations post hoc from a black-box model is unreliable and may produce explanations that do not accurately represent the actual decision basis. Per-decision records that capture the rating factors, their values, and the reasoning chain enable factor-level explanation generation that is deterministic and auditable, from the same record that supports disparate impact testing under Principle 3.
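A sketch of that factor-level generation, again reusing the hypothetical DecisionRecord from earlier. It assumes a convention where positive attributions raise the premium; the template wording is illustrative only.

```python
def underwriting_explanation(record: DecisionRecord) -> str:
    """Factor-level explanation for an adverse underwriting outcome,
    generated deterministically from the same stored record that feeds
    Principle 3 disparate impact testing. Convention: positive
    attribution raised the premium."""
    factors = sorted(record.attributions.items(),
                     key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Underwriting decision {record.decision_id}: {record.outcome}.",
             "Rating factors and their effect on this decision:"]
    for factor, weight in factors:
        value = record.inputs.get(factor, "n/a")
        effect = "increased" if weight > 0 else "decreased"
        lines.append(f"- {factor} = {value}: {effect} your premium")
    return "\n".join(lines)
```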