ECOA and Regulation B: What Fair Lending Law Requires for Credit AI Systems
ECOA (the Equal Credit Opportunity Act) and Regulation B require creditors using AI credit models to provide adverse action notices with specific principal reasons, test for disparate impact by protected class, and retain decision records for 25 months. The CFPB's 2022 circular confirmed that AI model complexity is not a defense for failing to provide specific reasons. This guide explains the per-decision records, reason code documentation, and fair lending testing that credit AI teams must implement.
Regulation B Adverse Action Notices for AI Credit Models
When an AI model produces an adverse credit decision, the adverse action notice must state the principal reasons specific to that applicant — not a description of the model or a reference to the AI system. Reg B (12 CFR 1002.9) requires disclosure of the principal reasons, and the official commentary advises listing no more than four, identifying the actual factors: "Insufficient credit history in the past 24 months," "Debt-to-income ratio exceeds threshold," or "Number of recent credit inquiries." This requires per-decision feature attribution — SHAP values, inherently explainable models, or a reason code mapping layer. Reason codes must be generated and captured at decision time; re-running a current model on historical applications may produce different codes and does not satisfy the original adverse action obligation.
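As a minimal sketch of attribution-based reason code generation, the snippet below maps per-decision feature attributions (such as SHAP values, sign convention assumed: negative pushes toward denial) to at most four reason codes. The feature names, attribution values, and reason code text are illustrative assumptions, not a prescribed taxonomy.

```python
# Illustrative mapping from model feature names to Reg B-style reason text.
REASON_CODES = {
    "credit_history_months": "Insufficient credit history in the past 24 months",
    "debt_to_income": "Debt-to-income ratio exceeds threshold",
    "recent_inquiries": "Number of recent credit inquiries",
    "utilization": "High revolving credit utilization",
}

def principal_reasons(attributions: dict[str, float], max_reasons: int = 4) -> list[str]:
    """Return up to four reason codes for the features that pushed the
    score most strongly toward denial (negative attribution = adverse)."""
    adverse = [(name, value) for name, value in attributions.items() if value < 0]
    adverse.sort(key=lambda item: item[1])  # most negative (most adverse) first
    return [REASON_CODES[name] for name, _ in adverse[:max_reasons]
            if name in REASON_CODES]

# Generated and stored at decision time, never re-derived later:
reasons = principal_reasons({
    "credit_history_months": -0.42,
    "debt_to_income": -0.31,
    "recent_inquiries": -0.05,
    "utilization": 0.12,  # positive contribution, so not an adverse reason
})
```

The key operational point is the last comment: whatever this function returns must be persisted with the decision record, because the model that produced the attributions will not stay frozen.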
CFPB 2022 Circular: AI Complexity Is Not a Defense
The CFPB's May 2022 circular (Circular 2022-03) clarified that creditors cannot invoke model complexity as a defense for failing to provide specific adverse action reasons. A pure black-box model that cannot produce per-decision feature attribution creates ECOA compliance risk. Three approaches exist: inherently explainable models (logistic regression, decision trees — lowest regulatory risk but limited predictive power), post-hoc explanation methods (SHAP/LIME applied to complex models — medium risk, not explicitly approved by CFPB), or hybrid architectures (complex model plus rules-based reason code mapping layer — variable risk depending on mapping accuracy). Each approach requires different documentation of the explanation methodology.
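The hybrid option can be sketched as a rules layer that runs alongside the complex model: the model supplies the decision, while simple threshold rules over raw applicant features supply the stated reasons. The thresholds, field names, and reason text below are illustrative assumptions only.

```python
# Rules-based reason code mapping layer for a hybrid architecture.
# Each rule: (applicant field, trigger predicate, reason text).
# Thresholds here are placeholders, not regulatory values.
RULES = [
    ("debt_to_income", lambda v: v > 0.43,
     "Debt-to-income ratio exceeds threshold"),
    ("credit_history_months", lambda v: v < 24,
     "Insufficient credit history in the past 24 months"),
    ("recent_inquiries", lambda v: v >= 6,
     "Number of recent credit inquiries"),
]

def map_reason_codes(applicant: dict[str, float]) -> list[str]:
    """Return the reason text for every rule the applicant triggers."""
    return [reason for field, triggered, reason in RULES
            if field in applicant and triggered(applicant[field])]

codes = map_reason_codes({
    "debt_to_income": 0.51,
    "credit_history_months": 30,
    "recent_inquiries": 7,
})
```

The "variable risk depending on mapping accuracy" caveat lands here: the documentation must show that these rules actually track the drivers of the model's decisions, otherwise the stated reasons diverge from the real ones.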
Disparate Impact Testing Requirements for AI Credit Models
ECOA prohibits facially neutral AI models that disproportionately affect protected classes (race, sex, national origin, marital status, age) without business justification. CFPB fair lending examiners request: complete feature list with identification of potential proxy variables, training data sources and demographic composition, disparate impact analysis by protected class (approval rates, pricing), model monitoring results showing ongoing disparate impact tracking, and remediation documentation. Proxy features — ZIP code, employer type, certain spending patterns — can correlate with protected classes. Creditors must document the business justification for such features and demonstrate no less discriminatory alternative achieves comparable performance.
Regulation B 25-Month Recordkeeping for AI Credit Decisions
Reg B (12 CFR 1002.12) requires retention of written applications and related documentation for 25 months after notifying the applicant of the action taken. For AI credit decisions, a complete record includes: all applicant input features at evaluation time, any external data retrieved during decision-making, model output and score, reason codes generated for this decision, model version active at decision time, timestamp of the AI decision, human review or override records, and a copy of the adverse action notice. Records must be immutable — re-scoring historical applications against an updated model overwrites the original reason code basis and creates a discrepancy with the adverse action notice already sent. ECOA disputes and CFPB examinations routinely require records beyond the 25-month minimum; building to a 3-5 year retention horizon is operationally safer.
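The record fields listed above can be sketched as an append-only decision record with a content hash for integrity checks. Field names are illustrative assumptions, and a hash supports tamper detection but is not a substitute for write-once storage.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_decision_record(applicant_features, external_data, score,
                          reason_codes, model_version, overrides=None):
    """Assemble the per-decision record Reg B retention requires,
    then seal it with a SHA-256 content hash."""
    record = {
        "applicant_features": applicant_features,  # inputs at evaluation time
        "external_data": external_data,            # data retrieved during decisioning
        "model_output": score,
        "reason_codes": reason_codes,              # generated at decision time
        "model_version": model_version,
        "decision_timestamp": datetime.now(timezone.utc).isoformat(),
        "human_overrides": overrides or [],
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["content_sha256"] = hashlib.sha256(payload).hexdigest()
    return record

rec = build_decision_record(
    applicant_features={"debt_to_income": 0.51},
    external_data={"bureau_score": 640},
    score=0.22,
    reason_codes=["Debt-to-income ratio exceeds threshold"],
    model_version="v2.3.1",
)
```

Because the hash covers the reason codes and model version, any later re-scoring produces a different record rather than silently replacing this one, which is exactly the immutability property the paragraph above calls for.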
ECOA vs EU AI Act: Credit AI Documentation Compared
ECOA and the EU AI Act both address credit AI, but through different mechanisms and scopes. ECOA requires specific reasons in each adverse action notice (no more than four, within 30 days); the EU AI Act requires a meaningful explanation upon request (within one month). ECOA has no formal equivalent of Annex IV; the EU AI Act requires technical documentation covering monitoring methodology and human oversight procedures. ECOA's disparate impact prohibition requires case-by-case testing documentation; EU AI Act Articles 9 and 10 require accuracy and bias testing with documentation. EU AI Act Article 14 mandates human oversight mechanisms with no equivalent in Reg B. Teams operating in both the US and the EU face both frameworks simultaneously: building to EU AI Act standards generally satisfies Reg B recordkeeping as a byproduct, but the specific-reasons obligation requires dedicated attention.