How to Build an Auditable Loan Approval AI Agent That Satisfies Regulators
The EU AI Act's Annex III explicitly lists credit-scoring AI as high-risk, triggering obligations under Articles 9, 12, 13, 14, and 26. ECOA and Regulation B require adverse action notices that cite the specific principal factors that actually drove the AI's decision. This guide builds a complete loan approval AI agent with structured factor output, tamper-evident decision records, ECOA adverse action generation, and pre-deployment deterministic replay: everything regulators and bank examiners actually ask for.
Regulatory Requirements for Loan AI
Key regulations:
- EU AI Act Annex III, Category 5: credit-scoring AI is explicitly high-risk, and Article 12 logging is required.
- ECOA / Regulation B: adverse action notices must list the specific principal factors that reflect the actual AI decision drivers.
- FCRA: permissible-purpose records for credit inquiries, with 25-month retention.
- CFPB guidance (2023): AI adverse action notices must describe the actual model factors; an uninterpretable model does not exempt the lender.
- Fair Housing Act / HMDA: demographic data for disparate impact analysis.
- MiFID II: 5-year retention for investment-related credit decisions.
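The retention periods above can be consolidated into one policy check. A minimal sketch; the dictionary keys and helper name are hypothetical, and the periods simply mirror the list above:

```python
# Hypothetical retention schedule consolidating the requirements listed above.
RETENTION_MONTHS = {
    "ecoa_reg_b": 25,   # Regulation B: application records
    "fcra": 25,         # permissible-purpose records for credit inquiries
    "mifid_ii": 60,     # investment-related credit decisions: 5 years
}

def required_retention_months(applicable: list[str]) -> int:
    """A record must be kept as long as the strictest applicable rule demands."""
    return max(RETENTION_MONTHS[r] for r in applicable)
```

In practice the record store's lifecycle policy would be driven by the strictest applicable period, so a purely consumer-lending record and an investment-linked one get different expiries.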
What Regulators Actually Ask For
Financial regulators examining loan AI systems request:
1. Individual decision records for sampled applications: a complete record for each, retrievable by application ID.
2. Model and policy version provenance: which model and policy were in effect when a given decision was made.
3. Adverse action factor lists: the specific factors that drove a denial, ranked by importance and mapped to FCRA reason codes.
4. Fair lending analysis data: aggregate decisions by protected class for disparate impact analysis.
5. Pre-deployment validation evidence: EU AI Act Article 9 requires testing under realistic conditions.
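A decision record that answers the first three requests might look like the sketch below. The field names and values are illustrative, not a fixed schema; the point is that one self-contained entry per application is retrievable by application ID:

```python
import json

# Illustrative decision record: one complete, self-contained entry per application.
record = {
    "application_id": "APP-2024-001187",
    "model_version": "uw-model-2024.2",    # provenance: which model decided
    "policy_version": "credit-policy-v7",
    "decision": "deny",
    "primary_factors": [                   # ranked, mapped to reason codes
        {"factor": "debt_to_income", "value": 0.52, "weight": 0.41, "code": "DTI-HIGH"},
        {"factor": "recent_delinquency", "value": 2, "weight": 0.33, "code": "DELINQ"},
    ],
    "reviewed_by": None,                   # human reviewer ID, if escalated
    "timestamp": "2024-06-03T14:21:09Z",
}

# Examiners sample by application ID, so index records that way.
store = {record["application_id"]: record}

def fetch_for_examiner(application_id: str) -> str:
    """Return the complete record for a sampled application, or raise KeyError."""
    return json.dumps(store[application_id], indent=2)
```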
Architecture: 4 Key Decisions
1. Decision records, not LLM traces: design the record schema around the regulatory output (ECOA factors), not raw LLM completions.
2. Structured factor output: prompt the LLM to produce FCRA adverse action reason codes and factor weights as structured JSON.
3. Complete context snapshot: capture every model input, including credit score, debt-to-income (DTI) ratio, loan-to-value (LTV) ratio, bureau trade lines, and policy version; partial snapshots create examination risk.
4. Human review capture: EU AI Act Article 14 requires recording human reviewer actions (actor ID, timestamp, decision, reason) for borderline applications.
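The structured-factor decision hinges on the LLM returning a parseable shape. A minimal validator for the JSON described above; the field names follow the text, but the exact schema is a sketch, and the reason-code value is a placeholder:

```python
import json

REQUIRED_FACTOR_KEYS = {"factor", "observed_value", "impact", "weight", "adverse_action_code"}

def parse_underwriting_output(raw: str) -> dict:
    """Parse and validate the structured JSON the LLM is prompted to return.
    Rejecting malformed output here keeps unparseable completions out of
    the decision record."""
    out = json.loads(raw)
    for key in ("decision", "confidence", "primary_factors"):
        if key not in out:
            raise ValueError(f"missing top-level key: {key}")
    for factor in out["primary_factors"]:
        missing = REQUIRED_FACTOR_KEYS - factor.keys()
        if missing:
            raise ValueError(f"factor missing keys: {missing}")
    return out

example = json.dumps({
    "decision": "deny",
    "confidence": 0.87,
    "primary_factors": [{
        "factor": "debt_to_income",
        "observed_value": 0.52,
        "impact": "negative",
        "weight": 0.41,
        "adverse_action_code": "DTI-HIGH",   # placeholder, not a real FCRA code
    }],
})
```

Validating at the boundary means a completion that drifts from the schema fails loudly at decision time rather than surfacing two years later in an examination.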
Implementation with Tenet AI SDK
Use TenetClient with tenet.intent() context manager. Call intent.snapshot_context() with the complete application data including model version and policy version. Prompt the LLM to return structured JSON with decision, confidence, and primary_factors array (each with factor name, observed value, impact, weight, and FCRA adverse action code). Call intent.decide() with ActionOptions built from structured factors, storing adverse_action_codes in metadata. Call intent.execute() to close the tamper-evident record and return a record_id for audit reference.
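The flow reads roughly as below. The call names (`intent`, `snapshot_context`, `decide`, `execute`, `ActionOptions`) follow the description above, but the exact signatures are assumptions; the stub classes exist only to make the sketch self-contained and runnable, and are not part of the SDK. The version tags are hypothetical:

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class ActionOptions:
    """Stand-in for the SDK's ActionOptions: the chosen action plus metadata."""
    action: str
    metadata: dict = field(default_factory=dict)

class Intent:
    """Stand-in intent context manager mimicking tenet.intent()."""
    def __enter__(self):
        self._record = {}
        return self

    def __exit__(self, *exc):
        return False

    def snapshot_context(self, context: dict):
        self._record["context"] = context          # complete input snapshot

    def decide(self, options: ActionOptions):
        self._record["decision"] = options.action
        self._record["metadata"] = options.metadata

    def execute(self) -> str:
        # Close the record; a content hash makes later tampering detectable.
        payload = json.dumps(self._record, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def decide_application(application: dict, llm_output: dict) -> str:
    """Run one application through the intent flow; returns the record_id."""
    with Intent() as intent:                       # real code: client.intent(...)
        intent.snapshot_context({
            **application,
            "model_version": "uw-model-2024.2",    # hypothetical version tags
            "policy_version": "credit-policy-v7",
        })
        intent.decide(ActionOptions(
            action=llm_output["decision"],
            metadata={"adverse_action_codes": [
                f["adverse_action_code"] for f in llm_output["primary_factors"]
            ]},
        ))
        return intent.execute()
```

Because the record is serialized with sorted keys before hashing, the same inputs always yield the same `record_id`, which is what makes the record usable as a stable audit reference.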
ECOA Adverse Action Generation
The structured factor output captured in the decision record maps directly to FCRA adverse action reason codes (100-250). Generate adverse action notices by retrieving the decision record and formatting the top 4 principal negative factors using FCRA code descriptions. The audit_record_id in the adverse action notice links the notice to the immutable decision record — when regulators challenge a decision, retrieve the record and verify its cryptographic signature proving it was not altered.
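Notice generation from a stored record can be sketched as follows. The reason-code table is illustrative placeholder text, not the actual FCRA code descriptions, and the signature check assumes the record is protected by a simple content hash:

```python
import hashlib
import json

# Placeholder descriptions; real notices use the code text your compliance
# team maps to FCRA / Regulation B reason codes.
REASON_TEXT = {
    "DTI-HIGH": "Income insufficient for amount of credit requested",
    "DELINQ": "Delinquent past or present credit obligations",
    "UTIL-HIGH": "Proportion of balances to credit limits is too high",
    "HIST-SHORT": "Length of credit history",
    "INQ-RECENT": "Number of recent inquiries on credit report",
}

def record_signature(record: dict) -> str:
    """Content hash of the canonical serialization of the record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def adverse_action_notice(record: dict, record_id: str) -> list[str]:
    """Top 4 principal negative factors by weight, as Regulation B requires,
    plus the audit reference linking the notice to its decision record."""
    negative = [f for f in record["primary_factors"] if f["impact"] == "negative"]
    top = sorted(negative, key=lambda f: f["weight"], reverse=True)[:4]
    lines = [REASON_TEXT[f["adverse_action_code"]] for f in top]
    lines.append(f"Audit record: {record_id}")
    return lines

def verify_unaltered(record: dict, stored_signature: str) -> bool:
    """Recompute the hash; a mismatch means the record was altered."""
    return record_signature(record) == stored_signature
```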
Pre-Deployment Testing with Deterministic Replay
Before deploying a new underwriting model version, use Tenet Deterministic Replay to re-execute a representative sample of past production loan decisions against the candidate model. The output shows: decision change rate, which applicant segments are affected (credit score band, DTI range), whether changes align with policy intent, and fair lending risk from disparate impact analysis by segment. This satisfies EU AI Act Article 9 risk management evidence requirements.
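The replay output described above reduces to a few aggregates. A sketch of the analysis over (baseline, candidate) decision pairs; the segment keys are hypothetical credit-score bands:

```python
from collections import defaultdict

def replay_analysis(results: list[dict]) -> dict:
    """Each result: {"segment": str, "baseline": str, "candidate": str}.
    Returns the overall decision change rate and per-segment change rates,
    the raw inputs to a disparate-impact review."""
    changed = sum(1 for r in results if r["baseline"] != r["candidate"])
    by_segment = defaultdict(lambda: [0, 0])   # segment -> [changed, total]
    for r in results:
        by_segment[r["segment"]][1] += 1
        if r["baseline"] != r["candidate"]:
            by_segment[r["segment"]][0] += 1
    return {
        "change_rate": changed / len(results),
        "by_segment": {seg: c / t for seg, (c, t) in by_segment.items()},
    }
```

A sharply higher change rate in one segment (say, a low credit-score band) than in others is exactly the signal that should trigger a fair lending review before the candidate model ships.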