DORA and AI Agents: What ICT Risk Management Requirements Mean for Financial Services AI
The EU Digital Operational Resilience Act (DORA) has applied since 17 January 2025 to EU financial entities including banks, investment firms, insurance companies, and payment institutions. DORA's ICT risk management requirements (Chapter II), incident reporting obligations (Chapter III), resilience testing program (Chapter IV), and third-party ICT risk management (Chapter V) all apply to AI systems used by covered financial entities. AI model API providers — OpenAI, Anthropic, Google, and Amazon (via Bedrock) — are DORA ICT third-party service providers, which means contractual compliance, exit strategies, and concentration risk management are required.
DORA ICT Risk Management for AI Systems
DORA Chapter II requires financial entities to implement a comprehensive ICT risk management framework covering five areas. Identification: map all AI systems, model API providers, RAG infrastructure, and downstream consumers. Protection: access controls, input validation, model version pinning, behavioral baselines. Detection: behavioral drift monitoring against established baselines, output distribution tracking, anomalous tool call detection — standard availability monitoring does not satisfy DORA detection for AI behavioral failures. Response and recovery: isolation procedures, fallback activation, audit evidence preservation. Backup and recovery: documented fallback procedures for AI-assisted critical functions, tested RTOs for AI system outages.
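The detection pillar above can be sketched in code. What follows is a minimal, hypothetical illustration of behavioral drift monitoring against an established baseline, using the population stability index (PSI) over an AI system's output categories; the function names, category labels, and the 0.2 alerting threshold are illustrative assumptions, not a prescribed DORA control.

```python
import math

def psi(baseline: dict[str, float], current: dict[str, float],
        epsilon: float = 1e-6) -> float:
    """Population stability index between two output-category distributions."""
    score = 0.0
    for category in baseline:
        b = max(baseline[category], epsilon)
        c = max(current.get(category, 0.0), epsilon)
        score += (c - b) * math.log(c / b)
    return score

def check_drift(baseline: dict[str, float], current: dict[str, float],
                threshold: float = 0.2) -> bool:
    """Flag behavioral drift when PSI exceeds the alerting threshold.

    A PSI above ~0.2 is a common rule-of-thumb for significant shift;
    a DORA-style detection control would raise an incident candidate here.
    """
    return psi(baseline, current) > threshold

# Example: a credit-decisioning model's approve/refer/decline mix shifts
# after a silent model provider update (hypothetical numbers).
baseline = {"approve": 0.60, "refer": 0.25, "decline": 0.15}
current = {"approve": 0.35, "refer": 0.30, "decline": 0.35}
print(check_drift(baseline, current))  # drift detected
```

The key design point is that the baseline is behavioral (output distribution), not infrastructural (uptime, latency) — which is why standard availability monitoring cannot substitute for it.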
AI Model Providers as DORA Third-Party ICT Providers
DORA Chapter V requires financial entities to maintain a complete register of contractual arrangements with ICT third-party service providers. AI model API providers are DORA third-party providers when their APIs support financial entity operations. The register must include: service criticality classification, concentration risk assessment, contract terms including SLA and audit rights, and exit strategy documentation. DORA Article 28 requires exit strategies for ICT services supporting critical or important functions: documented alternative providers, tested transition procedures, and concentration risk mitigation. Standard API terms of service do not include the audit rights, exit provisions, and operational resilience terms DORA requires.
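A register entry of the kind described above might be modeled as follows. This is a hedged sketch: the field names are assumptions for illustration, not the official ESA register-of-information template, and the example values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class AIProviderRegisterEntry:
    provider: str                     # e.g. "Anthropic"
    service: str                      # what the API supports
    supports_critical_function: bool  # criticality classification
    contract_ref: str                 # contractual arrangement reference
    audit_rights: bool                # DORA-required audit/access rights
    sla_uptime_pct: float             # contracted availability
    exit_strategy_doc: str            # documented, tested exit plan
    alternative_providers: list[str] = field(default_factory=list)

    @property
    def concentration_risk(self) -> bool:
        """True when a critical function has no documented alternative provider."""
        return self.supports_critical_function and not self.alternative_providers

# Hypothetical entry for a model API supporting credit decisioning.
entry = AIProviderRegisterEntry(
    provider="Anthropic",
    service="Claude API - credit decision support",
    supports_critical_function=True,
    contract_ref="MSA-2025-017",
    audit_rights=True,
    sla_uptime_pct=99.9,
    exit_strategy_doc="runbooks/ai-exit-strategy.md",
    alternative_providers=["AWS Bedrock"],
)
print(entry.concentration_risk)  # False: an alternative is documented
```

Flagging concentration risk directly from the register entry keeps the Article 28 exit-strategy requirement checkable rather than aspirational.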
DORA Incident Reporting for AI Incidents
DORA Chapter III requires classification and reporting of major ICT-related incidents to national competent authorities (NCAs). AI system failures qualify as ICT incidents under DORA criteria: number of clients affected (an AI credit decisioning failure affecting thousands of applications), duration of service disruption, data integrity impact (model drift causing incorrect outputs in financial records), impact on critical functions (payment processing, credit decisioning, trading), and economic impact (incorrect AI-assisted financial decisions at scale). Major incidents follow a three-stage reporting process: initial notification (no later than 24 hours after becoming aware of the incident), intermediate report (within 72 hours), and final report with root cause analysis (within one month). AI incident root cause analysis requires decision-level audit trails — infrastructure metrics cannot explain behavioral AI failures.
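The classification criteria above can be expressed as a simple decision function. This is a simplified sketch: the real materiality thresholds come from the Commission Delegated Regulation on incident classification, and the numeric values below are placeholders, not the regulatory thresholds.

```python
from dataclasses import dataclass

@dataclass
class AIIncident:
    clients_affected: int
    duration_minutes: int
    critical_function_impacted: bool  # e.g. payments, credit decisioning
    data_integrity_impact: bool       # e.g. drifted model wrote bad records
    estimated_cost_eur: float

def is_major_incident(incident: AIIncident) -> bool:
    """Classify as major when a critical function is hit plus one other criterion.

    Placeholder logic and thresholds for illustration only.
    """
    other_criteria = [
        incident.clients_affected > 1_000,      # placeholder threshold
        incident.duration_minutes > 120,        # placeholder threshold
        incident.data_integrity_impact,
        incident.estimated_cost_eur > 100_000,  # placeholder threshold
    ]
    return incident.critical_function_impacted and any(other_criteria)

# Example: a silent model update degrades credit decisioning for a morning.
incident = AIIncident(
    clients_affected=4_200,
    duration_minutes=190,
    critical_function_impacted=True,
    data_integrity_impact=True,
    estimated_cost_eur=250_000.0,
)
print(is_major_incident(incident))  # True: triggers the reporting process
```

Encoding the criteria this way forces the entity to record the same fields (clients affected, duration, integrity impact, cost) that the three-stage reports will later demand.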
Digital Operational Resilience Testing for AI (Chapter IV)
DORA resilience testing must include AI-specific test scenarios beyond standard IT availability testing. Behavioral drift testing: verify detection capabilities can identify when AI model behavior deviates from established baselines, including drift from silent model provider updates. Model API failover: test that AI-dependent critical functions can continue when the model API is unavailable, with documented RTOs and tested fallback procedures. Adversarial input testing: test AI agent resilience to prompt injection, context poisoning via RAG retrieval, and malicious tool call manipulation. Concentration risk scenario: simulate loss of the primary AI model provider to test exit strategy execution. Significant financial institutions subject to threat-led penetration testing (TLPT) must demonstrate AI provider concentration risk management.
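The model API failover scenario can be exercised with a small test harness like the following sketch. The provider calls, the 5-second RTO, and all names are assumptions for the example; a real test would target the entity's documented RTO and actual fallback procedure.

```python
import time

class ProviderUnavailable(Exception):
    """Raised when the primary model API cannot be reached."""

def call_primary_model(prompt: str) -> str:
    # The test harness forces the outage condition under test.
    raise ProviderUnavailable("simulated primary model API outage")

def call_fallback(prompt: str) -> str:
    # Fallback could be a second provider or a rules-based procedure.
    return f"fallback-decision({prompt})"

def resilient_decision(prompt: str, rto_seconds: float = 5.0) -> tuple[str, bool]:
    """Return the decision and whether recovery completed within the RTO."""
    start = time.monotonic()
    try:
        result = call_primary_model(prompt)
    except ProviderUnavailable:
        result = call_fallback(prompt)
    elapsed = time.monotonic() - start
    return result, elapsed <= rto_seconds

decision, within_rto = resilient_decision("assess application #1042")
print(decision, within_rto)
```

The test passes only if the critical function still produces a decision and does so inside the documented RTO — both conditions DORA's backup-and-recovery expectations point at.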
DORA vs EU AI Act for Financial Services AI
DORA and the EU AI Act apply simultaneously to financial services AI. DORA is an operational resilience law — ICT risk management, incident reporting, third-party dependencies, business continuity. The EU AI Act is an AI-specific law — risk classification, technical documentation, transparency, human oversight, ongoing monitoring. A bank deploying AI in credit decisioning faces both: DORA for ICT risk management of the AI system as a technology component, and the EU AI Act for the AI system as a high-risk AI system affecting credit access. Compliance programs must coordinate: DORA ICT incident classification must map to EU AI Act Article 73 serious incident reporting, and DORA resilience testing must integrate with EU AI Act Article 9(5) testing requirements.
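The coordination between the two regimes can be made concrete as a small decision helper. This is a deliberately simplified sketch, not legal advice: the triggering conditions are coarse assumptions, and a real compliance program would encode the actual classification criteria from both regulations.

```python
def reporting_obligations(dora_major: bool,
                          high_risk_ai_system: bool,
                          harm_to_persons_or_rights: bool) -> set[str]:
    """Map one AI incident to the reporting regimes it triggers (simplified)."""
    obligations = set()
    if dora_major:
        # DORA Chapter III: major ICT incident reported to the NCA.
        obligations.add("DORA Chapter III report to NCA")
    if high_risk_ai_system and harm_to_persons_or_rights:
        # EU AI Act Article 73: serious incident reporting (simplified trigger).
        obligations.add("EU AI Act Article 73 serious incident report")
    return obligations

# A drifted credit model that wrongly declines applicants can hit both regimes.
print(reporting_obligations(dora_major=True,
                            high_risk_ai_system=True,
                            harm_to_persons_or_rights=True))
```

Running the same incident facts through one mapping function is what keeps the two reporting pipelines coordinated rather than parallel and inconsistent.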