AI Incident Response Plan: What Regulators Require (EU AI Act, HIPAA, SOC 2)
EU AI Act Article 73 requires serious incident reports within 15 days of awareness, shortened to 10 days when a death is involved and 2 days for widespread infringement or serious disruption of critical infrastructure. HIPAA breach notification is due within 60 days of discovery. SOC 2 CC7.2 requires documented event classification and response procedures. A standard IT incident response plan satisfies none of these: each requires AI-specific definitions, evidence standards, and notification targets. This guide provides a five-phase AI IRP template with the specific requirements for each framework.
Why AI Incidents Require a Separate IRP
Standard IT incident response covers availability failures, data breaches, and unauthorized access. AI-specific incidents include behavioral drift causing systematic incorrect decisions, PHI exposure through model outputs or RAG retrieval, unauthorized tool scope expansion, and multi-agent cascades. None of these are captured by infrastructure alerting, and none map cleanly to IT breach definitions. An AI-specific IRP addendum must define incident categories, detection sources, classification procedures, and the regulatory notification obligation triggered by each category.
EU AI Act Article 73: Serious Incident Reporting
Article 73 requires providers of high-risk AI systems to report serious incidents to the market surveillance authorities of the Member States where the incident occurred. A serious incident is an incident or malfunction that directly or indirectly leads, or could lead, to the death of a person or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of Union law obligations intended to protect fundamental rights, or serious harm to property or the environment. The baseline reporting deadline is 15 days from awareness; a death shortens it to 10 days, and widespread infringement or serious disruption of critical infrastructure shortens it to 2 days. Intermediate reports are required every 30 days during an ongoing investigation. Providers must be able to produce incident documentation with root cause analysis and corrective measures, which requires decision audit trails sufficient to reconstruct what the AI system did during the incident window.
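The tiered deadlines can be sketched as a small lookup table. The category keys and the helper function below are our own illustrative labels, not terminology from the Act:

```python
from datetime import date, timedelta

# Deadline tiers under Article 73, in days from awareness.
# Category keys are illustrative labels, not terms from the Act.
ART73_DEADLINE_DAYS = {
    "death": 10,                   # Art. 73(4)
    "critical_infrastructure": 2,  # Art. 73(3); also widespread infringement
    "other_serious": 15,           # Art. 73(2) baseline
}

def art73_report_due(awareness: date, category: str) -> date:
    """Latest date an Article 73 report may be filed for this category."""
    return awareness + timedelta(days=ART73_DEADLINE_DAYS[category])

print(art73_report_due(date(2025, 3, 1), "death"))  # 2025-03-11
```

In practice the report should be filed immediately once the causal link is established; the lookup only computes the outer bound.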
HIPAA Breach Notification for AI Systems
The HIPAA Breach Notification Rule (45 CFR §§164.400-414) requires notifying affected individuals within 60 days of discovering a breach of unsecured PHI. AI-specific PHI breach scenarios include cross-patient data leakage via RAG retrieval, training data memorization producing verbatim PHI, unauthorized transmission of PHI to external APIs by AI agents, and insecure AI decision logs containing PHI without proper access controls. The 60-day clock starts at discovery. Breaches affecting 500 or more residents of a state or jurisdiction also require notifying prominent media outlets serving that area, and breaches of 500 or more individuals must be reported to HHS within the same 60-day window rather than through the annual log.
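The routing logic can be sketched as follows. This is a simplified reading of the rule for illustration; the target labels and the annual-log shortcut are our own simplifications, not regulatory terms:

```python
from datetime import date, timedelta

def hipaa_notification_targets(affected: int, discovery: date) -> list:
    """Sketch: who must be notified of a PHI breach and by when.
    Simplified reading of 45 CFR 164.400-414; labels are illustrative."""
    deadline = discovery + timedelta(days=60)
    targets = [("affected_individuals", deadline)]
    if affected >= 500:
        targets.append(("hhs", deadline))    # contemporaneous report, not annual log
        targets.append(("media", deadline))  # prominent outlets serving the state
    else:
        # Smaller breaches go in the annual log, due within 60 days of year end
        targets.append(("hhs_annual_log", None))
    return targets
```

A real implementation would also track whether law enforcement has requested a delay, which tolls the clock.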
SOC 2 CC7.2: Anomaly Detection and Response
SOC 2 Trust Service Criteria CC7.2 requires entities to evaluate security events to determine whether they qualify as incidents and to document that determination. For AI systems, this requires defining behavioral anomalies as security events — output distribution shifts, unexpected tool calls, unauthorized data access — logging all AI decisions with replay fidelity, and maintaining documented runbooks for AI-specific incident categories. SOC 2 auditors will request evidence that anomaly detection is operational (dashboards with thresholds) and that event classification decisions are documented (not just the incidents, but the non-incidents too).
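One concrete way to operationalize "output distribution shift" as a CC7.2 security event is a Population Stability Index check over categorical model outputs. The 0.2 alert threshold below is a common rule of thumb for significant drift, not an audit requirement, and the decision categories are illustrative:

```python
import math

def psi(baseline: dict, current: dict, eps: float = 1e-6) -> float:
    """Population Stability Index between two categorical output distributions."""
    cats = set(baseline) | set(current)
    b_total = sum(baseline.values())
    c_total = sum(current.values())
    score = 0.0
    for cat in cats:
        b = max(baseline.get(cat, 0) / b_total, eps)  # floor avoids log(0)
        c = max(current.get(cat, 0) / c_total, eps)
        score += (c - b) * math.log(c / b)
    return score

# Baseline decision mix vs. today's: a large shift toward "deny"
baseline = {"approve": 800, "deny": 180, "escalate": 20}
today = {"approve": 550, "deny": 400, "escalate": 50}
alert = psi(baseline, today) > 0.2  # 0.2 is a common drift rule of thumb
```

Logging the PSI value and the alert decision for every evaluation window, including the windows that did not cross the threshold, is exactly the "non-incident" classification evidence auditors ask for.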
Five-Phase AI Incident Response Template
Phase 1: Detect — define behavioral baselines, configure anomaly alerts, route AI alerts to the on-call rotation, maintain immutable decision logs.
Phase 2: Classify — apply the incident taxonomy within 4 hours, determine which regulatory frameworks apply, document classification reasoning, start the notification clock.
Phase 3: Contain — freeze the model version, preserve an audit trail snapshot before any remediation, assess decision scope during the affected period.
Phase 4: Notify — file the Article 73 report with the market surveillance authority, HIPAA breach notification to HHS, Colorado SB 205 consumer notifications, and customer notifications per SOC 2 CC7.2 obligations.
Phase 5: Recover — remediate the root cause, review all affected decisions, conduct a post-incident review within 30 days, update behavioral baselines.
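Phase 2 (classify and start the notification clock) can be sketched as a taxonomy lookup. The category names and category-to-framework mappings below are illustrative assumptions for a sketch, not regulatory text — a real taxonomy needs legal review:

```python
from datetime import date, timedelta

# Illustrative mapping: incident category -> frameworks triggered and the
# notification deadline in days (None = internal documentation only).
CATEGORY_FRAMEWORKS = {
    "behavioral_drift": {"SOC 2 CC7.2": None},
    "phi_exposure": {"HIPAA": 60, "SOC 2 CC7.2": None},
    "physical_harm": {"EU AI Act Art. 73": 15, "SOC 2 CC7.2": None},
    "tool_scope_expansion": {"SOC 2 CC7.2": None},
}

def start_notification_clocks(category: str, classified_on: date) -> dict:
    """Map an incident category to the frameworks it triggers and the
    latest notification date for each (None = no external deadline)."""
    clocks = {}
    for framework, days in CATEGORY_FRAMEWORKS[category].items():
        clocks[framework] = classified_on + timedelta(days=days) if days else None
    return clocks
```

Recording the classification output alongside the reasoning satisfies both the "document classification reasoning" step and the CC7.2 determination evidence.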
Evidence Requirements for AI Incident Investigation
Every AI incident framework requires that you can reconstruct what the AI system decided and why. For Article 73: a description of the incident, its potential cause, and corrective measures — all requiring decision-level logs. For HIPAA: scoping the affected individuals requires identifying every decision that accessed PHI during the incident window. For SOC 2: classification documentation requires showing what anomaly was detected and why it was or was not elevated to incident status. These evidence requirements cannot be satisfied by standard application logs — they require immutable, tamper-evident, decision-level audit trails captured at the time of each decision.
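A hash-chained append-only log is one way to get the tamper-evident, decision-level trail described above. This is a minimal sketch, not a production design — it omits signing, trusted time-stamping, and durable storage:

```python
import hashlib
import json

class DecisionLog:
    """Append-only, hash-chained decision log: each entry's hash covers
    the previous hash, so any in-place edit breaks verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def append(self, decision: dict) -> str:
        payload = json.dumps(decision, sort_keys=True)
        h = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.entries.append({"decision": decision, "prev": self._prev, "hash": h})
        self._prev = h
        return h

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered or reordered."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["decision"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

During Phase 3 containment, snapshotting the chain head hash before remediation gives investigators a fixed point to verify the incident-window evidence against.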