NYDFS 23 NYCRR 500 Cybersecurity Regulation for AI/ML Systems: What DFS-Licensed Institutions Must Do
The New York Department of Financial Services (NYDFS) cybersecurity regulation, 23 NYCRR 500, applies to all DFS-licensed entities operating in New York: banks, insurers, money transmitters, mortgage servicers, and licensed lenders. The November 2023 amendments created tiered obligations: Class A entities (at least $20M in gross annual revenue from New York operations, plus either more than 2,000 employees or more than $1B in gross annual revenue, counting affiliates) face enhanced requirements, including independent cybersecurity audits. AI and ML systems used in financial operations are within scope as "information systems" containing "nonpublic information." Key AI-relevant obligations: a risk assessment before AI deployment, five-year retention of records sufficient to reconstruct material financial transactions (including AI-driven decisions involving nonpublic information), 72-hour notification of reportable cybersecurity events affecting AI systems, and annual penetration testing of AI infrastructure. DFS enforcement has produced fines exceeding $100M in individual actions.
Which Entities Are Covered and What Is a "Class A" Institution
23 NYCRR 500 applies to any person operating under a license, registration, charter, certificate, or similar authorization under the New York Banking Law, Insurance Law, or Financial Services Law, covering state-chartered banks, insurance companies, money transmitters, licensed lenders, mortgage loan servicers, and check cashers. Class A entities under the 2023 amendments are covered entities with at least $20 million in gross annual revenue from New York operations that also have either more than 2,000 employees or more than $1 billion in gross annual revenue, in each case averaged over the last two fiscal years and counting affiliates. Class A entities face enhanced requirements beyond the baseline, including independent audits of the cybersecurity program, privileged access management controls, and endpoint detection and response with centralized logging. AI-intensive financial institutions should determine their Class A status at the outset; the thresholds are readily met by mid-size banks and regional insurers.
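The threshold arithmetic above can be encoded in a few lines as a first-pass check. This is an illustrative sketch, not a compliance determination: the function name and inputs are assumptions, and a real analysis must apply the regulation's two-fiscal-year averaging and affiliate-counting rules to audited figures.

```python
# Sketch of a Class A status check under the 2023 amendments to 23 NYCRR 500.
# Thresholds reflect the amended definition: at least $20M gross annual revenue
# from New York operations, plus either more than 2,000 employees or more than
# $1B gross annual revenue (both averaged over two fiscal years, incl. affiliates).
def is_class_a(ny_revenue: float, avg_employees: int, avg_gross_revenue: float) -> bool:
    """Return True if the entity appears to meet the Class A definition."""
    if ny_revenue < 20_000_000:          # NY revenue floor must be met first
        return False
    return avg_employees > 2_000 or avg_gross_revenue > 1_000_000_000
```

A mid-size regional bank with $25M of New York revenue and 2,500 employees would qualify even with well under $1B of total revenue, which is why the document urges checking status at the outset.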
AI Systems as "Information Systems" and Nonpublic Information in AI Pipelines
23 NYCRR 500.1 defines an "information system" as any electronic system used to access, transmit, store, or process information, and "nonpublic information" (NPI) as covering business information whose unauthorized disclosure would cause material adverse impact, individually identifiable financial information, and individually identifiable health information. AI and ML systems processing customer financial data, credit histories, insurance claims, transaction records, or health information for underwriting are therefore information systems containing NPI. This means: AI training datasets containing customer financial or health records must meet 23 NYCRR 500 data security requirements; inference pipelines that process NPI through AI models must have audit controls; and AI model outputs that include or reference NPI are subject to access control, encryption, and retention requirements. DFS industry guidance has indicated that AI and ML systems are squarely within the regulation's scope as information systems.
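One practical consequence is that a training pipeline needs to know which records carry NPI before controls can attach. The sketch below is illustrative only: field names like `ssn` are assumptions, not a DFS-defined NPI schema, and a real classifier would be driven by the entity's data inventory.

```python
# Illustrative partition of training records into NPI-bearing and non-NPI sets,
# so that 23 NYCRR 500 controls (encryption, access limits, retention) can be
# applied before data reaches a model. Field names are hypothetical examples.
NPI_FIELDS = {"ssn", "account_number", "credit_score", "health_condition"}

def partition_records(records):
    """Split dict records by whether any NPI field is present."""
    npi, clean = [], []
    for rec in records:
        # set intersection with the record's keys flags NPI presence
        (npi if NPI_FIELDS & rec.keys() else clean).append(rec)
    return npi, clean
```

Routing the NPI-bearing set through an encrypted, access-controlled store while the clean set flows freely is one way to keep the blast radius of the regulation's data security requirements manageable.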
5-Year Audit Trail Requirement for AI Financial Decisions
23 NYCRR 500.06 imposes two related obligations: covered entities must maintain systems designed to reconstruct material financial transactions sufficient to support normal operations, with those records retained for at least five years, and must maintain audit trails designed to detect and respond to cybersecurity events likely to materially harm normal operations, with those records retained for at least three years. For AI systems making or contributing to financial decisions, a defensible audit trail should: record every AI inference that accesses or produces NPI; capture the model version, input data categories, output, timestamp, and downstream action taken; meet the five-year retention floor wherever an inference feeds a material financial transaction; and be tamper-evident or tamper-resistant. DFS examiners specifically review whether AI audit trails are sufficient to reconstruct automated credit decisions, insurance underwriting outputs, and fraud detection actions. AI teams should design audit logging for five-year retention from day one; retrofitting retention is expensive and creates gaps during exam cycles.
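One common way to achieve tamper evidence is hash chaining, where each log entry commits to the hash of the previous one so that after-the-fact edits break the chain. The sketch below is an illustrative in-memory design, not a DFS-prescribed format; a production system would persist entries to write-once storage and anchor the chain externally.

```python
import hashlib
import json
import time

class AIAuditTrail:
    """Hash-chained audit log sketch for AI inferences touching NPI.

    Each entry records the fields reconstruction needs (model version,
    input categories, output, timestamp, downstream action) and chains
    a SHA-256 hash over the previous entry.
    """

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def log(self, model_version, input_categories, output, action):
        """Append one inference record and return its chain hash."""
        entry = {
            "model_version": model_version,
            "input_categories": input_categories,
            "output": output,
            "action": action,
            "timestamp": time.time(),
            "prev_hash": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)
        return digest

    def verify(self):
        """Recompute the chain; any edited entry breaks a hash link."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Logging the model version alongside the output is what lets an examiner reconstruct which model, on which inputs, produced a given credit or underwriting decision years later.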
72-Hour Cybersecurity Incident Notification for AI Events
23 NYCRR 500.17(a) requires covered entities to notify DFS as promptly as possible, and in no event later than 72 hours, after determining that a reportable cybersecurity event has occurred: one requiring notice to another government or supervisory body, one with a reasonable likelihood of materially harming a material part of normal operations, or a ransomware deployment within a material part of the information system. AI-specific events that likely meet this bar: unauthorized access to AI model infrastructure or training data; model poisoning attacks that corrupt AI financial decision outputs; adversarial attacks on fraud detection AI causing measurable financial loss; and third-party AI model provider breaches that expose NPI transmitted via API. The 72-hour clock starts at the determination, so covered entities should establish a defined materiality determination process for AI events that can operate on that timeline. DFS also requires a written incident response plan under 500.16, and that plan should explicitly cover AI systems.
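The deadline tracking portion of that process can be reduced to a small triage helper. The event-type labels below are illustrative assumptions, not DFS terminology, and whether a real event is reportable is a legal judgment that no lookup table replaces.

```python
from datetime import datetime, timedelta

# Hypothetical labels for AI events this document treats as presumptively
# reportable under 500.17(a); a real program would map events to the
# regulation's actual triggers with counsel involved.
REPORTABLE_AI_EVENTS = {
    "model_infra_unauthorized_access",
    "training_data_breach",
    "model_poisoning_affecting_decisions",
    "adversarial_attack_with_financial_loss",
    "third_party_model_provider_breach",
}

def dfs_notice_deadline(event_type: str, determined_at: datetime):
    """Return the 72-hour DFS notification deadline, or None if the
    event type is not presumptively reportable."""
    if event_type not in REPORTABLE_AI_EVENTS:
        return None
    return determined_at + timedelta(hours=72)
```

Because the clock runs from the determination rather than the intrusion, stamping `determined_at` the moment triage concludes is what keeps the 72-hour window defensible.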
Class A Enhanced Requirements: Independent Audit and Board Reporting
Class A entities must have their cybersecurity programs audited annually by independent auditors who are free to make their own risk-based decisions. Separately, the CISO of every covered entity must report in writing to the board of directors or equivalent senior governing body at least annually on material cybersecurity risks and the overall status of the cybersecurity program; AI risks belong in that report, and many Class A boards take updates quarterly as a matter of practice. For AI, the independent audit scope must include AI/ML systems; DFS examiners have reviewed AI model risk management frameworks as part of cybersecurity examinations. Board reports should address AI-specific risk metrics: anomaly detection results, model performance monitoring, third-party AI vendor risk, and any AI incidents. The annual certification of compliance (500.17(b)), signed by the highest-ranking executive and the CISO, certifies material compliance with the regulation; Class A CISOs must be prepared to defend the adequacy of the AI cybersecurity program.
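The AI metrics a board report carries can be gathered in a simple structure so each reporting cycle is consistent. The fields below are assumptions about what such a report might contain, not DFS-prescribed content.

```python
from dataclasses import dataclass, field

# Illustrative container for the AI-specific metrics named above; field
# names are hypothetical, not drawn from the regulation.
@dataclass
class AIBoardReport:
    period: str
    anomaly_detections: int = 0
    models_monitored: int = 0
    models_with_degraded_performance: int = 0
    third_party_ai_vendors_reviewed: int = 0
    ai_incidents: list = field(default_factory=list)

    def summary(self) -> str:
        """One-line roll-up suitable for a board deck headline."""
        return (f"{self.period}: {self.anomaly_detections} anomalies, "
                f"{self.models_with_degraded_performance}/{self.models_monitored} "
                f"models degraded, {len(self.ai_incidents)} AI incidents")
```

Keeping the same fields period over period gives the board a trendline, which is easier to defend in an examination than ad hoc narrative updates.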