SEC Cybersecurity Disclosure Rules and AI Systems: 8-K Material Incidents, 10-K Risk Management, and Board Oversight
SEC Cybersecurity Disclosure Rules (17 CFR §§ 229.106, 240.13a-1, 240.13a-11), effective December 2023, require public companies to disclose material cybersecurity incidents on Form 8-K within four business days of a materiality determination and to describe their cybersecurity risk management program annually in Form 10-K. AI systems introduce new material cybersecurity risk categories: model poisoning and adversarial attacks; training data breaches containing customer PII; third-party AI model provider dependency risk; and AI used in financial reporting (internal controls over financial reporting implications). In October 2023 the SEC charged SolarWinds and its CISO personally over misleading cybersecurity disclosures, establishing a personal-liability precedent for security leaders who certify inaccurate security disclosures. AI security leaders at public companies must ensure their 10-K disclosures accurately describe known AI security risks and practices.
Form 8-K Item 1.05: Material AI Cybersecurity Incident Disclosure Within 4 Business Days
Under Exchange Act Rule 13a-11 and Form 8-K Item 1.05, companies must disclose within four business days after determining a cybersecurity incident is material. The four-day clock starts at the materiality determination, not at discovery, and the determination itself must be made without unreasonable delay after discovery. AI incidents that may be material include: unauthorized access to AI training data repositories containing customer PII; model poisoning attacks that corrupt AI financial outputs; adversarial attacks causing AI fraud detection to fail at scale; third-party AI vendor breaches exposing customer data through API integrations; and LLM prompt injection enabling customer data exfiltration. Materiality analysis must consider financial impact, regulatory consequences, reputational harm, and whether a reasonable investor would consider the information important. Companies need a defined materiality determination process for AI incidents: who decides, what criteria apply, and what evidence is required.
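To make the timing concrete, here is a minimal Python sketch of a materiality determination record and the Item 1.05 deadline calculation. The field names, criteria, and the weekend-only business-day logic (no market-holiday calendar) are illustrative assumptions, not anything the rule prescribes.

```python
# Minimal sketch: materiality determination record plus the four-business-day
# Item 1.05 deadline. Skips weekends only; a real calendar would also need
# federal holidays. All field names here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date, timedelta

def filing_deadline(determination_date: date, business_days: int = 4) -> date:
    """Count forward the given number of business days, skipping weekends."""
    d = determination_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            remaining -= 1
    return d

@dataclass
class MaterialityDetermination:
    incident_id: str
    discovered: date
    determined_material: date  # the clock starts here, not at discovery
    decided_by: str            # e.g. the disclosure committee
    criteria: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)

    @property
    def form_8k_due(self) -> date:
        return filing_deadline(self.determined_material)

det = MaterialityDetermination(
    incident_id="AI-2024-017",
    discovered=date(2024, 3, 1),
    determined_material=date(2024, 3, 8),
    decided_by="Disclosure Committee",
    criteria=["customer PII in training data exposed", "regulatory exposure"],
    evidence=["forensic report", "data inventory reconciliation"],
)
print(det.form_8k_due)  # 2024-03-14: four business days after determination
```

Recording discovery and determination dates separately matters because the gap between them is exactly what "without unreasonable delay" scrutinizes.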
Form 10-K Item 1C: Annual AI Cybersecurity Risk Management Disclosure
Form 10-K Item 1C, implementing Regulation S-K Item 106, must describe: processes for assessing, identifying, and managing material risks from cybersecurity threats (including AI-specific risks); whether any such risks have materially affected or are reasonably likely to materially affect the registrant; whether third-party risk management processes cover AI vendors; and board and management oversight of cybersecurity risk. AI-intensive companies should address: third-party AI model provider dependency (OpenAI, Anthropic, AWS Bedrock, Google); AI model integrity risks (adversarial attacks, prompt injection, model poisoning); the training data security program; and AI systems used in financial reporting that create ICFR exposure. Generic boilerplate without AI-specific content is increasingly inadequate given how pervasive AI has become in public company operations.
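One way to keep Item 1C disclosures tethered to actual practice is an internal AI risk register keyed to the disclosure topics above. The sketch below assumes a simple category taxonomy and a `disclosed_in_10k` flag; both are hypothetical illustrations, not an SEC-defined structure.

```python
# Minimal sketch of an AI cybersecurity risk register mapped to Item 1C
# topics. Category names and example entries are illustrative assumptions.
from dataclasses import dataclass

ITEM_1C_CATEGORIES = {
    "third_party_dependency",   # reliance on external model providers
    "model_integrity",          # adversarial attack, prompt injection, poisoning
    "training_data_security",   # data handling and access controls
    "financial_reporting_ai",   # AI outputs feeding ICFR processes
}

@dataclass
class AIRisk:
    name: str
    category: str
    vendors: list[str]
    mitigations: list[str]
    disclosed_in_10k: bool  # flags gaps between known risk and disclosure

    def __post_init__(self):
        if self.category not in ITEM_1C_CATEGORIES:
            raise ValueError(f"unknown Item 1C category: {self.category}")

register = [
    AIRisk("Hosted LLM provider outage or breach", "third_party_dependency",
           ["OpenAI", "AWS Bedrock"],
           ["contractual audit rights", "failover model"],
           disclosed_in_10k=True),
    AIRisk("Prompt injection exfiltrating customer data", "model_integrity",
           [], ["input filtering", "output DLP scanning"],
           disclosed_in_10k=False),
]

# Surface risks the register knows about but the 10-K does not yet describe.
undisclosed = [r.name for r in register if not r.disclosed_in_10k]
print(undisclosed)
```

The `disclosed_in_10k` flag is the point of the exercise: any risk the register tracks but the filing omits is a candidate for the disclosure-gap problem the SolarWinds case illustrates.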
Board Oversight of AI Cybersecurity Risk
The 10-K must also describe the board's role in overseeing cybersecurity risks. For AI-intensive companies, board oversight of AI security should cover: the board committee with AI risk oversight responsibility (typically the Audit Committee); the frequency and depth of AI security briefings to the board; the management accountability structure (CISO, CTO, Chief AI Officer) for AI security; and the escalation process from AI security incidents to board-level awareness. Institutional investors and proxy advisors (ISS, Glass Lewis) increasingly scrutinize the quality of board cybersecurity oversight, including whether directors have relevant AI security expertise. Disclosures about board oversight must accurately reflect actual governance practices, not aspirational descriptions.
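A sketch of what a documented escalation path might look like in code, assuming a hypothetical four-level severity scale; the levels, recipients, and cumulative-notification rule are illustrative, not a prescribed governance model.

```python
# Minimal sketch of an incident-to-board escalation rule. The severity
# levels and recipient lists are illustrative assumptions.
SEVERITY_ESCALATION = {
    1: ["ai-security-oncall"],                    # routine finding
    2: ["ciso"],                                  # confirmed AI security event
    3: ["ciso", "cfo", "disclosure-committee"],   # potential materiality
    4: ["ciso", "cfo", "audit-committee-chair"],  # board-level awareness
}

def escalation_targets(severity: int) -> list[str]:
    """Everyone at or below the given severity is notified (cumulative)."""
    targets: list[str] = []
    for level in range(1, severity + 1):
        for recipient in SEVERITY_ESCALATION[level]:
            if recipient not in targets:
                targets.append(recipient)
    return targets

print(escalation_targets(4))  # severity 4 reaches the audit committee chair
```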
SolarWinds Enforcement: Personal CISO Liability and AI Security Disclosure
In October 2023, the SEC charged SolarWinds and its CISO, Timothy Brown, with fraud and internal control violations for allegedly misleading investors about the company's cybersecurity practices before the SUNBURST attack. The case established that: (1) public cybersecurity disclosures must match internal security assessments; (2) CISOs can be personally named in SEC enforcement actions for misleading disclosures; and (3) internal documents showing known security deficiencies, paired with public disclosures claiming robust security, create fraud exposure. For AI security leaders, the lesson is direct: 10-K language about AI security practices must accurately reflect what your organization actually does. Known AI security vulnerabilities that go undisclosed (failed model poisoning tests, known adversarial attack vectors, AI vendor risk exceeding disclosed risk management capability) may constitute misleading disclosure.
AI in Financial Reporting: ICFR and SOX Implications
If AI systems generate or materially contribute to financial reporting outputs (revenue recognition models, credit loss estimates, fraud risk scoring, expense classification) those AI systems are part of the internal controls over financial reporting (ICFR) framework. Model drift, adversarial manipulation, or unexpected behavior changes in financial reporting AI could affect the accuracy of reported financials, undermine the officer certifications required by SOX Sections 302 and 906, and trigger material weakness disclosure obligations under SOX Section 404. Teams responsible for financial reporting AI must: include these systems in ICFR scope; document model validation and change control procedures; maintain audit trails of model outputs; and escalate material AI model failures to the CFO and Audit Committee.
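As one illustration of the audit-trail requirement, the following sketch hash-chains each model output record so later tampering is detectable, and ties each output to a model version for change control. The class, field names, and example values are assumptions for demonstration, not a standard implementation.

```python
# Minimal sketch of an append-only audit trail for a financial-reporting
# model: each record is hash-chained to the previous one, so editing any
# stored entry breaks verification of every later entry.
import hashlib
import json
from datetime import datetime, timezone

class ModelAuditTrail:
    def __init__(self):
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def record(self, model_version: str, inputs_digest: str, output: float) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,  # ties output to change control
            "inputs_digest": inputs_digest,  # hash of inputs, not raw PII
            "output": output,
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self._entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited entry fails verification."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if prev != e["hash"]:
                return False
        return True

trail = ModelAuditTrail()
trail.record(
    "cecl-model-v3.2",
    hashlib.sha256(b"q1-inputs").hexdigest(),
    output=4_812_300.55,
)
print(trail.verify())  # True until any stored entry is altered
```

Hashing input digests rather than raw inputs keeps customer data out of the trail while still letting auditors confirm which inputs produced which output.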