Third-Party Risk Management for AI Model Providers: OCC 2023-17 and SR 13-19 Compliance
OCC Bulletin 2023-17 (June 2023) and Federal Reserve SR 13-19 require banks to manage AI model API providers — OpenAI, Anthropic, AWS Bedrock, Google Vertex AI — as covered third parties subject to formal due diligence, written contracts with specific provisions, ongoing monitoring, and documented exit strategies. Standard API terms of service do not satisfy these requirements. Most banks have not applied full TPRM rigor to AI model providers at the depth regulators expect, creating examination exposure.
Why AI Model Providers Are Covered Third Parties
OCC Bulletin 2023-17 defines third-party relationships as any business arrangement between a bank and another entity, regardless of whether a formal contract exists. AI model API providers are third parties when their services support bank operations. The criticality classification — critical versus non-critical activity — determines the rigor of required controls. Credit decisioning AI, fraud detection, AML monitoring, and customer-facing AI assistants generally qualify as critical activities. Internal productivity tools may qualify as non-critical. The classification must be documented and justified.
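The documented-and-justified classification can be captured as a register entry whose criticality flag drives the control rigor. A minimal sketch — the record shape, field names, and "ExampleAI" provider are illustrative assumptions, not terms from OCC 2023-17:

```python
from dataclasses import dataclass

@dataclass
class ThirdPartyEntry:
    """Hypothetical third-party register entry for an AI provider."""
    provider: str
    service: str
    critical: bool        # critical vs. non-critical activity
    justification: str    # written rationale for the classification

def required_rigor(entry: ThirdPartyEntry) -> str:
    # The criticality classification determines the depth of due
    # diligence, contract negotiation, and ongoing monitoring.
    return "enhanced" if entry.critical else "standard"

fraud_api = ThirdPartyEntry(
    provider="ExampleAI",
    service="fraud-detection API",
    critical=True,
    justification="Real-time fraud decisioning affecting customer "
                  "transactions is a critical activity.",
)
print(required_rigor(fraud_api))  # enhanced
```

An internal productivity tool would carry `critical=False` and a justification explaining why the activity is non-critical.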
Due Diligence Requirements for AI Providers
OCC 2023-17 requires due diligence before engaging a third party and on a periodic basis thereafter. For AI model providers, required areas include: financial condition and business viability of the provider and the specific API product line; information security program covering SOC 2 Type II scope for the services used; data handling policies for bank customer data submitted in API requests — training exclusions, retention, access controls; subcontractor relationships (cloud infrastructure, data pipeline vendors, content moderation providers); business continuity plans with tested RTOs; and regulatory and legal compliance posture including exposure to evolving AI regulation. Due diligence must be refreshed on a cycle proportional to criticality.
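A refresh cycle "proportional to criticality" can be encoded as a simple schedule over the diligence areas above. The specific intervals here are illustrative assumptions — OCC 2023-17 does not prescribe cycle lengths; those are a bank policy decision:

```python
from datetime import date, timedelta

# Assumed policy intervals (months); actual cycles are set by bank policy.
REFRESH_MONTHS = {"critical": 12, "non-critical": 24}

DILIGENCE_AREAS = [
    "financial condition and business viability",
    "information security (SOC 2 Type II scope for services used)",
    "data handling: training exclusions, retention, access controls",
    "subcontractor relationships",
    "business continuity plans with tested RTOs",
    "regulatory and legal compliance posture",
]

def next_refresh(last_review: date, criticality: str) -> date:
    # Approximates months as 30-day blocks for scheduling purposes.
    return last_review + timedelta(days=REFRESH_MONTHS[criticality] * 30)

print(next_refresh(date(2024, 1, 15), "critical"))
```

Each area would carry its own evidence artifact (e.g., the SOC 2 report, the tested BCP results) alongside the review date.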
Contract Requirements AI Providers Typically Do Not Meet
OCC 2023-17 specifies required contract provisions. Standard AI provider API terms are designed for developer adoption, not regulated financial institutions. Key gaps: performance standards limited to uptime SLAs without behavioral performance obligations; no audit rights (standard terms offer SOC 2 reports whose scope may not cover the bank's specific use); incident notification timelines that do not meet bank requirements; no subcontractor change notification provisions; no data portability or certified deletion obligations on termination; no tested BCP provisions for critical customers; and no termination-for-regulatory-requirement clause. Banks must negotiate bank-specific addenda for critical AI provider relationships.
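The gap list above can be operationalized as a provision-by-provision comparison of a provider's standard terms against the bank-required set, producing the addendum negotiation list. The provision labels below paraphrase the gaps just described and are illustrative, not contract language:

```python
# Provisions the bank requires for a critical AI provider relationship
# (labels paraphrase the gaps above; they are illustrative shorthand).
REQUIRED_PROVISIONS = {
    "behavioral performance standards beyond uptime",
    "audit rights covering the bank's specific use",
    "incident notification within bank timelines",
    "subcontractor change notification",
    "data portability and certified deletion on termination",
    "tested BCP provisions for critical customers",
    "termination for regulatory requirement",
}

def addendum_gaps(standard_terms: set[str]) -> set[str]:
    # Provisions absent from standard terms must be negotiated
    # into a bank-specific addendum.
    return REQUIRED_PROVISIONS - standard_terms

gaps = addendum_gaps({
    "uptime SLA",
    "audit rights covering the bank's specific use",
})
print(len(gaps))  # 6
```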
Ongoing Monitoring for AI Model Provider Behavioral Performance
Standard TPRM ongoing monitoring — annual SOC 2 review, financial assessment, periodic relationship review — does not capture AI-specific risks. Three monitoring capabilities are required for critical AI provider relationships: behavioral performance monitoring (tracking AI decision distributions and output quality against baselines, not just API uptime); model version change detection (mechanism to detect when the provider has updated the underlying model and trigger SR 11-7 change management); and concentration risk tracking (measuring proportion of critical AI functions dependent on each provider, with documented exit strategy for each concentration). Standard third-party performance management systems are not designed for these AI-specific monitoring requirements.
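The first two capabilities can be sketched concretely. For behavioral performance monitoring, one common drift statistic over binned decision distributions is the population stability index (PSI) — an assumption here, since neither OCC 2023-17 nor SR 11-7 prescribes a specific metric. Version change detection reduces to comparing the model identifier reported in API response metadata against the last reviewed version:

```python
import math

def psi(baseline: list[float], current: list[float]) -> float:
    """Population stability index between two binned decision
    distributions (bin proportions each summing to 1). Higher
    values indicate a larger shift from the baseline."""
    eps = 1e-6  # guards against log(0) on empty bins
    return sum(
        (c - b) * math.log((c + eps) / (b + eps))
        for b, c in zip(baseline, current)
    )

def version_changed(last_reviewed: str, reported: str) -> bool:
    # A changed model version string should trigger SR 11-7
    # change-management review of the updated model.
    return reported != last_reviewed

# Illustrative approve / review / decline shares for a credit model.
baseline = [0.70, 0.20, 0.10]
current = [0.55, 0.25, 0.20]
drift = psi(baseline, current)
print(drift > 0.1)   # True: flag at an illustrative 0.1 threshold
print(version_changed("model-v1.2", "model-v1.3"))  # True
```

The 0.1 alert threshold is a common rule of thumb, not a regulatory requirement; the bank's model risk function would set thresholds per model.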
Intersection of TPRM and SR 11-7 for AI
Third-party risk management and model risk management address the same AI system from different governance angles and must be coordinated. SR 11-7 governs the model itself — development documentation, validation, behavioral monitoring, change management. OCC 2023-17 governs the vendor relationship — due diligence, contracts, performance monitoring, exit. When the AI model is a third-party-provided foundation model, the SR 11-7 model inventory entry should cross-reference the TPRM third-party register entry. Behavioral anomalies detected in AI model monitoring should feed back to the TPRM relationship owner. Model provider updates detected by the SR 11-7 monitoring program should trigger TPRM change notification review. The two governance programs need operational linkage, not just parallel documentation.
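The operational linkage can be as simple as each record carrying the other program's identifier, so an anomaly detected in one program routes to the owner in the other. The record shapes, IDs, and owner address below are illustrative assumptions:

```python
# SR 11-7 model inventory entry cross-references the TPRM register ID.
MODEL_INVENTORY = {
    "MDL-042": {"name": "credit-decisioning foundation model",
                "tprm_id": "TP-017"},
}

# TPRM register entry cross-references the model inventory IDs.
TPRM_REGISTER = {
    "TP-017": {"provider": "ExampleAI",
               "model_ids": ["MDL-042"],
               "relationship_owner": "vendor-risk@bank.example"},
}

def route_anomaly(model_id: str) -> str:
    # A behavioral anomaly from SR 11-7 monitoring feeds back to the
    # TPRM relationship owner via the cross-reference.
    tprm_id = MODEL_INVENTORY[model_id]["tprm_id"]
    return TPRM_REGISTER[tprm_id]["relationship_owner"]

print(route_anomaly("MDL-042"))  # vendor-risk@bank.example
```

The reverse path works the same way: a provider model update surfaced through TPRM change notification looks up `model_ids` and opens an SR 11-7 change-management item for each affected model.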