FTC AI Enforcement: Section 5 UDAP and What AI Product Teams Must Document
The FTC enforces Section 5 of the FTC Act against unfair or deceptive AI claims and practices. Under Section 5's substantiation doctrine, objective performance claims must rest on a reasonable basis that exists before the claim is made. For AI systems, this covers accuracy claims, claims of bias mitigation or absence of bias, explainability representations, and human review claims. The FTC has enforcement authority over B2B AI vendors whose products harm downstream consumers, and it coordinates enforcement with the CFPB, EEOC, and DOJ. Documentation that satisfies EU AI Act Annex IV requirements substantially satisfies FTC substantiation requirements for the same AI system.
FTC Authority Over AI Products and Services
Section 5 of the FTC Act prohibits unfair or deceptive acts or practices in or affecting commerce. This authority applies to AI products without any AI-specific statute: the FTC treats AI performance claims the same way it treats claims about any other product. The FTC has two enforcement theories: deception (a material representation likely to mislead consumers acting reasonably) and unfairness (a practice causing substantial injury that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits). Both apply to AI. The FTC has published explicit AI guidance, issued civil investigative demands to AI companies, and coordinated with the CFPB, EEOC, and DOJ on AI enforcement in credit, hiring, and housing.
High-Risk AI Claims and Required Substantiation
FTC substantiation doctrine requires objective performance claims to have a reasonable basis that exists before the claim is made. Accuracy claims ('98% accurate') require testing on representative data with demographic breakdowns; benchmark performance on academic datasets does not substantiate claims about production AI systems in credit or healthcare. Bias-free claims are particularly high-risk: absolute claims of no bias are extremely difficult to substantiate, and the FTC interprets materiality broadly. Explainability claims ('fully explainable decisions') require the system to actually produce a per-decision explanation for each individual affected; aggregate SHAP scores or model-level summaries do not substantiate claims made to individual consumers who received AI decisions. Human review claims require documentation of actual review rates, reviewer qualifications, and override rates.
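To make the substantiation burden concrete, the following is a minimal Python sketch, assuming a production-representative holdout set is available, of how a team might compute per-group accuracy and freeze a dated record before the public claim is made. The field names ('group', 'y_true', 'y_pred') and the record layout are illustrative, not an FTC-prescribed format.

    # Sketch: pre-claim substantiation record with demographic accuracy breakdowns.
    # Assumes rows come from a holdout set that mirrors the production population;
    # field names and the output schema are illustrative, not a regulatory format.
    from collections import defaultdict
    from datetime import date

    def accuracy_by_group(rows):
        """rows: list of dicts with 'group', 'y_true', 'y_pred'."""
        hits, totals = defaultdict(int), defaultdict(int)
        for r in rows:
            totals[r["group"]] += 1
            hits[r["group"]] += int(r["y_true"] == r["y_pred"])
        return {g: hits[g] / totals[g] for g in totals}

    def substantiation_record(rows, claim_text):
        by_group = accuracy_by_group(rows)
        overall = sum(int(r["y_true"] == r["y_pred"]) for r in rows) / len(rows)
        return {
            "claim": claim_text,
            "evaluated_on": date.today().isoformat(),  # must predate the public claim
            "sample_size": len(rows),
            "overall_accuracy": round(overall, 4),
            "accuracy_by_group": {g: round(a, 4) for g, a in by_group.items()},
            "worst_group_accuracy": round(min(by_group.values()), 4),
            "data_source": "production-representative holdout, not an academic benchmark",
        }

    example_rows = [
        {"group": "A", "y_true": 1, "y_pred": 1},
        {"group": "A", "y_true": 0, "y_pred": 1},
        {"group": "B", "y_true": 1, "y_pred": 1},
        {"group": "B", "y_true": 0, "y_pred": 0},
    ]
    print(substantiation_record(example_rows, "98% accurate"))

The point of the worst_group_accuracy field is that an unqualified accuracy claim is only as defensible as its weakest demographic slice.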
FTC Enforcement Areas for AI Systems
The FTC has pursued AI enforcement and investigations in hiring assessments (a complaint filed with the FTC over HireVue's assessments alleged that claimed performance was not substantiated across demographic groups), financial services AI bias, and chatbot impersonation. For credit AI, the FTC coordinates with the CFPB on adverse action notices, disparate impact, and model transparency; a deceptive claim about credit AI explainability can trigger FTC and CFPB enforcement simultaneously. For healthcare AI, the FTC coordinates with the FDA on deceptive clinical accuracy claims. The joint 2023 FTC-DOJ-CFPB-EEOC statement explicitly extended existing civil rights and consumer protection frameworks to AI systems. Companies in all sectors face multi-agency enforcement risk for AI practices that touch credit, hiring, housing, or public accommodations.
Documentation Requirements for FTC Defense
The FTC's substantiation doctrine requires documentation to predate the claim. Post-hoc documentation assembled after an investigation begins has limited defensive value. Required documentation: pre-launch testing records showing performance on representative data with demographic breakdowns (methodology, sample size, results); ongoing performance monitoring records showing that continuing claims remain substantiated as the AI system operates in production; decision audit trails with inputs, outputs, and reasoning for each decision (to respond to FTC civil investigative demands and demonstrate harm scope); and limitations disclosure records showing that known limitations were communicated to customers. Documentation programs built for EU AI Act Annex IV compliance substantially satisfy FTC substantiation requirements.
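As an illustration of the audit-trail requirement, here is a minimal sketch assuming decisions are appended to a JSON Lines log; the schema, field names, and file path are hypothetical, chosen only to show the kinds of facts (inputs, output, per-decision explanation, human review status) a civil investigative demand is likely to probe.

    # Sketch: per-decision audit record appended to a JSON Lines log.
    # Schema, field names, and the "decisions.jsonl" path are illustrative only.
    import json
    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class DecisionAuditRecord:
        decision_id: str
        model_version: str
        inputs: dict                 # features the model actually received
        output: str                  # e.g. "approve", "deny", or a score
        explanation: dict            # per-decision reasons given to the consumer
        human_reviewed: bool = False
        reviewer_id: Optional[str] = None
        human_override: bool = False
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def log_decision(record: DecisionAuditRecord, path: str = "decisions.jsonl") -> None:
        """Append one JSON line per decision so scope-of-harm queries stay cheap."""
        with open(path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    log_decision(DecisionAuditRecord(
        decision_id="d-001",
        model_version="credit-scorer-2.3.1",
        inputs={"income": 52000, "dti": 0.31},
        output="deny",
        explanation={"top_reasons": ["debt-to-income ratio", "short credit history"]},
        human_reviewed=True,
        reviewer_id="analyst-17",
    ))

Aggregating the human_reviewed and human_override fields over time is also what substantiates, or undermines, a 'human in the loop' marketing claim.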
FTC Act vs. EU AI Act: Documentation Overlap and Gaps
FTC Section 5 and the EU AI Act share substantive requirements for high-stakes AI systems. Both require pre-deployment testing on representative data with demographic analysis. Both require documentation of known limitations. Both require ongoing performance monitoring after deployment. Key differences: FTC Section 5 is post-hoc enforcement triggered by specific deceptive acts or harmful practices, while the EU AI Act imposes pre-deployment compliance obligations with market-access implications. FTC enforcement is US-jurisdictional; the EU AI Act applies to AI systems affecting EU residents regardless of where the provider is located. State AI laws (Colorado SB 205, emerging state UDAP AI amendments) add disclosure and human review requirements on top of FTC Section 5. Companies building to EU AI Act Annex IV documentation standards substantially satisfy FTC substantiation requirements as a byproduct; a rough crosswalk is sketched below.
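One way to operationalize the overlap argument is a simple crosswalk from Annex IV documentation items (paraphrased here, not the official Annex IV text) to the FTC substantiation artifacts discussed above. The mapping below is a sketch of this article's argument, not regulatory guidance, and the labels are illustrative.

    # Sketch: crosswalk from paraphrased EU AI Act Annex IV items to FTC
    # substantiation uses. Labels are paraphrases, not official Annex IV text.
    ANNEX_IV_TO_FTC = {
        "general description of the AI system": "defines the scope of the claims being made",
        "development process and training data description": "shows testing data is representative",
        "performance metrics and known limitations": "pre-claim reasonable basis and limitations disclosure",
        "human oversight measures": "substantiates human review claims",
        "post-market monitoring plan": "shows continuing claims remain substantiated",
    }

    def gap_check(prepared: set) -> list:
        """Return Annex IV items (paraphrased) with no prepared documentation."""
        return [item for item in ANNEX_IV_TO_FTC if item not in prepared]

    print(gap_check({"general description of the AI system",
                     "post-market monitoring plan"}))

A team that can answer every key on the left already holds most of the evidence a Section 5 substantiation inquiry would request.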