EU AI Act GPAI Model Compliance: Chapter V Obligations, Systemic Risk, and the Code of Practice
EU AI Act Chapter V (Articles 51–56) imposes specific obligations on providers of general-purpose AI (GPAI) models: AI trained at scale that can perform a wide range of tasks and be integrated into diverse downstream applications. All GPAI providers must maintain technical documentation per Annex XI, provide downstream providers with transparency information, implement a policy for compliance with EU copyright law, and publish a training data summary per the AI Office template. GPAI models with systemic risk (cumulative training compute exceeding 10²⁵ FLOPs, or AI Office designation) face additional obligations: model evaluation including adversarial testing before market release, Union-level systemic risk assessment and mitigation, serious incident reporting to the AI Office, and cybersecurity protection, with energy consumption covered in Annex XI documentation. The AI Office facilitated a GPAI Code of Practice operationalising these requirements; under Article 56, adherence gives providers a recognised means of demonstrating compliance. Open-source GPAI models have a partial exemption: they must still meet the copyright and training data summary obligations, and systemic-risk models must meet Article 55 regardless of open or closed weights. GPAI rules applied from 2 August 2025, 12 months ahead of the Act's general application. Fines for GPAI violations reach €15M or 3% of global annual turnover, whichever is higher.
What Is a GPAI Model Under the EU AI Act?
Article 3(63) defines a general-purpose AI model as an AI model trained with a large amount of data using self-supervision at scale, that displays significant generality, is capable of competently performing a wide range of distinct tasks regardless of how the model is placed on the market, and can be integrated into a variety of downstream systems or applications. This covers foundation models — large language models, multimodal models, and embedding models used across diverse applications — but not purpose-trained narrow AI: a model trained solely for fraud detection is not a GPAI model. The definition captures the top of the AI stack: GPT-4, Claude 3, Gemini 1.5, Llama 3, Mistral, Qwen, and similar models released as open weights or offered through an API are GPAI models. Embedding models (text-embedding-ada-002, etc.) and image generation models (DALL-E, Stable Diffusion) are also in scope.
Article 53 Baseline Obligations: All GPAI Providers
Article 53(1) applies to all GPAI model providers regardless of systemic risk designation. It imposes four baseline obligations:
(a) Technical documentation — maintain documentation per Annex XI (training methodology, compute used, data sources, evaluation procedures, capabilities and limitations, known risks) and keep it up to date throughout the model lifecycle.
(b) Downstream transparency — provide downstream providers with the information and documentation they need to understand the model and fulfill their own compliance obligations, including capability summaries, known limitations, and training data information relevant to copyright.
(c) Copyright compliance policy — implement a policy to comply with EU copyright law, including the Article 4 text and data mining (TDM) opt-out mechanism under Directive (EU) 2019/790; rights reservations expressed by content creators through the TDM opt-out (machine-readable rights-reserved metadata) must be identified and respected.
(d) Training data summary — publish a sufficiently detailed summary of the content used for training, following the template published by the AI Office, and make it available on a dedicated website or public register.
Because the training data summary is publicly accessible, it creates reputational accountability for training data sourcing.
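The Annex XI documentation topics named above can be tracked as a structured record. A minimal sketch follows; the field names and the `AnnexXIRecord` class are this sketch's own illustration, not the official Annex XI structure or any template published by the AI Office.

```python
from dataclasses import dataclass, asdict

@dataclass
class AnnexXIRecord:
    """Illustrative container for the Annex XI documentation topics
    mentioned above. Field names are hypothetical, chosen for this
    sketch; they do not mirror the official Annex XI layout."""
    training_methodology: str
    training_compute_flops: float
    data_sources: list[str]
    evaluation_procedures: list[str]
    capabilities_and_limitations: str
    known_risks: list[str]
    revision: int = 1  # Art. 53(1)(a): documentation must be kept up to date

    def missing_fields(self) -> list[str]:
        # Flag any empty entry before documentation is relied on
        return [k for k, v in asdict(self).items() if v in ("", [], None)]
```

Keeping the record in code makes the "update throughout the lifecycle" duty auditable: each model revision bumps `revision`, and `missing_fields()` catches gaps before release.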
Article 55 Systemic Risk Obligations: Enhanced Tier
GPAI models with systemic risk face four additional obligations under Article 55(1):
(a) Model evaluation — perform model evaluations per standardised protocols and tools, including conducting and documenting adversarial testing designed to identify and mitigate systemic risks before placing the model on the market.
(b) Systemic risk assessment and mitigation — assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, placing on the market, or use of the model.
(c) Serious incident reporting — keep track of, document, and report to the AI Office (and, as appropriate, national competent authorities) serious incidents and possible corrective measures; reporting details such as incident taxonomy and timeframes are expected to be fleshed out through the Code of Practice and implementing measures.
(d) Cybersecurity — protect the model and its physical infrastructure against cybersecurity risks appropriate to the threats identified, including model extraction, data poisoning, adversarial attacks, and unauthorized access to training data.
Energy consumption enters through documentation rather than Article 55 itself: Annex XI requires providers to report the model's known or estimated energy consumption. The 10²⁵ FLOPs threshold creates a rebuttable presumption of systemic risk — if a model's cumulative training compute exceeds it, systemic risk is presumed unless the provider can demonstrate otherwise to the AI Office. The AI Office can also designate models as systemic risk based on capability evaluations, reach, or multimodal integration even if below the compute threshold.
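The presumption keys on cumulative training compute. A common rule of thumb for dense transformers (not the Act's own accounting method, just an estimation convention from the scaling-laws literature) puts training compute at roughly 6 × parameters × training tokens. A minimal sketch under that assumption:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # Art. 51(2): cumulative training FLOPs

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough 6*N*D estimate for a dense transformer.

    A rule of thumb only; actual cumulative compute accounting
    for regulatory purposes may differ.
    """
    return 6.0 * params * tokens

def presumed_systemic_risk(flops: float) -> bool:
    # Rebuttable presumption: above the threshold, systemic risk
    # is presumed unless the provider demonstrates otherwise
    return flops > SYSTEMIC_RISK_THRESHOLD

# A 70B-parameter model trained on 15T tokens: 6 * 7e10 * 1.5e13 = 6.3e24
print(presumed_systemic_risk(estimated_training_flops(70e9, 15e12)))   # False

# A 400B-parameter model on the same data: 3.6e25, above the threshold
print(presumed_systemic_risk(estimated_training_flops(400e9, 15e12)))  # True
```

The worked numbers show why current frontier-scale training runs cluster near the threshold: at ~10T-token datasets, the presumption engages somewhere in the low hundreds of billions of parameters.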
Open-Source GPAI: Partial Exemption Explained
Article 53(2) provides a partial exemption for GPAI models released under a free and open-source licence with their parameters, including weights, architecture, and usage information, made publicly available. Open-weight models (Llama 3, Mistral, Gemma, Qwen) are exempt from the Art. 53(1)(a) technical documentation obligation and the Art. 53(1)(b) downstream transparency requirement, on the theory that public availability of weights and architecture provides equivalent transparency. However, the exemption does NOT cover: Art. 53(1)(c) copyright compliance — open-source providers must still implement a TDM opt-out policy; Art. 53(1)(d) training data summary — still required regardless of weight availability; or any Art. 55 obligation — the exemption falls away entirely for models with systemic risk, so those obligations apply to all models above the threshold, open-weight included. The partial exemption reflects the theory that public weights enable downstream inspection. But it does not eliminate the copyright and transparency obligations, and it does not limit systemic risk obligations — a lab releasing open-weight GPT-4-scale models still faces full Art. 55 scrutiny.
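The carve-outs described above reduce to a small decision procedure. A hedged sketch, assuming the reading that the open-source exemption covers only Art. 53(1)(a)-(b) and does not apply to systemic-risk models; the obligation labels are this sketch's own shorthand:

```python
def applicable_obligations(open_weights: bool, systemic_risk: bool) -> set[str]:
    """Map a GPAI model's status to the obligations discussed above.

    Labels are shorthand for this illustration, not official citation text.
    """
    # Copyright policy and training data summary apply to every provider
    obligations = {"53(1)(c) copyright policy",
                   "53(1)(d) training data summary"}
    # The open-weight exemption covers only 53(1)(a)-(b), and it falls
    # away for models with systemic risk
    if not open_weights or systemic_risk:
        obligations |= {"53(1)(a) technical documentation",
                        "53(1)(b) downstream transparency"}
    if systemic_risk:
        obligations.add("55(1) systemic-risk obligations")
    return obligations
```

So an open-weight model below the compute threshold owes only the copyright policy and training data summary, while the same model above the threshold owes the full stack.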
Downstream Impact: What GPAI Integration Means for Your Compliance
Organizations integrating GPAI models into products or services are downstream providers under the Act, and their obligations depend on whether their application falls within the Annex III high-risk categories. GPAI providers must, under Art. 53(1)(b), give downstream providers the documentation they need for their own compliance; however, that information does not itself satisfy Annex III high-risk obligations, which downstream providers must fulfill independently for their high-risk applications. The practical implication: if you build a high-risk AI application (employment screening, credit scoring, healthcare decision support) on top of GPT-4 or Claude, you are the provider of that high-risk AI system. You need Article 13 instructions for use, Annex IV technical documentation, and a conformity assessment; the GPAI provider's Art. 53 compliance does not substitute for yours. Note also the transitional rule: GPAI models placed on the market before 2 August 2025 must be brought into compliance by 2 August 2027.