EU AI Act Article 13: Transparency Obligations for High-Risk AI Systems — What Providers and Deployers Must Disclose
EU AI Act Article 13 requires providers of high-risk AI systems to supply instructions for use containing 10 mandatory elements — provider identity, capabilities and limitations, performance metrics, input data specifications, changes affecting conformity assessment, human oversight measures, computational resource requirements, expected lifespan and maintenance, logging capabilities per Article 12, and installation/operation instructions. High-risk AI is defined by Article 6 together with Annex III: employment AI (hiring, performance evaluation, termination), credit scoring, insurance pricing, education assessment, law enforcement risk scoring, biometric identification, and more. Responsibility is asymmetric: providers create the instructions; deployers must implement the human oversight measures described in those instructions. Deployers that use a system outside its disclosed intended purpose risk being reclassified as providers under Article 25 and assuming the full set of provider obligations. Article 13 runs parallel to GDPR Article 22 — for AI systems covered by both, deployers must implement EU AI Act human oversight measures AND GDPR meaningful human review rights. Non-compliance fines under Article 99(4): up to €15 million or 3% of global annual turnover, whichever is higher, for providers and deployers alike; supplying incorrect or misleading information to authorities carries up to €7.5 million or 1%.
High-Risk AI Under Annex III: The 9 Use Case Areas That Trigger Article 13
EU AI Act Article 6 and Annex III define high-risk AI by use case. Annex III covers nine areas: (1) biometric identification — real-time and remote biometric ID, emotion recognition in workplaces/schools; (2) critical infrastructure management — electricity, water, traffic AI safety components; (3) education — AI determining access to educational institutions, student assessment AI, exam integrity monitoring; (4) employment — recruitment, CV screening, interview analysis AI, task allocation, performance monitoring, promotion and termination AI; (5) essential services — credit scoring, insurance risk and pricing AI, social benefit eligibility; (6) law enforcement — recidivism prediction AI, crime analytics, predictive policing; (7) border and migration — visa and asylum AI, risk assessment for migration; (8) administration of justice — AI researching facts and law, influencing court decisions; (9) democratic processes — AI influencing elections or referenda. AI embedded in safety-critical products regulated under other EU directives (medical devices, machinery, vehicles) is separately high-risk under Article 6(1). The Commission may add Annex III categories by delegated act.
The 10 Mandatory Elements of Article 13 Instructions for Use
Article 13(3) specifies that instructions must contain: (a) provider identity and contact details; (b)(i) the system's intended purpose, accuracy and performance metrics, known and foreseeable limitations, and failure circumstances; (b)(ii) performance metrics used to measure accuracy, robustness, and cybersecurity — including test datasets used and any known bias; (b)(iii) input data specifications — what data the system was tested on, and conditions under which inputs may fail to produce reliable outputs; (b)(iv) any changes to the system that affect its compliance with essential requirements (requiring updated instructions); (b)(v) human oversight measures — specific technical measures to facilitate deployer oversight including stop/override functions; (b)(vi) computational resources required and energy consumption metrics; (c) expected lifespan and maintenance — software update cadence and post-market monitoring; (d) logging capabilities per Article 12 — what is logged, at what detail, and for how long; (e) installation and operation instructions — including output interpretation guidance. Instructions must be supplied in an appropriate digital format or otherwise, and in a language that deployers can easily understand, as determined by the member state where the system is placed on the market.
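A practical way to keep a draft instructions-for-use document honest is a completeness check against the ten elements. The field names in this sketch are hypothetical — the Act prescribes content, not a schema — but the mapping to the Article 13(3) points follows the list above.

```python
# Hypothetical field names for the ten Article 13(3) elements listed above;
# the Act does not prescribe a schema, so these labels are illustrative.
REQUIRED_FIELDS = {
    "provider_identity",        # 13(3)(a)
    "intended_purpose",         # 13(3)(b)(i)
    "performance_metrics",      # 13(3)(b)(ii)
    "input_data_specs",         # 13(3)(b)(iii)
    "conformity_changes",       # 13(3)(b)(iv)
    "human_oversight",          # 13(3)(b)(v)
    "compute_requirements",     # 13(3)(b)(vi)
    "lifespan_maintenance",     # 13(3)(c)
    "logging_capabilities",     # 13(3)(d)
    "operation_instructions",   # 13(3)(e)
}

def missing_elements(instructions: dict) -> set[str]:
    """Return the mandatory elements absent from a draft instructions document."""
    return REQUIRED_FIELDS - instructions.keys()
```

A draft that supplies only provider identity and intended purpose would come back with eight gaps — including human oversight and logging, the two elements deployers depend on most.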
Provider vs. Deployer: The Asymmetric Responsibility Split
The EU AI Act creates an asymmetric compliance structure for high-risk AI. Providers bear documentation obligations: creating Article 13 instructions, Annex IV technical documentation, CE marking, and maintaining a post-market monitoring plan. Deployers bear implementation obligations: implementing the human oversight measures from the instructions, using the system only within the disclosed intended purpose, informing affected persons that AI is used in decisions affecting them, and reporting serious incidents to market surveillance authorities. If a deployer modifies a high-risk AI system in a way that substantially changes its intended purpose, performance, or risk profile, Article 25 converts the deployer into a provider — and they assume all provider obligations. Fine-tuning a GPAI model on proprietary data for a high-risk use case, adding a specialized classification head, or deploying in a substantially different context than documented are the most common Article 25 triggers.
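The Article 25 triggers named above can be captured as a hypothetical checklist. The trigger keys below are invented labels, and whether a change counts as a "substantial modification" is ultimately a legal question, not a boolean one.

```python
# Hypothetical checklist paraphrasing the Article 25 triggers named above.
# Keys are invented labels; a real determination needs legal analysis.
ARTICLE_25_TRIGGERS = {
    "fine_tuned_for_high_risk_use": "fine-tuning a GPAI model for a high-risk purpose",
    "added_classification_head": "adding a specialized classification head",
    "new_deployment_context": "deploying outside the documented intended purpose",
}

def provider_obligation_triggers(modifications: set[str]) -> list[str]:
    """List the Article 25 triggers present in a set of deployer modifications."""
    return [ARTICLE_25_TRIGGERS[m]
            for m in sorted(modifications & ARTICLE_25_TRIGGERS.keys())]
```

A deployer whose change log contains any of these entries should assume it has crossed into provider territory and owes the full Article 13 and Annex IV documentation stack.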
GDPR Article 22 Interaction: Two Parallel Compliance Tracks
EU AI Act Article 13 creates a B2B transparency obligation from providers to deployers — it does not directly grant rights to affected individuals. GDPR Article 22 restricts solely automated decisions with legal or significant effects on EU individuals, granting the right not to be subject to such decisions and the right to meaningful information about the logic involved. When a high-risk AI system makes decisions covered by both frameworks, two compliance tracks run in parallel. The EU AI Act track: provider supplies Article 13 instructions; deployer implements human oversight measures. The GDPR track: data controller (typically the deployer) provides meaningful information about AI logic; individuals have the right to human review. Key divergence: GDPR Art. 22 applies only to solely automated decisions; EU AI Act Article 13 applies to any Annex III use regardless of human involvement in the final decision. An AI loan scoring system with nominal human sign-off may avoid GDPR Art. 22 (if human review is substantive) but still requires full Article 13 compliance. For AI systems in scope of both, implementing the more demanding EU AI Act human oversight standard generally satisfies GDPR Art. 22 human review requirements as well.
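The divergence between the two regimes can be illustrated with a deliberately simplified routing function. It assumes Annex III scope and GDPR applicability have already been determined, and it omits the many edge cases both laws carry.

```python
def compliance_tracks(annex_iii_use: bool,
                      solely_automated: bool,
                      legal_or_significant_effect: bool) -> set[str]:
    """Simplified sketch of which regimes attach to one automated decision.
    Assumes EU personal data processing; not a substitute for legal scoping."""
    tracks = set()
    # Annex III use triggers the AI Act regardless of human involvement.
    if annex_iii_use:
        tracks.add("EU AI Act Art. 13/14 (human oversight)")
    # GDPR Art. 22 needs BOTH solely automated processing and a qualifying effect.
    if solely_automated and legal_or_significant_effect:
        tracks.add("GDPR Art. 22 (human review right)")
    return tracks
```

The loan-scoring example above falls out directly: with substantive human sign-off (`solely_automated=False`) only the AI Act track attaches, while a fully automated decision with legal effect attaches both.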
Article 13 Enforcement: Market Surveillance and Financial Penalties
Enforcement of high-risk AI obligations begins 2 August 2026. Market Surveillance Authorities (MSAs) in each EU member state will conduct inspections, review technical documentation, and examine whether Article 13 instructions have been supplied and implemented. Financial penalties under Article 99: non-compliance with Article 13 and the other operator obligations carries fines up to €15 million or 3% of global annual turnover (whichever is higher), a tier that covers providers and deployers alike; supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities (NCAs) carries fines up to €7.5 million or 1% of turnover. Non-EU providers whose high-risk AI is used within the EU must appoint an EU-based authorized representative to interact with market surveillance authorities. National enforcement capacity varies — Germany, France, and the Netherlands are expected to have the most active MSA programs in the initial years.
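The "whichever is higher" ceiling is simple arithmetic worth making explicit, since for large undertakings the percentage dominates the fixed cap. The turnover figure in the comment is a made-up example.

```python
def max_fine(fixed_cap_eur: float,
             pct_turnover: float,
             global_turnover_eur: float) -> float:
    """'Whichever is higher' penalty ceiling, as described in the text above."""
    return max(fixed_cap_eur, pct_turnover * global_turnover_eur)

# Hypothetical provider with €2 billion global turnover, €15M / 3% tier:
# max(15_000_000, 0.03 * 2_000_000_000) → a €60 million ceiling, four times
# the fixed cap. For a €100M-turnover firm the €15M fixed cap governs instead.
```

This is why the fixed caps mainly bite for smaller undertakings; for multinationals, the turnover percentage sets the real exposure.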