
EU AI Act implementation checklist for providers and deployers

By Impetora

Regulation (EU) 2024/1689, the EU AI Act, applies to providers, deployers, importers and distributors of AI systems placed on the EU market or whose output is used in the Union, regardless of where the operator is established [1]. Compliance is staged: the Act has been in force since 1 August 2024, prohibited practices apply from 2 February 2025, general-purpose AI obligations from 2 August 2025, and the bulk of high-risk obligations from 2 August 2026 [1]. This checklist sets out the twelve concrete steps every operator should complete before the August 2026 cut-off.

At a glance:

  • Aug 2026 - most high-risk AI obligations apply (EUR-Lex)
  • EUR 35M / 7% - maximum fine for prohibited practices (Art 99) (EUR-Lex)
  • Annex III - eight high-risk use-case categories (EUR-Lex)

Step 1. Determine your role under the Act

Article 3 defines four operator roles, and the same organisation can hold more than one. A provider develops an AI system or has it developed and places it on the market or puts it into service under its own name. A deployer uses an AI system under its authority in a professional context. An importer places on the EU market a system bearing the name of a non-EU provider. A distributor makes a system available without affecting its properties [1].

Crucially, a deployer who substantially modifies a high-risk system, rebrands it, or changes its intended purpose becomes a provider for the modified system under Article 25. Map every AI system in your portfolio against these four roles before doing anything else, because the rest of the checklist depends on the answer.
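
As a sketch of that mapping exercise (the record fields and the simplified Article 25 trigger logic below are illustrative, not language from the Act):

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class SystemRecord:
    name: str
    initial_role: Role
    rebranded: bool = False               # cf. Art. 25(1)(a)
    substantially_modified: bool = False  # cf. Art. 25(1)(b)
    purpose_changed: bool = False         # cf. Art. 25(1)(c)

def effective_role(rec: SystemRecord) -> Role:
    """Simplified Article 25 check: an operator that rebrands,
    substantially modifies or re-purposes a high-risk system is
    treated as the provider of the resulting system."""
    if rec.initial_role is not Role.PROVIDER and (
        rec.rebranded or rec.substantially_modified or rec.purpose_changed
    ):
        return Role.PROVIDER
    return rec.initial_role
```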

Step 2. Classify each AI system

The Act splits AI systems into four risk tiers. Prohibited practices under Article 5 include subliminal manipulation, exploitation of vulnerable groups, social scoring, untargeted scraping of facial images, biometric categorisation based on sensitive attributes, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions) and emotion inference in workplaces and educational institutions [1]. High-risk systems are defined by Article 6 and Annex III, which covers eight categories: biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential public and private services (including credit scoring), law enforcement, migration and border control, and administration of justice and democratic processes. Limited-risk systems trigger Article 50 transparency duties (chatbots, deepfakes, AI-generated content). Minimal-risk systems carry no specific obligations under the Act beyond voluntary codes of conduct.

Document the classification decision for each system, together with the reasoning. The European Commission's AI Office publishes classification guidance; treat that guidance, not vendor marketing materials, as the authoritative reference [2].
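
One possible shape for such a record (an internal artefact, not an official template):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationRecord:
    system_name: str
    risk_tier: str                  # "prohibited" | "high" | "limited" | "minimal"
    annex_iii_category: str | None  # e.g. "employment"; None if not high-risk
    reasoning: str                  # why this tier applies (or does not)
    reviewed_by: str                # the named accountable person
    review_date: date
    sources: list[str] = field(default_factory=list)  # e.g. AI Office guidance relied on
```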

Step 3. Establish a Risk Management System (Article 9)

For every high-risk system, Article 9 requires a written, continuous and iterative risk management system that runs across the entire lifecycle. It must identify and analyse foreseeable risks to health, safety and fundamental rights, estimate and evaluate risks that may emerge in intended use and reasonably foreseeable misuse, evaluate other risks based on post-market monitoring data, and adopt suitable risk management measures.

The RMS document is not a one-off deliverable. It is reviewed and updated systematically, and testing must verify that the chosen measures work in the relevant operational context. Treat it as a living artefact owned by a named accountable person.

Step 4. Data governance (Article 10)

Training, validation and testing data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose. Article 10 requires documented data-governance practices covering: design choices, data collection processes, data preparation operations (annotation, labelling, cleaning, enrichment, aggregation), formulation of assumptions, prior assessment of availability, examination for biases likely to affect health, safety or fundamental rights, and identification of relevant data gaps with mitigation measures.

Where strictly necessary, processing of special categories of personal data is permitted under Article 10(5) to detect and correct bias, with appropriate safeguards. Document every choice; the conformity assessment will not pass without it.
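
As one deliberately simple illustration of the bias-examination step, the sketch below computes representation rates across a sensitive attribute; real examinations need domain-appropriate metrics, and the 10% floor here is a hypothetical threshold:

```python
from collections import Counter

def representation_rates(records: list[dict], attribute: str) -> dict[str, float]:
    """Share of training records per value of a sensitive attribute:
    a crude first-pass check for under-representation."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

rates = representation_rates(
    [{"sex": "F"}, {"sex": "F"}, {"sex": "M"}, {"sex": "M"}, {"sex": "M"}],
    attribute="sex",
)
underrepresented = [g for g, r in rates.items() if r < 0.10]  # hypothetical floor
```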

Step 5. Technical documentation (Article 11 + Annex IV)

Technical documentation must be drawn up before the system is placed on the market and kept up to date. Annex IV sets out the minimum content: a general description of the system, detailed design and architecture, monitoring and control, validation and test data and procedures, evaluation results including metrics and known limitations, cybersecurity measures, change management procedures, and the EU declaration of conformity.

SMEs and start-ups may supply the technical documentation in a simplified form established by the Commission for that purpose (Article 11(1)), but the underlying obligations are unchanged. Keep the documentation in a form that the national competent authority can request and read on demand.

Step 6. Automatic logging (Article 12)

High-risk systems must automatically record events ("logs") over their lifetime to a degree appropriate to their intended purpose. Logs must allow identification of situations that may result in the system presenting a risk under Article 79, facilitate post-market monitoring, and enable the deployer's monitoring under Article 26. Article 12(3) imposes specific minimum logging on remote biometric identification systems (period of use, reference database, input data, identifying personnel).

Define a retention period proportionate to the intended purpose and at least six months unless other Union or national law provides differently.
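
A minimal structured-logging sketch along these lines, using only the Python standard library (the field names are illustrative; Article 12 prescribes outcomes, not a schema):

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("hra_system.events")

def log_event(event_type: str, **fields) -> None:
    """Append one machine-readable event record. Ship these to
    append-only storage and apply the documented retention period
    (at least six months for providers, per Article 19)."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event_type,  # e.g. "inference", "override", "risk_flag"
        **fields,
    }
    logger.info(json.dumps(record))

log_event("inference", model_version="1.4.2", input_ref="case-0042", score=0.87)
```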

Step 7. Transparency to deployers (Article 13)

High-risk systems must be designed so deployers can interpret outputs and use them appropriately. They must be accompanied by concise, complete, correct and clear instructions for use that include: the provider's identity, the system's characteristics and intended purpose, level of accuracy and robustness, foreseeable circumstances that may lead to risks, technical capabilities and characteristics relevant to the explanation of outputs, performance regarding specific persons or groups, input-data specifications, human-oversight measures, expected lifetime, and necessary maintenance.

For limited-risk systems, Article 50 imposes additional transparency duties: users must be informed that they are interacting with an AI system, AI-generated synthetic content must be marked as such in a machine-readable format, and deepfakes must be disclosed.

Step 8. Human oversight (Article 14)

High-risk systems must be designed to be effectively overseen by natural persons during use. Oversight measures must allow the assigned person to: understand the relevant capacities and limitations of the system, remain aware of automation bias, correctly interpret output, decide not to use the system or otherwise disregard, override or reverse output, and intervene to interrupt operation through a stop button or similar procedure.

For remote biometric identification under Annex III(1)(a), no action or decision can be taken on the basis of identification unless verified and confirmed by at least two natural persons with the necessary competence, training and authority.
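
The two-person rule reduces to simple application logic; a sketch (names hypothetical, since the Act specifies the outcome, not the mechanism):

```python
from dataclasses import dataclass, field

@dataclass
class BiometricMatch:
    match_id: str
    confirmations: set[str] = field(default_factory=set)  # distinct verifier IDs

def confirm(match: BiometricMatch, verifier_id: str) -> None:
    match.confirmations.add(verifier_id)

def may_act_on(match: BiometricMatch) -> bool:
    """Article 14(5): no action or decision on an Annex III 1(a)
    identification unless separately verified and confirmed by at
    least two competent natural persons."""
    return len(match.confirmations) >= 2
```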

Step 9. Accuracy, robustness and cybersecurity (Article 15)

Article 15 requires that high-risk systems achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and that they perform consistently throughout their lifecycle. Accuracy levels and relevant metrics must be declared in the instructions for use.

Robustness must address feedback loops, errors and inconsistencies. Cybersecurity must be resilient to attempts to alter use, behaviour or performance, including data poisoning, model poisoning, adversarial examples, model evasion and confidentiality attacks. The AI Office is co-developing technical specifications with ENISA [3].

Step 10. Conformity assessment, CE marking, EU database

Before placing a high-risk system on the market, the provider must complete a conformity assessment. Most Annex III systems can use internal control under Annex VI; Annex III point 1 biometric systems must follow the notified-body procedure under Annex VII unless the provider has applied harmonised standards in full (Article 43(1)). Successful assessment results in an EU declaration of conformity (Article 47), affixing of the CE marking (Article 48), and registration of the system in the EU database (Article 49) before it is placed on the market or put into service. Deployers that are public authorities also register their use under Article 49(3).
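
Reduced to a sketch, the route selection looks like this (the boolean inputs stand in for legal analysis that must itself be documented):

```python
def conformity_route(is_annex_iii_point_1_biometric: bool,
                     harmonised_standards_applied_in_full: bool) -> str:
    """Simplified Article 43(1) decision: Annex III point 1 biometric
    systems need a notified body (Annex VII) unless harmonised
    standards or common specifications were applied in full; other
    Annex III systems may use internal control (Annex VI)."""
    if is_annex_iii_point_1_biometric and not harmonised_standards_applied_in_full:
        return "Annex VII (notified body)"
    return "Annex VI (internal control)"
```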

Provider obligations are consolidated in Article 16 and detailed in Articles 17 to 21: a quality management system, documentation keeping, retention of automatically generated logs, corrective actions and the duty of information, cooperation with competent authorities, plus the accessibility requirements listed in Article 16 itself.

Step 11. Post-market monitoring and serious-incident reporting (Articles 72-73)

Providers must establish a documented post-market monitoring system proportionate to the nature of the AI technologies and the high-risk system's risks. The plan is part of the technical documentation. Data is actively and systematically collected, documented and analysed throughout the system's lifetime, and the provider evaluates continued compliance with the Act's requirements.

Article 73 requires reporting of serious incidents to the market surveillance authority of the Member State where the incident occurred, immediately after the provider has established a causal link or reasonable likelihood of one, and not later than 15 days after becoming aware. Death or serious harm triggers a 10-day deadline. Widespread infringement or critical-infrastructure disruption triggers a 2-day deadline.
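
The reporting windows reduce to a small lookup; a sketch (the category labels paraphrase Article 73, and the clock runs from awareness):

```python
from datetime import date, timedelta

DEADLINE_DAYS = {  # days from becoming aware of the incident
    "serious_incident": 15,
    "death_or_serious_harm": 10,
    "widespread_infringement_or_critical_infrastructure": 2,
}

def report_due(incident_type: str, aware_on: date) -> date:
    return aware_on + timedelta(days=DEADLINE_DAYS[incident_type])

print(report_due("death_or_serious_harm", date(2026, 9, 1)))  # 2026-09-11
```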

Step 12. Public-sector deployers - Fundamental Rights Impact Assessment (Article 27)

Before deploying a high-risk system listed in Annex III (with limited exceptions), bodies governed by public law and private operators providing public services, plus deployers of credit-scoring or life and health insurance risk-assessment systems, must perform a Fundamental Rights Impact Assessment. The FRIA must describe: the deployment process, the period and frequency of use, categories of natural persons likely to be affected, specific harms reasonably likely to result, the human-oversight measures, and the measures to be taken if those risks materialise.

The FRIA result is notified to the market surveillance authority via a template the AI Office publishes. A new FRIA is required if any element changes materially.
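
One possible internal record mirroring the Article 27 content list (an illustration, not the AI Office template):

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    deployment_process: str         # how the system will be used in practice
    period_and_frequency: str       # e.g. "continuous, 24/7 triage"
    affected_persons: list[str]     # categories of natural persons affected
    likely_harms: list[str]         # specific harms reasonably likely to result
    oversight_measures: list[str]   # human-oversight arrangements
    mitigation_measures: list[str]  # measures if the risks materialise
    materially_changed: bool = False  # True means a new FRIA is required
```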

EU AI Act application timeline

  • 1 August 2024 - Regulation enters into force.
  • 2 February 2025 - Article 5 prohibited practices apply, AI literacy duty (Article 4) applies to providers and deployers.
  • 2 August 2025 - General-purpose AI model obligations (Articles 51-55), governance structure, penalties, notifying authorities and notified bodies framework apply.
  • 2 August 2026 - Bulk of obligations apply, including Article 6(2) and Annex III high-risk systems, Article 26 deployer obligations, Article 27 FRIA, Article 49 EU database registration.
  • 2 August 2027 - Obligations apply to high-risk AI systems that are safety components of products covered by Annex I sectoral legislation.
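
For planning purposes, these milestones can be encoded and queried; a minimal sketch:

```python
from datetime import date

MILESTONES = [
    (date(2024, 8, 1), "Regulation in force"),
    (date(2025, 2, 2), "Art. 5 prohibitions; Art. 4 AI literacy"),
    (date(2025, 8, 2), "GPAI obligations; governance; penalties"),
    (date(2026, 8, 2), "Annex III high-risk; Art. 26; Art. 27 FRIA; Art. 49"),
    (date(2027, 8, 2), "Annex I safety-component high-risk systems"),
]

def applicable(on: date) -> list[str]:
    """Milestones already in application on a given date."""
    return [label for d, label in MILESTONES if d <= on]

print(applicable(date(2026, 9, 1)))  # first four milestones apply
```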

Penalties (Article 99)

Sanctions are tiered. Non-compliance with the Article 5 prohibitions carries fines up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. Non-compliance with most other operator obligations (Articles 16-26 for providers, Article 26 for deployers, Articles 31, 33, 34 for notified bodies) carries fines up to EUR 15 million or 3% of turnover. Supplying incorrect, incomplete or misleading information to authorities carries fines up to EUR 7.5 million or 1% of turnover. SME and start-up caps are calculated against the lower of the two figures.
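
The cap arithmetic is simple enough to encode; a sketch of Article 99's "whichever is higher" rule, inverted for SMEs under Article 99(6):

```python
TIERS = {  # (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practices": (35_000_000, 0.07),
    "operator_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, turnover_eur: float, is_sme: bool = False) -> float:
    fixed, pct = TIERS[tier]
    caps = (fixed, pct * turnover_eur)
    return min(caps) if is_sme else max(caps)

# A firm with EUR 200M turnover: 7% is EUR 14M, so the general cap is
# EUR 35M, while an SME's cap is the lower figure, EUR 14M.
print(max_fine("prohibited_practices", 200e6))               # 35000000.0
print(max_fine("prohibited_practices", 200e6, is_sme=True))  # 14000000.0
```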

General-purpose AI model providers face a separate regime under Article 101 with caps of EUR 15 million or 3% of turnover.

Frequently asked questions

Does the EU AI Act apply to a non-EU vendor selling into Europe?
Yes. Article 2(1)(c) extends the Act to providers and deployers established outside the EU when the output produced by the system is used in the Union. A US or UK vendor selling a high-risk AI system to an EU customer must designate an authorised representative under Article 22 before placing the system on the market and must satisfy the full provider obligations in Articles 16 to 26. Drafting a contract that delegates this to the EU customer does not work; the Act treats the provider role as a question of fact, not contract.
How do general-purpose AI obligations and Annex III high-risk obligations overlap?
They are distinct regimes. General-purpose AI model obligations (Articles 51-55) apply to the model provider regardless of downstream use, with stricter rules for models presenting systemic risk above the 10^25 FLOP cumulative-training-compute threshold. If such a model is integrated into a downstream system that meets the Annex III high-risk criteria, the downstream provider carries the full high-risk obligations on top and relies on documentation from the model provider via the Article 53(1)(b) information duty. Buyers should require both layers of evidence in procurement.
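
To gauge whether a model plausibly crosses that threshold, a common back-of-envelope estimate is training compute ≈ 6 × parameters × training tokens; the heuristic is standard for dense transformers but is not part of the Act:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute in FLOPs (Art. 51)

def approx_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# e.g. a 70B-parameter model trained on 15T tokens:
flops = approx_training_flops(70e9, 15e12)  # 6.3e24
print(flops >= SYSTEMIC_RISK_THRESHOLD)     # False: below the threshold
```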
Are AI regulatory sandbox programmes useful for compliance?
Yes for many providers. Article 57 requires Member States to establish at least one national AI regulatory sandbox by 2 August 2026 (the Commission may also organise joint sandboxes). Sandboxes give legal certainty for testing innovative systems under regulator supervision, and the participation evidence supports later conformity assessment. Article 59 provides a specific legal basis for processing personal data lawfully collected for other purposes when developing public-interest AI in the sandbox, with strict safeguards.
When does August 2026 actually start to bite for an enterprise buyer?
In practice, by Q4 2025. Procurement cycles for systems intended to be live in summer 2026 begin 9-12 months earlier, and any vendor that cannot produce a conformity assessment plan, an Annex IV technical documentation outline and a post-market monitoring approach by then is a procurement risk. The honest answer is that organisations starting in 2026 are already late. The earliest deliverable is an AI portfolio inventory plus a Step 2 classification per system; that work alone usually takes 3-4 weeks.
Is ISO/IEC 42001 certification a substitute for AI Act compliance?
No, but it is a strong base. ISO/IEC 42001:2023 is an AI management system standard and is not a harmonised standard under the Act yet. Certification demonstrates governance maturity that maps cleanly onto Article 17 quality-management obligations and reduces the gap to a clean conformity assessment. Once the European harmonised standards organisations (CEN-CENELEC JTC 21) publish AI Act harmonised standards, applying them gives a presumption of conformity under Article 40.
Who is the national competent authority for the AI Act?
Each Member State must designate at least one notifying authority and at least one market surveillance authority by 2 August 2025 (Article 70). Lists are published on the Commission's AI Office page, and several Member States have appointed their data-protection authority, telecommunications regulator or a new dedicated AI agency. For a multi-country deployment, identify the authority in each Member State of operation, because incident reporting and registry obligations are national.
What is the AI literacy obligation under Article 4?
Providers and deployers must take measures to ensure, to the best of their ability, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account technical knowledge, experience, education and the context in which the systems are used. This applies from 2 February 2025 and is not limited to high-risk systems. Document the training programme, content and delivery, because authorities can ask.
Where do we register a high-risk system in the EU database?
The EU database for high-risk AI systems is operated by the Commission under Article 71 and accessed via the AI Office portal. The provider registers the system before placing it on the market or putting it into service (Article 49(1)). Deployers that are public authorities separately register their use under Article 49(3) before putting the system into service. Most fields are public; some safety-, security- and law-enforcement fields are restricted.

Sources cited

  1. Regulation (EU) 2024/1689 (AI Act) - consolidated text. EUR-Lex, Official Journal of the European Union, 2024-07-12. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
  2. European AI Office - guidance and implementing acts. European Commission, 2025-01. https://digital-strategy.ec.europa.eu/en/policies/ai-office
  3. Multilayer framework for good cybersecurity practices for AI. ENISA, 2023-06. https://www.enisa.europa.eu/publications/multilayer-framework-for-good-cybersecurity-practices-for-ai
  4. ISO/IEC 42001:2023 - AI management systems. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
  5. List of national notifying authorities and notified bodies. European Commission, NANDO database, 2025-08. https://single-market-economy.ec.europa.eu/single-market/european-standards/notified-bodies_en
  6. Article 27 FRIA template. European AI Office, 2026-02. https://digital-strategy.ec.europa.eu/en/policies/ai-office
  7. Regulation (EU) 2016/679 (GDPR). EUR-Lex, Official Journal of the European Union, 2016-04-27. https://eur-lex.europa.eu/eli/reg/2016/679/oj