EU AI Act conformity assessment: how high-risk systems get certified

By Impetora

A conformity assessment under the EU AI Act is the procedure by which a provider demonstrates that a high-risk AI system meets the requirements of Chapter III, Section 2 of Regulation (EU) 2024/1689 before placing it on the EU market [1]. The Act provides two routes - internal control under Annex VI and third-party assessment by a notified body under Annex VII - and the choice of route depends on the system's Annex III area, whether harmonised standards have been applied, and whether the system is a safety component of an Annex I product.

Key figures:
- 2 conformity assessment routes (Annexes VI and VII)
- 9 Annex IV documentation sections
- 10-year document retention (Article 18)
- Regime applies from 2 August 2026 (Article 113)

What is a conformity assessment under the AI Act?

A conformity assessment is the documented procedure that demonstrates a high-risk AI system has been designed and built in compliance with the seven sections of Chapter III, Section 2: risk management (Article 9), data and data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency and provision of information to deployers (Article 13), human oversight (Article 14), and accuracy, robustness, and cybersecurity (Article 15) [1].

The assessment ends with three artefacts: a signed EU declaration of conformity under Article 47, the CE marking affixed to the system or its accompanying documentation under Article 48, and a registration entry in the EU database for high-risk AI systems under Article 49. These three together are the public proof that the provider has met the design-time obligations and is operating the post-market monitoring required by Article 72.

The conformity assessment is repeated whenever the system undergoes a substantial modification, defined in Article 3(23) as a change not foreseen by the provider in the initial conformity assessment that affects compliance or changes the intended purpose. Routine updates inside the foreseen-change envelope do not trigger reassessment.
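The Article 3(23) test reduces to a small predicate. A simplified sketch (the parameter names are our own illustration, not terms from the Act):

```python
def triggers_reassessment(
    foreseen_in_initial_assessment: bool,
    affects_compliance: bool,
    changes_intended_purpose: bool,
) -> bool:
    """Simplified Article 3(23) test: a substantial modification is a change
    not foreseen by the provider in the initial conformity assessment that
    affects compliance or changes the intended purpose."""
    return (not foreseen_in_initial_assessment) and (
        affects_compliance or changes_intended_purpose
    )
```

A routine update inside the foreseen-change envelope returns False and does not re-trigger the procedure.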

What are the two assessment routes?

The Act provides two routes. Annex VI: internal control. The provider verifies its quality management system, examines the technical documentation, and confirms that the design and development process complies with the regulation - all without third-party involvement. This route applies to most Annex III high-risk systems, provided the provider has applied harmonised standards or, where standards do not exist, common specifications adopted by the Commission.

Annex VII: assessment by a notified body. A notified body, accredited by a member state and listed in the NANDO database, reviews the quality management system and the technical documentation, and may perform conformity tests on the system itself. Annex VII is mandatory for AI systems used for biometric identification under Annex III, point 1, where the provider has not applied harmonised standards. It is also the default route for AI safety components in Annex I regulated products, where the existing sectoral conformity assessment regime continues to apply with the AI requirements layered on top [3].

Annex VI vs Annex VII: internal control vs notified body (Article 43)

The choice of route should be documented in the conformity assessment plan before the build starts. Notified-body capacity is currently limited and lead times of three to nine months are realistic, which means the route choice has direct programme-management consequences.
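The route logic described above can be sketched as a decision helper for the conformity assessment plan. A simplified illustration (the field names and return labels are our own, not the Act's terms, and real route selection requires legal analysis of the specific Annex III area):

```python
from dataclasses import dataclass


@dataclass
class HighRiskSystem:
    annex_iii_area: str                  # e.g. "biometric_identification"
    harmonised_standards_applied: bool   # Article 40 presumption of conformity
    annex_i_safety_component: bool       # safety component of an Annex I product


def assessment_route(system: HighRiskSystem) -> str:
    """Simplified Article 43 route choice as described in the text."""
    if system.annex_i_safety_component:
        # Existing sectoral regime continues, AI requirements layered on top
        return "Annex I sectoral regime (usually notified body)"
    if (system.annex_iii_area == "biometric_identification"
            and not system.harmonised_standards_applied):
        return "Annex VII (notified body)"
    return "Annex VI (internal control)"
```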

What goes in the Annex IV technical documentation?

Annex IV lists the nine sections the technical documentation must cover [3]:

1. A general description of the AI system: intended purpose, provider details, version, hardware, software, deployment forms, instructions for use.
2. A detailed description of the elements and the development process: system architecture, computational resources, data requirements, training methodologies, validation and testing procedures, and the key design choices with their rationale.
3. Monitoring, functioning, and control: capabilities and limitations, foreseeable unintended outcomes, human oversight measures, technical measures to facilitate interpretation.
4. Performance metrics: accuracy, robustness, and fairness measures appropriate to the use case.
5. The risk management system under Article 9.
6. A description of the data and data-governance practices under Article 10.
7. The change management process.
8. Harmonised standards applied or, where partly applied, the alternative solutions.
9. A copy of the EU declaration of conformity and the post-market monitoring plan.

The pack is retained for 10 years from when the system is placed on the market under Article 18 and made available to national competent authorities on request.
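The nine-section structure and the Article 18 retention clock are easy to track programmatically. A minimal sketch, with abbreviated section labels of our own:

```python
from datetime import date

# The nine Annex IV sections, abbreviated labels of our own choosing
ANNEX_IV_SECTIONS = [
    "general description",
    "elements and development process",
    "monitoring, functioning and control",
    "performance metrics",
    "risk management system (Art. 9)",
    "data and data governance (Art. 10)",
    "change management process",
    "harmonised standards or alternative solutions",
    "declaration of conformity and post-market monitoring plan",
]


def retention_deadline(placed_on_market: date) -> date:
    """End of the 10-year retention period under Article 18."""
    try:
        return placed_on_market.replace(year=placed_on_market.year + 10)
    except ValueError:  # 29 February landing in a non-leap target year
        return placed_on_market.replace(year=placed_on_market.year + 10, day=28)
```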

How do harmonised standards fit in?

Article 40 establishes a presumption of conformity: an AI system that complies with harmonised standards published in the Official Journal is presumed to comply with the corresponding requirements of Chapter III, Section 2. The Commission has issued a standardisation request to CEN-CENELEC under JTC 21, and the first wave of harmonised standards is being finalised. Until the harmonised standards are published, providers can apply common specifications adopted by the Commission under Article 41 or document equivalent solutions.

The practical importance of harmonised standards is that they unlock the Annex VI internal-control route for most Annex III systems. A provider who applies the standards can self-certify compliance. A provider who does not apply standards must either justify the alternative or use a notified body. ISO/IEC 42001:2023, the AI management-system standard, is closely tracked by the harmonised-standards work and is increasingly used as a procurement reference even where it is not yet formally harmonised [5].

What is the post-market monitoring obligation?

Article 72 requires providers of high-risk AI systems to establish and document a post-market monitoring system that actively and systematically collects, documents, and analyses relevant data on the system's performance throughout its lifetime. The post-market monitoring plan is part of the technical documentation and must address how the system will be evaluated against the obligations of Chapter III, Section 2 in real-world conditions.

Serious incidents - defined in Article 3(49) as malfunctions or use leading to death, serious damage to property or environment, serious and irreversible disruption of critical infrastructure, breach of fundamental rights, or serious harm to health - must be reported to the relevant national market-surveillance authority under Article 73. The reporting deadline is 15 days for general serious incidents, 10 days for cases involving death, and 2 days for incidents involving widespread infringement or serious infrastructure disruption.
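The Article 73 windows can be encoded as a small lookup for incident-response runbooks. A sketch; the category keys are our own shorthand for the cases above, and the clock is counted from the provider's awareness of the incident:

```python
from datetime import date, timedelta

# Article 73 reporting windows, counted from awareness of the incident.
# Keys are our own shorthand, not terms from the Act.
REPORTING_WINDOW_DAYS = {
    "serious_incident": 15,
    "death": 10,
    "widespread_infringement_or_infrastructure_disruption": 2,
}


def reporting_deadline(aware_on: date, incident_type: str) -> date:
    """Latest date to notify the national market-surveillance authority."""
    return aware_on + timedelta(days=REPORTING_WINDOW_DAYS[incident_type])
```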

Serious-incident reporting deadline: 15 days (Article 73)

The post-market monitoring plan should name the metrics, the data-collection mechanism, the review cadence, the responsible owner, and the trigger conditions for re-running the conformity assessment. Plans that defer the operational specification to "after deployment" generally produce gaps that surface during the first regulator audit.
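A plan skeleton that makes the required fields explicit helps catch the deferred-specification gap before an audit does. A sketch with field names of our own choosing:

```python
from dataclasses import dataclass, field


@dataclass
class PostMarketMonitoringPlan:
    """Skeleton of an Article 72 plan; field names are illustrative."""
    metrics: list[str] = field(default_factory=list)
    data_collection_mechanism: str = ""
    review_cadence_days: int = 0
    owner: str = ""
    reassessment_triggers: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Fields still unspecified -- the 'after deployment' anti-pattern."""
        missing = []
        if not self.metrics:
            missing.append("metrics")
        if not self.data_collection_mechanism:
            missing.append("data_collection_mechanism")
        if self.review_cadence_days <= 0:
            missing.append("review_cadence_days")
        if not self.owner:
            missing.append("owner")
        if not self.reassessment_triggers:
            missing.append("reassessment_triggers")
        return missing
```

An empty plan reports five gaps; a fully specified plan reports none.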

What is the deployer's role in conformity?

Deployers do not perform the conformity assessment - that is the provider's responsibility - but they carry obligations that interact with it. Article 26 requires deployers to:

- use the system in accordance with the instructions for use;
- assign human oversight to natural persons with the necessary competence and authority;
- ensure input data is relevant and sufficiently representative;
- monitor operation against the instructions;
- retain logs for at least six months unless other law requires longer;
- inform workers and their representatives where high-risk systems are used in the workplace;
- conduct a fundamental rights impact assessment under Article 27 where applicable; and
- inform individuals subject to AI-driven decisions under Article 26(11) where applicable.
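The Article 26 duty set can be carried as a simple checklist structure in a compliance tracker. A sketch, with the duty phrasing abbreviated from the text above:

```python
# Article 26 deployer duties, abbreviated phrasing of our own
ARTICLE_26_DUTIES = (
    "use the system per the instructions for use",
    "assign competent human oversight",
    "ensure input data is relevant and representative",
    "monitor operation against the instructions",
    "retain logs for at least six months",
    "inform workers and their representatives",
    "conduct a fundamental rights impact assessment where applicable (Art. 27)",
    "inform individuals subject to AI-driven decisions where applicable (Art. 26(11))",
)


def outstanding_duties(completed: set[str]) -> list[str]:
    """Duties not yet evidenced, in the order listed above."""
    return [duty for duty in ARTICLE_26_DUTIES if duty not in completed]
```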

The contractual interface between provider and deployer should make this allocation explicit. A well-drafted master services agreement names the provider, names the responsible party for each Annex IV section, includes the post-market monitoring data feed back to the provider, and specifies audit rights, indemnification, and termination triggers tied to compliance status.

How should an enterprise prepare for conformity assessment?

Five workstreams cover the preparation surface:

1. The conformity assessment plan: route choice (Annex VI vs VII), notified-body selection if needed, milestones, owners, evidence schedule.
2. The data-governance description: training data sources and provenance, validation and testing splits, bias evaluation, special-categories handling, data-retention policy.
3. The technical documentation pack: a living document that grows alongside the build, structured around the nine Annex IV sections.
4. The human-oversight design: not a UI overlay but an architectural constraint that shapes which decisions are made by the model, which by the human, and what override paths exist.
5. The post-market monitoring plan: metrics, owners, review cadence, incident-reporting procedure, retraining triggers.

For the broader cluster context, see EU AI Act overview, risk classification, and ISO 42001 mapping. For a guide to selecting a vendor that can produce these artefacts, see EU AI Act compliant AI vendors. For the underlying methodology, see TRACE.

What are the common conformity assessment pitfalls?

Five patterns recur:

- A provider that defers the conformity assessment plan to "after pilot" - the work cannot be retro-fitted cheaply because the design choices that drive compliance happen at architecture time.
- A provider whose risk-management documentation is generic and not tied to the system's specific Annex III area.
- A provider whose data-governance description treats Article 10 as a checklist rather than a design constraint, leaving training-data provenance and bias evaluation thin.
- A provider whose human-oversight design is a UI overlay rather than an architectural property of the system.
- A provider whose post-market monitoring plan ends at deployment and has no structured metric collection or review cadence.

The Bank for International Settlements' 2024 paper on generative AI in finance describes similar patterns from the supervisory side and is a useful cross-reference for buyers in financial services [8]. The same underlying problem - compliance work treated as a documentation exercise rather than as an engineering constraint - drives most of the audit findings.

Frequently asked questions

Who signs the EU declaration of conformity?
The provider, defined as the legal entity that places the high-risk AI system on the EU market under its own name or trademark. The declaration is a signed document attesting that the system meets the requirements of Chapter III, Section 2. The provider retains the declaration for 10 years and makes it available to national competent authorities on request. For a vendor-built system delivered to an enterprise buyer, the vendor is normally the provider and signs. For an in-house build using a vendor's components, the buyer is normally the provider and signs.
Do all high-risk systems need a notified body?
No. Most Annex III systems can use the Annex VI internal-control route, especially where harmonised standards or common specifications have been applied. AI systems used for biometric identification under Annex III, point 1, must use Annex VII (notified-body assessment) where the provider has not applied harmonised standards. AI systems that are safety components of regulated products under Annex I follow the existing sectoral conformity regime, which usually involves a notified body. The route choice should be documented in the conformity assessment plan before the build starts.
How long does a conformity assessment take?
The technical work runs in parallel with the build and is not a separate phase. The documentation pack grows alongside the system. The formal sign-off, when using internal control under Annex VI, can be done in a few weeks once the build is complete and the testing evidence is in place. Notified-body assessment under Annex VII typically takes three to nine months because of capacity constraints and the depth of the file review. Programme plans for 2026 launches should reserve notified-body capacity early.
Is CE marking required for AI systems?
Yes, for high-risk AI systems. Article 48 requires the CE marking to be affixed visibly, legibly, and indelibly to the high-risk AI system. Where this is not possible due to the nature of the system, the marking is affixed to the packaging or accompanying documentation. The marking is followed by the identification number of the notified body where Annex VII applies. CE marking does not apply to limited-risk or minimal-risk systems and does not apply to general-purpose AI models.
What is the EU database for high-risk AI systems?
Article 49 establishes a public EU database where providers register high-risk AI systems before placing them on the market. The database is operated by the Commission and contains a description of the system, its intended purpose, the provider details, the conformity assessment route, and the EU declaration of conformity. Deployers who are public authorities, agencies, or bodies must also register their use of high-risk AI systems. The database is the public-facing transparency layer of the conformity regime.
What happens if a system fails its conformity assessment?
The system cannot be placed on the EU market until the non-conformity is corrected and the assessment passes. If a system already on the market is found to be non-conformant - for example through post-market monitoring or a regulator audit - Article 79 requires the provider to take corrective action immediately, which can include withdrawal, recall, modification, or use restrictions. The national competent authority is informed, and the EU database is updated. Penalties under Article 99 can apply on top of the corrective measures.
Does the AI Act apply to AI systems still in development?
Article 2(8) excludes AI systems and models, including their output, specifically developed and put into service for the sole purpose of scientific research and development. Pre-market testing in real-world conditions is also permitted under Articles 60 and 61, with specific safeguards including a written real-world testing plan, registration in the EU database, informed consent from subjects, and a maximum testing duration. Once the system moves out of research or testing into commercial deployment, the full obligation set applies.

Sources cited

  1. Regulation (EU) 2024/1689 (Articles 9-15, 43-49, 72-73, Annexes IV, VI, VII). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
  2. Regulatory framework for artificial intelligence. European Commission, DG CNECT, 2026-01. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  3. AI Act Explorer - Annex IV and conformity assessment articles. Future of Life Institute, 2024-08. https://artificialintelligenceact.eu/the-act/
  4. Multilayer framework for good cybersecurity practices for AI. ENISA, 2023-06. https://www.enisa.europa.eu/publications/multilayer-framework-for-good-cybersecurity-practices-for-ai
  5. ISO/IEC 42001:2023 - AI management systems. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
  6. AI Risk Management Framework. NIST, 2023-01. https://www.nist.gov/itl/ai-risk-management-framework
  7. EDPB guidelines and recommendations. European Data Protection Board, 2026-01. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines
  8. Generative artificial intelligence in finance. Bank for International Settlements, 2024-08. https://www.bis.org/fsi/publ/insights63.htm
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.