
EU AI Act compliance for healthcare AI in 2026

By Impetora

Healthcare AI sits in two parallel high-risk regimes simultaneously. The EU AI Act classifies emergency triage and dispatch AI as high-risk under Annex III, point 5(d), which covers both AI used by or on behalf of public authorities to evaluate emergency calls and dispatch emergency first response services, and emergency healthcare patient triage systems. Separately, AI built into a medical device is high-risk under Article 6(1) of the AI Act because the underlying device is regulated under the EU Medical Devices Regulation (EU) 2017/745 and requires a notified-body conformity assessment [1] [2]. The EMA reflection paper on AI in the medicinal product lifecycle and the WHO ethics and governance guidance set the convergent expectations on data quality, validation, and human oversight [3] [4].

Which Annex III risk category applies to healthcare AI?

Annex III of the AI Act contains two healthcare-relevant entries. Point 5(d) covers AI systems "intended to be used by public authorities or on behalf of public authorities to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patients triage systems" [1]. Triage AI used at the front door of public hospitals or used by national emergency-call dispatchers is therefore high-risk on the Annex III dimension.

The second pathway is Article 6(1) read with Annex I. AI that is a safety component of a product covered by the Union harmonisation legislation listed in Annex I, or that is itself such a product, is high-risk where the product must undergo a third-party conformity assessment under that legislation. The Medical Devices Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR) are listed in Annex I, Section A. AI-as-medical-device in MDR Class IIa or above, and AI-as-IVD in IVDR Class B or above, is therefore high-risk under the AI Act because the underlying device already requires a notified-body assessment [2].

What conformity assessment is required for AI medical devices?

For AI-as-medical-device the existing MDR conformity assessment procedure absorbs the AI Act conformity assessment. Article 43(3) of the AI Act requires that the AI Act technical documentation be integrated into the MDR technical documentation, and the notified body designated under MDR also assesses conformity with the AI Act requirements of Articles 8 to 15 [1]. The result is one notified-body assessment, not two, but the assessment has to cover both regimes and the technical file has to evidence both.

For Annex III point 5(d) triage AI that is not a medical device, the internal-control procedure of Annex VI applies and no notified body is required. The provider runs the self-assessment, draws up the EU declaration of conformity and the technical documentation, and registers the system in the EU database. The Medical Device Coordination Group's guidance on the qualification and classification of software (MDCG 2019-11) is the canonical reading for AI tools that sit close to, but outside, the medical device definition [5].

How is high-risk classification triggered for healthcare AI?

Three pathways. First, AI-as-medical-device under MDR or IVDR. The intended purpose stated in the device's instructions for use determines the device class; AI Class IIa or above is high-risk under the AI Act by overlap with MDR. Second, Annex III point 5(d) public-authority triage and dispatch. Public hospitals and national emergency dispatchers are inside; private clinics outside the public-authority perimeter generally are not, although the deployer may still trigger Article 26 obligations. Third, Annex III point 1 biometric categorisation - relevant for AI that infers protected attributes from a patient's voice, face or other biometric signal.

The Article 6(3) carve-out applies in narrow cases. A documentation summarisation tool that produces a draft for a human clinician to review is a strong "preparatory task" candidate. A diagnostic AI that produces a confidence-ranked differential is generally not, because the output substantially shapes the clinical decision even with human review. The carve-out has to be documented in the technical file. The European Medicines Agency reflection paper sets parallel expectations on AI used in the medicinal product lifecycle, including discovery, clinical trials, and pharmacovigilance [3].

What technical documentation must a healthcare AI system produce?

Annex IV of the AI Act sets the contents [1]. For AI-as-medical-device the Annex IV pack is integrated into the MDR Annex II and Annex III technical documentation. The integration creates two specific deliverables that are uncommon in non-medical AI: a clinical evaluation report, and a post-market clinical follow-up plan. Both extend the AI Act's risk management and post-market monitoring expectations into a clinical-evidence framework where claims have to be supported by published or sponsor-generated clinical study evidence.
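The integrated technical file can be tracked as a simple checklist. A minimal sketch, assuming a flat two-section structure: the item names paraphrase the deliverables discussed above, and the authoritative contents remain Annex IV of the AI Act and Annexes II and III of the MDR, not this list.

```python
# Illustrative checklist: item names paraphrase the integrated
# AI Act + MDR deliverables; they are not the legal text.
TECH_FILE_CHECKLIST = {
    "ai_act_annex_iv": [
        "general description and intended purpose",
        "development process and design choices",
        "risk management system (Article 9)",
        "data governance description (Article 10)",
        "post-market monitoring plan",
    ],
    "mdr_integration": [
        "clinical evaluation report",
        "post-market clinical follow-up plan",
    ],
}

def missing_items(completed: set[str]) -> list[str]:
    """Return checklist items not yet evidenced in the technical file."""
    return [item
            for section in TECH_FILE_CHECKLIST.values()
            for item in section
            if item not in completed]
```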

For Article 10 data governance, the additional consideration is GDPR Article 9 special-category data. Health data is special-category and may only be processed under one of the Article 9(2) lawful bases. Training a clinical AI on patient records requires either explicit consent (Article 9(2)(a)) or a public-interest research basis (Article 9(2)(j)) or a healthcare-purposes basis (Article 9(2)(h)) supported by Member-State law and professional secrecy. The EDPS and the EDPB have issued joint guidance reflecting these expectations. The WHO ethics and governance guidance for large multi-modal models gives the parallel international floor on validation and bias evaluation [4].

What does human oversight look like for clinical AI?

Article 14 expects oversight by a natural person with the competence and authority to override. For diagnostic and triage AI, the meaningful test is whether a clinician can override the system's recommendation in real time, with the override logged and reviewed. The reviewer interface has to surface the input features, the output, the confidence band, and the differential alternatives. Hospital clinical governance committees commonly add a parallel review of override patterns at population level - a model whose overrides cluster on a particular demographic is a model with a representativeness problem.
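The population-level override review mentioned above can be sketched as a simple rate comparison per demographic group. The `flag_ratio` threshold and the event shape are illustrative assumptions; a real clinical-governance review would use a proper statistical test and minimum group sizes rather than a fixed multiplier.

```python
from collections import defaultdict

def override_rates(events, flag_ratio=2.0):
    """events: iterable of (demographic_group, was_overridden) pairs.

    Returns (overall_rate, flagged_groups), where flagged_groups maps
    each group whose override rate exceeds flag_ratio times the overall
    rate to that group's rate.
    """
    totals = defaultdict(int)
    overrides = defaultdict(int)
    for group, was_overridden in events:
        totals[group] += 1
        if was_overridden:
            overrides[group] += 1
    n = sum(totals.values())
    overall = sum(overrides.values()) / n if n else 0.0
    flagged = {
        g: overrides[g] / totals[g]
        for g in totals
        if overall > 0 and overrides[g] / totals[g] > flag_ratio * overall
    }
    return overall, flagged
```

A group that surfaces in `flagged` is exactly the "overrides cluster on a particular demographic" signal: a candidate representativeness problem to investigate, not a verdict.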

The MDR's user-information obligations (Annex I, point 23, covering the label and the instructions for use) interact with the AI Act Article 13 transparency obligations. The instructions for use must describe the AI system's intended purpose, performance characteristics, the conditions under which performance was validated, foreseeable misuse, and the human oversight measures. Article 86 of the AI Act gives the patient the right to a clear and meaningful explanation of the role of the AI in any decision producing legal or similarly significant effects, applying from 2 August 2026. For high-risk healthcare AI, the explanation artefact is best designed at build time as part of the clinical evaluation evidence.
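A build-time explanation artefact might be assembled as a structured record per decision. The schema below is entirely hypothetical: Article 86 requires a clear and meaningful explanation of the AI system's role in the decision, not any particular format, so the field names here are assumptions chosen to mirror the reviewer-interface items listed above.

```python
import json

def explanation_artefact(system_name, intended_purpose, key_inputs,
                         output, confidence, reviewer, override_applied):
    """Assemble a patient-facing explanation record for one decision.

    Hypothetical schema: captures the AI's output, its confidence, the
    inputs it considered, and the human reviewer's role, so the record
    can back a clear and meaningful explanation later.
    """
    return json.dumps({
        "system": system_name,
        "intended_purpose": intended_purpose,
        "inputs_considered": key_inputs,
        "ai_output": output,
        "confidence": confidence,
        "human_reviewer": reviewer,
        "reviewer_overrode_ai": override_applied,
    }, indent=2)
```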

How does Impetora handle healthcare AI Act conformity?

Impetora ships every clinical AI system with:

- a written risk classification analysis (MDR overlap yes or no, Annex III point 5(d) yes or no, with the reasoning written out)
- a data-governance description aligned with Article 10, plus GDPR Article 9 lawful-basis evidence
- an integrated AI Act plus MDR technical documentation pack where applicable
- a clinical evaluation outline aligned with the MDR clinical evidence framework
- a human-oversight design spec mapped to Article 14 and the MDR user-information obligations
- an Article 86 explanation artefact
- a post-market monitoring plan that satisfies both the AI Act post-market obligations and the MDR post-market clinical follow-up plan

For non-medical-device healthcare AI - hospital operations, scheduling, document workflow, internal knowledge - the same Annex IV pack is produced even though no notified body assessment is triggered, because the next deployment context (a clinical pilot, a national emergency-services contract) can trigger MDR or Annex III reclassification. Cross-references: the EU AI Act overview, the healthcare industry hub, the document processing automation use case, and the TRACE methodology.

Frequently asked questions

Is all clinical decision support software high-risk under the EU AI Act?
Not by Annex III alone, but most production CDS software is a medical device under MDR. The MDCG guidance on the qualification and classification of software as a medical device (MDCG 2019-11) sets the framework. Software providing patient-specific information used to support clinical decisions is generally Class IIa or above under MDR Rule 11. AI in Class IIa or above under MDR is high-risk under the AI Act by overlap with Article 6(1). The narrow exception is CDS software that simply organises or stores information without applying any decision rule.
Does the AI Act add a separate notified-body assessment for AI medical devices?
No. Article 43(3) integrates the AI Act conformity assessment into the existing MDR or IVDR notified-body assessment. The same notified body assesses conformity with both regimes, and the technical file evidences both; the point of the integration is to avoid a duplicate assessment, not to add a second one. Manufacturers should confirm that their notified body has AI Act competence in scope when designating, particularly for higher-risk classes.
Is hospital-operations AI inside the AI Act's high-risk regime?
Generally no, on its face. Scheduling, capacity planning, document workflow, and internal-knowledge AI used inside a hospital are not named in Annex III and are not medical devices. They are still inside the GDPR Article 9 special-category regime where they touch patient data, and they are still inside the hospital's own clinical-governance framework. The risk classification can change if the system is repurposed to drive triage decisions or to influence treatment choice.
When do the high-risk obligations apply to healthcare AI?
2 August 2026 for the bulk of high-risk Annex III obligations, including point 5(d) emergency triage. For AI medical devices the MDR conformity-assessment regime is already in force; the Article 43(3) AI Act integration applies from 2 August 2027 for systems already covered by MDR or IVDR, giving manufacturers a longer runway to integrate the technical files. A 2026 procurement should be specified to the August 2026 floor with explicit treatment of the MDR overlap and timeline.
What does the EMA AI reflection paper actually require?
The EMA reflection paper, last updated September 2024, sets expectations on the use of AI across the medicinal product lifecycle: drug discovery, non-clinical and clinical development, manufacturing, regulatory submission, and pharmacovigilance. It is not directly binding in the way the AI Act is, but it sets the EMA expectations sponsors should plan against when AI is used in a regulatory submission. Data quality, validation, traceability, and human oversight are the convergent themes. For sponsors using AI in clinical-trial design, real-world evidence generation, or pharmacovigilance signal detection, the reflection paper is the reference document.
Does WHO guidance on AI in health add a separate compliance obligation?
No - the WHO guidance is policy, not law. The 2024 ethics and governance guidance on large multi-modal models in health and the 2021 ethics guidance set six convergent principles: protect autonomy, promote human well-being and safety, ensure transparency, foster responsibility and accountability, ensure inclusiveness and equity, and promote AI that is responsive and sustainable. Healthcare AI providers building to the AI Act and MDR will largely meet the WHO floor in passing; the WHO guidance is useful as a higher-level framing rather than as a separate compliance regime.
Is ambient-scribing AI for clinicians high-risk under the AI Act?
On its own, generally no. Ambient scribing - AI that listens to a clinical encounter and produces a draft note for the clinician to edit and sign - is a documentation tool, not a clinical decision support tool. It is not named in Annex III and it is not normally a medical device under MDR Rule 11 because it does not provide patient-specific information used to support a clinical decision. It is still inside GDPR Article 9 because the recording contains health data, and it is still inside the hospital's clinical-governance framework. The risk classification can change if the AI starts inferring diagnoses or treatment options rather than simply transcribing.

Ready to scope your project? Submit a short brief and we reply within one business day.

Sources cited

  1. Regulation (EU) 2024/1689 (Artificial Intelligence Act), Annex I and III, Articles 6, 43. European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
  2. Regulation (EU) 2017/745 (Medical Devices Regulation, MDR). European Union, Official Journal, 2017-04-05. https://eur-lex.europa.eu/eli/reg/2017/745/oj
  3. Reflection paper on the use of Artificial Intelligence in the lifecycle of medicines. European Medicines Agency, 2024-09-09. https://www.ema.europa.eu/en/news/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle
  4. Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. World Health Organization, 2024-01-18. https://www.who.int/publications/i/item/9789240084759
  5. MDCG 2019-11 Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745. Medical Device Coordination Group, European Commission, 2019-10. https://health.ec.europa.eu/system/files/2020-09/md_mdcg_2019_11_guidance_qualification_classification_software_en_0.pdf
  6. Regulation (EU) 2017/746 (In Vitro Diagnostic Regulation, IVDR). European Union, Official Journal, 2017-04-05. https://eur-lex.europa.eu/eli/reg/2017/746/oj
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.