---
title: "EU AI Act Compliance for Insurance AI in 2026 | Impetora"
description: "How the EU AI Act applies to insurance AI in 2026: Annex III risk pricing classification for life and health insurance, EIOPA AI principles, Solvency II model governance, and how to ship a compliant insurance AI system."
url: https://impetora.com/eu-ai-act/by-vertical/insurance
locale: en
datePublished: 2026-04-27
dateModified: 2026-04-27
author: Impetora
---

# EU AI Act compliance for insurance AI in 2026

> Annex III, point 5(c) of Regulation (EU) 2024/1689 names AI systems "intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance" as high-risk under the EU AI Act [1]. Property, motor and commercial-line pricing models sit outside that high-risk band but inside EIOPA's supervisory expectations on AI governance and inside the Solvency II model risk management requirements [2]. A 2026-grade insurance AI deployment is built against all three regimes simultaneously.

*Updated 2026-04-27. By Impetora.*

## Which Annex III risk category applies to insurance AI?

Annex III, point 5(c) covers "AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance" [1]. The legal text is narrow on its face but expansive in practice. Underwriting models, pricing models, telematics-driven life-and-health risk assessment, claim-fraud-screening models that influence pricing, and propensity-to-claim models that feed pricing all fall inside the band when the line of business is life or health. Property, motor, marine, aviation and transit, commercial liability, and most non-life lines are not named. They sit outside Annex III and therefore outside the high-risk regime by default. They are still inside EIOPA's June 2024 supervisory statement on the use of AI governance principles by insurance and reinsurance undertakings, which sets convergent expectations on data governance, model risk management, and consumer fairness regardless of AI Act classification [2].

## What conformity assessment is required for high-risk insurance AI?

Article 43 of the Act and the internal-control procedure of Annex VI apply. The provider runs a self-assessment against Articles 8 to 15, draws up the EU declaration of conformity, affixes the CE marking, and registers the system in the EU database. No notified body is required for the Annex III point 5(c) category. The practical evidence is the Annex IV technical documentation pack and the post-market monitoring plan under Article 72. Where an undertaking is the deployer of a third-party-built model, Article 26 deployer obligations apply: the system must be used in line with provider instructions, input data must be relevant and sufficiently representative, and human oversight must be assigned to natural persons with the necessary competence and training. The deployer also has to monitor operation and notify the provider of incidents. EIOPA's supervisory statement reinforces these expectations from a prudential angle [2].
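
The internal-control route boils down to a finite list of evidence artefacts. A minimal sketch of an evidence tracker for that list follows; the artefact names mirror the paragraph above, but the field layout and status values are assumptions, not a prescribed format.

```python
# Illustrative evidence tracker for the Annex VI internal-control route.
# Field names follow the artefacts named in the text; this is a sketch, not
# a mandated structure.
from dataclasses import dataclass


@dataclass
class ConformityEvidence:
    system_name: str
    annex_iv_technical_documentation: bool = False   # Articles 8-15 self-assessment evidence
    eu_declaration_of_conformity: bool = False
    ce_marking_affixed: bool = False
    eu_database_registration_id: str | None = None   # filled once registered
    post_market_monitoring_plan: bool = False         # Article 72
    deployer_oversight_assigned: bool = False         # Article 26

    def outstanding(self) -> list[str]:
        """List the artefacts still missing before go-live."""
        return [name for name, value in self.__dict__.items()
                if name != "system_name" and value in (False, None)]
```

A tracker like this is only bookkeeping; the substantive work sits inside each artefact, in particular the Annex IV pack and the Article 72 monitoring plan.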

## How is high-risk classification triggered for insurance AI?

Three tests. First, line of business. Life or health triggers point 5(c); the other lines do not. Second, AI function. Risk assessment or pricing triggers; pure operational AI (claim document classification, agent productivity, contact-centre routing) does not. Third, the Article 6(3) carve-out. A model that surfaces patterns for an underwriter without setting any decision parameter can qualify as a "preparatory task," but the carve-out has to be documented in the technical file. The interaction with Solvency II is the part that catches insurers off guard. The Solvency II Pillar 2 model governance regime - including the use-test, the documentation standards, and the Own Risk and Solvency Assessment - applies to internal models used for capital purposes. AI used for pricing or reserving inside the Internal Model has to satisfy the Solvency II Internal Model approval process in addition to the AI Act conformity assessment. The two regimes do not align cleanly; the documentation packs overlap but are not interchangeable [3].
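
The three tests reduce to a short decision procedure. The sketch below is illustrative only, not legal advice: the enum members and return labels are assumptions, and the Article 6(3) carve-out still has to be justified and documented in the technical file regardless of what any classifier function returns.

```python
# Minimal sketch of the three-test classification walk-through above.
from enum import Enum


class LineOfBusiness(Enum):
    LIFE = "life"
    HEALTH = "health"
    MOTOR = "motor"
    PROPERTY = "property"
    COMMERCIAL = "commercial"


class AIFunction(Enum):
    RISK_ASSESSMENT = "risk_assessment"
    PRICING = "pricing"
    CLAIMS_DOCUMENT_CLASSIFICATION = "claims_document_classification"
    CONTACT_CENTRE_ROUTING = "contact_centre_routing"


def classify(line: LineOfBusiness, function: AIFunction,
             preparatory_task_only: bool = False) -> str:
    """Apply the three tests: line of business, AI function, Article 6(3) carve-out."""
    in_scope_lines = {LineOfBusiness.LIFE, LineOfBusiness.HEALTH}
    in_scope_functions = {AIFunction.RISK_ASSESSMENT, AIFunction.PRICING}
    if line in in_scope_lines and function in in_scope_functions:
        if preparatory_task_only:
            return "not high-risk (Article 6(3) carve-out; document in technical file)"
        return "high-risk (Annex III point 5(c))"
    return "outside Annex III 5(c); EIOPA statement and Solvency II still apply"


# classify(LineOfBusiness.HEALTH, AIFunction.PRICING)
# -> "high-risk (Annex III point 5(c))"
```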

## What technical documentation must an insurance AI system produce?

Annex IV of the AI Act sets the contents [1]. For an insurance AI deployment the contentious parts are Article 10 data governance and Article 14 human oversight design. Training data must be representative across protected categories. Directive 2004/113/EC, as interpreted by the CJEU in Case C-236/09 (Test-Achats), prohibits the use of gender as a rating factor for new contracts concluded after 21 December 2012 [4]. A model that uses correlated proxies for gender has to be evaluated and documented as such; "the model does not see gender" is not sufficient evidence. For health insurance, the protected-category list extends to data revealing racial or ethnic origin, religious beliefs, genetic data, and biometric data under GDPR Article 9 [5]. These categories may not be processed for risk pricing without a specific Article 9(2) condition being available. The technical file has to evidence that they are not in the feature set, not in the embeddings, and not reconstructable from the proxies.
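
One concrete input to that evidence is a proxy screen over the rating features. The sketch below, assuming an audit extract that retains the protected attribute solely for testing, flags features whose univariate correlation with the encoded attribute exceeds a review threshold. Column names and the threshold are hypothetical, and a univariate screen of this kind is only one check: it does not catch multivariate reconstruction of the attribute from combinations of features.

```python
# Illustrative proxy screen over numeric rating features. Assumes an audit
# dataset that holds the protected attribute only for evaluation purposes.
import pandas as pd


def screen_proxies(audit_df: pd.DataFrame, protected_col: str,
                   candidate_features: list[str],
                   threshold: float = 0.2) -> dict[str, float]:
    """Flag features whose absolute correlation with the encoded
    protected attribute exceeds the review threshold."""
    protected = audit_df[protected_col].astype("category").cat.codes
    flagged = {}
    for feature in candidate_features:
        corr = audit_df[feature].corr(protected)
        if pd.notna(corr) and abs(corr) >= threshold:
            flagged[feature] = round(float(corr), 3)
    return flagged


# Hypothetical usage on an audit extract:
# screen_proxies(audit_df, "gender", ["occupation_code", "height_cm", "bmi"])
```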

## What does human oversight look like for insurance underwriting AI?

Article 14 requires the system to be designed for effective human oversight, and Article 26 requires the deployer to assign that oversight to natural persons with the competence, authority and training to override. For high-volume retail lines the operational pattern is sampled review with full review of edge cases, rather than 100% review. The threshold for "edge case" has to be defined and documented; the EIOPA statement notes that this threshold itself becomes a control that supervisors will test [2]. For low-volume specialty lines, full review is the default. Either way, the override has to be logged with reason code, reviewer ID, and timestamp. Article 86 of the AI Act gives the affected person the right to a clear and meaningful explanation of the role of the AI in any high-risk decision producing legal or similarly significant effects, applying from August 2026 - that explanation artefact is best designed at build time. The Solvency II use-test layers a parallel expectation that the model is actually used by the relevant decision-makers in the way the documentation says it is used [3].
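
A minimal sketch of the oversight mechanics described above follows, assuming a simple JSON-lines log sink. Field names, the edge-case rule, and the sample rate are illustrative placeholders; the real edge-case definition is a documented control in its own right.

```python
# Sketch of override logging and the sampled-review decision, under assumed
# field names and thresholds. Not a prescribed schema.
import json
import random
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class OverrideRecord:
    case_id: str
    reviewer_id: str      # the natural person assigned oversight under Article 26
    model_decision: str
    human_decision: str
    reason_code: str      # controlled vocabulary, e.g. "DATA_QUALITY"
    timestamp: str        # ISO 8601, UTC


def needs_human_review(risk_score: float, edge_threshold: float = 0.9,
                       sample_rate: float = 0.02) -> bool:
    """Full review for edge cases above the documented threshold,
    random sampling for everything else."""
    return risk_score >= edge_threshold or random.random() < sample_rate


def log_override(record: OverrideRecord, path: str = "override_log.jsonl") -> None:
    """Append one override record to the audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example:
# rec = OverrideRecord("Q-2026-0001", "underwriter-17", "decline", "accept",
#                      "DATA_QUALITY", datetime.now(timezone.utc).isoformat())
# log_override(rec)
```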

## How does Impetora handle insurance AI Act conformity?

Impetora ships every insurance AI system with a written risk classification analysis (point 5(c) yes or no, with the line-of-business and AI-function reasoning written out), a data-governance description aligned with Article 10 plus the Test-Achats gender-rating prohibition, a draft technical documentation pack aligned with Annex IV, a human-oversight design spec mapped to Article 14 and the EIOPA statement, an Article 86 explanation artefact, and a post-market monitoring plan with named drift metrics, owners and reporting cadence. For deployments inside a Solvency II Internal Model perimeter, Impetora produces a parallel Solvency II model file that maps the AI Act technical documentation onto the use-test and the documentation standard expected for Internal Model approval. Cross-references: the EU AI Act overview, the insurance industry hub, the decision-support AI use case, and the TRACE methodology.
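
To make "named drift metrics, owners and reporting cadence" concrete, the sketch below expresses a monitoring plan as configuration with population stability index (PSI) as one named metric. The metric names, thresholds, owners and cadence are hypothetical placeholders, not Impetora's actual deliverable format.

```python
# Hedged sketch: a post-market monitoring plan as configuration, with PSI
# as one drift metric. All names and thresholds are illustrative.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a reference score sample and a
    live score sample (live values outside the reference range are dropped)."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)   # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))


MONITORING_PLAN = {
    "metrics": {
        "score_psi": {"function": "psi", "alert_threshold": 0.2, "owner": "model-risk"},
        "override_rate": {"alert_threshold": 0.10, "owner": "underwriting-ops"},
    },
    "reporting_cadence": "monthly",
    "incident_escalation": "notify provider under the Article 26 deployer duties",
}
```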

## Frequently asked questions

### Is property and casualty AI high-risk under the EU AI Act?

Generally no. Annex III, point 5(c) names life and health insurance specifically. Property, motor, commercial liability, marine and aviation fall outside the named high-risk band and therefore outside the conformity-assessment regime by default. They are still inside EIOPA's supervisory statement on AI governance and inside the Solvency II model risk management framework, which apply regardless of AI Act classification.

### Does the EU AI Act prohibit using AI in life insurance pricing?

No. It classifies the system as high-risk and requires the provider and deployer to meet the obligations of Articles 8 to 15 and 26: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy and robustness, and post-market monitoring. The Act does not prohibit the AI; it requires that the AI be built and operated in a way that supervisors can audit.

### How does Test-Achats interact with insurance AI?

The CJEU judgment in Case C-236/09 (Test-Achats), interpreting Directive 2004/113/EC, prohibits using gender as a rating factor in pricing for new contracts concluded after 21 December 2012. An AI model trained on historical data that included gender, or that uses correlated proxies, has to be evaluated and documented to show it does not reintroduce gender-based pricing. 'The model does not see gender' is not sufficient evidence; the technical file has to show absence of correlated proxies.

### Does an EU AI Act conformity assessment satisfy Solvency II Internal Model approval?

No. The two regimes are independent. The AI Act covers product-level safety and trustworthiness for the AI component; Solvency II Pillar 2 covers the use-test, the documentation standard, the Own Risk and Solvency Assessment, and the model governance framework for any model feeding capital decisions. An AI model inside an Internal Model perimeter needs both regimes' documentation. The packs overlap but are not interchangeable.

### When do the high-risk obligations apply to insurance AI?

2 August 2026 for the bulk of high-risk Annex III obligations, including point 5(c) life and health insurance pricing. Prohibited practices applied from 2 February 2025; general-purpose AI obligations applied from 2 August 2025. A 2026 procurement should be specified to the August 2026 floor, with a documented plan to reach full compliance by go-live and explicit treatment of the Solvency II overlap.

### What does EIOPA's June 2024 statement add to the AI Act picture?

EIOPA's supervisory statement on the use of AI governance principles applies to all insurance AI regardless of AI Act high-risk status. It sets six principle-based expectations: proportionality, fairness and ethics, data governance, documentation and traceability, transparency and explainability, and human oversight. The statement is the convergent supervisory floor that applies to property, motor, and other lines that fall outside the AI Act's high-risk classification.

### Is fraud detection AI in insurance high-risk under the AI Act?

Annex III, point 5(b) explicitly excludes 'AI systems used for the purpose of detecting financial fraud' from the creditworthiness high-risk category. By analogy and in practice, insurance fraud detection that is used solely to flag claims for human investigation is not high-risk under point 5(c). The classification can change if the fraud model is repurposed to feed pricing decisions; in that case point 5(c) applies.

## Sources cited

1. Regulation (EU) 2024/1689 (Artificial Intelligence Act), Annex III point 5(c), Articles 6, 8-15, 26, 86. European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
2. Supervisory Statement on the use of AI governance principles by insurance and reinsurance undertakings. European Insurance and Occupational Pensions Authority (EIOPA), 2024-06-04. https://www.eiopa.europa.eu/publications/opinion-artificial-intelligence-governance-and-risk-management_en
3. Directive 2009/138/EC (Solvency II), Pillar 2 model governance and use-test. European Union, Official Journal, 2009-11-25. https://eur-lex.europa.eu/eli/dir/2009/138/oj
4. Judgment in Case C-236/09 (Test-Achats) on the use of gender as a rating factor. Court of Justice of the European Union, 2011-03-01. https://curia.europa.eu/juris/liste.jsf?num=C-236/09
5. Regulation (EU) 2016/679 (General Data Protection Regulation), Articles 9 and 22. European Union, Official Journal, 2016-05-04. https://eur-lex.europa.eu/eli/reg/2016/679/oj
6. ISO/IEC 42001:2023 - AI management systems. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
