---
title: "EU AI Act Risk Classification: 4 Tiers Explained (2026) | Impetora"
description: "How the EU AI Act sorts AI systems into four risk tiers - prohibited, high-risk, limited-risk, minimal-risk - and what each tier means for providers and deployers."
url: https://impetora.com/eu-ai-act/risk-classification
locale: en
datePublished: 2026-04-27
dateModified: 2026-04-27
author: Impetora
---

# EU AI Act risk classification: how the four tiers work

> The EU AI Act sorts artificial intelligence systems into four risk tiers, with obligations and penalties scaled to the severity of potential harm. The tiers, set out in Articles 5, 6, 50, and 95 of Regulation (EU) 2024/1689, are prohibited, high-risk, limited-risk, and minimal-risk [1]. Most enterprise AI use today sits in the limited-risk or minimal-risk tiers, but a single high-risk system - a credit-scoring model, a CV-screening tool, a fraud-detection engine in a regulated product - pulls the whole organisation into the heavy obligation set.

*Updated 2026-04-27. By Impetora.*

## Why does the Act use a tiered structure?

The Act takes a risk-based approach to avoid imposing identical obligations on a low-stakes recommendation engine and a high-stakes credit-scoring model. The tiered structure also keeps the regulatory burden proportionate to potential harm to fundamental rights, health, and safety. The European Commission's AI portal describes the four tiers as the central design choice of the regulation [2]. For enterprises, the practical implication is that risk classification is the first analytical step of any AI procurement or build decision. Until you know which tier a system falls into, you cannot scope the documentation, the testing, the human-oversight design, or the contract. Vendors who skip this step and propose a system without a written risk classification are signalling that the conformity work has not been done.

## What practices are prohibited under Article 5?

Article 5 lists eight categories of prohibited AI practice:

- subliminal or manipulative techniques that materially distort behaviour and cause significant harm;
- exploitation of vulnerabilities due to age, disability, or social or economic situation;
- social scoring by public or private actors that leads to detrimental treatment unrelated to the original context;
- risk assessment of natural persons for predicting criminal offences based solely on profiling;
- untargeted scraping of facial images from the internet or CCTV to build facial-recognition databases;
- emotion recognition in workplaces and educational institutions;
- biometric categorisation inferring sensitive attributes such as race, political opinions, or sexual orientation;
- real-time remote biometric identification in publicly accessible spaces by law enforcement, with narrow exceptions [1].

These prohibitions applied from 2 February 2025, and the European Commission published detailed guidelines on prohibited practices in early 2025 [4]. Penalties for breach sit in the highest band: up to 35 million euros or 7 percent of worldwide annual turnover, whichever is higher.
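Teams that screen procurement requests at scale sometimes encode this list as a checklist a reviewer fills in. The sketch below is a minimal illustration: the category keys and the `screen_article_5` helper are our own naming, and a "yes" on any item is a legal judgement, not something the code decides.

```python
# Illustrative first-pass screen for Article 5. The keys are shorthand labels
# of our own, not official terminology; each answer must come from a reviewer.
ARTICLE_5_PROHIBITIONS = {
    "subliminal_manipulation": "Subliminal or manipulative techniques causing significant harm",
    "vulnerability_exploitation": "Exploiting vulnerabilities of age, disability, or social/economic situation",
    "social_scoring": "Social scoring leading to unrelated detrimental treatment",
    "crime_prediction_profiling": "Predicting criminal offences based solely on profiling",
    "facial_image_scraping": "Untargeted scraping of facial images for recognition databases",
    "workplace_emotion_recognition": "Emotion recognition in workplaces or education",
    "sensitive_biometric_categorisation": "Biometric categorisation inferring sensitive attributes",
    "realtime_remote_biometric_id": "Real-time remote biometric identification in public spaces by law enforcement",
}


def screen_article_5(reviewer_answers: dict[str, bool]) -> list[str]:
    """Return the descriptions of any prohibited category the reviewer flagged."""
    return [
        description
        for key, description in ARTICLE_5_PROHIBITIONS.items()
        if reviewer_answers.get(key, False)
    ]
```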

## What systems are classified as high-risk?

High-risk classification follows two paths. The first is Annex III, which lists eight areas where AI systems are presumed high-risk:

- biometrics: post-event identification, categorisation, and emotion recognition outside the prohibited contexts;
- critical infrastructure: safety components for road, rail, water, gas, and electricity;
- education and vocational training: admissions, grading, and monitoring;
- employment and worker management: recruitment, promotion, performance evaluation, and task allocation;
- access to essential services: credit scoring, social benefits, emergency-response triage, and life and health insurance pricing;
- law enforcement: risk assessment, polygraphs, and evidence evaluation;
- migration and border control: risk assessment, document verification, and asylum eligibility;
- administration of justice and democratic processes: assistance to judicial authorities and electoral influence [3].

The second path is Annex I: AI safety components in regulated products that already require third-party conformity assessment under sectoral law - machinery, toys, lifts, radio equipment, civil aviation, two- and three-wheel vehicles, agricultural and forestry vehicles, marine equipment, rail interoperability, motor vehicles, in vitro diagnostic medical devices, and medical devices. An AI system that is a safety component of a CE-marked machine is high-risk under the AI Act in addition to the existing sectoral conformity requirements.

Article 6(3) provides an exemption: an Annex III system is not high-risk if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision patterns without replacing the human assessment, or performs preparatory work. The exemption is narrow, must be documented, and is subject to challenge by national authorities.
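The two-path test can be captured in a small data structure for audit purposes. The sketch below is illustrative only: the field names are our own shorthand, and each boolean stands in for a documented legal assessment rather than something derivable from the system itself.

```python
from dataclasses import dataclass


@dataclass
class HighRiskScreen:
    """Illustrative record of the two-path high-risk test (field names are ours)."""

    annex_i_safety_component: bool        # safety component of an Annex I regulated product
    annex_iii_area: bool                  # falls within one of the eight Annex III areas
    article_6_3_exemption: bool = False   # documented exemption claim (narrow task, preparatory work, etc.)

    def is_high_risk(self) -> bool:
        # The Annex I path is unconditional; the Annex III path is a presumption
        # that only a documented Article 6(3) exemption can rebut.
        return self.annex_i_safety_component or (
            self.annex_iii_area and not self.article_6_3_exemption
        )
```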

## What obligations apply to high-risk systems?

High-risk systems carry the heaviest design-time and operational burden. Providers must establish a risk-management system across the lifecycle (Article 9), implement data governance meeting Article 10 (training, validation, and testing data quality, provenance, bias mitigation), produce technical documentation aligned with Annex IV (Article 11), build automatic logging (Article 12), provide transparency to deployers (Article 13), design human oversight (Article 14), achieve appropriate accuracy, robustness, and cybersecurity (Article 15), implement a quality-management system (Article 17), retain documentation for 10 years (Article 18), register the system in the EU database (Article 49), perform a conformity assessment (Article 43), and operate post-market monitoring (Article 72).

Deployers of high-risk systems must use the system in accordance with the instructions for use, assign human oversight to competent staff, ensure input data is relevant, monitor operation and report serious incidents, retain logs, conduct a fundamental rights impact assessment where Article 27 applies, and inform individuals subject to AI-driven decisions where Article 26(11) applies.

The conformity assessment guide walks through the procedural steps, and the ISO 42001 mapping shows how the AI management-system standard aligns with these articles.
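One way to keep the provider obligations visible during a build is to track them as structured data next to the technical documentation. The sketch below is illustrative: the article numbers mirror the list above, but the structure and the status vocabulary are our own, not anything prescribed by the Act.

```python
# Illustrative obligation tracker for a single high-risk system.
# Article numbers follow the list above; statuses and structure are our own.
PROVIDER_OBLIGATIONS = {
    "Article 9": "Risk-management system across the lifecycle",
    "Article 10": "Data governance: quality, provenance, bias mitigation",
    "Article 11": "Technical documentation aligned with Annex IV",
    "Article 12": "Automatic logging",
    "Article 13": "Transparency and instructions for deployers",
    "Article 14": "Human-oversight design",
    "Article 15": "Accuracy, robustness, and cybersecurity",
    "Article 17": "Quality-management system",
    "Article 18": "Documentation retention for 10 years",
    "Article 43": "Conformity assessment",
    "Article 49": "Registration in the EU database",
    "Article 72": "Post-market monitoring",
}


def open_obligations(status: dict[str, str]) -> list[str]:
    """List obligations whose status is not yet 'done' (status labels are illustrative)."""
    return [
        f"{article}: {description}"
        for article, description in PROVIDER_OBLIGATIONS.items()
        if status.get(article) != "done"
    ]
```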

## What counts as a limited-risk system?

Limited-risk systems are subject to transparency obligations under Article 50. Providers of AI systems that interact directly with humans must inform the human that they are interacting with an AI, unless this is obvious from context. Providers of AI systems generating synthetic audio, image, video, or text content must mark the output as artificially generated or manipulated in a machine-readable format. Deployers of emotion-recognition or biometric-categorisation systems must inform exposed individuals. Deployers of deepfake systems must disclose that the content has been artificially generated or manipulated, with exceptions for clearly artistic, satirical, or fictional work. The transparency obligations apply from 2 August 2026 alongside the high-risk regime. Most consumer-facing chatbots, AI-powered customer-support tools, and generative-content products fall into this tier and need to add the disclosure layer to their UX before the 2026 deadline.
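The Act requires the marking to be machine-readable but leaves the format to harmonised standards and emerging provenance schemes. Purely as an illustration of the idea, the sketch below wraps generated text in a JSON envelope with a disclosure field; the field names are ours and do not represent a compliance-approved format.

```python
import json
from datetime import datetime, timezone


def wrap_generated_output(content: str, generator_name: str) -> str:
    """Attach an illustrative machine-readable 'AI-generated' disclosure to output.

    Field names are our own invention; Article 50 requires a machine-readable
    marking, but the accepted formats will come from standards and guidance,
    not from this sketch.
    """
    envelope = {
        "content": content,
        "disclosure": {
            "ai_generated": True,
            "generator": generator_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope, ensure_ascii=False)
```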

## What about minimal-risk systems?

Minimal-risk systems carry no specific obligations under the Act beyond the cross-cutting AI literacy requirement of Article 4, which applied from 2 February 2025. The literacy requirement asks providers and deployers to take measures to ensure a sufficient level of AI literacy among staff who use AI systems on the organisation's behalf, taking into account the context of use, the staff's technical knowledge, experience, education, and training, and the persons or groups on whom the AI systems are used. Most workflow-automation tools, spam filters, recommendation engines for non-essential services, and AI-enabled productivity software sit in this tier. Voluntary codes of conduct under Article 95 are encouraged for minimal-risk systems and can be a useful procurement signal in regulated industries. Stanford HAI's AI Index 2024 reports that more than 78 percent of organisations have adopted AI in at least one business function [7], so most enterprises have multiple minimal-risk systems even before they encounter a high-risk one.

## How do general-purpose AI models fit the tier structure?

General-purpose AI models, defined in Article 3(63) as models displaying significant generality and capable of performing a wide range of distinct tasks, sit on a separate track. Chapter V splits them into two classes: standard general-purpose AI models, with documentation, copyright, and training-data summary obligations; and general-purpose AI models with systemic risk, defined as those trained with cumulative compute exceeding 10^25 floating-point operations, with additional evaluation, mitigation, incident-reporting, and cybersecurity obligations [1]. The threshold of 10^25 FLOPs is an objective trigger, but the European Commission can also designate a model as presenting systemic risk based on capabilities, reach, or other criteria. Downstream applications built on top of general-purpose models inherit a partial compliance pack from the model provider but layer their own system-level obligations on top, classified by the use case (high-risk, limited-risk, or minimal-risk).
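For a sense of scale, training compute is often estimated with the rough rule of thumb of about six floating-point operations per parameter per training token. The sketch below uses that approximation to compare an estimate against the 10^25 FLOP trigger; the heuristic is a community convention, not a method defined in the Act.

```python
# Rough compute estimate using the common ~6 * parameters * tokens heuristic.
# The heuristic is not part of the AI Act; it only indicates whether a training
# run is anywhere near the systemic-risk trigger.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6.0 * parameters * training_tokens


# Example: a hypothetical 70-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)   # ~6.3e24 FLOPs
print(f"{flops:.2e} FLOPs, trigger reached: {flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```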

## How do you classify a system in practice?

A defensible classification follows five steps:

1. Write a short system description: purpose, intended users, input data, output, deployment context.
2. Check Article 5 for prohibited-practice triggers. If any apply, the system cannot be placed on the market.
3. Check Annex I for safety-component status in regulated products. If yes, the system is high-risk.
4. Check Annex III for area-based triggers. If yes, the system is presumed high-risk; document any Article 6(3) exemption claim with reasoning.
5. Check Article 50 transparency triggers. If yes, the system is at least limited-risk and the disclosure requirements apply.

The classification document should be retained as part of the technical documentation and reviewed annually or whenever the system, the input data, or the deployment context materially changes. National competent authorities can challenge a classification under Article 79 and require evidence supporting the analysis. For a guide to picking vendors who can produce defensible classifications, see EU AI Act compliant AI vendors. For the underlying methodology, see TRACE. For an example high-risk use case, see decision-support AI. For a full overview of the regulation, see the EU AI Act overview.
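The five steps reduce to a short decision flow. The sketch below is illustrative: each argument stands in for a documented assessment by a reviewer, and the returned labels are our own wording for the Act's tiers, not official terminology.

```python
def classify_tier(prohibited_practice: bool,
                  annex_i_safety_component: bool,
                  annex_iii_area: bool,
                  article_6_3_exemption: bool,
                  article_50_trigger: bool) -> str:
    """Illustrative tier decision following the five steps above."""
    if prohibited_practice:
        return "prohibited: cannot be placed on the EU market"
    if annex_i_safety_component:
        return "high-risk (Annex I safety component)"
    if annex_iii_area and not article_6_3_exemption:
        return "high-risk (Annex III area)"
    if article_50_trigger:
        return "limited-risk (Article 50 transparency obligations)"
    return "minimal-risk (Article 4 AI literacy only)"


# Example: a CV-screening tool (Annex III employment area, no exemption claim).
print(classify_tier(False, False, True, False, True))  # high-risk (Annex III area)
```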

## Frequently asked questions

### Is a chatbot a high-risk AI system?

Generally no. A consumer or enterprise chatbot is normally a limited-risk system under Article 50, requiring disclosure that the user is interacting with AI. A chatbot becomes high-risk if it is used as part of an Annex III system - for example, a chatbot embedded in a credit-scoring workflow, an employment-screening tool, or a public benefits system. The classification follows the use case, not the technology.

### Is a recommendation engine a high-risk AI system?

Most recommendation engines are minimal-risk or limited-risk. A recommendation engine becomes high-risk if it allocates access to essential services (Annex III, point 5), if it screens education applicants or workers (points 3 and 4), or if it materially influences electoral outcomes (point 8). A product-recommendation engine on an e-commerce site is minimal-risk. A loan-product recommendation engine that effectively determines credit access is high-risk because it is upstream of an Annex III decision.

### How is fraud detection classified?

Fraud detection is not on the Annex III list per se. AI systems used by financial institutions to detect financial fraud are explicitly excluded from the credit-scoring high-risk category in Annex III, point 5(b). However, AI systems used by law enforcement to assess the risk of an individual offending or reoffending are high-risk under Annex III, point 6. The classification depends on who deploys the system and what decision the system feeds.

### What if my system sits between two tiers?

Document the analysis. The Act does not provide for a formal tie-break between tiers, but the higher tier's obligations apply where the system meets that tier's triggers. If a system has both a limited-risk transparency trigger (Article 50) and a high-risk Annex III trigger, the high-risk obligations apply in full and the transparency obligations apply on top. The classification should be in writing, retained as part of the technical documentation, and updated whenever the system or its deployment context changes.

### Can a system change tiers over time?

Yes. A system that starts as a minimal-risk pilot can become high-risk if its deployment expands into an Annex III area. A system that is high-risk in one product can be reused as a component in a non-high-risk product. The classification is per-system and per-deployment, and it should be reviewed annually and after any material change. Reclassification triggers updates to the technical documentation, the conformity assessment, and the contractual allocation of responsibilities.

### What is the Article 6(3) exemption from high-risk classification?

Article 6(3) provides that an Annex III system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making. Four scenarios are listed: narrow procedural task, improvement of a previously completed human activity, pattern-detection that does not replace human assessment, and preparatory work. The exemption must be documented, and the AI system must still be registered in the EU database. National authorities can challenge the exemption and require the provider to justify it.

### What is the AI literacy obligation?

Article 4 requires providers and deployers to take measures to ensure a sufficient level of AI literacy among staff and other persons dealing with the operation and use of AI systems on their behalf. The obligation applies regardless of risk tier and applied from 2 February 2025. In practice, organisations operationalise this by combining role-based AI training, documented use guidelines, and audit trails of who has completed the training. The obligation is enforceable but proportionate, and the AI Office is publishing guidance on what counts as sufficient.

## Sources cited

1. Regulation (EU) 2024/1689 (Articles 5, 6, 50, Annexes I and III). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
2. Regulatory framework proposal on artificial intelligence. European Commission, DG CNECT, 2026-01. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
3. AI Act Explorer - Annex III breakdown. Future of Life Institute, 2024-08. https://artificialintelligenceact.eu/the-act/
4. Commission guidelines on prohibited AI practices. European Commission, 2025-02. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
5. Multilayer framework for good cybersecurity practices for AI. ENISA, 2023-06. https://www.enisa.europa.eu/publications/multilayer-framework-for-good-cybersecurity-practices-for-ai
6. AI Risk Management Framework. NIST, 2023-01. https://www.nist.gov/itl/ai-risk-management-framework
7. AI Index Report 2024. Stanford HAI, 2024-04. https://aiindex.stanford.edu/report/
