---
title: "EU AI Act Overview: Scope, Risk Tiers, Timeline (2026) | Impetora"
description: "What the EU AI Act is, who it applies to, the four risk tiers, the staggered timeline through 2027, and the obligations enterprises need to plan around in 2026."
url: https://impetora.com/eu-ai-act/overview
locale: en
datePublished: 2026-04-27
dateModified: 2026-04-27
author: Impetora
---

# EU AI Act overview: scope, risk tiers, and timeline through 2027

> The EU AI Act is Regulation (EU) 2024/1689, the world's first horizontal legal framework for artificial intelligence. It entered into force on 1 August 2024 and applies on a staggered timeline through 2027, with most high-risk system obligations becoming enforceable on 2 August 2026 [1]. The Act covers any AI system placed on the EU market or used in the EU, regardless of where the provider is established, and sorts systems into four risk tiers with proportionate obligations and penalties up to 35 million euros or 7 percent of worldwide annual turnover.

*Updated 2026-04-27. By Impetora.*

## What is the EU AI Act and what does it regulate?

The EU AI Act is a directly applicable regulation that creates a single horizontal framework for artificial intelligence across all 27 EU member states. It defines an AI system, in Article 3, as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from input how to generate outputs that can influence physical or virtual environments [1]. The definition is intentionally broad and tracks the OECD definition. The Act regulates the entire lifecycle of an AI system - from data sourcing and model training through deployment, monitoring, and incident reporting - and applies to providers, deployers, importers, distributors, and authorised representatives. Providers carry the heaviest burden because they place the system on the market under their name. Deployers, the organisations actually using the system, carry a lighter but still meaningful set of obligations around oversight, instructions for use, and incident reporting [3]. Crucially, the Act has extraterritorial reach. A US, UK, or Asian provider that makes a high-risk system available to EU users must appoint an EU authorised representative, meet the same documentation obligations as an EU provider, and accept fines under EU jurisdiction. The same applies where the output of an AI system is used inside the EU, even if the system itself runs outside it.
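As a rough illustration of the definitional elements, a first-pass scoping screen might look like the sketch below. The `Article3Screen` class and its field names are our own shorthand for the Article 3 elements, not terms from the Regulation, and a yes on every element is a signal to assess further, not a legal conclusion.

```python
# Illustrative scoping checklist, not legal advice. Field names are our own
# shorthand for the Article 3 definition elements; the statute is the only
# authoritative test.
from dataclasses import dataclass

@dataclass
class Article3Screen:
    machine_based: bool                   # runs as software/hardware, not a human process
    operates_with_autonomy: bool          # some independence from direct human control
    infers_outputs_from_input: bool       # derives outputs rather than following only fixed rules
    outputs_influence_environment: bool   # predictions, content, recommendations, decisions

    def likely_in_scope(self) -> bool:
        # Adaptiveness after deployment is optional under Article 3 ("may
        # exhibit"), so it is deliberately not a gating criterion here.
        return all((self.machine_based,
                    self.operates_with_autonomy,
                    self.infers_outputs_from_input,
                    self.outputs_influence_environment))

print(Article3Screen(True, True, True, True).likely_in_scope())  # True
```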

## Who does the EU AI Act apply to?

The Act applies to five roles. Providers develop AI systems and place them on the EU market under their own name or trademark. Deployers use AI systems in a professional capacity inside the EU. Importers place AI systems from third-country providers on the EU market. Distributors make AI systems available in the EU supply chain without modifying them. Authorised representatives act on behalf of non-EU providers. For most enterprises, the practical question is which roles they occupy across their AI portfolio. A bank that builds an internal credit-scoring model is a provider. A bank that buys a third-party fraud-detection product is a deployer. A bank that white-labels a partner's KYC tool under its own name is again a provider. These roles can shift on a per-system basis, and the contractual allocation of responsibilities should be made explicit before signature. The Act exempts AI systems used exclusively for military, defence, or national security purposes, AI systems used solely for scientific research and development, and personal non-professional use. Free and open-source AI components are partially exempt, though general-purpose AI models with systemic risk are still in scope regardless of licence [3].
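The build/buy/white-label distinction can be encoded as a first-pass triage rule, as in the hypothetical sketch below. The `classify_role` function and its parameters are our invention; the real allocation always depends on the contract and the facts.

```python
# Hypothetical per-system role triage encoding the bank examples above
# (build = provider, buy = deployer, white-label under your own name =
# provider). Triage only, not legal advice.
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

def classify_role(built_in_house: bool, offered_under_own_name: bool) -> Role:
    # Putting your name or trademark on a system makes you the provider,
    # even if a partner built it.
    if built_in_house or offered_under_own_name:
        return Role.PROVIDER
    return Role.DEPLOYER

print(classify_role(built_in_house=False, offered_under_own_name=False))  # Role.DEPLOYER
print(classify_role(built_in_house=False, offered_under_own_name=True))   # Role.PROVIDER (white-label)
```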

## What are the four risk tiers?

The Act sorts AI systems into four tiers with proportionate obligations. Prohibited practices, listed in Article 5, include manipulative behavioural alteration, untargeted scraping of facial images for facial-recognition databases, social scoring of natural persons, real-time remote biometric identification in publicly accessible spaces with limited law-enforcement exceptions, and emotion recognition at the workplace and in educational settings. These prohibitions have applied since 2 February 2025 [1]. High-risk systems, listed in Annex III, cover eight areas: biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services (including credit scoring), law enforcement, migration and border control, and administration of justice and democratic processes. High-risk systems also include AI safety components in regulated products under Annex I, such as machinery, medical devices, and aviation. Most high-risk obligations apply from 2 August 2026 [2]. Limited-risk systems are subject to transparency obligations under Article 50: chatbots must disclose they are AI, AI-generated synthetic content must be marked as machine-generated where reasonable, and deepfakes must be labelled. Minimal-risk systems, the bulk of business AI use today, carry no specific obligations beyond the cross-cutting AI literacy requirement that applied from February 2025. For a granular tier-by-tier breakdown, see the EU AI Act risk classification guide.
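A portfolio triage pass over these tiers can be mechanised crudely, as in the sketch below. The `ANNEX_III_AREAS` keyword set and the `triage_tier` function are illustrative shorthand for the areas named above; actual classification requires reading Annex III and the Article 6 carve-outs with legal review.

```python
# First-pass tier triage keyed on the Annex III areas named in this section.
# Keyword matching is a heuristic for prioritising review, nothing more.
ANNEX_III_AREAS = {
    "biometrics", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "justice",
}

def triage_tier(use_case_area: str, is_prohibited_practice: bool,
                interacts_with_humans: bool) -> str:
    if is_prohibited_practice:
        return "prohibited"      # Article 5, applies since 2 Feb 2025
    if use_case_area in ANNEX_III_AREAS:
        return "high-risk"       # Annex III, obligations from 2 Aug 2026
    if interacts_with_humans:
        return "limited-risk"    # Article 50 transparency duties
    return "minimal-risk"        # AI literacy obligation only

print(triage_tier("essential services", False, True))  # "high-risk" (e.g. credit scoring)
```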

## How does the Act treat general-purpose AI models?

General-purpose AI models, the foundation models that power downstream applications, carry their own dedicated obligations under Chapter V. All providers of general-purpose AI models must publish a sufficiently detailed summary of the content used for training, maintain technical documentation, comply with EU copyright law, and cooperate with the European Commission's AI Office. These obligations applied from 2 August 2025 [1]. Models with systemic risk, presumed where cumulative training compute exceeds 10^25 floating-point operations (the Commission can also designate models below that threshold), carry additional obligations: state-of-the-art evaluations, systemic-risk assessment and mitigation, serious-incident reporting, and an adequate cybersecurity posture. The list of models reaching this threshold is maintained by the AI Office and is updated as new models are released. For enterprise buyers, the practical implication is that downstream applications built on top of GPT-4-class, Claude-class, Gemini-class, or open-weights frontier models inherit a partial compliance pack from the model provider. The buyer's own application then layers a system-level compliance pack on top. Vendors who cannot point to which model they use and which compliance documentation they inherit are not yet ready for 2026 procurement.
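The compute threshold lends itself to a back-of-envelope screen, sketched below. The 6 × parameters × tokens estimate for dense-transformer training compute is a common industry heuristic, not anything from the Act, and the parameter and token figures are made up for illustration.

```python
# Back-of-envelope check against the Chapter V systemic-risk presumption.
# This is a necessary screen, not a final answer: the AI Office can also
# designate models below the threshold.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOPS

# Compute ~ 6 * parameters * training tokens for a dense transformer
# (an estimation heuristic, not from the Act). Figures are illustrative.
params, tokens = 1.8e12, 13e12
print(presumed_systemic_risk(6 * params * tokens))  # True (~1.4e26 FLOPs)
```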

## What is the staggered application timeline?

The Act's application is staggered to give the market and regulators time to build governance capacity. The key dates are:

- 1 August 2024: entry into force.
- 2 February 2025: prohibited practices (Article 5) and the AI literacy obligation (Article 4) apply.
- 2 August 2025: general-purpose AI model obligations (Chapter V), the governance and notified-body framework, and penalties (Article 99) apply.
- 2 August 2026: the bulk of high-risk system obligations (Annex III) apply.
- 2 August 2027: obligations apply for high-risk systems integrated into regulated products under Annex I (machinery, medical devices, aviation).

Enterprises signing AI contracts in 2026 should assume the full high-risk obligation set is enforceable by go-live. Contracts that defer compliance work to "after pilot" generally produce gaps that cost more to close late than to design in early.
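Because the milestones are fixed dates, a contract checklist can compute which obligation sets are live at any given go-live date. A minimal sketch, with labels of our own choosing:

```python
# Applicability lookup built from the milestone dates above; handy for
# flagging which obligation sets are live at signature or go-live.
from datetime import date

MILESTONES = [
    (date(2025, 2, 2), "prohibited practices + AI literacy"),
    (date(2025, 8, 2), "GPAI model obligations + penalties"),
    (date(2026, 8, 2), "Annex III high-risk obligations"),
    (date(2027, 8, 2), "Annex I embedded high-risk systems"),
]

def obligations_live(on: date) -> list[str]:
    return [label for applies_from, label in MILESTONES if on >= applies_from]

print(obligations_live(date(2026, 9, 1)))
# ['prohibited practices + AI literacy', 'GPAI model obligations + penalties',
#  'Annex III high-risk obligations']
```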

## What are the penalties for non-compliance?

Article 99 sets three penalty bands, each capped at a fixed amount or a share of worldwide annual turnover, whichever is higher. Breach of the Article 5 prohibited practices carries the highest exposure: up to 35 million euros or 7 percent of worldwide annual turnover. Breach of most other obligations - high-risk system requirements, transparency, governance - carries up to 15 million euros or 3 percent of turnover. Provision of incorrect, incomplete, or misleading information to notified bodies and authorities carries up to 7.5 million euros or 1 percent of turnover. The penalty regime is enforced by national competent authorities under member-state procedural rules. The European Commission's AI Office enforces obligations on general-purpose AI model providers directly, with fines up to 15 million euros or 3 percent of worldwide annual turnover, whichever is higher. National authorities are still being designated as of spring 2026, with most member states publishing their structure during the transitional governance period. The reputational and contractual consequences typically dwarf the headline fine. Public-sector procurement frameworks already require AI Act compliance attestation. Enterprise customers in regulated industries cascade the same requirement through their supply chain. A single compliance gap can disqualify a vendor from a multi-year framework agreement.
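Since every band follows the same whichever-is-higher rule, maximum exposure reduces to a one-line calculation. The band names below are our own labels; the caps are the figures from this section, and actual fines are set by the competent authority.

```python
# Maximum-exposure calculator for the Article 99 bands. Band keys are our
# own labels; caps are the headline figures from this section.
BANDS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_obligations":   (15_000_000, 0.03),
    "misleading_info":     (7_500_000, 0.01),
}

def max_exposure(band: str, worldwide_turnover_eur: float) -> float:
    fixed_cap, pct_of_turnover = BANDS[band]
    # "Whichever is higher" across the fixed cap and the turnover share.
    return max(fixed_cap, pct_of_turnover * worldwide_turnover_eur)

# A 2 bn euro turnover firm breaching Article 5: 7% beats the fixed cap.
print(max_exposure("prohibited_practice", 2_000_000_000))  # 140000000.0
```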

## What should enterprises do in 2026?

Six practical steps cover most of the planning surface.

1. Run an AI inventory across the organisation: which systems exist, which are planned, and what risk tier each falls into. The inventory is the foundation for every other decision (a sketch of a possible record shape follows this list).
2. Deploy an AI literacy programme for staff who use or oversee AI systems. This obligation has been live since February 2025 and also underpins the Article 14 human-oversight requirements.
3. For each high-risk system, draft a written conformity assessment plan and a technical documentation pack aligned with Annex IV.
4. Refresh vendor contracts to allocate responsibility for the conformity assessment, data governance, post-market monitoring, and incident reporting.
5. Build the incident-reporting workflow with a named owner who interfaces with national competent authorities.
6. Align AI governance work with existing GDPR, ISO 27001, and (where relevant) DORA, NIS2, and sectoral compliance, because the documentation expectations overlap heavily.

Stanford HAI's AI Index 2024 found that more than 78 percent of organisations had adopted AI in at least one business function [7]. Most of those organisations therefore have at least one limited-risk or high-risk system in their portfolio in 2026, even if they have not yet classified them.
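For step one, a minimal sketch of what an inventory record could hold, assuming field names of our own choosing rather than any schema from the Act:

```python
# One possible shape for an AI inventory record; field names are a
# suggestion, not a schema from the Regulation.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    role: str                   # "provider" or "deployer" for this system
    risk_tier: str              # "prohibited" | "high-risk" | "limited-risk" | "minimal-risk"
    annex_iii_area: str | None  # e.g. "essential services" for credit scoring
    vendor: str | None          # None for in-house builds
    conformity_owner: str       # named person accountable for the assessment
    gaps: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("credit-scoring-v2", "provider", "high-risk",
                   "essential services", None, "cro-office",
                   gaps=["Annex IV technical documentation"]),
]
```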

## How does Impetora help with EU AI Act work?

Impetora is an AI consultancy and solutions partner. We design, build, and deploy custom AI systems for enterprises in regulated industries. Our TRACE methodology has the AI Act baked in: Trust covers EU data residency and audit trails as deliverables, Readiness covers the data and risk audit before any code, Architecture covers production-grade design with logging and observability, and Citations and Evidence cover the traceability of every output to its source. For buyers running an AI Act compliance programme, the practical artefacts we produce on every engagement are the conformity assessment plan, the data-governance description aligned with Article 10, the Annex IV technical documentation pack, the human-oversight design, and the post-market monitoring plan. These are deliverables, not upsells. Related cluster pages: risk classification, conformity assessment, ISO 42001 mapping, how to evaluate AI Act-ready vendors, TRACE methodology, and decision-support AI (a typical Annex III high-risk use case).

## Frequently asked questions

### When does the EU AI Act fully apply?

The Act entered into force on 1 August 2024 with staggered application. Prohibited practices and AI literacy obligations applied from 2 February 2025. General-purpose AI model obligations and the penalty regime applied from 2 August 2025. The bulk of high-risk system obligations apply from 2 August 2026. High-risk systems integrated into regulated products under Annex I apply from 2 August 2027. Buyers signing AI contracts in 2026 should assume the full high-risk obligation set is enforceable by go-live.

### Does the EU AI Act apply to non-EU companies?

Yes, on three grounds. The Act applies if the provider places an AI system on the EU market regardless of where the provider is established. It applies if the deployer is established or located in the EU. It applies if the output of an AI system is used in the EU, even if the system itself runs outside. Non-EU providers placing high-risk systems on the EU market must appoint an EU-resident authorised representative under Article 22 and accept fines under EU jurisdiction.

### Is GDPR relevant to the EU AI Act?

Yes, in parallel. The AI Act does not displace GDPR for AI systems that process personal data. The two regimes overlap: GDPR Article 22 on automated decision-making, GDPR Articles 35-36 on data protection impact assessments, and the EDPB's guidance on AI all run in parallel with the AI Act's Article 10 on data governance and Article 27 on fundamental rights impact assessments. A well-designed compliance programme covers both regimes in a single workstream rather than treating them as separate projects.

### Are open-source AI models exempt?

Partially. Article 2(12) exempts free and open-source AI systems from most obligations, but the exemption does not apply to high-risk systems, prohibited practices, transparency obligations on limited-risk systems, or general-purpose AI models with systemic risk. In practice, an enterprise that fine-tunes an open-weights model and deploys it in a high-risk context inherits the full provider obligations regardless of the upstream licence. The exemption is for the open-source ecosystem, not for downstream commercial deployment.

### Who enforces the EU AI Act?

Enforcement is split. Each member state designates a national market-surveillance authority to enforce the Act for systems placed on its territory. The European Commission's AI Office, established within DG CNECT, enforces obligations on general-purpose AI model providers directly and coordinates the European Artificial Intelligence Board. Notified bodies, accredited under member-state procedures, perform third-party conformity assessments where required. Sectoral regulators (financial, medical, aviation) retain their existing competences for AI systems within their remit and coordinate with the national AI authority.

### What is the difference between a provider and a deployer?

A provider develops an AI system and places it on the market under its own name or trademark. A deployer uses an AI system in a professional capacity. Providers carry the bulk of the design-time obligations: risk management, data governance, technical documentation, conformity assessment, registration in the EU database, post-market monitoring. Deployers carry use-time obligations: follow the instructions for use, ensure human oversight, monitor system behaviour, log and report incidents, and inform individuals subject to AI decisions where required by Article 26. The roles can shift on a per-system basis and the contract should make the allocation explicit.

### What is a fundamental rights impact assessment?

Article 27 requires deployers of certain high-risk systems - particularly bodies governed by public law, private operators providing public services, and deployers of credit-scoring or insurance-pricing systems - to conduct a fundamental rights impact assessment before first use. The assessment describes the deployer's processes for using the system, the period and frequency of use, the categories of individuals affected, the specific risks of harm, the measures for human oversight, and the measures to be taken in case the risks materialise. The output is shared with the national market-surveillance authority. The assessment runs in parallel with the GDPR data protection impact assessment under Article 35.
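The six required elements translate naturally into a structured record. A minimal sketch, with field names of our own choosing rather than anything prescribed by the Act:

```python
# Sketch of the Article 27 assessment contents as a record. The elements
# are the Act's; the field names and types are illustrative only.
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    deployer_processes: str                  # how the system is used in our workflows
    period_and_frequency: str                # e.g. "continuous, scored on each application"
    affected_categories: list[str]           # categories of individuals affected
    specific_risks_of_harm: list[str]
    human_oversight_measures: list[str]
    measures_if_risks_materialise: list[str]
```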

## Sources cited

1. Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
2. Regulatory framework proposal on artificial intelligence. European Commission, DG CNECT, 2026-01. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
3. AI Act Explorer (consolidated text and annexes). Future of Life Institute, 2024-08. https://artificialintelligenceact.eu/the-act/
4. Multilayer framework for good cybersecurity practices for AI. ENISA, 2023-06. https://www.enisa.europa.eu/publications/multilayer-framework-for-good-cybersecurity-practices-for-ai
5. ISO/IEC 42001:2023 - AI management systems. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
6. AI Risk Management Framework. NIST, 2023-01. https://www.nist.gov/itl/ai-risk-management-framework
7. AI Index Report 2024. Stanford HAI, 2024-04. https://aiindex.stanford.edu/report/
8. Generative artificial intelligence in finance. Bank for International Settlements, 2024-08. https://www.bis.org/fsi/publ/insights63.htm
