---
title: "EU AI Act readiness checklist for CIOs | Impetora"
description: "A 25-item EU AI Act readiness checklist a CIO or CTO can run against an existing AI inventory in an afternoon. Article-by-article timelines, the risk-classification process, and what the obligations actually look like in production."
url: https://impetora.com/blog/eu-ai-act-readiness-checklist-for-cios
category: Regulation
datePublished: 2026-04-27
dateModified: 2026-04-27
readMinutes: 13
author: Impetora
---

# EU AI Act readiness checklist for CIOs

> The EU AI Act, Regulation (EU) 2024/1689, is the first horizontal AI regulation with binding obligations on any organisation that places an AI system on the EU market or puts an AI system into service for users in the EU, regardless of where the organisation is headquartered. By the time the high-risk obligations apply in August 2026, every CIO in scope should be able to produce an AI inventory, a risk classification, and a written readiness file for each in-scope system. The 25-item checklist below is what we use with clients to get there.

*Updated 2026-04-27. By Impetora. 13 min read.*

## When does each part of the EU AI Act apply?

The Act entered into force on 1 August 2024, with staggered application dates that every CIO needs in their planning calendar [1].

- **2 February 2025.** Chapter I (general provisions) and Chapter II (prohibited practices) apply. From this date, the practices listed in Article 5 - including untargeted scraping of facial images, social scoring, and certain emotion-recognition systems in workplaces and education - cannot be placed on the market or put into service in the EU. The Article 4 obligations on AI literacy of staff also apply.
- **2 August 2025.** Obligations on providers of general-purpose AI models apply, along with the rules for notified bodies, the governance structure under the AI Office and the European AI Board, and the confidentiality and penalty provisions.
- **2 August 2026.** The bulk of the Act applies. This is the date most enterprises are planning around. From this date, providers of high-risk systems under Annex III must comply with Articles 8 to 22 - the risk-management system, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, and the provider quality-management obligations - and deployers with the corresponding obligations in Article 26. Post-market monitoring under Article 72 applies from the same date.
- **2 August 2027.** The remaining obligations apply, including the high-risk regime for systems that are safety components of products covered by the Annex I sectoral legislation (medical devices, machinery, toys, lifts, in-vitro diagnostics, automotive type-approval, civil aviation security and others).

The European Commission's AI Office has published progressively detailed guidance through 2025 and is the canonical reference, supplemented by the AI Act Explorer maintained by the Future of Life Institute, which is widely used as a navigation aid [2][3].

## How should a CIO classify each AI system in the inventory?

Risk classification is the gate that determines which obligations apply. The Act defines four categories, and the CIO's first task is to place every AI system in the inventory into one of them, with a written rationale.

- **Prohibited (Article 5).** A short list of practices that cannot be deployed in the EU at all, including manipulation that exploits vulnerabilities, social scoring, predictive policing based solely on profiling, untargeted scraping of facial images to build recognition databases, real-time remote biometric identification in public spaces by law enforcement except in narrow circumstances, and emotion recognition in workplaces and education except for medical or safety reasons. If a system in the inventory falls here, decommission it.
- **High-risk (Article 6 plus Annex III).** Systems used in eight listed areas where AI can materially affect rights or safety: biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services (including credit scoring), law enforcement, migration and border control, and administration of justice and democratic processes - plus systems that are safety components of products covered by the Annex I legislation. High-risk systems carry the heaviest obligations, and most CIOs will have at least one in their inventory.
- **Limited-risk (Article 50).** Systems that interact with natural persons (chatbots), generate synthetic content (deepfakes), perform emotion recognition or biometric categorisation outside high-risk contexts, or generate text published to inform the public on matters of public interest. The obligation is principally transparency: users must be informed they are interacting with AI, and synthetic content must be labelled as such in machine-readable form.
- **Minimal-risk.** Everything else. No mandatory obligations under the Act, although the Commission encourages voluntary codes of conduct.

The classification process should be a written exercise, not a workshop verdict. For each system, name the use case, map it to Annex III if applicable, document why it does or does not fall under each Annex III heading, and have the legal function counter-sign. The output is the foundation of the entire compliance file [4]. A minimal record shape for this exercise is sketched below.
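As an illustration of what the written exercise can look like when captured as data rather than slideware, here is a minimal sketch of a classification record. It is a sketch under assumptions: the field names, the enum values, and the example system are ours for illustration, not terms defined by the Act.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskClass(Enum):
    PROHIBITED = "prohibited"   # Article 5: decommission
    HIGH = "high"               # Article 6 plus Annex III
    LIMITED = "limited"         # Article 50 transparency duties
    MINIMAL = "minimal"         # voluntary codes of conduct only

@dataclass
class ClassificationRecord:
    system_name: str
    intended_purpose: str            # written description of the use case
    provider: str                    # who places the system on the market
    deployer: str                    # who uses it under their own authority
    accountable_owner: str           # named owner inside the organisation
    risk_class: RiskClass
    annex_iii_heading: Optional[str] # None unless the system is high-risk
    rationale: str                   # why it does or does not fall under each heading
    legal_countersign: Optional[str] = None  # name and date of the legal sign-off

# Example: a chatbot retasked to determine credit eligibility is high-risk,
# because creditworthiness evaluation appears in Annex III.
record = ClassificationRecord(
    system_name="credit-eligibility-assistant",
    intended_purpose="Assess creditworthiness of retail loan applicants",
    provider="ExampleVendor GmbH",
    deployer="Retail Lending function",
    accountable_owner="Head of Retail Lending",
    risk_class=RiskClass.HIGH,
    annex_iii_heading="Annex III, point 5(b): creditworthiness evaluation",
    rationale="Output materially affects access to an essential private service.",
)
```

One record per system, version-controlled alongside the inventory, gives the legal function a concrete artefact to counter-sign and gives auditors a diffable history of every re-classification.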

## What is the 25-item readiness checklist?

The checklist below is grouped into five sections. Each item is a binary question with a documented evidence path. A CIO should be able to walk through this in an afternoon for any single in-scope system, longer for the full inventory.

**Section A - Inventory and classification**

1. Is there a single, current inventory of every AI system in use across the organisation, including third-party platforms with embedded AI?
2. For each system, is there a written description of the intended purpose, the deployer, the provider, and the user population?
3. Is each system classified as prohibited, high-risk, limited-risk, or minimal-risk under the Act, with a written rationale?
4. For each high-risk system, is the specific Annex III heading documented?
5. Is there a named accountable owner inside the organisation for each system?

**Section B - Data and model governance**

6. For each high-risk system, is there a documented data-governance description covering data sources, lawfulness of processing under GDPR, classification, retention, and quality controls (Article 10)?
7. Is there a documented description of the training, validation, and test data sets used, with bias-examination evidence?
8. Is the model version pinned in production, with a recorded change-control process for upgrades?
9. For systems using third-party general-purpose AI models, is the provider's model card or technical documentation on file?
10. Is the data processing agreement with each AI sub-processor up to date, naming residency and security commitments?

**Section C - Technical documentation and logging**

11. For each high-risk system, is the technical documentation specified in Annex IV in preparation or complete (system description, design specifications, monitoring, logging, change records)?
12. Does the system automatically log events sufficient to ensure traceability over its lifecycle (Article 12)?
13. Are logs retained for the period required by the Act (at least six months, or longer where required by other Union or national law)?
14. Can any single output be reconstructed from the logs, including model version, retrieval context, and human-review decision? (A sketch of such a record follows this list.)
15. Are logs stored in a tenant-scoped, access-controlled environment with encryption in transit and at rest?

**Section D - Transparency, oversight, and accuracy**

16. Are users of the system informed in clear terms about its capabilities, limitations, and the conditions under which its output should not be relied on (Article 13)?
17. Where the system interacts with natural persons or generates synthetic content, are the Article 50 transparency obligations met?
18. Is effective human oversight designed into the workflow, with reviewers who have authority to override and the means to do so (Article 14)?
19. Are accuracy, robustness, and cybersecurity targets defined for each high-risk system, measured continuously, and reported (Article 15)?
20. Is there a post-market monitoring plan that defines drift detection, incident reporting, and re-evaluation cadence (Article 72)?

**Section E - Organisational readiness**

21. Is the AI literacy obligation under Article 4 met by appropriate training for staff who design, deploy, or oversee AI systems?
22. Is there a quality management system covering AI development and deployment (Article 17), or is ISO 42001 used as the equivalent management standard?
23. For systems that are placed on the market by the organisation as a provider, is the EU declaration of conformity drafted and the CE marking process planned?
24. Is there an incident-reporting process aligned with Article 73 for serious incidents and malfunctions?
25. Is the AI Act compliance file accessible to designated regulators on request, in the language of the relevant Member State?

Items that fail the binary test go onto a remediation backlog with a named owner and a target date. ENISA's good-practice frameworks, NIST's AI Risk Management Framework, and ISO/IEC 42001 are the supporting references that turn each binary into a defensible practice [5][6][7].
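To make items 12 to 15 concrete, here is a minimal sketch of a traceability record, assuming a retrieval-augmented system with a human-review step. The field names and JSON shape are illustrative; the Act mandates traceability outcomes, not a schema.

```python
import json
from datetime import datetime, timezone

def build_event_record(tenant_id: str, system: str, model_version: str,
                       prompt_id: str, retrieval_doc_ids: list[str],
                       output_id: str, reviewer: str | None = None,
                       review_decision: str | None = None) -> str:
    """Assemble one traceability record per output, serialised as JSON.

    With one record per output, any single output can later be
    reconstructed: which model version produced it, what retrieval
    context it saw, and what the human reviewer decided (items 12-14).
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tenant_id": tenant_id,          # supports tenant-scoped storage (item 15)
        "system": system,
        "model_version": model_version,  # the pinned version (item 8)
        "prompt_id": prompt_id,          # a reference, not the raw content
        "retrieval_context": retrieval_doc_ids,
        "output_id": output_id,
        "human_review": {"reviewer": reviewer, "decision": review_decision},
    }
    return json.dumps(record)

# One record per output, shipped to an access-controlled, encrypted store
# and retained for at least six months (item 13).
print(build_event_record(
    tenant_id="acme-eu", system="credit-eligibility-assistant",
    model_version="vendor-model-2026-03", prompt_id="p-8842",
    retrieval_doc_ids=["doc-114", "doc-287"], output_id="o-5531",
    reviewer="j.doe", review_decision="approved",
))
```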

## What does compliance look like in practice for a single high-risk system?

For a single high-risk system, the compliance file has a predictable shape. Article 8 requires the system to meet the Section 2 requirements taken as a whole. Article 9 requires a risk-management system that runs across the lifecycle and identifies, evaluates, and mitigates known and foreseeable risks. Article 10 requires data governance, including bias examination of training data. Article 11 requires technical documentation in line with Annex IV, prepared before the system is placed on the market. Article 12 requires automatic logging. Article 13 requires transparency for users. Article 14 requires human oversight. Article 15 requires accuracy, robustness, and cybersecurity. Article 17 requires a quality management system. Article 72 requires post-market monitoring. Article 73 requires reporting of serious incidents.

In a real organisation, this translates into a structured set of artefacts: a risk register, a data inventory, a model card, a logging schema specification, a user-facing description, a human-oversight design document, a measurement plan with current results, a quality-management procedure or an ISO 42001 statement of applicability, a monitoring plan, and an incident-response runbook. Each artefact has a named owner. Each is reviewed on a documented cadence. The file is not a one-time submission; it is a living set of documents that change as the system changes.

The file does double duty. It satisfies the AI Act, and it is the same file that supports DPIAs under GDPR, ISO 42001 audits, customer assurance questionnaires, and internal audit reviews. Building it once for the AI Act and re-using it across these regimes is the most efficient compliance posture available. The manifest sketched below shows one way to track the artefacts, their owners, and their review cadences.
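One way to keep the "named owner, documented cadence" discipline honest is to hold the manifest as data and check it mechanically. The sketch below assumes the artefact names from the paragraph above; the owners and cadences are placeholders, not recommendations.

```python
from datetime import date, timedelta

# Compliance-file manifest for one high-risk system: each artefact carries
# a named owner and a review cadence in days. All values are placeholders.
MANIFEST = {
    "risk_register":           {"owner": "Head of Risk",    "cadence_days": 90},
    "data_inventory":          {"owner": "DPO",             "cadence_days": 90},
    "model_card":              {"owner": "ML Lead",         "cadence_days": 90},
    "logging_schema":          {"owner": "Platform Lead",   "cadence_days": 180},
    "user_facing_description": {"owner": "Product Owner",   "cadence_days": 180},
    "human_oversight_design":  {"owner": "Operations Lead", "cadence_days": 180},
    "measurement_plan":        {"owner": "ML Lead",         "cadence_days": 90},
    "qms_procedure":           {"owner": "Compliance Lead", "cadence_days": 365},
    "monitoring_plan":         {"owner": "Platform Lead",   "cadence_days": 90},
    "incident_runbook":        {"owner": "Security Lead",   "cadence_days": 180},
}

def overdue_reviews(last_reviewed: dict[str, date], today: date) -> list[str]:
    """Return the artefacts whose documented review cadence has lapsed."""
    return [
        name for name, meta in MANIFEST.items()
        if today - last_reviewed.get(name, date.min) > timedelta(days=meta["cadence_days"])
    ]

# An artefact never reviewed, or reviewed too long ago, lands on the backlog.
print(overdue_reviews({"risk_register": date(2026, 1, 10)}, date(2026, 4, 27)))
```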

## What does the Act require for organisations using general-purpose AI models?

From August 2025, providers of general-purpose AI models face their own obligations under Article 53 and following: technical documentation, summaries of training content, copyright policy, and additional obligations for models with systemic risk. Most enterprise CIOs are not providers of GPAI models. They are deployers building on top of them. The deployer's obligations flow through the relationship with the provider. For deployers, the practical questions are: which GPAI models are in our stack, what does the provider's published documentation cover, what are the contractual commitments on data residency and on training-data use, and what is our fallback if the provider changes the terms or withdraws the model. The AI Act formalises the documentation chain that responsible deployers were already running. It also creates a stronger procurement position, because a provider that cannot answer the documentation questions has a regulatory problem, not just a commercial one. The European Commission's AI Office has been publishing GPAI codes of practice through 2025 with industry input. CIOs procuring GPAI capacity should be tracking the codes that apply to their providers and asking, in writing, how the provider intends to comply [2].

## How does Impetora help organisations through the readiness work?

We run a TRACE readiness audit as a paid two-to-four-week scoping engagement. The deliverable is a written file that includes an AI inventory, a risk classification per system, a gap analysis against the 25-item checklist, a target-state architecture for the highest-priority systems, and a remediation plan with named owners and target dates. The file is reviewable by the DPO, the security lead, and internal audit. From there, engagements split into delivery work on specific in-scope systems, ongoing operate work for systems already in production, and advisory work supporting the compliance function through the August 2026 application date. We do not certify or audit; that work belongs to notified bodies and accredited assessors. We design and build the systems and the documentation that survive their reviews. If you would like to walk the 25-item checklist against your own AI inventory, the intake form is the only path in. We reply within one business day with a written next step.

## Frequently asked questions

### Does the EU AI Act apply to non-EU companies?

Yes, where they place AI systems on the EU market or where the output of the system is used in the EU. A US-headquartered software company that sells AI-enabled tooling to European customers is in scope. So is a UK firm whose AI's output is consumed by users in the EU. Geography is determined by the destination of the system or its output, not by the organisation's headquarters.

### Is everything that uses a large language model automatically high-risk?

No. Risk classification is determined by the use case under Annex III, not by the underlying technology. A customer-support chatbot that answers product questions is typically limited-risk. The same chatbot, retasked to determine credit eligibility, becomes high-risk because credit scoring is in Annex III. Re-classify whenever the use case changes, even when the technology stays the same.

### Can ISO 42001 certification be used to satisfy AI Act obligations?

ISO 42001 covers the management-system aspects of AI governance and overlaps substantially with Article 17. It does not certify conformity with the high-risk system obligations themselves, which are about the system rather than the management system. In practice, organisations running ISO 42001 will have produced most of the process artefacts the AI Act expects, but they still need system-specific evidence for each high-risk system on top.

### What is the penalty for non-compliance with the EU AI Act?

Article 99 sets administrative fines up to 35 million euros or 7% of total worldwide annual turnover for prohibited-practice violations, up to 15 million euros or 3% for non-compliance with most other obligations, and up to 7.5 million euros or 1% for supplying incorrect information. Member States may set additional penalties. The exposure for a large enterprise is material and enforcement responsibility sits with national competent authorities coordinated through the AI Office.

### Do small and mid-sized enterprises get any relief under the Act?

Article 62 instructs national competent authorities and the Commission to take SME interests into account, including reduced fees for conformity assessment and access to AI regulatory sandboxes. The substantive obligations on high-risk systems still apply. The relief is procedural and supportive, not exemption from the rules.

### Where does the EU AI Act overlap with GDPR?

Substantially. Article 10 on data governance, the bias-examination duty, and the human-oversight obligation all interact with GDPR principles on lawfulness, fairness, and automated decision-making. The Act is explicitly without prejudice to GDPR (Article 2(7)), and Article 26 requires deployers of high-risk systems to use the provider's information when carrying out their GDPR data protection impact assessments. In practice, the DPIA, the AI Act technical documentation, and the ISO 42001 management-system records share most of their content. Treat them as one compliance posture, not three.

### What is the right cadence for re-running the readiness checklist?

Run the full 25 items against the AI inventory at least annually, and against any individual system whenever the use case, the data sources, the model version, or the deployment scope changes materially. Items 11 to 15 (technical documentation and logging) and items 18 to 20 (oversight, accuracy, monitoring) typically need updating more often than the inventory and classification items. ENISA's frameworks suggest a quarterly internal review for high-risk systems, which is consistent with what most regulators expect.

## Sources cited

1. Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
2. AI Office and AI Act implementation - guidance and codes of practice. European Commission, 2025-onwards. https://digital-strategy.ec.europa.eu/en/policies/ai-office
3. AI Act Explorer. Future of Life Institute, ongoing. https://artificialintelligenceact.eu/ai-act-explorer/
4. Annex III to Regulation (EU) 2024/1689 - High-risk AI systems. European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
5. Multilayer framework for good cybersecurity practices for AI. ENISA, 2023-06. https://www.enisa.europa.eu/publications/multilayer-framework-for-good-cybersecurity-practices-for-ai
6. AI Risk Management Framework (NIST AI 100-1) and Generative AI Profile (NIST AI 600-1). NIST, 2023-2024. https://www.nist.gov/itl/ai-risk-management-framework
7. ISO/IEC 42001:2023 Artificial intelligence management system. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
