
ISO/IEC 42001 to EU AI Act mapping: a clause-by-article crosswalk

By Impetora

ISO/IEC 42001:2023 is the first international management-system standard for artificial intelligence. Its clauses and Annex A controls overlap substantially with the EU AI Act's design-time and operational obligations, which is why ISO 42001 certification is increasingly used as a procurement reference and an audit-readiness baseline [1][2]. ISO 42001 is not a substitute for AI Act conformity - the Act has system-level obligations the standard does not certify - but the management-system controls it requires cover most of the organisational and process work the Act expects to find in place.

At a glance: 10 clauses in ISO/IEC 42001:2023 (high-level structure); 9 Annex A control objectives; 38 Annex A controls. High-risk obligations under the AI Act apply from 2 August 2026 (Article 113).

What is ISO/IEC 42001 and why does it matter for the AI Act?

ISO/IEC 42001:2023, published in December 2023, specifies requirements for an artificial intelligence management system (AIMS). It follows the standard ISO management-system structure (Clauses 4-10) and is designed to coexist with ISO 9001 quality, ISO 27001 information security, and ISO 27701 privacy management systems. The standard's Annex A defines nine control objectives and 38 specific controls covering AI policies, internal organisation, resources, impact assessment, the AI lifecycle, data, information for interested parties, responsible use, and third-party and customer relationships [1].

The standard does not prescribe technical AI methods. It prescribes the management system that surrounds the AI work: how an organisation sets policy, allocates roles, identifies AI risks, manages data, controls the lifecycle, communicates with users, and improves over time. The certification is awarded by an accredited certification body and attests that the management system meets the standard's requirements.

For AI Act preparation, ISO 42001 matters because the Act's organisational obligations - quality management system under Article 17, data governance under Article 10, post-market monitoring under Article 72, transparency under Article 13 - all map to ISO 42001 clauses or controls. A vendor that holds 42001 certification has built the management system the Act expects. The Act's product-level obligations - per-system conformity assessment, technical documentation, registration in the EU database - are still separate work the standard does not certify.

How is ISO 42001 structured?

The standard follows the ISO Harmonized Structure:

Clause 4: Context. Understanding the organisation and its stakeholders; scope of the AIMS.
Clause 5: Leadership. Top-management commitment, AI policy, roles and responsibilities.
Clause 6: Planning. Actions to address risks and opportunities, AI objectives, AI impact assessment.
Clause 7: Support. Resources, competence, awareness, communication, documented information.
Clause 8: Operation. Operational planning and control, AI risk treatment, AI system impact assessment, third-party relationships.
Clause 9: Performance evaluation. Monitoring, measurement, analysis, internal audit, management review.
Clause 10: Improvement. Nonconformity, corrective action, continual improvement.

Annex A contains the nine control objectives: A.2 policies related to AI, A.3 internal organisation, A.4 resources, A.5 assessing impacts, A.6 AI lifecycle, A.7 data, A.8 information for interested parties, A.9 use, A.10 third-party and customer relationships [2]. Each objective groups between two and nine specific controls.

How does Clause 6 map to AI Act risk management (Article 9)?

ISO 42001 Clause 6.1 requires the organisation to identify AI risks, analyse them, evaluate them, and plan actions to address them. Clause 6.1.4 specifically requires an AI system impact assessment that considers the system's potential consequences for individuals, groups, and society. This maps directly to AI Act Article 9, which requires a risk-management system established and documented across the lifecycle, identifying and analysing known and reasonably foreseeable risks, estimating and evaluating risks under intended use and reasonably foreseeable misuse, and adopting risk-management measures.

The 42001 impact assessment, when scoped to a specific high-risk system, can serve as the documentation backbone for the Article 9 risk-management file. The same applies to AI Act Article 27's fundamental rights impact assessment for deployers - 42001 Clause 6.1.4 plus Annex A.5.2 (AI system impact assessment) gives the procedural structure, with the FRIA's specific content layered on top [3].
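
One way to make the dual use concrete is a single record structure that serves both the Clause 6.1.4 impact assessment and the Article 9 risk file. The sketch below is hypothetical - all class and field names are ours, and neither text prescribes a format:

```python
# A minimal sketch of a combined impact-assessment / risk-file record.
# Illustrative only: field names and scales are organisation-defined.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskEntry:
    hazard: str                   # known or reasonably foreseeable risk (Art. 9)
    context: str                  # "intended use" or "reasonably foreseeable misuse"
    affected_parties: List[str]   # individuals, groups, society (Clause 6.1.4 scope)
    severity: str                 # organisation-defined scale, e.g. low/medium/high
    likelihood: str
    measures: List[str]           # adopted risk-management measures (Art. 9)
    residual_risk_accepted: bool  # sign-off that residual risk is judged acceptable

@dataclass
class ImpactAssessment:
    system_id: str
    lifecycle_stage: str          # the assessment repeats across the lifecycle
    risks: List[RiskEntry] = field(default_factory=list)

    def open_items(self) -> List[RiskEntry]:
        """Risks still lacking an accepted residual-risk sign-off."""
        return [r for r in self.risks if not r.residual_risk_accepted]
```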


How does Annex A.7 map to AI Act data governance (Article 10)?

ISO 42001 Annex A.7 covers the data lifecycle. A.7.2 requires data for AI systems to be acquired, prepared, and managed in alignment with the AI policy and intended purpose. A.7.3 covers the acquisition of data. A.7.4 requires data quality management. A.7.5 requires the provenance of data to be documented. A.7.6 requires data preparation methods to be documented. Protection of personal data within the AIMS is handled through coordination with the privacy management system rather than by a dedicated A.7 control.

AI Act Article 10 requires training, validation, and testing data sets to be subject to data governance and management practices appropriate to the intended purpose, addressing: relevant design choices; data collection processes and the origin of the data; data preparation operations such as annotation, labelling, cleaning, updating, enrichment, and aggregation; formulation of assumptions; examination in view of possible biases; identification of data gaps or shortcomings; and appropriate measures to detect, prevent, and mitigate possible biases.

The mapping is direct: A.7.2 and A.7.3 cover the design-choice, collection, and origin work, A.7.4 covers quality, A.7.5 covers provenance, A.7.6 covers preparation operations such as labelling and cleaning, and the bias-evaluation work needs an additional control or procedure beyond what 42001 explicitly requires. Most certified organisations layer a separate bias-evaluation procedure on top of A.7 to close this gap.
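
A per-dataset governance record can carry the A.7 evidence and the extra bias-evaluation step in one artefact. The sketch below is illustrative - the field and function names are ours, not from the standard or the Act:

```python
# A minimal sketch of a per-dataset governance record with an Article 10
# gap check. Illustrative structure, not a prescribed format.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DatasetRecord:
    name: str
    intended_purpose: str           # alignment with AI policy and purpose (A.7.2)
    acquisition_source: str         # how and where the data was acquired (A.7.3)
    quality_checks: List[str]       # data quality management evidence (A.7.4)
    provenance: str                 # documented origin and chain of custody (A.7.5)
    preparation_steps: List[str]    # annotation, labelling, cleaning, ... (A.7.6)
    bias_evaluation: Optional[str]  # extension beyond A.7; Article 10 bias work

def article10_gaps(record: DatasetRecord) -> List[str]:
    """Flag the items Article 10 expects that are still undocumented."""
    gaps = []
    if not record.provenance:
        gaps.append("data origin/provenance undocumented")
    if not record.preparation_steps:
        gaps.append("preparation operations undocumented")
    if record.bias_evaluation is None:
        gaps.append("bias evaluation missing (not covered by A.7 alone)")
    return gaps
```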

How does Annex A map to human oversight (Article 14)?

AI Act Article 14 requires high-risk systems to be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which they are in use. The oversight measures must enable individuals to understand the relevant capacities and limitations of the system, remain aware of automation bias, correctly interpret the system's output, decide not to use the output or override it, and intervene or stop the system through a stop button or similar procedure.

ISO 42001 covers the management-system side of this through Clause 5.3 (roles and responsibilities), Annex A.3.2 (AI roles and responsibilities), Annex A.4.6 (human resources), and Annex A.6.2.3 (documentation of AI system design and development). The architectural design of human-oversight controls - which decisions the model makes, which the human makes, what the override paths are - is engineering work that 42001 does not specify directly but expects to be covered under Annex A.6 (AI lifecycle). A vendor with 42001 certification has documented who has oversight authority; a vendor with AI Act Article 14 readiness has designed the system so that authority is operationally meaningful.
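
To illustrate the distinction, here is a sketch of what the engineering side could look like, assuming a design where the model proposes and a named human role can reject or stop. The names are ours; Article 14 leaves the concrete mechanism to the provider:

```python
# A minimal sketch of an override path: output is used only after a human
# gate clears it, and a stop procedure can interrupt at any point.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    model_output: str
    confidence: float

def decide(d: Decision,
           human_review: Callable[[Decision], bool],
           stop_requested: Callable[[], bool]) -> Optional[str]:
    if stop_requested():        # interrupt via a stop procedure (Art. 14)
        return None
    if not human_review(d):     # human can decide not to use or to override
        return None
    return d.model_output       # output is used only after oversight clears it

# Example: route low-confidence outputs to a reviewer who rejects them.
result = decide(Decision("approve", 0.55),
                human_review=lambda d: d.confidence >= 0.8,
                stop_requested=lambda: False)
print(result)  # None - the human gate rejected the low-confidence output
```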

How does Clause 9 map to post-market monitoring (Article 72)?

ISO 42001 Clause 9.1 requires the organisation to determine what needs to be monitored and measured, the methods, the frequency, the analysis, and the evaluation of results. Clause 9.2 requires internal audits at planned intervals. Clause 9.3 requires management review with specific inputs including changes in the external context, performance information, audit results, and improvement opportunities.

AI Act Article 72 requires providers of high-risk systems to establish and document a post-market monitoring system that actively and systematically collects, documents, and analyses relevant data on the system's performance throughout its lifetime. The post-market monitoring plan must enable the provider to evaluate continuous compliance with the requirements of Chapter III, Section 2. Article 73 adds the serious-incident reporting deadlines: 15 days for general serious incidents, 10 days for cases involving death, 2 days for incidents involving widespread infringement.


The mapping: Clause 9.1 plus a system-specific monitoring plan covers most of Article 72. The serious-incident reporting workflow under Article 73 needs an explicit procedure that 42001 does not prescribe in detail but expects to find under Clause 10.2 (corrective action). Most certified organisations operationalise Article 73 as a named procedure with a deadlined escalation path and a designated authority interface [4].
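A sketch of the core of such a deadlined escalation path, assuming incidents are classified at intake - the enum and function are illustrative, while the day counts come from Article 73:

```python
# A minimal sketch of Article 73 reporting-deadline computation.
from datetime import date, timedelta
from enum import Enum

class IncidentClass(Enum):
    SERIOUS = 15                  # general serious incident: 15 days
    DEATH = 10                    # incident involving death: 10 days
    WIDESPREAD_INFRINGEMENT = 2   # widespread infringement: 2 days

def reporting_deadline(awareness_date: date, cls: IncidentClass) -> date:
    """Latest date to notify the market surveillance authority (Art. 73)."""
    return awareness_date + timedelta(days=cls.value)

# Example: awareness of a death-related incident on 1 March 2027.
print(reporting_deadline(date(2027, 3, 1), IncidentClass.DEATH))  # 2027-03-11
```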

How does Annex A.6 map to technical documentation (Article 11 + Annex IV)?

ISO 42001 Annex A.6 covers the AI lifecycle. A.6.1.2 and A.6.1.3 set the objectives and processes for responsible AI system design and development. A.6.2 then covers the lifecycle stages themselves: requirements and specification (A.6.2.2), documentation of design and development (A.6.2.3), verification and validation (A.6.2.4), deployment (A.6.2.5), operation and monitoring (A.6.2.6), and technical documentation (A.6.2.7).

AI Act Article 11 requires technical documentation drawn up before the system is placed on the market and kept up to date. Annex IV specifies the nine sections: general description; detailed description of the elements and development process, including data requirements; monitoring, functioning, and control; appropriateness of performance metrics; the risk-management system; lifecycle changes; harmonised standards applied; a copy of the EU declaration of conformity; and the post-market monitoring plan. The 42001 lifecycle documentation under Annex A.6 covers the procedural work for sections 1-4 and 6. Sections 5 (risk management), 7 (standards applied), 8 (declaration of conformity), and 9 (post-market monitoring plan), together with the data-governance content inside section 2, need additional system-specific files that 42001 expects to find under the broader management system.
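
As an illustration, a completeness check over the nine sections - the section titles are paraphrased from Annex IV, and the structure below is a hypothetical sketch, not a prescribed format:

```python
# A minimal sketch of an Annex IV documentation-pack completeness check.
from typing import Dict, List

ANNEX_IV_SECTIONS: Dict[int, str] = {
    1: "General description of the AI system",
    2: "Detailed description of elements and development process",
    3: "Monitoring, functioning and control",
    4: "Appropriateness of performance metrics",
    5: "Risk management system (Article 9)",
    6: "Changes made through the lifecycle",
    7: "Harmonised standards applied",
    8: "Copy of the EU declaration of conformity",
    9: "Post-market monitoring plan (Article 72)",
}

def missing_sections(pack: Dict[int, str]) -> List[str]:
    """Return the Annex IV sections not yet present in a documentation pack."""
    return [title for num, title in ANNEX_IV_SECTIONS.items() if num not in pack]

# Example: a pack where only the lifecycle-derived sections are drafted.
draft = {1: "general.md", 2: "design.md", 3: "monitoring.md", 4: "metrics.md"}
for gap in missing_sections(draft):
    print("missing:", gap)
```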

For procurement, the practical question is whether the vendor can produce the Annex IV pack as a deliverable - templated, customised to the buyer's workload, retained for 10 years. A 42001-certified vendor has the infrastructure to do this. A non-certified vendor can still produce the pack but the procurement risk is higher because there is no third-party attestation that the management system is in place.

What does ISO 42001 not cover that the AI Act requires?

Three categories. First, per-system conformity assessment: the AI Act requires a documented conformity assessment for each high-risk system before it goes to market, ending in an EU declaration of conformity, CE marking, and registration in the EU database. ISO 42001 certifies the management system, not individual products, so the conformity assessment is separate, system-specific work. Second, prohibited-practice analysis: Article 5 prohibits eight categories of AI practice, carrying the Act's highest penalty tier (up to EUR 35 million or 7% of worldwide annual turnover). ISO 42001 does not enumerate these prohibitions, so organisations need a specific control or procedure that screens new AI systems against Article 5 before they enter the lifecycle.
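
A sketch of such a screening gate - the question texts are paraphrases of Article 5(1)(a)-(h), and a real procedure would quote the Act verbatim and record reviewer sign-off:

```python
# A minimal sketch of a lifecycle entry gate for Article 5 screening.
from typing import List

# Paraphrases of the eight prohibited practices in Article 5(1)(a)-(h).
ARTICLE_5_SCREEN: List[str] = [
    "Subliminal or purposefully manipulative techniques?",
    "Exploitation of vulnerabilities (age, disability, social situation)?",
    "Social scoring leading to detrimental or unfavourable treatment?",
    "Predicting criminal offences from profiling or personality traits alone?",
    "Untargeted scraping of facial images for recognition databases?",
    "Emotion inference in workplaces or education institutions?",
    "Biometric categorisation inferring sensitive attributes?",
    "Real-time remote biometric identification in public for law enforcement?",
]

def passes_gate(answers: List[bool]) -> bool:
    """Gate passes only if every screening question is answered No (False)."""
    if len(answers) != len(ARTICLE_5_SCREEN):
        raise ValueError("answer every screening question")
    return not any(answers)
```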

Third, specific transparency obligations under Article 50: ISO 42001 Annex A.8 covers information for interested parties at the management-system level, but the Article 50 requirements for chatbot disclosure, AI-generated content marking, deepfake labelling, and emotion-recognition disclosure to exposed individuals are specific UX-level obligations that need engineering implementation. The standard expects the management system to ensure these are addressed but does not prescribe the detailed implementation.

For the broader AI Act context, see EU AI Act overview, risk classification, and conformity assessment. For the underlying methodology that integrates 42001 work into delivery, see TRACE.

Frequently asked questions

Is ISO 42001 certification required for AI Act compliance?
No. ISO 42001 is a voluntary standard. The AI Act does not require any specific certification, though it establishes a presumption of conformity for systems that comply with harmonised standards published in the Official Journal under Article 40. Whether ISO 42001 will be formally harmonised under the Act is being worked out in CEN-CENELEC JTC 21. In the interim, 42001 certification is a strong procurement signal and an audit-readiness baseline, but it is not a legal substitute for AI Act conformity.
Can a 42001 management system cover both ISO 42001 and AI Act requirements?
Yes, with extensions. The 42001 management system covers most of the organisational and process obligations under the AI Act. Extensions are needed for prohibited-practice screening, system-specific conformity assessments, technical documentation packs aligned with Annex IV, the EU declaration of conformity, CE marking, EU database registration, and the serious-incident reporting workflow under Article 73. Most organisations operate an integrated management system where 42001 sits alongside ISO 27001, ISO 9001, and the AI Act-specific procedures.
How long does it take to achieve ISO 42001 certification?
For an organisation with an existing ISO 27001 or ISO 9001 management system, six to nine months from project start to certification audit is realistic, depending on scope and the maturity of existing AI controls. For an organisation building a management system from scratch, twelve to eighteen months is more typical. The certification audit itself is a two-stage process: a Stage 1 readiness review and a Stage 2 main audit, with the certificate issued after Stage 2 if no major non-conformities are identified.
How does ISO 42001 relate to NIST AI RMF?
They are complementary. NIST's AI Risk Management Framework, published in January 2023, is a voluntary framework structured around four core functions - Govern, Map, Measure, Manage. It is widely used in the United States and is technically rich on AI risk identification and measurement methods. ISO 42001 is a certifiable management-system standard. Many organisations use NIST AI RMF as the technical methodology that operates inside the 42001 management system, with the Govern function aligning to 42001 Clauses 4-5, Map aligning to Clauses 6 and 8, Measure aligning to Clause 9, and Manage spanning Clauses 8 and 10. NIST and ISO have published a crosswalk supporting this integration.
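For teams operating both, the alignment can be captured as a simple lookup - a sketch following the pairings above; the dict structure is ours, not NIST's or ISO's:

```python
# A rough sketch of the RMF-to-42001 alignment described above, as a
# lookup usable in gap analysis. Illustrative pairings per the prose.
from typing import Dict, List

RMF_TO_42001: Dict[str, List[str]] = {
    "Govern":  ["Clause 4", "Clause 5"],
    "Map":     ["Clause 6", "Clause 8"],
    "Measure": ["Clause 9"],
    "Manage":  ["Clause 8", "Clause 10"],
}
```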
Does 42001 certification cover GDPR obligations on AI?
Partially. 42001 expects protection of personal data within the AIMS to be coordinated with the privacy management system, and it expects an organisation to have GDPR work in place, but it does not certify GDPR compliance. ISO 27701, the privacy management-system extension to ISO 27001, is the standard most often used for GDPR-aligned certification. A typical regulated-industry vendor stack is ISO 27001 plus ISO 27701 plus ISO 42001 plus AI Act-specific procedures, all integrated under a single management system.
Can SMEs realistically achieve 42001 certification?
Yes, with proportionality. ISO management-system standards are designed to be applied proportionate to the size and complexity of the organisation. A small AI vendor with focused scope can build a 42001-aligned management system that is documented in a small number of policies and procedures, with annual internal audits and a single certification body engagement. The cost and timeline are scaled accordingly. The substantive work - AI policy, AI risks, data governance, lifecycle management - is the same in form but smaller in volume.
What is the audit evidence a buyer should ask for?
Six items cover most of the procurement diligence. The current ISO 42001 certificate (scope, certification body, validity dates). The Statement of Applicability listing which Annex A controls are in place and any exclusions with justification. The most recent management review minutes. The most recent internal audit report. The corrective action register for any open non-conformities. The AI policy. These six together let a buyer assess whether the certification is current, which controls are operating, what improvement work is in progress, and whether the management system is genuinely live or paper-only.


Sources cited

  1. ISO/IEC 42001:2023 - AI management systems. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
  2. Regulation (EU) 2024/1689 (Articles 9-15, 17, 27, 50, 72, Annex IV). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
  3. AI Act Explorer - obligations and annexes. Future of Life Institute, 2024-08. https://artificialintelligenceact.eu/the-act/
  4. Multilayer framework for good cybersecurity practices for AI. ENISA, 2023-06. https://www.enisa.europa.eu/publications/multilayer-framework-for-good-cybersecurity-practices-for-ai
  5. AI Risk Management Framework. NIST, 2023-01. https://www.nist.gov/itl/ai-risk-management-framework
  6. Regulatory framework for artificial intelligence. European Commission, DG CNECT, 2026-01. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  7. EDPB guidelines and recommendations. European Data Protection Board, 2026-01. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.