Impetora

ISO/IEC 42001 implementation guide: what enterprises actually have to build in 2026

By Impetora

ISO/IEC 42001:2023 is the world's first management system standard for artificial intelligence. It is voluntary, certifiable through accredited bodies, and structurally aligned with ISO 9001 and ISO 27001 - so an enterprise that already runs a quality or information-security management system can graft 42001 onto the same Annex SL backbone [1]. It is not a substitute for EU AI Act conformity assessment, but it is the documentation engine that makes Act compliance feasible at scale.

Key figures (source: ISO):
- 10 clauses in the Annex SL structure
- 38 Annex A controls in 42001
- 9-18 months typical certification timeline
- December 2023 publication date of 42001

What is ISO/IEC 42001 and why was it published?

ISO/IEC 42001:2023, published in December 2023, specifies the requirements for an artificial intelligence management system (AIMS). It is built on the same Annex SL high-level structure as ISO 9001 (quality), ISO 27001 (information security) and ISO 27701 (privacy), so the clauses on context, leadership, planning, support, operation, performance evaluation and improvement are familiar to anyone who has built a management system before [1].

The standard was developed by ISO/IEC JTC 1/SC 42, the joint technical committee that has spent the last decade building the AI vocabulary, risk-management and trustworthiness standards (ISO/IEC 22989, 23053, 23894, 24028 and others). It complements ISO/IEC 23894:2023 on AI risk management, which provides the technical methodology that the management system in 42001 operationalises [2].

Why now. Regulators in the EU, the US, the UK and Singapore wanted a market-ready certification that procurement teams could ask vendors to produce. ISO 42001 fills that role. It does not replace the EU AI Act, NIST AI RMF or any sector regulator. It gives a certifiable, audit-ready scaffold that maps onto all of them.

What are the 38 Annex A controls and how do they translate to engineering work?

Annex A of 42001 lists 38 reference controls grouped under nine objectives: policies related to AI, internal organisation, resources for AI systems, assessing impacts of AI systems, the AI system life cycle, data for AI systems, information for interested parties, use of AI systems, and third-party and customer relationships. Each control is stated in one or two deliberately abstract sentences; the implementation guidance sits in Annex B.

In engineering terms, the controls map to roughly seven workstreams. A documented AI policy and roles register. A model and data inventory with risk classification per system. A documented impact assessment per high-risk system covering individuals, groups and society. A data-governance pack covering provenance, quality, bias and special categories, aligned with ISO/IEC 5259 and ISO/IEC 23894. A lifecycle process covering development, validation, deployment, monitoring and retirement, with named gates. A logging and monitoring stack with retention rules. A third-party register covering model providers, hosting, training-data sources and downstream users.
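The inventory workstream is the one most teams build first, because everything else keys off it. A minimal sketch of what an inventory record with risk classification might look like; the field names and example values are illustrative, not prescribed by ISO/IEC 42001:

```python
from dataclasses import dataclass, field

# Hypothetical AI system inventory record. ISO/IEC 42001 does not mandate
# a schema; it requires that systems are identified, risk-classified and
# evidenced. The classification_rationale field exists so the audit trail
# survives even for systems judged low-risk.
@dataclass
class AISystemRecord:
    system_id: str
    owner: str                      # a named role, not a team alias
    risk_class: str                 # e.g. "high", "limited", "minimal"
    classification_rationale: str   # why this class was assigned
    data_sources: list = field(default_factory=list)
    third_parties: list = field(default_factory=list)

inventory = [
    AISystemRecord(
        system_id="credit-scoring-v3",
        owner="Head of Retail Risk",
        risk_class="high",
        classification_rationale="Creditworthiness scoring falls under "
                                 "Annex III of the EU AI Act.",
        data_sources=["core-banking", "bureau-feed"],
        third_parties=["model-hosting-provider"],
    ),
]

# The high-risk slice of the inventory drives the impact-assessment queue.
high_risk = [r.system_id for r in inventory if r.risk_class == "high"]
print(high_risk)
```

Whatever the actual tooling (a GRC platform, a spreadsheet, a registry service), the auditable unit is the same: one record per system, with an owner, a class and a written rationale.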


Annex B and the related ISO/IEC 42005 (AI system impact assessment, published 2025) give worked examples for each control. Buyers should ask prospective vendors which Annex A controls map to which internal procedure, and the vendor should be able to produce that mapping table on request [3].
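The mapping table itself can be a simple keyed lookup. A sketch of the shape a buyer might request, using the Annex A numbering from 42001 but with invented internal document names:

```python
# Hypothetical control-to-procedure mapping. The A.x identifiers follow
# ISO/IEC 42001 Annex A; the procedure codes and titles are illustrative.
control_map = {
    "A.2 Policies related to AI": "POL-AI-001 AI Policy",
    "A.6 AI system life cycle": "PRC-ML-010 Model Lifecycle Procedure",
    "A.7 Data for AI systems": "PRC-DG-020 Data Governance Pack",
    "A.8 Information for interested parties": "PRC-TR-030 Transparency Procedure",
}

def unmapped(controls, mapping):
    """Return Annex A controls with no documented internal procedure."""
    return [c for c in controls if c not in mapping]

# A gap analysis is then a one-liner: which in-scope controls lack a home?
in_scope = list(control_map) + ["A.5 Assessing impacts of AI systems"]
print(unmapped(in_scope, control_map))
```

Running the gap check above would flag the impact-assessment objective as unmapped, which is exactly the kind of finding a stage-1 audit surfaces.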

What does an enterprise implementation roadmap actually look like?

A typical mid-to-large enterprise running its first 42001 programme follows a four-phase pattern across nine to eighteen months.
Phase one (months 0-3): scope, gap analysis against existing 9001/27001/27701 documentation, AI policy, roles and responsibilities, AI system inventory.
Phase two (months 3-6): impact assessments on the priority systems, data-governance pack, lifecycle process, third-party register, integration with existing risk-management committees.
Phase three (months 6-9): operational evidence period covering monitoring, internal audit and management review, with at least one full Plan-Do-Check-Act cycle visible in the records.
Phase four (months 9-18): stage-1 audit (documentation review), stage-2 audit (operational evidence), certification decision, then annual surveillance audits and recertification at year three.

The cost driver is not the audit fee, which is typically in the EUR 25-80k range for a mid-sized organisation depending on scope and accreditation body. It is the internal effort to produce evidence for the controls and to operationalise the lifecycle process across product teams. Organisations that already operate a mature ISO 27001 ISMS can typically reuse 40-60% of the management system content. Organisations starting cold should budget meaningfully more.

How does ISO 42001 relate to the EU AI Act?

This is the question every regulated-industry buyer asks first, and the answer is that they are complements, not substitutes. ISO/IEC 42001 is a voluntary management-system standard with third-party certification through accredited bodies. The EU AI Act, Regulation (EU) 2024/1689, is mandatory law for systems placed on or used in the EU market, with risk-class-specific obligations enforced by national competent authorities and the European Commission's AI Office [4].

The overlap is real. The Act's Article 9 (risk management), Article 10 (data governance), Article 11 (technical documentation), Article 12 (record-keeping), Article 14 (human oversight), Article 15 (accuracy, robustness, cybersecurity) and Article 17 (quality management system) all map to clauses or Annex A controls in 42001. CEN-CENELEC JTC 21 is developing harmonised European standards in support of the Act, and the working drafts confirm that 42001 is one of the foundational references being woven into the harmonised text.

The practical consequence: 42001 certification is not a presumption of conformity with the Act on its own, but it is the management-system layer that makes Act compliance auditable. A vendor with 42001 certification has built the documentation engine. A vendor without it is rebuilding that engine for every conformity assessment.

What are the most common ISO 42001 implementation mistakes?

Five patterns recur in early implementations. First, treating the AI policy as a marketing document rather than as a governance instrument with named owners and decision rights. Second, scoping too broadly on the first cycle - certifying the entire enterprise instead of a defined business unit or product line, which makes evidence collection unmanageable. Third, skipping the impact assessment for "low-risk" systems without writing down why they were classified that way, which fails the audit trail. Fourth, treating the data-governance pack as static documentation rather than as live evidence (provenance records, quality reports, bias monitoring) that updates with each release. Fifth, deferring the human-oversight design until after deployment, which contradicts both 42001 and the EU AI Act's expectation that oversight is a design constraint.

ENISA, the EU agency for cybersecurity, has published guidance on AI security that maps cleanly onto the cybersecurity expectations within 42001 Annex A and Article 15 of the AI Act, and is a useful cross-reference during implementation [5].

How does Impetora support ISO 42001 implementations?

Impetora's TRACE methodology was designed around the same documentation backbone that 42001 requires. Trust covers the policy, residency and audit-trail layer. Readiness covers the data and workflow audit - which is the input that the impact assessment, data-governance pack and lifecycle process all depend on. Architecture covers the production-grade design with logging, monitoring and recoverability that Annex A controls A.6 (lifecycle) and A.8 (information for interested parties) expect. Citations and Evidence covers the traceability layer that makes the system auditable on request.

For enterprises starting their 42001 programme, the practical path is to scope the first cycle around two to three live AI systems, run the impact assessments and lifecycle process on those, then expand to the rest of the estate in cycle two. The audit body wants to see a working management system, not a paper one. The evidence has to be lived for at least one PDCA cycle before stage-2 will pass.

Frequently asked questions

Is ISO/IEC 42001 mandatory?
No. ISO 42001 is a voluntary management-system standard. It becomes effectively required only when a customer, partner, regulator or tender RFP names it as a procurement requirement, which is now happening regularly in financial services, healthcare and EU public-sector tenders. It is also frequently cited in vendor due-diligence questionnaires as evidence that the supplier operates a recognised AI governance framework.
How long does ISO 42001 certification take?
Nine to eighteen months from kickoff for a mid-sized organisation, depending on starting maturity. Organisations with an existing ISO 27001 information security management system can typically certify in nine to twelve months because the Annex SL backbone and many supporting procedures already exist. Organisations starting from no management-system baseline should plan for twelve to eighteen months including the operational evidence period that the auditor needs to see.
Does ISO 42001 satisfy EU AI Act requirements?
Not on its own. The EU AI Act is law and ISO 42001 is a voluntary standard - they sit at different levels of the regulatory stack. ISO 42001 builds the management system (governance, lifecycle, documentation) that the Act expects to see, and the overlap with Articles 9, 10, 11, 12, 14, 15 and 17 is substantial. But the Act's product-level obligations - risk classification per system, conformity assessment, registration in the EU database, post-market monitoring - are still separate work that 42001 does not certify per system.
What is the difference between ISO 42001 and ISO 23894?
ISO/IEC 42001 is the management-system standard - the governance, lifecycle, roles and documentation scaffold. ISO/IEC 23894:2023 is the AI risk-management methodology - the techniques for identifying, analysing and treating AI-specific risks. 42001 references 23894 as the recommended risk-management approach. In practice, an enterprise implementing 42001 will operate a risk-management process aligned with 23894 (and with the parent risk-management standard ISO 31000). 23894 is not certifiable on its own; 42001 is.
How much does ISO 42001 certification cost?
External audit fees from accredited certification bodies typically fall in the EUR 25-80k range for a mid-sized organisation, depending on scope, number of sites, number of AI systems in scope, and the choice of accreditation body. The dominant cost is internal: programme management, evidence collection, impact assessments, data-governance work and the eight-to-twelve-week operational evidence period. Total programme cost for a mid-sized first-time implementer often lands in the EUR 150-400k range across internal and external work.
Which certification bodies certify ISO 42001?
BSI, DNV, TÜV Rheinland, TÜV SÜD, Bureau Veritas, SGS, LRQA and Schellman are among the certification bodies that have launched accredited 42001 services since publication. Buyers should confirm that the certification body's accreditation comes from a member of the International Accreditation Forum (IAF), and that the certificate is issued under an accredited mark, not as an unaccredited 'verification' or 'attestation', which is not equivalent.
Does ISO 42001 apply to GenAI and LLM use cases?
Yes. The standard is technology-agnostic - it applies to any AI system regardless of whether it is supervised learning, reinforcement learning, generative AI or a foundation-model-based application. The Annex A controls on data governance, lifecycle, third-party relationships and human oversight are particularly relevant for GenAI deployments because the input/output surface, third-party model providers and prompt-injection risk all need to be documented within the management system.

Ready to scope your project? Submit a short brief and we reply within one business day.

Sources cited

  1. ISO/IEC 42001:2023 - Information technology - Artificial intelligence - Management system. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
  2. ISO/IEC 23894:2023 - Information technology - Artificial intelligence - Guidance on risk management. International Organization for Standardization, 2023-02. https://www.iso.org/standard/77304.html
  3. ISO/IEC 42005:2025 - Information technology - Artificial intelligence - AI system impact assessment. International Organization for Standardization, 2025. https://www.iso.org/standard/44545.html
  4. Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
  5. Artificial Intelligence cybersecurity guidance. ENISA - European Union Agency for Cybersecurity, 2024. https://www.enisa.europa.eu/topics/cybersecurity-policy/artificial-intelligence
  6. AI Risk Management Framework (AI RMF 1.0). NIST, 2023-01. https://www.nist.gov/itl/ai-risk-management-framework
  7. EU Cloud Code of Conduct (data protection). EU Cloud CoC General Assembly, 2024. https://eucoc.cloud/en/home
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.