
NIST AI Risk Management Framework: enterprise implementation in 2026

By Impetora

The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary, US-government-published framework that gives organisations a structured way to identify, measure and manage AI risks across the lifecycle. It is built around four functions - GOVERN, MAP, MEASURE, MANAGE - and is supported by a public Playbook of recommended actions and a Generative AI Profile released in July 2024 [1]. It is the de facto reference for AI risk in the US, increasingly cited in EU and UK procurement, and structurally compatible with ISO/IEC 42001 and the EU AI Act.


What is the NIST AI Risk Management Framework?

The AI RMF was published by the US National Institute of Standards and Technology in January 2023, in response to a Congressional mandate in the National AI Initiative Act of 2020. It is voluntary, sector-agnostic and developed through a multi-year open-consultation process with industry, academia, civil society and international partners. The framework is documented in the AI RMF 1.0 Core, supported by a Playbook (the recommended-action library), a Roadmap (future work), Crosswalks to other frameworks, and Profiles for specific contexts [1].

The AI RMF defines AI risk as the composite measure of an event's probability of occurring and the magnitude of its consequences: adverse impacts to individuals, organisations, ecosystems or society. The framework operationalises seven characteristics of trustworthy AI: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.

What do the four GOVERN-MAP-MEASURE-MANAGE functions actually require?

GOVERN sets the cultural and operational backbone: roles, responsibilities, accountability lines, policies, training, supplier and third-party risk practices, incident response, and the integration of AI risk into the organisation's enterprise risk-management framework. The Playbook lists 19 GOVERN sub-categories with concrete recommended actions [2].

MAP establishes the context for each AI system: intended purpose, deployment setting, stakeholders, dependencies, capabilities and limitations, risk categorisation, impact analysis. This is the closest analogue to the EU AI Act's risk-classification step and to the ISO 42001 impact-assessment work. MEASURE applies quantitative and qualitative methods to evaluate and track risk: measurement methods are selected, applied, monitored, validated and documented. MANAGE allocates resources and treats risk: prioritisation, treatment selection, third-party risk handling, ongoing monitoring, post-deployment incident response, retirement.
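The division of labour between MAP, MEASURE and MANAGE can be pictured as a per-system risk record. The sketch below is illustrative only: the class and field names are our own assumptions, not a NIST-defined schema.

```python
from dataclasses import dataclass, field

# Illustrative per-system risk record spanning three Core functions.
# Field and method names are assumptions for this sketch, not NIST terms.

@dataclass
class RiskRecord:
    system_id: str
    intended_purpose: str        # MAP: context and intended purpose
    deployment_setting: str      # MAP: where and how the system runs
    risk_category: str           # MAP: risk classification outcome
    metrics: dict[str, float] = field(default_factory=dict)   # MEASURE
    treatments: list[str] = field(default_factory=list)       # MANAGE

    def add_measurement(self, name: str, value: float) -> None:
        """MEASURE: record a selected, tracked, documented metric."""
        self.metrics[name] = value

    def treat(self, action: str) -> None:
        """MANAGE: record a chosen risk treatment."""
        self.treatments.append(action)

record = RiskRecord("rag-support-bot", "customer support QA",
                    "internal SaaS", "limited")
record.add_measurement("hallucination_rate", 0.03)
record.treat("add retrieval citation check before response release")
```

In practice such a record would live in the AI inventory and feed the monitoring dashboards; the point of the sketch is that each function contributes a distinct slice of the same system's documentation.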


What does the Generative AI Profile add?

NIST AI 600-1, the Generative AI Profile, was published in July 2024 in response to the October 2023 Executive Order on AI. It is a Profile of the AI RMF tuned to twelve generative-AI-specific risks: CBRN (chemical, biological, radiological or nuclear) information or capabilities; confabulation; dangerous, violent or hateful content; data privacy; environmental impacts; harmful bias and homogenization; human-AI configuration; information integrity; information security; intellectual property; obscene, degrading and/or abusive content; and value-chain and component integration [3].

For enterprises deploying GenAI applications - retrieval-augmented question answering, summarisation, drafting, agentic workflows - the GenAI Profile is the most useful structured checklist available in the public domain. It maps each risk to the relevant Core sub-categories and gives recommended actions per function. It is now commonly cited in vendor due-diligence questionnaires across US federal agencies, financial services and healthcare.

How does AI RMF compare with ISO 42001 and the EU AI Act?

The three sit at different layers of the regulatory stack. AI RMF is a voluntary risk-management framework with no certification body and no statutory force. ISO/IEC 42001 is a voluntary management-system standard with third-party certification through accredited bodies and substantial structural overlap with ISO 27001. The EU AI Act is binding law with risk-class-specific obligations and supervisory enforcement [4].

The frameworks are explicitly cross-walked. NIST has published an AI RMF-to-ISO crosswalk that maps the four AI RMF functions onto ISO/IEC 42001 clauses, ISO/IEC 23894 risk-management techniques and ISO/IEC 5338 lifecycle requirements. CEN-CENELEC JTC 21 references AI RMF in its harmonised-standards work for the EU AI Act. Enterprises operating in multiple jurisdictions can build one underlying control set and present it under whichever framework the local regulator or customer expects.
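The "one control set, many presentations" idea can be sketched as a simple crosswalk table. The control names and most clause references below are hypothetical placeholders (MAP 1.1 is a real AI RMF sub-category; the ISO and AI Act references are illustrative assumptions, not a verified mapping).

```python
# Hypothetical crosswalk: each internal control mapped to the clause or
# sub-category each framework expects. IDs are illustrative placeholders,
# not an authoritative mapping.
CROSSWALK = {
    "CTRL-01 model inventory": {
        "ai_rmf": ["MAP 1.1"],
        "iso_42001": ["A.4.2"],      # assumed clause reference
        "eu_ai_act": ["Art. 9"],     # assumed article reference
    },
    "CTRL-02 incident response": {
        "ai_rmf": ["MANAGE 4.1"],
        "iso_42001": ["A.8.4"],      # assumed clause reference
        "eu_ai_act": ["Art. 26"],    # assumed article reference
    },
}

def view(framework: str) -> dict[str, list[str]]:
    """Present the single underlying control set under one framework's numbering."""
    return {ctrl: refs[framework] for ctrl, refs in CROSSWALK.items()}
```

Calling `view("iso_42001")` or `view("eu_ai_act")` renders the same controls in the vocabulary a given auditor or regulator expects, which is the operational payoff of maintaining one control set.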

The practical pattern in 2026 is: AI RMF as the risk-management methodology that the engineering and product teams use day-to-day; ISO/IEC 42001 as the certifiable management system that auditors and procurement see; EU AI Act conformity as the legal compliance layer when the system is placed on or used in the EU market.

What does an AI RMF implementation look like inside an enterprise?

A typical pattern across mid-to-large enterprises follows three stages. First, a GOVERN baseline: AI policy, accountability matrix, supplier-risk language, an AI inventory with risk-classification per system, and integration with the existing enterprise risk-management committee. Second, MAP and MEASURE work on the priority systems: structured impact assessments, measurement-method selection, evaluation evidence, monitoring dashboards. Third, MANAGE operationalisation: incident-response playbooks, third-party risk handling, post-deployment monitoring, retirement procedures.
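The first-stage AI inventory with per-system risk classification might look like the sketch below; system names, risk labels and owners are invented for illustration, not drawn from any NIST artefact.

```python
# Illustrative GOVERN-stage AI inventory with per-system risk classes.
# Entries and classification labels are assumptions for this sketch.
INVENTORY = [
    {"system": "credit-scoring-model", "risk_class": "high", "owner": "risk"},
    {"system": "marketing-copy-llm",   "risk_class": "low",  "owner": "mkt"},
    {"system": "claims-triage-rag",    "risk_class": "high", "owner": "ops"},
]

def priority_systems(inventory, classes=("high",)):
    """Stage two begins MAP and MEASURE work on the highest-risk systems first."""
    return [s["system"] for s in inventory if s["risk_class"] in classes]

print(priority_systems(INVENTORY))
# prints ['credit-scoring-model', 'claims-triage-rag']
```

Even a flat list like this is enough to drive the second stage: the impact assessments and measurement work start with whatever `priority_systems` returns.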

NIST's AI Risk Management Framework Playbook is the operational reference; it lists, per Core sub-category, suggested actions, transparency and documentation expectations and references to underlying technical standards. Most enterprise programmes use the Playbook as the source of truth for "what does good look like" at each step [2].

How does Impetora apply AI RMF in delivery?

Impetora's TRACE methodology aligns directly with the AI RMF Core. Trust covers the GOVERN posture: policy, residency, audit trails, supplier-risk language. Readiness covers MAP: data and workflow audit, risk classification, impact assessment, stakeholder mapping. Architecture and Citations and Evidence cover MEASURE and MANAGE: production-grade design with logging, monitoring, evaluation harnesses, incident response and traceable per-decision evidence.

For US enterprise buyers and for EU buyers operating across the Atlantic, the practical handle is to ask the vendor for the AI RMF crosswalk on their standard delivery: which Core sub-categories each delivery artefact satisfies. A vendor with a real implementation can produce that mapping in a single page. A vendor without one will return a marketing answer.

Frequently asked questions

Is the NIST AI RMF mandatory?
It is voluntary at the federal level for non-government organisations. It is effectively required for many US federal contractors under Executive Order 14110 (October 2023) and subsequent agency guidance, and for AI systems used by federal agencies under the OMB M-24-10 memorandum on AI use cases. It is increasingly required in vendor due-diligence questionnaires in financial services and healthcare regardless of federal contracting status.
How does AI RMF differ from NIST CSF?
The Cybersecurity Framework (CSF) addresses cybersecurity risk; the AI RMF addresses AI-specific risks across the lifecycle, of which cybersecurity is one component. The two frameworks share a common structural vocabulary (functions, categories, sub-categories) and are designed to be used together. NIST has published a crosswalk between AI RMF Core sub-categories and CSF 2.0 Categories for organisations that operate both.
How does the AI RMF treat third-party AI components?
Third-party risk is addressed under the GOVERN function, particularly in category GOVERN 6 (organisational practices and accountability for third-party AI and supply-chain risks). The Playbook recommends documented supplier risk-management practices, contractual obligations covering documentation and incident reporting, and a value-chain inventory that tracks dependencies on foundation models, hosting, training-data sources and downstream users. The GenAI Profile expands this with specific recommendations for foundation-model dependencies.
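A value-chain inventory of the kind the Playbook recommends can be sketched as a dependency map; all entries and field names below are invented for illustration.

```python
# Illustrative value-chain inventory: third-party dependencies per system
# (foundation model, hosting, training-data sources). Entries are
# assumptions for this sketch, not a recommended schema.
VALUE_CHAIN = {
    "claims-triage-rag": {
        "foundation_model": "vendor-llm-v2",
        "hosting": "eu-cloud-region",
        "training_data_sources": ["public-web", "licensed-corpus"],
    },
    "credit-scoring-model": {
        "foundation_model": "in-house-gbm",
        "hosting": "on-prem",
        "training_data_sources": ["internal-ledger"],
    },
}

def dependencies_on(component: str) -> list[str]:
    """List every system that depends on a given third-party component,
    e.g. to scope incident response when a supplier reports a problem."""
    return [sys for sys, deps in VALUE_CHAIN.items()
            if component in (deps["foundation_model"], deps["hosting"])]
```

The reverse lookup is the operationally useful part: when a foundation-model vendor discloses an incident, `dependencies_on` scopes which internal systems the incident-response playbook must cover.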
Is AI RMF certifiable?
No. NIST does not operate a certification programme for AI RMF. There is no accredited body that issues a 'NIST AI RMF certified' mark. Organisations that want third-party attestation typically combine AI RMF implementation with ISO/IEC 42001 certification, where the certification scheme exists and is run by accredited certification bodies under IAF-recognised accreditation.
Does following AI RMF help with EU AI Act compliance?
Yes, structurally, but it is not sufficient on its own. The AI RMF's MAP function aligns with the Act's risk-classification and Article 9 risk-management requirements. MEASURE and MANAGE align with Articles 10-15 obligations on data, documentation, oversight, accuracy and robustness. CEN-CENELEC JTC 21 references the AI RMF in its harmonised-standards work for the Act. But the Act adds specific product-level obligations (conformity assessment, registration, post-market monitoring, declarations of conformity) that AI RMF does not cover by name.
How does AI RMF address bias and fairness?
Bias is addressed across the four functions. GOVERN sub-categories require organisational policy on harmful bias and clear roles. MAP requires identification of stakeholders potentially affected by bias and analysis of disparate impacts. MEASURE specifies bias measurement methods and validation evidence. MANAGE specifies treatment, monitoring and incident response when bias is detected post-deployment. NIST SP 1270 ('Towards a Standard for Identifying and Managing Bias in Artificial Intelligence'), published in 2022, is the deeper technical companion document.
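One widely used MEASURE-style bias metric is the disparate-impact ratio: the selection rate of the least-favoured group divided by that of the most-favoured group. The 0.8 threshold below is the US "four-fifths rule" heuristic, used here as an illustrative check, not a NIST requirement.

```python
# Disparate-impact ratio on binary selection outcomes (1 = favourable).
# The 0.8 threshold is the four-fifths heuristic, shown for illustration.

def selection_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

ratio = disparate_impact([1, 0, 1, 0], [1, 1, 1, 0])
print(round(ratio, 2), ratio >= 0.8)
# prints: 0.67 False  (below 0.8 flags potential disparity for review)
```

A MEASURE implementation would track such a metric per release and per protected attribute, with the MANAGE function defining what happens when the check fails.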
How long does AI RMF implementation take?
An initial GOVERN baseline (policy, inventory, accountability matrix, supplier language) is typically achievable in three to four months. Full MAP and MEASURE work on a priority set of systems takes a further three to six months depending on the number of systems and existing measurement infrastructure. Reaching a steady-state where the full Core is operationalised across the enterprise typically takes nine to fifteen months, similar to ISO/IEC 42001 timelines.

Ready to scope your project? Submit a short brief and we reply within one business day.

Sources cited

  1. AI Risk Management Framework (AI RMF 1.0). NIST, 2023-01-26. https://www.nist.gov/itl/ai-risk-management-framework
  2. AI RMF Playbook. NIST AI Resource Center, 2024. https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook
  3. NIST AI 600-1: Generative AI Profile. NIST, 2024-07. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
  4. Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
  5. ISO/IEC 42001:2023 - AI management systems. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
  6. ISO/IEC 23894:2023 - AI - Guidance on risk management. International Organization for Standardization, 2023-02. https://www.iso.org/standard/77304.html
  7. Artificial Intelligence cybersecurity guidance. ENISA - European Union Agency for Cybersecurity, 2024. https://www.enisa.europa.eu/topics/cybersecurity-policy/artificial-intelligence
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.