
NIS2 and AI systems compliance for critical infrastructure

By Impetora

The NIS2 Directive, Directive (EU) 2022/2555, replaced the original NIS Directive on 18 October 2024 and dramatically widened the EU's cybersecurity perimeter. NIS2 covers 18 sectors and brings tens of thousands of essential and important entities under harmonised cybersecurity, supply-chain and incident-reporting obligations. AI systems deployed inside those entities - whether for fraud detection, predictive maintenance, energy load forecasting or healthcare triage - inherit the full weight of the directive's risk-management measures [1].

Key figures:
- 17 October 2024: NIS2 transposition deadline (EUR-Lex)
- 18: sectors in scope (Annex I + II) (ENISA)
- Article 21: 10 cybersecurity risk-management measures (EUR-Lex)
- 24 h / 72 h / 1 month: incident notification windows (EUR-Lex)

What does NIS2 cover and how is it different from the original NIS Directive?

NIS2 (Directive (EU) 2022/2555) is a cybersecurity directive, not a regulation, which means it had to be transposed into national law by 17 October 2024. It covers 18 sectors split between Annex I (essential entities: energy, transport, banking, financial market infrastructure, health, drinking water, waste water, digital infrastructure, ICT service management, public administration, space) and Annex II (important entities: postal and courier, waste management, chemicals, food, manufacturing, digital providers, research). The default size threshold is that of a medium-sized enterprise: at least 50 employees or annual turnover above EUR 10 million, with mandatory inclusion regardless of size for several sub-sectors [1].

The headline differences from the original 2016 NIS Directive: dramatically wider scope (18 sectors vs 7), explicit supply-chain and third-party security requirements, harmonised incident-notification timelines, management-body accountability with personal liability for breaches, and a uniform framework of administrative fines (up to EUR 10 million or 2% of worldwide turnover for essential entities).
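The fine structure is a "whichever is higher" rule under Article 34: a flat cap or a percentage of worldwide annual turnover, so the effective ceiling scales with entity size. A minimal illustration in Python (the function name and signature are our own, not part of any official tooling):

```python
def max_fine_eur(worldwide_turnover_eur: float, essential: bool) -> float:
    """Article 34 cap: the higher of a flat amount and a turnover share.

    Essential entities: EUR 10m or 2% of worldwide annual turnover.
    Important entities: EUR 7m or 1.4%.
    """
    flat = 10_000_000 if essential else 7_000_000
    pct = 0.02 if essential else 0.014
    return max(flat, pct * worldwide_turnover_eur)

# For an essential entity with EUR 2bn turnover, 2% (EUR 40m) exceeds
# the EUR 10m floor, so the cap is EUR 40m.
max_fine_eur(2_000_000_000, essential=True)
```

For smaller entities the flat amount dominates; for large multinationals the turnover percentage does.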

For the purposes of AI deployment, the key shift is that supply-chain risk-management is now an explicit Article 21 obligation, not a soft expectation. Any AI vendor or model provider whose service is used in production by a NIS2 entity is part of that entity's regulated supply chain.

What does Article 21 require for AI systems in essential and important entities?

Article 21(2) lists the ten cybersecurity risk-management measures that essential and important entities must implement. They are: (a) policies on risk analysis and information system security; (b) incident handling; (c) business continuity and crisis management including backup and disaster recovery; (d) supply-chain security including vulnerabilities and security of relationships with direct suppliers; (e) security in network and information systems acquisition, development and maintenance; (f) policies and procedures to assess effectiveness of risk-management measures; (g) basic cyber hygiene practices and training; (h) cryptography and encryption policies; (i) human resources security, access control and asset management; (j) multi-factor authentication, secured voice/video/text and emergency communications.

For AI systems, measure (e) on secure development and (d) on supply-chain security do most of the work. Secure development means the AI lifecycle (data sourcing, training, validation, deployment, monitoring) has to be governed by documented security controls. Supply-chain security means an entity using a third-party model provider, hosted inference platform or AI consultancy must perform vendor due diligence, capture the dependency in its risk register and treat the upstream provider as part of its threat surface.
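One way to make measure (d) operational is to record each AI dependency as a structured risk-register entry that names the Article 21(2) measures it engages. The sketch below is illustrative only; the class, field names and vendor are invented for the example, not prescribed by NIS2:

```python
from dataclasses import dataclass

@dataclass
class AIVendorRiskEntry:
    """Hypothetical risk-register record for a third-party AI provider."""
    vendor: str
    service: str                    # e.g. hosted inference, model provider
    article_21_measures: list[str]  # letters of the measures engaged
    due_diligence_done: bool = False
    incident_cooperation_clause: bool = False

# A hosted inference provider engages supply-chain security (d) and
# secure development/acquisition (e) at minimum.
entry = AIVendorRiskEntry(
    vendor="ExampleModelCo",
    service="hosted inference API",
    article_21_measures=["d", "e"],
)
```

Keeping the engaged measures explicit per vendor makes the register directly auditable against Article 21(2).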


ENISA has published successive iterations of its threat landscape and AI cybersecurity guidance, and the 2023 "Multilayer Framework for Good Cybersecurity Practices for AI" provides a practical template for mapping NIS2 measures onto AI deployments [2].

How does the incident-reporting timeline work for AI failures?

Article 23 sets a three-stage incident notification timeline. An "early warning" must be submitted to the CSIRT or competent authority within 24 hours of becoming aware of a significant incident. An "incident notification" with an initial assessment must follow within 72 hours. A "final report" with root-cause analysis, mitigation measures and cross-border impact must be filed within one month. The CSIRT or competent authority may also request intermediate status reports between stages.
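The three windows can be derived mechanically from the moment of awareness. A hedged sketch (the helper name is ours; the directive says "one month", which this toy approximates as 30 days, while national law sets the exact counting rule):

```python
from datetime import datetime, timedelta

def nis2_deadlines(aware_at: datetime) -> dict[str, datetime]:
    """Derive the three Article 23 windows from the moment of awareness."""
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "incident_notification": aware_at + timedelta(hours=72),
        "final_report": aware_at + timedelta(days=30),  # "one month" approx.
    }

# Awareness at 09:00 on 1 March 2025 -> early warning due 09:00 on 2 March.
deadlines = nis2_deadlines(datetime(2025, 3, 1, 9, 0))
```

In practice the clock starts when the entity becomes aware, not when the incident occurred, which is why detection and escalation latency inside the entity eats directly into the 24-hour window.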

An incident is "significant" when it has caused or is capable of causing severe operational disruption or financial losses, or when it has affected or is capable of affecting other natural or legal persons by causing material or non-material damage. AI failures - a model serving incorrect outputs at scale, a training-data poisoning event affecting decisions in production, a prompt-injection vulnerability that exfiltrates customer data - fall squarely inside this definition when they affect a service in scope.
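The significance test reduces to two disjunctive limbs, which an incident-triage checklist can encode directly. This toy check is a deliberate simplification; real triage also applies the quantitative thresholds of the 2024 implementing regulation, which it omits:

```python
def is_significant(severe_disruption_or_loss: bool,
                   capable_of_harming_others: bool) -> bool:
    """Article 23(3) test, simplified: either limb alone suffices."""
    return severe_disruption_or_loss or capable_of_harming_others

# A prompt-injection flaw exfiltrating customer data harms third parties,
# so it meets the second limb even without operational disruption.
verdict = is_significant(severe_disruption_or_loss=False,
                         capable_of_harming_others=True)
```

Note the test is forward-looking ("capable of causing"), so a contained AI failure can still be significant if it could plausibly have escalated.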

The implementing regulation on significance criteria, adopted in 2024, refines the thresholds for digital infrastructure and ICT service management entities specifically. Buyers of AI services should require incident-cooperation clauses in vendor contracts that flow through the same 24/72-hour rhythm, since the entity remains responsible for meeting the deadline regardless of where in the supply chain the incident originated.

Where does national transposition stand and which member states matter?

Member states had until 17 October 2024 to transpose NIS2. By the deadline, only a minority of member states had completed transposition; the European Commission opened infringement proceedings against the majority. Belgium, Italy, Hungary, Croatia, Slovakia and Lithuania were among the early movers. Germany, France, Spain, the Netherlands and Poland completed transposition in waves through late 2024 and the first half of 2025 [3].

National transposition matters because NIS2 is a directive, not a regulation: the operative obligations live in the national implementing law, and member states had discretion on certain elements (size thresholds for sub-sectors, fines structure, supervisory authority assignment, sector-specific guidance). Multinational essential entities therefore face a matrix of slightly differing rules across the member states where they operate, and AI vendors selling cross-border have to map their contracts to the strictest member-state regime in scope.

The Cooperation Group of national NIS authorities, supported by ENISA, publishes guidance to harmonise interpretation. The 2024 Cooperation Group reference document on supply-chain security is the key cross-reference for AI vendor onboarding, since it explicitly addresses third-party software and AI components.

How does NIS2 interact with the EU AI Act, DORA and the Cyber Resilience Act?

NIS2 is the horizontal cybersecurity baseline. DORA is lex specialis for the financial sector and takes precedence over NIS2 on the same matters under DORA Article 1(2) [6]. The EU AI Act (Regulation (EU) 2024/1689) governs AI-specific properties (risk classification, training data, transparency, human oversight, accuracy under Article 15) [4]. The Cyber Resilience Act (Regulation (EU) 2024/2847) governs the security of products with digital elements placed on the market, including AI components shipped as products [5].

For an essential entity using a high-risk AI system, the practical stack is: NIS2 governs the entity-level cybersecurity programme; AI Act Article 15 imposes accuracy, robustness and cybersecurity obligations on the AI system itself; CRA imposes essential cybersecurity requirements on the AI product as placed on the market. The three regimes were drafted to be complementary, but they generate parallel documentation streams that mature compliance programmes consolidate into a single evidence base.

How does Impetora support NIS2-grade AI engagements?

Impetora's TRACE methodology is built around AI systems that have to survive cybersecurity audits in regulated infrastructure. Trust covers the policy and contractual layer including supply-chain disclosure, sub-processor controls and incident-cooperation clauses that match the 24/72-hour rhythm. Readiness covers the workflow and data audit that becomes the input to the entity's NIS2 risk register. Architecture covers production-grade design with logging, monitoring, encryption, access control and recoverability that map directly onto Article 21's ten measures. Citations and Evidence covers the audit-trail layer that supervisory authorities and CSIRTs can request post-incident.

The practical path for a NIS2-bound engagement: scope the AI system against the entity's existing cybersecurity policy, document the supply chain explicitly (model provider, hosting, sub-processors), align the secure-development lifecycle with measure (e), and structure runbooks that meet Article 23 notification windows.

Frequently asked questions

Which sectors are covered by NIS2?
Annex I (essential): energy, transport, banking, financial market infrastructure, health, drinking water, waste water, digital infrastructure, ICT service management, public administration, space. Annex II (important): postal and courier, waste management, chemicals, food, manufacturing, digital providers, research. Eighteen sectors in total, with the medium-enterprise size threshold (at least 50 employees or EUR 10 million annual turnover) applying by default and mandatory inclusion regardless of size for several critical sub-sectors.
Are AI vendors directly bound by NIS2?
Only if the AI vendor itself qualifies as an essential or important entity (for example, an AI provider classified under digital infrastructure or ICT service management at the size threshold). Where the AI vendor sits below the threshold or outside Annex I/II categories, it is bound indirectly through its customers' supply-chain obligations. Either way, the vendor is treated as part of the regulated entity's threat surface and must support the contractual and technical controls the entity needs to comply with Article 21.
What happens if an AI failure causes a NIS2 reportable incident?
The essential or important entity must notify its CSIRT or competent authority within 24 hours (early warning), 72 hours (incident notification), and one month (final report). The reports must include identification of the incident's root cause and any third-party providers involved, including the AI vendor. The entity remains responsible for meeting the deadlines regardless of where in the supply chain the incident originated, which is why incident-cooperation clauses with AI vendors are non-negotiable.
What are the maximum fines under NIS2?
Article 34 sets maximum administrative fines at EUR 10 million or 2% of total worldwide annual turnover, whichever is higher, for essential entities. For important entities the cap is EUR 7 million or 1.4% of worldwide turnover. National competent authorities also have powers to issue binding instructions, suspend authorisations and impose temporary management bans. Article 20(1) imposes personal accountability on management bodies for approving and overseeing the cybersecurity risk-management measures.
How does NIS2 compare to ISO 27001 and ISO/IEC 42001?
NIS2 is binding EU law for entities in scope. ISO 27001 (information security management) and ISO/IEC 42001:2023 (AI management system) are voluntary management-system standards. Operating an ISO 27001 ISMS provides the documentation backbone that Article 21 measures expect and substantially reduces the implementation gap. ISO/IEC 42001 layers AI-specific governance on top, including the supply-chain controls relevant to NIS2 measure (d). Neither standard is a substitute for NIS2 compliance, but both make the audit story dramatically easier.
Where can I find the official NIS2 text and ENISA guidance?
The directive is published as Directive (EU) 2022/2555 on EUR-Lex. ENISA maintains the central guidance landing page covering threat landscape reports, the AI cybersecurity multilayer framework, sector-specific guidance and the Cooperation Group reference documents. National competent authority guidance is published by each member state's designated CSIRT or cybersecurity agency and is the operative source for entities established in that jurisdiction.

Ready to scope your project? Submit a short brief and we reply within one business day.

Sources cited

  1. Directive (EU) 2022/2555 (NIS2 Directive). European Union, Official Journal, 2022-12-14. https://eur-lex.europa.eu/eli/dir/2022/2555/oj
  2. Multilayer Framework for Good Cybersecurity Practices for AI. ENISA - European Union Agency for Cybersecurity, 2023. https://www.enisa.europa.eu/topics/cybersecurity-policy/artificial-intelligence
  3. NIS2 transposition status across member states. European Commission - DG CONNECT, 2024. https://digital-strategy.ec.europa.eu/en/policies/nis2-directive
  4. Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
  5. Regulation (EU) 2024/2847 (Cyber Resilience Act). European Union, Official Journal, 2024-11-20. https://eur-lex.europa.eu/eli/reg/2024/2847/oj
  6. Regulation (EU) 2022/2554 (DORA). European Union, Official Journal, 2022-12-14. https://eur-lex.europa.eu/eli/reg/2022/2554/oj
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and work in five languages.

Book a discovery call

Tell us what you would like to build. We reply within one business day.

30-minute call. Free of charge. No obligation.