Impetora

EU AI Act compliance for logistics AI in 2026

By Impetora

Most logistics AI sits outside the EU AI Act's high-risk regime. Annex III, point 2 of Regulation (EU) 2024/1689 covers AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating and electricity, but the road-traffic and vehicle-control entries are explicitly carved out where the system is already regulated under sectoral type-approval law [1]. Route optimisation, demand forecasting, warehouse robotics scheduling, last-mile dispatch, and fleet maintenance prediction are not named in Annex III. Driver-monitoring AI and fleet-data AI fall inside GDPR Article 22 where they produce decisions with legal or similarly significant effects on a natural person [2].

Which Annex III risk category applies to logistics AI?

Annex III, point 2 covers AI used as a safety component in the management and operation of critical infrastructure including road traffic and the supply of water, gas, heating and electricity [1]. The "safety component" test is narrow. A traffic-management AI that controls signal phasing on motorways or a critical port's vessel-traffic system can fall inside; a route-optimisation AI that merely suggests delivery sequences to drivers does not, because it is not a safety component of road infrastructure.

Vehicle-level AI - lane-keeping, autonomous emergency braking, automated driving functions - is regulated under the UNECE WP.29 framework and the relevant EU type-approval regulations rather than primarily under the AI Act. The AI Act applies on top, but the technical assessment is integrated into the type-approval process. UNECE Regulation No. 157 on Automated Lane Keeping Systems and the broader UNECE work on AI for vehicles are the canonical reference for vehicle-level autonomous functions [3].

What conformity assessment applies to logistics AI?

For logistics AI that is not a safety component of road traffic or critical infrastructure and that is not built into a regulated vehicle, no AI Act conformity assessment is required. The system sits in the limited-risk or minimal-risk category. Article 50 transparency obligations may still apply: an AI system that interacts with natural persons must inform the persons concerned that they are interacting with an AI system unless this is obvious from the circumstances, and AI-generated content must be marked as such where applicable. These transparency obligations apply from 2 August 2026 [1].

Where the AI is a safety component, the Article 43 internal-control procedure of Annex VI applies. Where the AI is built into a vehicle covered by EU type-approval law, the AI Act technical documentation is integrated into the type-approval technical documentation. The result is one assessment, not two. The provider still maintains the Annex IV technical documentation pack and the post-market monitoring plan under Article 72.

How is high-risk classification triggered for logistics AI?

Three pathways. First, the safety-component test under Annex III point 2 - narrow, applying to AI that controls critical infrastructure. Second, vehicle-level AI under EU type-approval law - integrated into the type-approval process via Article 43(3). Third, the cross-vertical Annex III categories that catch logistics deployers in specific contexts: point 4 covers employment-related AI including AI used to monitor and evaluate workers, which captures driver-monitoring and warehouse-worker-monitoring AI [1].

The Article 6(3) carve-out applies to most logistics AI on its face. Route optimisation, demand forecasting, dynamic pricing for shipping, container yard scheduling, warehouse robotics scheduling, and last-mile dispatch are all preparatory or operational tasks that do not fit Annex III. The carve-out has to be documented with intended-purpose evidence. The interaction with GDPR Article 22 is the part that catches deployers - any AI output that is used to make a decision producing legal or similarly significant effects on a driver, warehouse worker or end customer is inside Article 22 regardless of the AI Act classification.
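The three pathways and the Article 22 overlay can be sketched as a decision procedure. This is an illustrative sketch, not legal advice: the descriptor fields and regime labels below are assumptions invented for the example, and a real classification turns on the documented intended purpose, not boolean flags.

```python
from dataclasses import dataclass

@dataclass
class LogisticsAISystem:
    """Hypothetical descriptor for a logistics AI deployment (illustrative only)."""
    safety_component_of_critical_infrastructure: bool  # Annex III point 2 test
    covered_by_vehicle_type_approval: bool             # type-approval route, Art. 43(3)
    monitors_or_evaluates_workers: bool                # Annex III point 4 (employment)
    feeds_legally_significant_decision: bool           # GDPR Article 22 trigger

def classify(system: LogisticsAISystem) -> list[str]:
    """Return the regimes engaged, mirroring the three pathways above."""
    regimes = []
    if system.safety_component_of_critical_infrastructure:
        regimes.append("AI Act high-risk (Annex III point 2)")
    if system.covered_by_vehicle_type_approval:
        regimes.append("AI Act via type-approval integration (Article 43(3))")
    if system.monitors_or_evaluates_workers:
        regimes.append("AI Act high-risk (Annex III point 4, employment)")
    if not regimes:
        # Most logistics AI lands here; the carve-out still has to be documented.
        regimes.append("Limited/minimal risk (document the Article 6(3) carve-out)")
    if system.feeds_legally_significant_decision:
        # Applies on top of, and independently of, the AI Act classification.
        regimes.append("GDPR Article 22 (regardless of AI Act class)")
    return regimes

# A route-optimisation tool that only suggests delivery sequences:
print(classify(LogisticsAISystem(False, False, False, False)))
```

Note how the Article 22 check sits outside the AI Act branches: a system can be minimal-risk under the AI Act and still fully inside the GDPR automated-decision regime.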

What documentation must logistics AI produce?

For non-high-risk logistics AI no Annex IV pack is mandatory under the AI Act. Sensible deployer hygiene is to maintain a model card, a data-governance description, a validation summary, and a human-oversight design even where not legally required, because the next deployment context (a public-sector logistics tender, a critical-infrastructure contract, a worker-monitoring deployment) can trigger high-risk reclassification overnight. The ISO/IEC 42001:2023 AI management system standard is the convergent reference for voluntary documentation [4].

For driver-monitoring AI - dashcam-based fatigue detection, telematics-based driving-style scoring, AI-driven driver-performance evaluation - GDPR Article 22 plus Article 88 (data processing in the employment context) plus the relevant Member-State employment law all apply. The EDPB's guidelines on processing personal data through video devices and the Article 29 Working Party (now EDPB) opinion on data processing at work set the canonical reading [2]. A logistics deployer running driver monitoring needs a Data Protection Impact Assessment under Article 35 of the GDPR, a worker-information notice under Article 88, and a worker-council consultation in jurisdictions where collective representation applies.

What does human oversight look like for fleet and warehouse AI?

For non-high-risk logistics AI, human oversight is not a legal obligation under the AI Act. It is still operational hygiene: a designated reviewer for model output, an escalation path for anomalies, and a periodic validation pass against ground truth. For driver-monitoring AI inside the GDPR Article 22 perimeter, oversight is mandatory: a human reviewer with authority to override any decision producing legal or similarly significant effects, with the override logged.
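The logged-override requirement can be made concrete with a minimal in-memory sketch. The class and field names below are assumptions for illustration; the GDPR prescribes meaningful human review and the ability to contest a decision, not any particular log schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideLog:
    """Append-only record of human review of AI-driven decisions.
    Field names are illustrative, not prescribed by the GDPR."""
    entries: list = field(default_factory=list)

    def record(self, decision_id: str, ai_outcome: str,
               reviewer: str, final_outcome: str, rationale: str) -> dict:
        entry = {
            "decision_id": decision_id,
            "ai_outcome": ai_outcome,
            "reviewer": reviewer,
            "final_outcome": final_outcome,
            # An override is simply a final outcome that differs from the AI's.
            "overridden": final_outcome != ai_outcome,
            "rationale": rationale,
            "reviewed_at": datetime.now(timezone.utc).isoformat(),
        }
        self.entries.append(entry)
        return entry

# A reviewer with real authority to disagree, and a trail showing they did:
log = OverrideLog()
log.record("disp-1042", "flag_driver_for_review", "ops_supervisor_7",
           "no_action", "Fatigue alert traced to camera glare, not driver state")
```

In production this would feed an append-only store so the review trail survives audits; the point of the sketch is that the reviewer's rationale and the divergence from the AI output are captured per decision.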

The interaction with Annex III point 4 - AI in the employment context for hiring, promotion, termination, performance evaluation, or task allocation - is the part that catches deployers using AI to allocate routes or shifts to drivers. AI that allocates work in a way that produces meaningfully different earnings outcomes can trigger point 4 as a high-risk classification on the deployer side, regardless of how the provider describes the system. A 2026-grade deployment treats route-allocation AI as Annex III point 4 high-risk by default and produces the documentation pack accordingly. Article 26 deployer obligations apply.

How does Impetora handle logistics AI Act conformity?

Impetora ships every logistics AI system with a written risk classification analysis (Annex III points 2, 4 and the safety-component test, with reasoning written out), a data-governance description aligned with Article 10, a model card, a validation summary, and a human-oversight design even where not strictly required. Where the deployment touches driver or worker data, the GDPR Article 22 review architecture, the Article 35 DPIA outline, and the Article 88 worker-information notice are produced as named deliverables.

For deployments inside the type-approval perimeter or critical-infrastructure perimeter, the Annex IV pack is integrated into the existing sectoral technical documentation. Cross-references: the EU AI Act overview, the logistics industry hub, the document processing automation use case, and the TRACE methodology.

Frequently asked questions

Is route optimisation AI high-risk under the EU AI Act?
No, on its face. Route optimisation, last-mile dispatch, demand forecasting, dynamic pricing, and warehouse scheduling are not named in Annex III and are not safety components of critical infrastructure. They sit in the limited-risk or minimal-risk category. Article 50 transparency obligations apply where the AI interacts with natural persons. The classification can change if the AI is repurposed to allocate work in the employment context (Annex III point 4) or to control a critical-infrastructure safety function (point 2).
Is driver-monitoring AI high-risk under the AI Act?
It depends on use. AI used to monitor and evaluate worker performance for decisions about promotion, termination, route allocation or compensation falls under Annex III point 4 employment, which is high-risk. AI used purely as a fatigue-detection safety overlay - alerting the driver and recording the event without feeding any employment decision - is generally outside Annex III. The trigger is whether the AI output drives a legally-significant employment decision. Either way, the GDPR Article 22 and Article 88 employment-context obligations apply on the deployer side.
Does the AI Act apply to vehicle autonomous systems?
Yes, but indirectly. Vehicle-level autonomous functions - automated lane keeping, autonomous emergency braking, automated driving systems - are regulated under EU type-approval law and the UNECE WP.29 framework. The AI Act applies on top, with Article 43(3) integrating the AI Act technical documentation into the type-approval technical documentation. The result is one integrated assessment carried out by the type-approval authority and any designated technical service, not two parallel assessments.
When do the high-risk obligations apply to logistics AI?
2 August 2026 for the bulk of high-risk Annex III obligations, including the point 4 employment category that catches driver-monitoring and route-allocation AI when they feed employment decisions. Article 50 transparency obligations apply from the same date. Prohibited practices applied from 2 February 2025; general-purpose AI obligations applied from 2 August 2025. Most logistics AI deployments will not trigger high-risk obligations at all, but the GDPR Article 22 and Article 88 employment regime applies regardless.
Does GDPR Article 22 apply to AI-driven gig-economy work allocation?
Yes when the allocation produces legal or similarly significant effects on the worker. CJEU C-634/21 (SCHUFA) clarified that an automated score relied on by a downstream decision-maker can itself be the Article 22 decision. AI that allocates routes or shifts in a way that produces meaningfully different earnings outcomes is inside Article 22; the deployer needs a meaningful human review architecture, the Article 35 DPIA, and (where applicable) worker-council consultation under Member-State employment law. The Platform Workers Directive (Directive (EU) 2024/2831) sets a parallel sectoral floor.
Are warehouse robotics inside the AI Act high-risk regime?
Generally no for the AI component on its own. Warehouse robotics is regulated under the Machinery Regulation (Regulation (EU) 2023/1230), which is listed in Annex I Section A of the AI Act. AI built into a machine that is itself a safety component under the Machinery Regulation becomes high-risk by overlap with Article 6(1). Pure scheduling AI that allocates picking tasks to robots is not a safety component and is not high-risk. The Machinery-Regulation overlap is the part that catches manufacturers of physical robotics products.
What is the practical scope of the Platform Workers Directive for logistics AI?
Directive (EU) 2024/2831 on improving working conditions in platform work, applying from late 2026, sets specific obligations on platforms that use algorithmic management for route allocation, performance evaluation and termination decisions of platform workers. It includes a presumption of employment (subject to rebuttal), a prohibition on solely-automated decisions on termination or account suspension, and a worker-information obligation on the algorithmic management system. Logistics platforms operating delivery, ride-hail or last-mile gig models should treat the Directive as the binding sectoral floor on top of the AI Act and the GDPR.
Ready to scope your project? Submit a short brief and we reply within one business day.

Sources cited

  1. Regulation (EU) 2024/1689 (Artificial Intelligence Act), Annex III points 2 and 4, Articles 6, 26, 50. European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
  2. Regulation (EU) 2016/679 (General Data Protection Regulation), Articles 22, 35, 88. European Union, Official Journal, 2016-05-04. https://eur-lex.europa.eu/eli/reg/2016/679/oj
  3. UN Regulation No. 157 - Automated Lane Keeping Systems (ALKS). United Nations Economic Commission for Europe, World Forum for Harmonization of Vehicle Regulations (WP.29), 2021-01-22. https://unece.org/transport/vehicle-regulations
  4. ISO/IEC 42001:2023 - AI management systems. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
  5. Directive (EU) 2024/2831 on improving working conditions in platform work. European Union, Official Journal, 2024-10-23. https://eur-lex.europa.eu/eli/dir/2024/2831/oj
  6. Regulation (EU) 2023/1230 (Machinery Regulation). European Union, Official Journal, 2023-06-14. https://eur-lex.europa.eu/eli/reg/2023/1230/oj
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.