High-risk classification follows two paths. The first is Annex III, which lists eight areas in which AI systems are presumed high-risk: biometrics (post-event remote identification, biometric categorisation, and emotion recognition outside the contexts the Act prohibits outright); critical infrastructure (safety components for critical digital infrastructure, road traffic, and the supply of water, gas, heating and electricity); education and vocational training (admissions, grading, exam monitoring); employment and worker management (recruitment, promotion, performance evaluation, task allocation); access to essential services (credit scoring, social benefits eligibility, emergency response triage, life and health insurance pricing); law enforcement (risk assessment, polygraphs, evidence evaluation); migration and border control (risk assessment, document verification, asylum eligibility); and administration of justice and democratic processes (assistance to judicial authorities, influence on elections and referendums) [3].
The second path is Annex I: AI systems that are safety components of products (or are themselves products) already required to undergo third-party conformity assessment under sectoral law, covering machinery, toys, lifts, radio equipment, civil aviation, two- and three-wheel vehicles, agricultural and forestry vehicles, marine equipment, rail interoperability, motor vehicles, in vitro diagnostic medical devices, and medical devices. An AI system serving as the safety component of a CE-marked machine is therefore high-risk under the AI Act, and the Act's obligations apply on top of the existing sectoral conformity requirements.
Article 6(3) provides an exemption: an Annex III system is not high-risk if it only performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing the human assessment, or performs purely preparatory work. The exemption is narrow: it never applies where the system performs profiling of natural persons, the provider must document its assessment before placing the system on the market, and the classification remains open to challenge by national market surveillance authorities.
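To make the two classification paths and the exemption easier to follow, the decision logic can be sketched as code. This is a hypothetical illustration, not anything defined in the Act or in any compliance tooling: the area labels, field names, and boolean flags are simplifications of assessments that in practice require documented legal analysis.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Article 6 classification flow described above.
# All names and flags are illustrative simplifications, not terms from the Act.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice_democracy",
}

@dataclass
class AISystem:
    annex_iii_area: str | None = None              # one of ANNEX_III_AREAS, or None
    safety_component_of_annex_i_product: bool = False
    product_needs_third_party_assessment: bool = False
    performs_profiling: bool = False
    # Article 6(3) conditions (any one may trigger the exemption)
    narrow_procedural_task: bool = False
    improves_completed_human_activity: bool = False
    detects_patterns_without_replacing_human: bool = False
    purely_preparatory_task: bool = False

def is_high_risk(s: AISystem) -> bool:
    # Path 1 (Annex III): a listed area is presumed high-risk unless an
    # Article 6(3) condition applies; profiling always stays high-risk.
    if s.annex_iii_area in ANNEX_III_AREAS:
        if s.performs_profiling:
            return True
        exempt = (s.narrow_procedural_task
                  or s.improves_completed_human_activity
                  or s.detects_patterns_without_replacing_human
                  or s.purely_preparatory_task)
        if not exempt:
            return True
    # Path 2 (Annex I): a safety component of a product that already requires
    # third-party conformity assessment under the listed sectoral law.
    if s.safety_component_of_annex_i_product and s.product_needs_third_party_assessment:
        return True
    return False

# Example: a CV-screening tool used in recruitment falls in the employment area
# and is high-risk unless an Article 6(3) condition genuinely applies.
print(is_high_risk(AISystem(annex_iii_area="employment")))  # True
```

The order of the two checks does not matter: a system can be caught by either path, and an Article 6(3) exemption on the Annex III side does not remove high-risk status arising from Annex I.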