Risk classification is the gate that determines which obligations apply. The Act defines four categories. The CIO's first task is to place every AI system in the inventory into one of them, with a written rationale.
Prohibited (Article 5). A short list of practices that cannot be deployed in the EU at all, including manipulation that exploits vulnerabilities, social scoring by public authorities, predictive policing based solely on profiling, untargeted scraping of facial images to build recognition databases, real-time remote biometric identification in public spaces by law enforcement except in narrow circumstances, and emotion recognition in workplaces and education except for medical or safety reasons. If a system in the inventory falls here, decommission it.
High-risk (Article 6 plus Annex III). Systems used in eight areas listed in Annex III where AI can materially affect rights or safety: biometrics; critical infrastructure; education and vocational training; employment and worker management; access to essential private and public services (including credit scoring); law enforcement; migration and border control; and the administration of justice and democratic processes. High-risk status also attaches to systems that are safety components of products covered by the Union harmonisation legislation listed in Annex I. High-risk systems carry the heaviest obligations, and most CIOs will have at least one in their inventory.
Limited-risk (Article 50). Systems that interact with natural persons (chatbots), generate synthetic content (deepfakes), perform emotion recognition or biometric categorisation outside high-risk contexts, or generate text published to inform the public on matters of public interest. The obligation is principally transparency: users must be informed they are interacting with AI, and synthetic content must be labelled as such in machine-readable form.
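To make the machine-readable labelling requirement concrete, here is a minimal sketch of emitting a label as a JSON sidecar. The field names and the `make_ai_content_label` function are illustrative assumptions, not a schema from the Act; in practice an established provenance format such as C2PA would be the starting point.

```python
import json
from datetime import datetime, timezone

def make_ai_content_label(generator: str, content_type: str) -> str:
    """Return a machine-readable label declaring content as AI-generated.

    Field names are illustrative only, not a standard schema; a real
    deployment would follow an established provenance format (e.g. C2PA).
    """
    label = {
        "ai_generated": True,          # the core Article 50 disclosure
        "generator": generator,        # which system produced the content
        "content_type": content_type,  # e.g. "image", "text", "audio"
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label)

print(make_ai_content_label("example-image-model", "image"))
```

The point of the sketch is that the label travels with the content in a form software can parse, rather than existing only as a human-readable caption.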
Minimal-risk. Everything else. No mandatory obligations under the Act, although the Commission encourages voluntary codes of conduct.
The classification process should be a written exercise, not a workshop verdict. For each system, name the use case, map it to Annex III if applicable, document why it does or does not fall under each Annex III heading, and have the legal function counter-sign. The output is the foundation of the entire compliance file [4].
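The written classification exercise described above can be sketched as a simple record type. The class names, fields, and the example system below are hypothetical illustrations of the process, not a prescribed format from the Act.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5
    HIGH = "high"              # Article 6 plus Annex III
    LIMITED = "limited"        # Article 50
    MINIMAL = "minimal"        # no mandatory obligations

@dataclass
class ClassificationRecord:
    system_name: str
    use_case: str
    annex_iii_heading: Optional[str]  # matching Annex III area, or None
    rationale: str                    # written reasoning for the tier chosen
    tier: RiskTier
    legal_countersign: Optional[str] = None  # None until legal signs off

    def is_complete(self) -> bool:
        # A record only enters the compliance file once counter-signed.
        return self.legal_countersign is not None

# Hypothetical example: a CV-screening system lands in the high-risk tier.
record = ClassificationRecord(
    system_name="cv-screening-tool",
    use_case="Ranks incoming job applications for recruiters",
    annex_iii_heading="employment and worker management",
    rationale="Materially affects access to employment; maps to Annex III.",
    tier=RiskTier.HIGH,
)
record.legal_countersign = "J. Doe, Legal"
```

One record per inventoried system, with the rationale field filled in even for minimal-risk outcomes, gives the written foundation the compliance file needs.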