Impact Assessment
An impact assessment is a structured analysis of the potential effects an AI system could have on individuals, groups, and broader society before it is deployed.
What is Impact Assessment?
Impact assessments cover privacy (a Data Protection Impact Assessment, or DPIA, under GDPR Article 35), fundamental rights (a Fundamental Rights Impact Assessment, or FRIA, under EU AI Act Article 27 for certain deployers), and broader societal effects. They identify affected populations, estimate the likelihood and severity of potential harms, document mitigation measures, and define ongoing monitoring. A well-run impact assessment is often the key document for unblocking enterprise procurement reviews and for engaging constructively with regulators.
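The scoring step above, rating each identified harm by likelihood and severity and escalating the highest-risk items, can be sketched as a simple risk register. This is an illustrative structure under assumed conventions (1-5 scales, a likelihood-times-severity score, and a hypothetical escalation threshold), not a prescribed DPIA or FRIA format.

```python
from dataclasses import dataclass, field

@dataclass
class Harm:
    """One entry in an impact-assessment risk register (illustrative)."""
    description: str
    affected_population: str
    likelihood: int   # assumed scale: 1 (rare) to 5 (almost certain)
    severity: int     # assumed scale: 1 (negligible) to 5 (critical)
    mitigations: list[str] = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        # Likelihood x severity matrix, a common convention in DPIA templates
        return self.likelihood * self.severity

def needs_escalation(harms: list[Harm], threshold: int = 15) -> list[Harm]:
    """Return harms whose score meets the (assumed) escalation threshold."""
    return [h for h in harms if h.risk_score >= threshold]

harms = [
    Harm("Biased credit scoring", "loan applicants", likelihood=3, severity=5,
         mitigations=["fairness testing", "human review of denials"]),
    Harm("Chat transcript leakage", "end users", likelihood=2, severity=4),
]
flagged = needs_escalation(harms)
print([h.description for h in flagged])  # → ['Biased credit scoring']
```

A real register would also record residual risk after mitigations and a named owner per harm; the point here is only that likelihood and severity are scored separately and combined.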
How does Impact Assessment apply to enterprise AI?
Public-sector deployers, and certain private-sector deployers, of high-risk AI systems must run a fundamental-rights impact assessment under the EU AI Act. Deployers processing personal data should typically run a DPIA in parallel, since high-risk AI processing usually meets GDPR Article 35's "likely to result in a high risk" threshold.
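The applicability logic described above can be sketched as a simple decision helper. This is a deliberately simplified model under stated assumptions: Article 27's real scope (for example, specific rules for banking and insurance deployers) is more nuanced, and the function names and flags are hypothetical.

```python
def assessments_required(is_high_risk: bool,
                         processes_personal_data: bool,
                         public_sector_or_public_services: bool) -> list[str]:
    """Simplified sketch of which impact assessments a deployer may owe.

    Assumptions: FRIA applies to public-sector bodies (and certain
    private deployers providing public services) using high-risk AI;
    a DPIA is run whenever personal data is processed, mirroring the
    conservative practice described in the text above.
    """
    required = []
    if is_high_risk and public_sector_or_public_services:
        required.append("FRIA (EU AI Act Art. 27)")
    if processes_personal_data:
        required.append("DPIA (GDPR Art. 35)")
    return required

# A public-sector deployer of a high-risk system handling personal data:
print(assessments_required(True, True, True))
# → ['FRIA (EU AI Act Art. 27)', 'DPIA (GDPR Art. 35)']
```

In practice this determination is made by counsel against the regulation's actual text, not a boolean check; the sketch only shows that the two assessments are triggered independently and can apply at once.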
Related terms
- AI Risk Management - AI risk management is the discipline of identifying, assessing, mitigating, and monitoring the harms an AI system can cause across its lifecycle.
- EU AI Act - The EU AI Act (Regulation (EU) 2024/1689) is the European Union's horizontal regulation for AI, classifying systems by risk and imposing obligations on providers, deployers, importers, and distributors.
- GDPR - The General Data Protection Regulation (GDPR) is the EU's data-protection regulation, governing the processing of personal data of people in the EU and EEA.
- Transparency Notice - A transparency notice is a clear disclosure to users that they are interacting with an AI system, what it is doing with their data, and what its limits are.