
GDPR Article 22 automated decisions: what AI systems can and cannot do in 2026

By Impetora

Article 22 of the General Data Protection Regulation gives individuals the right not to be subject to a decision based solely on automated processing - including profiling - that produces legal or similarly significant effects. The European Data Protection Board's 2024 guidelines, building on the Court of Justice's SCHUFA judgment of December 2023, expanded the practical scope of Article 22 well beyond what most enterprises had assumed it covered [1]. AI systems that score, screen, price or rank people now sit squarely inside that scope.

Key facts:
- Article 22: the GDPR right not to be subject to solely automated decisions
- December 2023: the CJEU's SCHUFA judgment expanded the scope of Article 22
- EUR 20 million or 4% of global annual turnover: the GDPR fine ceiling, whichever is higher

What does GDPR Article 22 actually say?

The text of Article 22(1) reads: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her" [2]. Three exceptions in Article 22(2) allow such decisions: when necessary for entering into or performing a contract; when authorised by Union or Member State law; or with the data subject's explicit consent. Even where one of the three exceptions applies, Article 22(3) requires the controller to implement suitable safeguards - at minimum, the right to obtain human intervention, to express a point of view, and to contest the decision.

Article 22(4) further restricts processing of special-category data (health, biometrics, race, political opinions, etc.) in solely automated decisions to two narrow grounds: explicit consent, or substantial public interest grounded in EU or Member State law. Articles 13(2)(f), 14(2)(g) and 15(1)(h) layer on transparency obligations: when Article 22 applies, the controller must give the data subject "meaningful information about the logic involved, as well as the significance and the envisaged consequences" of the processing.

How did the SCHUFA judgment and the 2024 EDPB guidelines change the scope?

In SCHUFA Holding (C-634/21), decided 7 December 2023, the Court of Justice held that the automated generation of a credit score - even where the score is then passed to a third party (a bank) which formally takes the lending decision - amounts to a "decision" within Article 22(1) when the score plays a determining role in that downstream decision. This extended the article's reach beyond decisions taken within a single legal entity to any case where an automated output effectively determines a subsequent decision that is only formally taken by a human [3].

The European Data Protection Board's guidelines on Article 22 (revised 2024 in light of SCHUFA) reinforce three points buyers should design around. First, "solely automated" includes cases where a human is nominally in the loop but does not exercise meaningful authority over the outcome - a rubber-stamp human is not human oversight. Second, "similarly significant effects" includes pricing, eligibility, employment screening, fraud flags, content moderation that affects livelihood, and access to public services. Third, the transparency obligation requires meaningful explanation of the logic, not a generic statement that "AI is used" [1].


Which AI use cases now fall inside Article 22?

The practical effect of SCHUFA and the EDPB guidelines is that a wide set of AI use cases now requires Article 22 analysis as a default, not as an edge case:
- Credit scoring, debt-collection prioritisation and contact-strategy systems
- Insurance underwriting and claims-fraud scoring
- Hiring and CV-screening systems
- Tenant-screening systems
- Employee-performance and dismissal-risk scoring
- Content-moderation and account-suspension decisions on platforms where this affects livelihood
- Pricing personalisation that materially affects access to a product
- Welfare-eligibility and benefits-allocation systems in the public sector

Conversely, decisions that are genuinely advisory - where a competent human reviews the AI output, has authority to override, has time and information to do so, and exercises that authority in practice - fall outside Article 22(1). The bar is design, not branding. A "human-in-the-loop" UX overlay where the human has fifteen seconds and no countervailing information does not meet the EDPB's standard for meaningful intervention.

How does an enterprise design an AI system to comply with Article 22?

The compliance pattern has six elements (a minimal record-keeping sketch follows the list):
1. A documented assessment per system answering whether it is in or out of Article 22(1) and, if in, which Article 22(2) lawful basis applies.
2. A meaningful human-review design where the reviewer has the AI output, the underlying data, the rationale, and the authority and time to override.
3. An explanation interface giving data subjects a clear, non-trivial description of the logic, the significance and the envisaged consequences.
4. An access-rights workflow that can produce the per-decision explanation when an Article 15(1)(h) request comes in.
5. A contest-and-appeal workflow with an SLA.
6. A monitoring layer that detects when the human-review rate drops below the threshold that makes the human meaningful.
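To illustrate the first element, the sketch below shows how a per-system assessment could be captured as a versionable, auditable record. This is a minimal sketch only, not Impetora's actual tooling; every field name, enum value and validation rule here is an assumption introduced for illustration.

```python
# Hypothetical per-system Article 22 assessment record. All names and
# fields are illustrative assumptions, not legal terms of art.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class LawfulBasis(Enum):
    CONTRACT_NECESSITY = "art_22_2_a"  # necessary for entering/performing a contract
    MEMBER_STATE_LAW = "art_22_2_b"    # authorised by Union or Member State law
    EXPLICIT_CONSENT = "art_22_2_c"    # explicit consent of the data subject


@dataclass
class Article22Assessment:
    system_name: str
    decision_description: str
    in_scope: bool                       # solely automated + significant effect?
    lawful_basis: Optional[LawfulBasis]  # required when in_scope is True
    human_review_design: str             # who reviews, with what data and authority
    explanation_surface: str             # what the data subject is shown, and when
    contest_channel: str                 # how a decision is appealed, with its SLA
    special_category_data: bool = False  # triggers the Article 22(4) analysis
    notes: list[str] = field(default_factory=list)

    def validate(self) -> list[str]:
        """Return the gaps that would block sign-off of this assessment."""
        gaps = []
        if self.in_scope and self.lawful_basis is None:
            gaps.append("in scope but no Article 22(2) lawful basis recorded")
        if (self.in_scope and self.special_category_data
                and self.lawful_basis is LawfulBasis.CONTRACT_NECESSITY):
            # Article 22(4) allows only explicit consent or substantial
            # public interest for special-category data.
            gaps.append("Article 22(4) bars contract necessity for special-category data")
        return gaps
```

Keeping a record like this per system turns the "documented assessment" element into something a supervisory authority can be handed on first contact, rather than an email thread.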

The EU AI Act's Article 14 on human oversight reads as a more concrete sibling of these requirements [4]. For high-risk systems under the Act, the human-oversight obligation is explicit and detailed, and a system designed to comply with Article 14 will typically also meet the Article 22 GDPR threshold - though the legal bases are independent and both must be documented.

What does Article 22 enforcement actually look like?

Article 83 of the GDPR sets the fine framework: up to EUR 20 million or 4% of global annual turnover, whichever is higher, for breaches of the lawful-basis and data-subject-rights provisions, including Article 22 [7]. Recent enforcement actions across EU supervisory authorities have targeted credit-scoring providers, gig-economy platforms with algorithmic worker-management systems, AI-driven insurance pricing and biometric-based access systems. Fines have ranged from roughly EUR 100,000 for documentation failures to multi-million-euro penalties for systemic non-compliance.

The other risk vector is private litigation. Article 79 gives the data subject a right to a judicial remedy, and consumer collective actions under the Representative Actions Directive (effective June 2023) can now bundle Article 22 claims into class proceedings. The reputational and litigation cost of an Article 22 finding is frequently larger than the regulatory fine. ENISA's threat-landscape work on AI underlines the security and trust dimension that compounds the regulatory exposure [5].

How does Impetora design AI systems to be Article 22 ready?

Every Impetora build that touches a person-level decision starts with a written Article 22 analysis: is the decision solely automated, what is the lawful basis, what does meaningful human review look like, what does the explanation interface need to expose. The architecture step bakes the explanation, contest and access-rights workflows into the system rather than bolting them on afterwards. The Citations and Evidence pillar of TRACE makes per-decision explanation a property of the output, not a separate report.

For buyers building or buying an AI system that affects individuals, the practical test is to ask the vendor for a sample Article 22 analysis from a comparable past project. A vendor that can produce one within hours has done this work before. A vendor that returns a generic statement about "compliance with GDPR" has not.

Frequently asked questions

Does Article 22 apply to AI chatbots or generative AI?
It depends on what the chatbot does. A general information assistant that does not take person-level decisions sits outside Article 22. A chatbot that screens job candidates, denies access to a service, sets a price, classifies a person into a risk category or makes any decision with legal or similarly significant effects sits inside. Generative AI does not change the test - the question is the effect of the output on the individual, not the model architecture.
Is having a human review the AI output enough to escape Article 22?
Only if the review is meaningful. The EDPB's 2024 guidelines and the SCHUFA judgment together make clear that a nominal human in the loop does not remove Article 22 scope. The reviewer needs authority to override, sufficient information to evaluate the AI output, sufficient time to do so, and the override behaviour needs to be observable in the data. If the human approval rate is 99.9% across thousands of decisions per reviewer per day, regulators will treat the decisions as solely automated regardless of UI design.
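As a rough illustration of what "observable in the data" can mean in practice, the sketch below flags review patterns that look nominal rather than meaningful. The thresholds, names and signature are assumptions chosen for the example, not values from the EDPB guidelines or any supervisory authority.

```python
# Illustrative rubber-stamp detector; all thresholds are assumptions,
# not values taken from the EDPB guidelines or any regulator.
from dataclasses import dataclass


@dataclass
class ReviewerStats:
    reviewer_id: str
    decisions_reviewed: int
    overrides: int
    median_review_seconds: float


def flag_rubber_stamping(stats: ReviewerStats,
                         min_override_rate: float = 0.005,
                         min_review_seconds: float = 30.0,
                         min_sample: int = 200) -> list[str]:
    """Return warnings when review behaviour looks nominal rather than meaningful."""
    warnings = []
    if stats.decisions_reviewed < min_sample:
        return warnings  # not enough data to judge this reviewer yet
    override_rate = stats.overrides / stats.decisions_reviewed
    if override_rate < min_override_rate:
        warnings.append(
            f"{stats.reviewer_id}: override rate {override_rate:.2%} "
            "suggests the human is not exercising real authority")
    if stats.median_review_seconds < min_review_seconds:
        warnings.append(
            f"{stats.reviewer_id}: median review time "
            f"{stats.median_review_seconds:.0f}s suggests too little time per case")
    return warnings
```

Feeding a check like this from the decision log is one way to make the monitoring layer described above concrete: the warnings become the evidence a controller shows when asked whether its human review is meaningful.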
What counts as 'similarly significant effects'?
The EDPB and several supervisory authorities have published examples: pricing decisions that materially affect access to a product, employment screening, performance management decisions that affect promotion or dismissal, content moderation that affects livelihood (e.g. account suspension on a platform the person earns from), tenant screening, eligibility for public benefits and access to financial services. Targeted advertising in some specific contexts (e.g. exclusion from job advertising on protected-attribute proxies) has also been treated as in-scope by some authorities.
How does the EU AI Act interact with Article 22?
They are complementary regimes with overlapping obligations. The AI Act's Article 14 (human oversight) and Article 13 (transparency) for high-risk systems are stricter and more concrete than Article 22's safeguards, so a system designed to meet Article 14 typically also meets Article 22's safeguard requirement. But the legal bases are independent. Article 22 derives from data protection and applies wherever personal data is processed in an automated decision; the AI Act applies based on risk classification of the system. Both must be documented.
What does 'meaningful information about the logic involved' mean?
Not a copy of the model weights and not a generic statement that 'machine learning is used'. The EDPB has indicated that meaningful logic disclosure includes: the input variables and their relative importance to the outcome, the decision rule or threshold structure, the training-data domain and known limitations, and a worked example or counterfactual where relevant. It is a higher bar than most enterprises currently meet, and tooling for explanation interfaces is now a significant procurement category in regulated sectors.
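As a sketch only - the field names below are illustrative assumptions, not EDPB terminology - a per-decision explanation payload could carry exactly the elements listed above, so the same object serves the privacy notice, the Article 15(1)(h) response and the contest workflow.

```python
# Hypothetical per-decision explanation payload; field names and the
# worked example values are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class FeatureContribution:
    name: str      # human-readable input variable, e.g. "payment history"
    weight: float  # relative importance to the outcome, normalised to sum to 1


@dataclass
class DecisionExplanation:
    decision_id: str
    outcome: str                 # e.g. "credit application declined"
    decision_rule: str           # the threshold or rule structure applied
    top_factors: list[FeatureContribution] = field(default_factory=list)
    training_domain: str = ""    # training-data domain and known limitations
    counterfactual: str = ""     # what change would have altered the outcome


# Worked example with made-up values:
explanation = DecisionExplanation(
    decision_id="dec-2026-0001",
    outcome="credit application declined",
    decision_rule="score below approval threshold of 620",
    top_factors=[FeatureContribution("payment history", 0.41),
                 FeatureContribution("credit utilisation", 0.27)],
    training_domain="retail credit applications; limited coverage of thin files",
    counterfactual="score would exceed the threshold if utilisation fell below 30%",
)
```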
Are there any exceptions for fraud or security?
Recital 71 of the GDPR mentions fraud and tax-evasion monitoring as a context where automated decisions can be justified, and the Article 22(2)(b) Member State law exception is the primary route for these cases. National laws in Germany, France and elsewhere now explicitly authorise certain fraud-detection automated decisions with safeguards. The exception is narrow and the safeguards (human review, contest right, explanation) still apply. There is no 'fraud carve-out' from Article 22 in the GDPR text itself.
What documentation does a controller need to keep for Article 22 compliance?
At minimum: the per-system Article 22 analysis (in/out of scope, lawful basis, safeguards), the data protection impact assessment under Article 35, the explanation framework and the explanation surfaced to data subjects, the human-review process and monitoring evidence, the contest-and-appeal procedure and case log, records of access requests under Article 15(1)(h) and the responses, and the privacy notice text under Articles 13/14. Most supervisory authorities will ask for some subset of these on first contact.

Ready to scope your project? Submit a short brief and we reply within one business day.

Sources cited

  1. Guidelines on automated individual decision-making and profiling (revised 2024). European Data Protection Board, 2024. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines
  2. Article 22 GDPR - Automated individual decision-making, including profiling. European Union (gdpr-info.eu), 2018-05-25. https://gdpr-info.eu/art-22-gdpr/
  3. SCHUFA Holding (Scoring) - Case C-634/21. Court of Justice of the European Union, 2023-12-07. https://curia.europa.eu/juris/liste.jsf?num=C-634/21
  4. Regulation (EU) 2024/1689 (Artificial Intelligence Act) - Articles 13, 14. European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
  5. Artificial Intelligence cybersecurity guidance. ENISA - European Union Agency for Cybersecurity, 2024. https://www.enisa.europa.eu/topics/cybersecurity-policy/artificial-intelligence
  6. ISO/IEC 42001:2023 - AI management systems. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
  7. Article 83 GDPR - General conditions for imposing administrative fines. European Union (gdpr-info.eu), 2018-05-25. https://gdpr-info.eu/art-83-gdpr/
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.