
AI credit decisioning fairness: what EU rules require

By Impetora

Fairness in AI credit decisioning is not a soft expectation. It is a stack of binding obligations: the Chapter III duties the EU AI Act attaches to high-risk systems listed in Annex III, point 5(b), GDPR Article 22 safeguards as interpreted by the Court of Justice in the SCHUFA judgment (C-634/21), EBA loan-origination guidance, and consumer-protection rules including the Consumer Credit Directive (Directive (EU) 2023/2225) [1].

What does fairness mean in EU credit AI?

Fairness in this context is a regulatory term combining several legal obligations. First, the prohibition of discrimination on protected grounds (Article 21 of the Charter of Fundamental Rights, the Race Equality Directive 2000/43/EC, the Gender Goods and Services Directive 2004/113/EC). Second, the Article 22 GDPR safeguards on solely automated decisions. Third, the AI Act's data-governance and bias-testing duties under Article 10. Fourth, the EBA loan-origination guidelines' expectation that automated models be challenged for unintended outcomes.

Statistical fairness metrics (demographic parity, equalised odds, calibration) are tools used to evidence compliance with these obligations. None of them is itself a legal standard.
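To make those metric families concrete, here is a minimal sketch in Python, assuming binary approve/reject predictions, continuous model scores, and a categorical protected attribute; all names are illustrative, and none of these functions encodes a legal standard.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in approval rates between the most- and least-approved groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalised_odds_gaps(y_true, y_pred, group):
    """Largest cross-group gaps in true-positive and false-positive rates."""
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # TPR within group g
        fprs.append(y_pred[m & (y_true == 0)].mean())  # FPR within group g
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

def calibration_by_group(y_true, scores, group, bins=10):
    """Observed outcome rate per score decile, per group; equal predictive
    validity means the per-group curves roughly coincide."""
    edges = np.quantile(scores, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(scores, edges[1:-1]), 0, bins - 1)
    return {g: [y_true[(group == g) & (idx == b)].mean() for b in range(bins)]
            for g in np.unique(group)}
```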

What did the SCHUFA judgment change?

In December 2023 the Court of Justice ruled in case C-634/21 (OQ v SCHUFA Holding) that the automated establishment of a probability value (a credit score) is itself a "decision based solely on automated processing" under Article 22 GDPR when that score plays a determining role in a subsequent contractual decision [2].

The practical consequence: the credit-scoring step, not just the lender's loan decision, falls inside Article 22. Both the score provider and the lender carry obligations. Banks that use third-party scoring (SCHUFA, Experian, Equifax, national bureaus) must ensure the upstream score complies and that the downstream decision provides Article 22 safeguards.

What does the AI Act add for credit fairness?

Article 10 of the AI Act sets data-governance duties for high-risk systems including credit AI: training, validation and testing data must be relevant, sufficiently representative, free of errors as far as possible, and complete. Article 10(5) explicitly permits processing of special categories of personal data where strictly necessary to ensure bias detection and correction.

Article 14 requires effective human oversight. Article 15 requires accuracy, robustness and cybersecurity proportionate to the intended purpose. The combination means that bias testing, including disparate-impact testing across protected classes, is a documented duty rather than a discretionary best practice.
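As one hedged illustration of what a documented disparate-impact test can look like: the sketch below compares approval rates across protected classes and flags any group falling below a set fraction of the most-favoured group's rate. The 0.8 default echoes the US four-fifths rule; EU law fixes no numeric ratio, so the threshold is an assumption for the bank's own policy to set.

```python
import numpy as np

def disparate_impact_check(y_pred, group, min_ratio=0.8):
    """Flag groups whose approval rate falls below min_ratio times the
    highest group rate. min_ratio is a policy assumption, not a legal rule."""
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    best = max(rates.values())
    flagged = {g: round(r / best, 3) for g, r in rates.items()
               if r / best < min_ratio}
    return rates, flagged  # any flagged group warrants documented investigation
```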

What concrete controls demonstrate fairness compliance?

Supervisors expect to see six controls. First, exclusion of protected attributes and validated proxies from the feature set. Second, disparate-impact testing across protected classes at training time and in production monitoring. Third, calibration analysis showing equal predictive validity across groups. Fourth, documented data-governance procedures aligned with Article 10. Fifth, a human-oversight policy mapping scores to actions with clear override authority (Article 14). Sixth, customer-facing meaningful information and a right-to-contest pathway (Article 22 GDPR).

The most common audit gap is treating bias testing as a one-off pre-launch check. Production drift can introduce disparities a clean development model never showed; without ongoing monitoring, the bank inherits the drift undetected.
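A minimal sketch of what ongoing monitoring might look like, reusing demographic_parity_diff from the earlier sketch; the limit values are illustrative placeholders for thresholds the bank's model risk policy would actually define.

```python
# Illustrative batch monitor; both limits are assumed policy values.
PARITY_GAP_LIMIT = 0.05   # absolute ceiling set by governance (assumed)
DRIFT_MULTIPLIER = 2.0    # relative drift vs. development baseline (assumed)

def monitor_batch(y_pred, group, baseline_gap):
    """Recompute the parity gap on a production scoring batch and flag
    drift past either the absolute limit or a multiple of the baseline."""
    gap = demographic_parity_diff(y_pred, group)
    drifted = gap > PARITY_GAP_LIMIT or gap > DRIFT_MULTIPLIER * baseline_gap
    if drifted:
        # A real system would open an incident and trigger re-testing here,
        # per the documented thresholds-and-triggers policy.
        print(f"fairness drift: parity gap {gap:.3f} breached policy limits")
    return gap, drifted
```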

How does the Consumer Credit Directive 2023 fit?

Directive (EU) 2023/2225 modernised the EU consumer credit framework. Article 18 covers creditworthiness assessment and explicitly addresses the use of automated processing. Where assessment is conducted through automated processing, the consumer has the right to request human intervention, express their point of view and obtain a clear explanation of the assessment [3].

This is consistent with Article 22 GDPR but adds a sector-specific layer: explicit reference in the credit-agreement disclosures and a right to a clear explanation of the assessment itself, not merely the logic at large. Banks must update pre-contractual information as Member States transpose the directive (transposition is due by 20 November 2025, with the rules applying from 20 November 2026).

How does Impetora support fairness work?

Impetora's TRACE methodology builds the four artefacts supervisors examine: a data-governance pack aligned with Article 10, a fairness testing protocol covering disparate impact and calibration across protected classes, a human-oversight policy mapping scores to actions, and a customer-facing explanation framework that satisfies Article 22 and Directive 2023/2225 Article 18. Trust covers the contracting layer (third-party score providers, sub-processors). Citations and Evidence covers the audit trail.

Frequently asked questions

Can we exclude only the protected attribute and call the model fair?
No. Validated proxies (postcode, given name, employment sector) often correlate strongly with protected attributes. Fairness testing must look at outcomes by protected class even when the attribute itself is excluded; a simple proxy screen is sketched below. Article 10(5) of the AI Act explicitly permits processing of special-category data for the purpose of bias detection.
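The screen mentioned above can be as simple as testing how well each candidate feature predicts the protected attribute on its own. This sketch assumes numerically encoded features and a binary protected attribute; the AUC flag level is an assumed policy parameter.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_screen(X, protected, feature_names, auc_flag=0.65):
    """Flag features whose single-variable AUC for predicting the protected
    attribute exceeds auc_flag (an assumed threshold, not a legal standard)."""
    flagged = {}
    for j, name in enumerate(feature_names):
        auc = cross_val_score(LogisticRegression(max_iter=1000),
                              X[:, [j]], protected,
                              cv=5, scoring="roc_auc").mean()
        if auc > auc_flag:
            flagged[name] = round(float(auc), 3)
    return flagged
```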
Does Article 22 GDPR apply if a human signs off?
Only if the human review is meaningful, not rubber-stamping. EDPB guidelines and case law require that the human have authority and competence to overturn the decision. A human pressing approve on every recommendation does not lift the case out of Article 22.
What does meaningful information require?
Enough that the customer can meaningfully exercise their rights to contest and obtain human intervention. The 2024 EDPB guidelines describe the standard as accessible, accurate and useful, without requiring disclosure of trade secrets or full model internals. Boilerplate is insufficient; sector-specific narrative explaining the main feature families and outcome categories is the working template.
Are there fines for fairness failures specifically?
AI Act non-compliance attracts up to EUR 15 million or 3 percent of worldwide annual turnover for high-risk system breaches; up to EUR 35 million or 7 percent for prohibited-practice breaches. GDPR Article 22 breaches attract up to EUR 20 million or 4 percent. National consumer-protection regulators have additional powers.
How often should fairness be re-tested?
Continuous monitoring with documented thresholds and triggers. Most banks set quarterly fairness reports for credit-touching models, with immediate triggered re-testing on material data shifts. The cadence and triggers must be in the model risk policy approved by governance.
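One way to keep the approved policy and the monitoring job in sync is to express the cadence and triggers as a single machine-readable object that the monitoring code reads; all field names and values below are illustrative, not a regulatory template.

```python
# Illustrative policy object; values are examples, not regulatory minima.
FAIRNESS_MONITORING_POLICY = {
    "scope": "all credit-touching models",
    "scheduled_retest": "quarterly",
    "metrics": ["demographic_parity_diff", "equalised_odds_gaps",
                "calibration_by_group"],
    "thresholds": {"parity_gap": 0.05, "tpr_gap": 0.05},
    "retest_triggers": [
        "material shift in input data distribution",
        "model retrain or feature-set change",
        "threshold breach in a scheduled report",
    ],
    "approver": "model risk committee",
}
```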

Ready to scope your project? Submit a short brief and we reply within one business day.

Sources cited

  1. Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
  2. Case C-634/21, OQ v SCHUFA Holding AG. Court of Justice of the European Union, 2023-12-07. https://curia.europa.eu/juris/document/document.jsf?docid=280426
  3. Directive (EU) 2023/2225 on consumer credit. European Union, Official Journal, 2023-10-18. https://eur-lex.europa.eu/eli/dir/2023/2225/oj
  4. Guidelines on loan origination and monitoring (EBA/GL/2020/06). European Banking Authority, 2020-05-29. https://www.eba.europa.eu/regulation-and-policy/credit-risk/guidelines-on-loan-origination-and-monitoring
  5. Guidelines 1/2024 on automated decisions under Article 22 GDPR. European Data Protection Board, 2024. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines_en
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.