---
title: "EU AI Act Compliance for Banking AI in 2026 | Impetora"
description: "How the EU AI Act applies to banking AI in 2026: Annex III creditworthiness classification, EBA model risk and outsourcing guidelines, BCBS principles, and DORA operational resilience for AI vendors."
url: https://impetora.com/eu-ai-act/by-vertical/banking
locale: en
datePublished: 2026-04-27
dateModified: 2026-04-27
author: Impetora
---

# EU AI Act compliance for banking AI in 2026

> Banking AI is the most heavily regulated AI vertical in the European Union. Three regimes overlap: the EU AI Act's high-risk classification under Annex III, point 5(b) for creditworthiness and credit scoring; the European Banking Authority's body of guidelines on internal governance, outsourcing arrangements, and ICT and security risk management; and the Digital Operational Resilience Act (DORA) for ICT third-party risk management covering critical AI providers [1]. The Basel Committee's 2024 paper on the digitalisation of finance sets the convergent international floor [2].

*Updated 2026-04-27. By Impetora.*

## Which Annex III risk category applies to banking AI?

Annex III, point 5(b) of Regulation (EU) 2024/1689 covers AI systems "intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud" [1]. The named carve-out for fraud detection is meaningful in banking - transaction monitoring, anti-money-laundering screening, and fraud-scoring AI used to flag transactions for human review sit outside the high-risk band by explicit exception, even though they involve scoring natural persons. Inside the high-risk band: retail credit underwriting AI, behavioural scoring used in pricing or limit-setting, AI-driven affordability assessments, and propensity models that feed legally significant decisions. SME and corporate credit AI involves legal persons rather than natural persons in the borrower role; the Annex III text refers to "natural persons" specifically, so corporate credit AI is generally outside the high-risk band but inside the EBA model-risk and internal governance framework [3].

## What conformity assessment is required for high-risk banking AI?

Article 43 of the AI Act provides a special route for credit institutions. Article 43(2) allows banks regulated under Directive 2013/36/EU to integrate the AI conformity assessment into the existing internal control framework required by that Directive, rather than running a parallel internal-control procedure under Annex VI. The result is one assessment, not two, but the assessment has to satisfy both regimes [1]. The practical evidence is the integrated Annex IV technical documentation pack plus the EBA-aligned model risk management evidence: the model inventory entry, the model validation report from the independent validation function, the Board-level model risk governance approval, the use-case statement, and the ongoing monitoring plan. The EU declaration of conformity under Article 47 references both legal bases. The Article 49 EU database registration applies. The Article 72 post-market monitoring obligation applies on top of the EBA monitoring expectations.

## How is high-risk classification triggered for banking AI?

Three filters apply in sequence:

1. Function. Creditworthiness or credit scoring of a natural person triggers point 5(b); fraud detection is excluded by name; transaction monitoring for AML purposes is generally outside the high-risk band.
2. The natural-person test. SME and corporate credit AI is not in scope where the borrower is a legal person, even though the underlying directors or guarantors are natural persons.
3. The Article 6(3) carve-out for narrow procedural tasks, improvements on a completed human activity, pattern detection without replacing the human assessment, and preparatory tasks - applied with documented reasoning in the technical file.

The interaction with EBA model risk management is the part that catches banks. The EBA Guidelines on internal governance, the IRB model risk principles, and the SREP supervisory review process all require model-level documentation, validation and ongoing monitoring regardless of whether the model is technically "AI." A 2026 deployment that is high-risk under the AI Act and inside the IRB perimeter has to satisfy the AI Act technical documentation, the EBA validation expectations, and the SREP supervisory dialogue. DORA layers operational resilience expectations on top, including ICT third-party risk management when the AI is sourced from a vendor classified as a critical ICT third-party [4].
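The classification filters described above can be sketched as a decision function. This is a minimal illustration, not legal advice: the field names and purpose strings are hypothetical, and a real classification needs the documented reasoning in the technical file, not a boolean.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    purpose: str                   # hypothetical labels, e.g. "credit_scoring", "fraud_detection"
    subject_is_natural_person: bool
    narrow_procedural_task: bool   # Article 6(3) carve-out, with reasoning in the technical file

def is_high_risk_5b(system: AISystem) -> bool:
    """Sketch of the Annex III point 5(b) filter sequence. Illustrative only."""
    # Filter 1: function - fraud detection is excluded by name,
    # and only creditworthiness/credit-scoring functions trigger 5(b)
    if system.purpose == "fraud_detection":
        return False
    if system.purpose != "credit_scoring":
        return False
    # Filter 2: natural-person test - legal-person borrowers fall outside 5(b)
    if not system.subject_is_natural_person:
        return False
    # Filter 3: Article 6(3) carve-out for narrow procedural tasks
    if system.narrow_procedural_task:
        return False
    return True
```

The point of writing it this way is that each `return False` corresponds to one documented exclusion ground, which mirrors how the reasoning should appear in the technical file.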

## What technical documentation must a banking AI system produce?

Annex IV of the AI Act sets the contents [1]. For banking AI the binding parts are Article 10 data governance, Article 14 human oversight, and Article 15 accuracy and robustness. Training data must be representative across the bank's actual portfolio. A model trained on the prime-segment book will discriminate when applied to subprime; a model trained on a single-Member-State book will misprice when applied across the Single Market. The technical file has to evidence portfolio-representative training and out-of-distribution validation. The Basel Committee's 2024 paper on the digitalisation of finance and the BCBS principles for the sound management of operational risk converge on the same expectations from a prudential angle: Board-level model governance, an independent validation function, change management for model updates, and incident reporting [2]. The EBA's report on big data and advanced analytics, while older, is the canonical regulator-side reading of how these expectations apply to AI specifically [3].
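One common industry metric for evidencing the representativeness check above is the population stability index (PSI) between the training distribution and the live portfolio. The AI Act does not mandate PSI specifically; this is a sketch of one widely used heuristic, with the conventional (non-regulatory) alert bands noted in the docstring.

```python
import math

def population_stability_index(expected: dict, actual: dict, eps: float = 1e-6) -> float:
    """PSI between two categorical distributions given as {category: count} dicts.
    Common industry heuristic (not a regulatory threshold):
    < 0.1 stable, 0.1-0.25 monitor, > 0.25 material shift."""
    categories = set(expected) | set(actual)
    e_total = sum(expected.values())
    a_total = sum(actual.values())
    psi = 0.0
    for c in categories:
        e = expected.get(c, 0) / e_total or eps  # floor zero shares to avoid log(0)
        a = actual.get(c, 0) / a_total or eps
        psi += (a - e) * math.log(a / e)
    return psi
```

For example, a model trained on a book that was 80% prime and applied to a book that is 50% prime produces a PSI above 0.25, the kind of shift the out-of-distribution validation evidence has to surface.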

## What does human oversight look like for retail credit AI?

Article 14 expects a designated reviewer with the competence, authority and information to override. For high-volume retail credit, the operational pattern is sampled review with full review of edge cases - declines near the threshold, approvals for atypical applicants, output below a confidence band - rather than 100% review. The threshold itself is a control. EBA expects the threshold and the review architecture to be documented and tested. GDPR Article 22 prohibits decisions based solely on automated processing producing legal or similarly significant effects unless one of the three exemptions applies; the CJEU C-634/21 (SCHUFA) decision clarified that a score handed to a third party that relies heavily on it can itself be the Article 22 decision [5]. Article 86 of the AI Act, applying from 2 August 2026, gives the data subject the right to a clear and meaningful explanation of the role of the AI in the decision and the main elements of the decision. For credit decisions this is a designed artefact: the features used, the model output, the human reviewer's decision and reasoning, and the alternative outcomes considered. Build the artefact at design time.
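The sampled-review architecture described above can be sketched as a routing function. All thresholds, band widths and sample rates here are hypothetical placeholders: in a real deployment they are the documented, tested controls, agreed with model risk and evidenced in the Article 14 design spec.

```python
import random

def route_for_review(score: float, threshold: float, confidence: float,
                     applicant_is_atypical: bool, band: float = 0.05,
                     confidence_floor: float = 0.7, sample_rate: float = 0.02,
                     rng=random) -> str:
    """Sketch of an edge-case routing control; all parameter values illustrative."""
    if abs(score - threshold) <= band:      # decline or approval near the cut-off
        return "full_review"
    if confidence < confidence_floor:       # model output below the confidence band
        return "full_review"
    if applicant_is_atypical:               # out-of-pattern applicant profile
        return "full_review"
    if rng.random() < sample_rate:          # background sampling control
        return "sampled_review"
    return "auto_with_audit_log"
```

Treating the routing parameters as named, versioned inputs (rather than hard-coded constants buried in the scoring pipeline) is what makes the threshold auditable as a control.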

## How does Impetora handle banking AI Act conformity?

Impetora ships every banking AI system with: a written risk classification analysis (point 5(b) yes or no, with the natural-person and function reasoning written out); a data-governance description aligned with Article 10 plus the EBA loan origination and IRB model risk expectations; an integrated Annex IV plus EBA model file under the Article 43(2) route; a human-oversight design spec mapped to Article 14, GDPR Article 22 and the SCHUFA case; an Article 86 explanation artefact; a DORA ICT third-party risk register entry where the AI is sourced from an external provider; and a post-market monitoring plan with named drift metrics, owners and reporting cadence. For deployments inside the IRB perimeter, the technical file is integrated into the bank's existing IRB model documentation rather than produced as a parallel pack. Cross-references: the EU AI Act overview, the banking industry hub, the decision-support AI use case, and the TRACE methodology.
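A post-market monitoring plan with named metrics, owners and cadence can be represented as structured data rather than a prose document, which makes it diffable and reviewable. The system identifier, metric names, thresholds and role titles below are hypothetical examples, not Impetora's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class DriftMetric:
    name: str         # what is measured
    threshold: float  # alert level agreed with model risk (illustrative values)
    owner: str        # named accountable role
    cadence: str      # reporting frequency

@dataclass
class PostMarketMonitoringPlan:
    system_id: str
    legal_basis: tuple = ("AI Act Art. 72", "EBA ongoing monitoring")
    metrics: list = field(default_factory=list)

# Hypothetical plan entry for a retail underwriting model
plan = PostMarketMonitoringPlan(
    system_id="retail-underwriting-v3",
    metrics=[
        DriftMetric("score PSI vs reference population", 0.25, "model risk lead", "monthly"),
        DriftMetric("human override rate", 0.10, "credit ops lead", "weekly"),
        DriftMetric("approval-rate delta by segment", 0.05, "fair lending officer", "monthly"),
    ],
)
```

Keeping the Article 72 basis and the EBA monitoring expectation on the same record reflects the integrated single-file approach described above.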

## Frequently asked questions

### Is fraud detection AI high-risk under the EU AI Act?

No. Annex III, point 5(b) explicitly excludes 'AI systems used for the purpose of detecting financial fraud' from the creditworthiness high-risk category. Transaction monitoring, AML screening, and fraud-scoring AI used to flag transactions for human review sit outside the high-risk band. They are still inside the EBA's ICT and security risk management guidelines and inside DORA's operational resilience framework, but not inside the AI Act's high-risk obligations.

### Does the AI Act apply to corporate credit AI?

Annex III point 5(b) refers to creditworthiness of 'natural persons.' SME and corporate credit AI where the borrower is a legal person sits outside the high-risk band on its face. It is still inside the EBA Guidelines on loan origination and monitoring and the IRB model risk framework, which apply to all credit models regardless of AI Act classification. The legal-person carve-out is on the AI Act dimension only.

### What does Article 43(2) of the AI Act add for banks?

Article 43(2) allows credit institutions regulated under Directive 2013/36/EU to integrate the AI conformity assessment into the existing internal control framework required by that Directive, rather than running a parallel Annex VI internal-control procedure. The practical effect is one integrated assessment rather than two, but the assessment has to satisfy both legal bases. The Annex IV technical documentation can be merged with the EBA model risk file.

### How does DORA interact with the AI Act for banking AI?

DORA (Regulation (EU) 2022/2554) applies from 17 January 2025. It sets ICT operational resilience and ICT third-party risk management expectations. When a bank sources AI from a vendor classified as a critical ICT third-party, DORA requires a written register entry, contractual operational resilience clauses, exit strategy, and incident reporting. The AI Act's product-level conformity assessment sits alongside DORA's third-party risk regime; both apply concurrently.

### When do the high-risk obligations apply to banking AI?

2 August 2026 for the bulk of high-risk Annex III obligations, including point 5(b). Prohibited practices applied from 2 February 2025; general-purpose AI obligations applied from 2 August 2025; DORA applied from 17 January 2025. A 2026 procurement should be specified to the August 2026 floor, with concurrent treatment of DORA, the EBA model risk expectations and GDPR Article 22.

### Is GDPR Article 22 satisfied if a credit underwriter clicks confirm on every AI recommendation?

Only if the underwriter is a meaningful reviewer with authority and information to overturn. The CJEU SCHUFA decision (C-634/21) clarified that a score handed to a decision-maker who relies on it strongly can itself be the Article 22 decision. A confirm-button workflow without genuine override capacity does not satisfy Article 22. The reviewer interface has to surface the model inputs, the output, the confidence band and the alternatives, and the override has to be logged with reason code.
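The logged-override requirement above implies a record that captures inputs, output, reviewer identity, decision and reason code together. A minimal sketch, with entirely hypothetical field names and values, not a mandated schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideRecord:
    """Illustrative shape of a logged reviewer decision; fields are assumptions."""
    application_id: str
    model_output: float
    model_inputs_ref: str   # pointer to the feature snapshot shown to the reviewer
    reviewer_id: str
    decision: str           # "uphold" or "override"
    reason_code: str        # drawn from a controlled vocabulary
    timestamp: str

record = OverrideRecord(
    application_id="APP-0001",              # hypothetical
    model_output=0.41,
    model_inputs_ref="features/APP-0001@v3",
    reviewer_id="u-credit-17",
    decision="override",
    reason_code="verified-income-update",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
```

Freezing the record and referencing (rather than copying) the feature snapshot keeps the log tamper-evident and replayable for the Article 86 explanation artefact.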

### What is the practical scope of the BCBS 2024 paper on digitalisation in finance?

The Basel Committee's May 2024 paper, 'Digitalisation of finance,' sets convergent international expectations on AI governance, data governance, model risk management, and operational resilience for AI used in banking. It is not directly binding inside the EU - the binding regime is the AI Act plus the EBA framework plus DORA - but it is the international floor that Member State supervisors reference, and large internationally active banks build to it as a single global standard rather than splitting documentation per jurisdiction.

## Sources cited

1. Regulation (EU) 2024/1689 (Artificial Intelligence Act), Annex III point 5(b), Articles 6, 10, 14, 43, 86. European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
2. Digitalisation of finance. Bank for International Settlements, Basel Committee on Banking Supervision, 2024-05-16. https://www.bis.org/bcbs/publ/d575.htm
3. EBA Guidelines on loan origination and monitoring (EBA/GL/2020/06). European Banking Authority, 2020-05-29. https://www.eba.europa.eu/regulation-and-policy/credit-risk/guidelines-on-loan-origination-and-monitoring
4. Regulation (EU) 2022/2554 (Digital Operational Resilience Act, DORA). European Union, Official Journal, 2022-12-14. https://eur-lex.europa.eu/eli/reg/2022/2554/oj
5. Judgment in Case C-634/21 (SCHUFA Holding) on automated credit scoring. Court of Justice of the European Union, 2023-12-07. https://curia.europa.eu/juris/liste.jsf?num=C-634/21
6. Directive 2013/36/EU (Capital Requirements Directive IV). European Union, Official Journal, 2013-06-26. https://eur-lex.europa.eu/eli/dir/2013/36/oj
7. Generative artificial intelligence in finance. Bank for International Settlements, 2024-08. https://www.bis.org/fsi/publ/insights63.htm
