AI for banking, from credit decisioning to KYC, AML and treasury automation.
AI for banking is the design and deployment of custom systems that automate credit decisioning, KYC and transaction monitoring, document processing, fraud detection, and treasury forecasting while preserving the model-risk discipline that supervisors and internal validation teams expect. Impetora builds these systems for retail and commercial banks, neobanks and fintechs, with classification against EU AI Act Annex III §5(b) (creditworthiness scoring is high-risk by default) and audit logs that map cleanly onto the Federal Reserve SR 11-7 model-risk-management standard.
How AI is reshaping banking in 2026
Capability is no longer the bottleneck. Model risk management is. The banks winning with AI are the ones treating SR 11-7 lineage and EU AI Act conformity as the deliverable, not the afterthought.
Banking has run on probabilistic models for decades. What changes with generative and agentic AI is the surface area: credit memos, KYC documentation, AML alerts, customer correspondence, fraud narratives and treasury commentary all become drafts a model can produce, with a human reviewer signing off the exception path.
The Financial Stability Board's November 2024 report on AI in financial services flagged adoption of generative AI across credit, fraud and operations as the fastest-moving development in the sector since cloud migration. The BIS Financial Stability Institute Insights paper #63 documents how supervisors expect generative AI to fit within existing model-risk-management frameworks rather than outside them, and the Bank of England's Financial Stability in Focus, April 2025 calls out third-party concentration and explainability as the two most material risks.
The unsolved problem is not capability; it is governance. Supervisors, internal model-validation, and second-line risk all want the same artefact: a verifiable record of what the model saw, what it produced, which version of the prompt and weights ran, and which human approved the exception. We treat that record as the deliverable.
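That record can be sketched as one structured entry per model invocation. The field names and hashing scheme below are illustrative assumptions, not our production schema:

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AuditRecord:
    """One entry per model invocation: what the model saw, what it
    produced, which versions ran, and who approved the exception."""
    case_id: str
    input_hash: str             # hash of exactly what the model saw
    output_hash: str            # hash of what it produced
    prompt_version: str         # which prompt template ran
    model_version: str          # which model weights ran
    approved_by: Optional[str]  # human who signed off the exception, if any

def make_record(case_id, model_input, model_output,
                prompt_version, model_version, approved_by=None):
    sha = lambda s: hashlib.sha256(s.encode()).hexdigest()
    return AuditRecord(case_id, sha(model_input), sha(model_output),
                       prompt_version, model_version, approved_by)

rec = make_record("app-4711", "bureau data + statements", "draft credit memo",
                  "credit-memo-v3", "model-2026-01", approved_by="underwriter-17")
```

Hashing the input and output rather than storing them inline keeps the record small while still letting a validator prove what the model saw.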
Supervisors expect generative AI to fit within existing model-risk-management frameworks, not outside them.
Use cases we deliver for retail and commercial banks, neobanks and fintechs
KYC and AML transaction monitoring
Alert backlogs in financial-crime ops scale linearly with onboarding and transaction volume. Analysts spend the bulk of their time on false positives, and the auditable record of why each alert was cleared is often a free-text field.
Credit decisioning with explainability
Underwriters spend hours synthesising bureau data, bank statements, business filings and policy rules into a single decision. EU AI Act Annex III §5(b) classifies creditworthiness scoring as high-risk - explainability and audit trail are not optional.
Loan and KYC document automation
Mortgage files, business-loan packs and KYC bundles arrive as PDFs and scans of 30 to 200 pages. Operations teams burn FTE hours re-keying fields into the core platform, and field-level error rates of 1 to 3% drive rework downstream.
Customer support automation across digital channels
Chat, email and in-app messaging absorb a large share of service-team capacity. Routing, intent classification, drafting and policy lookup are repetitive and rules-based, but each response still has to be defensible against compliance and consumer-duty obligations.
Fraud pattern detection
Rules-based fraud engines miss novel patterns, and tuning thresholds is a blunt instrument. The cost of a false positive is customer friction; the cost of a false negative is loss and a regulator letter.
Treasury and cash-flow forecasting AI
Treasury teams reconcile fragmented liquidity, intraday flow and FX exposure across multiple core systems. Daily forecasting work is highly manual, and the lineage of inputs is rarely auditable end-to-end.
How TRACE applies to banking AI
Trust
Readiness
Architecture
Citations and lineage
Regulatory considerations for banking AI
Banking AI sits inside multiple overlapping regulatory frameworks. We map every engagement to the relevant authority before code is written.
- 01
EU AI Act Annex III §5(b) - creditworthiness scoring is high-risk
AI systems used to evaluate creditworthiness or establish a credit score are classified as high-risk. Mandatory conformity assessment, risk management, data governance, technical documentation, transparency, human oversight, accuracy and cybersecurity controls. We build to that bar by default.
Source: EUR-Lex
- 02
Federal Reserve SR 11-7 - model risk management
The global benchmark for model-risk discipline. Sound development, implementation and use; effective validation; sound governance, policies and controls. Our architecture and audit logs map onto SR 11-7 line items so validation teams do not have to invent a new control set.
Source: Federal Reserve
- 03
GDPR Article 22 - automated individual decisions
Decisions producing legal or similarly significant effects (including loan refusal) cannot be made solely on automated processing without explicit safeguards. We design every credit-decisioning workflow to preserve the human reviewer's substantive role.
Source: GDPR-Info
- 04
EBA guidelines on loan origination and monitoring
European Banking Authority expectations on credit underwriting governance, including data quality, model use and ongoing monitoring. AI-assisted underwriting must fit inside these guidelines, not bypass them.
Source: EBA
- 05
DORA - Digital Operational Resilience Act
Banks are in-scope financial entities. AI systems used in critical or important functions inherit DORA obligations on ICT risk management, third-party risk, incident reporting and resilience testing.
Source: EIOPA
- 06
BCBS - Basel Committee guidance
The Basel Committee on Banking Supervision sets the prudential framework AI systems sit inside. Operational risk, internal governance and disclosure expectations flow through to any AI used in credit, AML or operations.
Source: BIS - BCBS
How we typically engage
Three phases. The discovery sprint always comes first, and the cost of doing it is recovered the moment scope is locked correctly against the bank's MRM and conformity calendar.
- 01
1 to 2 weeks
Discovery
Workflow audit, model-risk-management baseline against SR 11-7 line items, sample 30 days of real cases (applications, alerts, exceptions), scope sign-off with named success metrics. Output is a written diagnosis with EU AI Act risk classification and the conformity-assessment gap list.
- 02
4 to 12 weeks
Build
Production architecture, eval suite tied to the case mix, shadow-mode rollout where the AI runs alongside analysts with output logged but not actioned, core-banking integration, audit-log delivery, and the SR 11-7 / EU AI Act conformity pack as a single deliverable.
- 03
Ongoing
Operate
Quarterly drift reports, eval-set growth from real human corrections, model-version upgrades behind a regression suite, regulatory-update tracking. The system stays accurate as the case mix and the regulation evolve.
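The shadow-mode rollout in the build phase can be sketched in a few lines: the model runs on every case, its output is logged next to the analyst's, but only the analyst's decision is actioned. The function and field names are illustrative:

```python
shadow_log = []

def handle_case(case, analyst_decide, model_decide, shadow_mode=True):
    """Run the model alongside the analyst on one case. In shadow mode
    the model's output is logged for evaluation but never actioned."""
    model_out = model_decide(case)
    analyst_out = analyst_decide(case)
    shadow_log.append({"case_id": case["id"], "model": model_out,
                       "analyst": analyst_out,
                       "agreed": model_out == analyst_out})
    # In shadow mode the actioned decision is always the analyst's.
    return analyst_out if shadow_mode else model_out

def agreement_rate():
    """Share of shadow cases where model and analyst agreed."""
    if not shadow_log:
        return None
    return sum(e["agreed"] for e in shadow_log) / len(shadow_log)
```

The agreement rate over the shadow log is the evidence base for deciding when, and for which case families, the model graduates from shadow to assisted mode.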
Frequently asked questions
How do you handle EU AI Act high-risk classification for credit-decisioning AI?
Annex III §5(b) classifies AI used to evaluate the creditworthiness of natural persons or to establish their credit score as high-risk. That triggers obligations on risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness and cybersecurity. We build conformity-assessment scaffolding into the system from week one: an ISO 42001-aligned governance memo, the technical documentation pack the regulation requires, an append-only audit log, and a documented human-in-the-loop step for any decision that affects a customer. If your specific use case is limited-risk rather than high-risk, we ship the proportionate controls, but we never default to less than the regulation requires.
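One way to make an audit log append-only in substance, not just in policy, is hash chaining: each entry carries the hash of the previous one, so any retroactive edit breaks verification. A minimal sketch, with illustrative entry fields:

```python
import hashlib
import json

class AppendOnlyLog:
    """Append-only audit log: each entry chains the hash of the
    previous entry, so any retroactive edit breaks verification."""
    def __init__(self):
        self._entries = []

    def append(self, payload: dict) -> str:
        prev = self._entries[-1]["hash"] if self._entries else "genesis"
        body = json.dumps(payload, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self._entries.append({"payload": payload, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        prev = "genesis"
        for entry in self._entries:
            body = json.dumps(entry["payload"], sort_keys=True)
            if entry["prev"] != prev:
                return False
            if entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
                return False
            prev = entry["hash"]
        return True
```

Serialising with `sort_keys=True` keeps the hash deterministic regardless of insertion order in the payload dict.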
How does your architecture map onto SR 11-7 model-risk management?
Federal Reserve SR 11-7 is the most-cited model-risk-management standard globally and most internal validation teams already work to it (or its OCC counterpart, Bulletin 2011-12). We anchor every banking-AI engagement to its three pillars: sound development, implementation and use; effective validation; and sound governance, policies and controls. The technical-documentation pack we deliver mirrors the SR 11-7 line items so validation teams do not have to invent a new control set. Where the bank operates under PRA SS1/23 or another local equivalent, we align to that instead.
Can the system integrate with Temenos, Mambu, FIS, Finastra or our in-house core?
Yes. The delivery layer is built around your core, not the other way around. We ship integrations with Temenos Transact and Infinity, Mambu, FIS Profile, Finastra Fusion, and the major card and AML platforms (Actimize, SAS AML, ComplyAdvantage). For systems without a modern API we build a queue-based bridge with idempotent writes and a manual reconciliation interface. The audit log writes regardless of where the data lands.
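The idempotent-write behaviour of the queue-based bridge can be sketched as follows; the class and parameter names are illustrative, and a production version would persist the key set rather than hold it in memory:

```python
class QueueBridge:
    """Queue-based bridge into a core without a modern API. Every write
    carries an idempotency key so redelivered messages never double-post."""
    def __init__(self, core_write):
        self._core_write = core_write   # callable that posts one record to the core
        self._applied = set()           # idempotency keys already written

    def submit(self, idempotency_key, record) -> bool:
        if idempotency_key in self._applied:
            return False                # duplicate delivery: already applied, skip
        self._core_write(record)
        self._applied.add(idempotency_key)
        return True
```

Because queues generally guarantee at-least-once delivery, the deduplication has to live on the consumer side; the returned boolean feeds the manual reconciliation interface.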
How do you preserve explainability and human review for automated decisions under GDPR Article 22?
Article 22 prohibits decisions producing legal or similarly significant effects from being made solely on automated processing without explicit safeguards. For credit decisioning that means the human reviewer has to retain a substantive, not rubber-stamp, role. We design the workflow so the AI structures the decision packet (features, policy hits, comparable cases, draft rationale) and the human underwriter signs the actual decision with the ability to override. Every override is logged with reason codes and a free-text rationale, and the customer-facing reasons-for-decline letter is generated against the same audit log.
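The override-logging rule can be sketched as a guard on the decision write: an override is only accepted with a valid reason code and a non-empty rationale. The reason codes here are hypothetical placeholders:

```python
REASON_CODES = {"RC01", "RC02", "RC03"}   # hypothetical reason-code set

override_log = []

def record_decision(case_id, ai_recommendation, human_decision,
                    reason_code=None, rationale=""):
    """Log the signed decision. An override (human disagrees with the
    AI) requires a valid reason code plus a free-text rationale."""
    overridden = human_decision != ai_recommendation
    if overridden and (reason_code not in REASON_CODES or not rationale.strip()):
        raise ValueError("override requires a reason code and rationale")
    entry = {"case_id": case_id, "ai": ai_recommendation,
             "human": human_decision, "overridden": overridden,
             "reason_code": reason_code, "rationale": rationale}
    override_log.append(entry)
    return entry
```

Rejecting an override without a rationale at write time is what keeps the reviewer's role substantive rather than a rubber stamp, and the same entries feed the reasons-for-decline letter.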
How does DORA apply to AI systems running inside the bank?
DORA applies to banks as in-scope financial entities under Article 2(1)(a). AI systems used in critical or important functions therefore inherit DORA obligations on ICT risk management, third-party risk, incident reporting and resilience testing. We deliver the third-party-risk register entry, the ICT-incident playbook for AI-specific failure modes (drift, prompt-injection, model unavailability), and a resilience-testing protocol that fits inside DORA's threat-led penetration-testing regime where applicable.
Where is the data processed, and do you train on our data?
By default, all processing and storage runs in EU regions on infrastructure under EU jurisdiction. We support regional pinning when a regulator or contract requires it (Germany-only, Lithuania-only, US-only). Customer data lands in immutable EU object storage with hashes recorded in the audit log. We do not train any model on your data, full stop. If your contract requires US-resident processing for a US-only line of business, we expose that as an explicit configuration toggle, never a default.
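Regional pinning plus hash recording can be sketched as a single guarded write: the region must be in the pinned set, and the content hash is returned for the audit log. The region names and function signature are illustrative:

```python
import hashlib

DEFAULT_REGIONS = frozenset({"eu-central", "eu-north"})   # EU-only by default

def store_object(data: bytes, region="eu-central",
                 allowed_regions=DEFAULT_REGIONS):
    """Refuse any region outside the pinned set, and return the content
    hash to be recorded in the audit log alongside the stored object."""
    if region not in allowed_regions:
        raise ValueError(f"region {region!r} blocked by residency policy")
    digest = hashlib.sha256(data).hexdigest()
    # ...write to immutable object storage here (omitted)...
    return {"region": region, "sha256": digest}
```

Widening `allowed_regions` is the explicit configuration toggle: a US-resident deployment has to pass a different set deliberately, so the EU-only default can never be loosened by accident.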
What is the typical scope for a first banking-AI engagement?
A first engagement targets one workflow with a measurable baseline, runs 4 to 12 weeks to production, and lands as a single signed-off system inside one core or supporting platform. Common first scopes are: KYC document automation, AML alert triage on one alert family, credit decisioning on one product (unsecured consumer lending or SME lending), or fraud-pattern detection on one product line. Submit a project with the workflow you have in mind and the rough volume, and we scope and price the discovery phase before any code is written.
What does a banking-AI engagement cost?
Pricing is set after the discovery sprint, against your specific workflow, integration surface and conformity scope. We do not publish a flat rate because the variation across banking AI is wide: KYC document extraction on a stable form set is a different build from a multi-product credit-decisioning stack with full SR 11-7 documentation. Submit a project with the workflow and rough volume, and we come back with a discovery proposal within one business day.
Considering AI for your banking team?
Tell us the workflow you have in mind and we come back within one business day with a discovery proposal.