
Model risk management for AI: SR 11-7 and the EU equivalent

By Impetora

Model risk management (MRM) is the discipline banks have used since the 2008 crisis to govern the quantitative models they rely on for credit, capital, market risk, AML and pricing decisions. The US Federal Reserve and OCC's Supervisory Letter SR 11-7 set the global benchmark; the EBA, ECB and Basel Committee have built the EU equivalent through internal-governance guidelines, the ECB Guide to Internal Models and BCBS 239. AI and machine-learning models inherit the full MRM regime when they are used in regulated banking decisions [1].

Key facts:
- 2011-04-04: SR 11-7 publication date (Federal Reserve)
- 5: lifecycle stages in SR 11-7 (Federal Reserve)
- BCBS 239: risk-data aggregation principles (BIS)
- EBA / ECB: EU MRM expectations (EBA)

What is model risk management and why does it apply to AI?

SR 11-7, jointly issued by the Federal Reserve and the OCC in April 2011, defines model risk as the potential for adverse consequences from decisions based on incorrect or misused model outputs. The letter sets out the supervisory expectations for the entire model lifecycle: development and implementation, model validation, ongoing monitoring, governance and policies, and use of vendor models. It is the operative reference for how US banks govern any quantitative model used in regulated decisions, and the EU regulators have adopted substantively equivalent expectations [1].

AI and machine-learning models are explicitly inside the MRM perimeter. The OCC, Federal Reserve, FDIC, EBA, ECB and BoE have all issued statements confirming that an AI model used to support a regulated banking decision is a "model" for MRM purposes, regardless of whether it is a regression, a gradient-boosted tree, a deep neural network or a foundation-model-based application. The 2021 interagency request for information on AI in financial services, and the agencies' statements since, confirmed that the regulators expect SR 11-7 to govern AI use cases without modification.

The implication is that any bank deploying AI for credit underwriting, transaction monitoring, customer scoring, fraud detection or capital calculation must run the model through the same independent validation, ongoing monitoring and governance process as a traditional regulatory model.

What does the SR 11-7 lifecycle require for AI models?

SR 11-7 organises model risk around five activities:

1. Model development: the model owner documents the conceptual soundness, data sources, methodology, limitations and intended use. For AI models, this includes training-data provenance, feature engineering, hyperparameter selection, demographic robustness analysis and explainability artefacts.
2. Model implementation: the implementation environment is documented, version-controlled and subject to change control.
3. Model validation: an independent function (separate from development) confirms conceptual soundness, performs outcomes analysis on held-out and production data, evaluates the implementation, and issues a finding with severity ratings.
4. Ongoing monitoring: production performance is tracked against documented benchmarks, with triggers for revalidation.
5. Governance, policies and controls: the bank maintains a model inventory, tiers models by risk, sets validation cadence and escalation rules, and reports model risk to the board.
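To make the governance activity concrete, the sketch below shows what one model-inventory entry with a tier-based revalidation cadence might look like. The field names and the 12/24/36-month cadences are illustrative assumptions; SR 11-7 requires a documented, risk-based policy but does not prescribe specific numbers.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical tier-to-cadence mapping (months); a bank's own MRM
# policy sets the actual numbers.
CADENCE_MONTHS = {1: 12, 2: 24, 3: 36}

@dataclass
class ModelRecord:
    """One entry in the model inventory (illustrative fields only)."""
    model_id: str
    owner: str
    use_case: str
    risk_tier: int          # 1 = highest impact
    last_validated: date
    findings_open: int = 0

    def revalidation_due(self, today: date) -> bool:
        """True once the tier's cadence has elapsed since validation."""
        months = CADENCE_MONTHS[self.risk_tier]
        elapsed = (today.year - self.last_validated.year) * 12 \
            + (today.month - self.last_validated.month)
        return elapsed >= months

record = ModelRecord("PD-IRB-7", "credit_risk", "IRB PD estimation",
                     risk_tier=1, last_validated=date(2024, 3, 1))
print(record.revalidation_due(date(2025, 4, 1)))  # tier 1 -> annual
```

Material changes to the model, data or use case would trigger revalidation out of cycle regardless of this date check.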

For AI models, the validation activity is where most teams underinvest. Independent validation must cover training data quality and bias analysis, model performance under stress and on out-of-distribution inputs, stability of feature importance over time, fairness metrics aligned with the fair-lending regime, and the human-oversight integration for outputs that drive decisions affecting customers. The 2021 OCC bulletin on the use of alternative data in credit decisions and the 2023 OCC examination handbook updates set expectations on AI-specific validation procedures [2].
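One of the fairness checks a validation team might run is the four-fifths (adverse impact ratio) test on approval outcomes. The sketch below is a minimal illustration; the group labels, the data and the 0.8 review threshold are assumptions drawn from common US fair-lending practice, not from SR 11-7 itself.

```python
# Adverse impact ratio on binary approval decisions, grouped by a
# protected attribute. Values below ~0.8 typically trigger review.
def adverse_impact_ratio(decisions: list[tuple[str, bool]],
                         protected: str, reference: str) -> float:
    """Approval rate of the protected group divided by the approval
    rate of the reference group."""
    def rate(group: str) -> float:
        outcomes = [approved for g, approved in decisions if g == group]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

# Hypothetical decision log: group A approved 60/100, group B 80/100.
decisions = [("A", True)] * 60 + [("A", False)] * 40 \
    + [("B", True)] * 80 + [("B", False)] * 20
print(round(adverse_impact_ratio(decisions, "A", "B"), 2))  # 0.75
```

A ratio of 0.75 would fall below the conventional 0.8 threshold and become a documented validation finding.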


What is the EU equivalent of SR 11-7 for AI in banking?

The EU does not have a single document called "SR 11-7" but the equivalent expectations are distributed across several supervisory texts. The EBA Guidelines on internal governance (EBA/GL/2021/05, updated 2024) require credit institutions to operate effective risk-management arrangements that explicitly cover models. The ECB Guide to Internal Models, last updated in 2024, sets the supervisory expectations for IRB credit-risk, market-risk and counterparty-credit-risk models, including the governance, validation and ongoing-monitoring requirements that map onto SR 11-7. The EBA Discussion Paper on machine learning for IRB models (2021) and follow-up Report (2023) lay out the supervisory expectations specifically for AI/ML models inside the IRB regime [3].

Beyond the IRB perimeter, the EBA Guidelines on outsourcing arrangements (2019) plus DORA's third-party regime apply to AI models supplied by external vendors. The EBA Guidelines on loan origination and monitoring (2020) impose model-specific governance requirements on creditworthiness assessment, including for AI-driven decisions. The ECB SREP methodology (Pillar 2 supervisory review) explicitly examines model risk as part of governance and risk-management adequacy.

BCBS 239, "Principles for effective risk data aggregation and risk reporting" (BIS, 2013), is the global standard on risk data quality, lineage and timeliness that underpins model inputs in systemically important banks. AI models trained on or scoring against firm-wide data must comply with BCBS 239 expectations on data lineage and quality.

How does MRM interact with the EU AI Act for high-risk banking AI?

The EU AI Act (Regulation 2024/1689) classifies AI used for creditworthiness assessment of natural persons as high-risk under Annex III, point 5(b). The Act's Article 9 (risk management), Article 10 (data governance), Article 12 (record-keeping), Article 14 (human oversight), Article 15 (accuracy, robustness, cybersecurity) and Article 17 (quality management system) impose obligations that overlap substantially with SR 11-7 / EBA expectations.

The mature pattern is to treat the AI Act technical documentation as the same document as the SR 11-7 / ECB validation file, structured so that the headings satisfy both regimes simultaneously. Financial entities that maintain two parallel documents - one for the prudential regulator and one for the AI Act - typically duplicate effort without improving substance. Annex IV of the AI Act lists the technical-documentation requirements explicitly and most of those headings already exist in mature SR 11-7 / ECB validation files.

For ongoing monitoring, the AI Act Article 72 post-market monitoring obligation aligns naturally with the SR 11-7 ongoing-monitoring activity. The trigger thresholds and escalation paths that the bank already operates for model performance can be repurposed to satisfy the post-market monitoring system the Act requires.
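A common way to operationalise those trigger thresholds is a Population Stability Index (PSI) check on the production score distribution. The sketch below is illustrative; the 0.10 and 0.25 thresholds are widely used industry conventions, not values mandated by SR 11-7 or the AI Act.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.
    Inputs are bin proportions that each sum to 1."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual))

# Hypothetical score distributions: validation baseline vs production.
baseline = [0.25, 0.25, 0.25, 0.25]
production = [0.30, 0.28, 0.24, 0.18]

value = psi(baseline, production)
# Conventional escalation ladder (thresholds are an assumption).
if value >= 0.25:
    action = "escalate: out-of-cycle revalidation"
elif value >= 0.10:
    action = "investigate: flag to model owner"
else:
    action = "stable: continue monitoring"
print(action)
```

The same ladder, with thresholds the validation function has signed off, can serve as the documented post-market monitoring system under Article 72.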

What are the most common AI model risk management mistakes?

Five patterns recur in supervisory findings on AI models. First, classifying an AI system as "non-model" because it is described as a tool, dashboard or recommender, then having the supervisor reclassify it as a model after the fact. Second, treating training-data quality as out of scope for validation, when the EBA and OCC explicitly include data lineage and bias analysis in the validation perimeter. Third, validating once at deployment and then deferring revalidation indefinitely - the SR 11-7 expectation is a risk-tier-based revalidation cadence, typically 12 to 36 months. Fourth, stale model inventories that do not capture every AI deployment in the bank, leaving shadow models unvalidated. Fifth, treating human-oversight integration as a UX layer rather than as a control: the supervisor expects to see logs of when overrides occurred, why, and how the model owner used the override data.
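A minimal sketch of the override log the supervisor expects to see might look like the following; the field names and values are hypothetical, and a production system would append entries to tamper-evident storage.

```python
import json
from datetime import datetime, timezone

def log_override(model_id: str, reviewer: str, model_output: str,
                 final_decision: str, rationale: str) -> str:
    """Serialise one human-override event as a JSON log line."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "reviewer": reviewer,
        "model_output": model_output,
        "final_decision": final_decision,
        "rationale": rationale,
    }
    return json.dumps(entry)

# Hypothetical example: an analyst overrides an AML triage model.
entry = json.loads(log_override("AML-TRIAGE-2", "analyst_17",
                                "close_alert", "escalate",
                                "counterparty matches adverse media"))
print(entry["final_decision"])
```

Aggregating these entries over time gives the model owner the override-rate evidence that feeds back into revalidation.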

Vendor models are a sixth recurring failure point. SR 11-7 explicitly requires the bank to validate vendor models with the same rigour as in-house models, even where the bank cannot inspect the vendor's training data or code. The validation team must request whatever documentation the vendor will provide, use challenger models, perform outcomes analysis on the bank's own data, and document the limitations of the validation explicitly.
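Outcomes analysis against a challenger can be as simple as comparing the vendor model and a transparent in-house challenger on the bank's own labelled data. The sketch below is illustrative; the metric and the data are placeholders for whatever the bank's validation standards actually specify.

```python
def accuracy(preds: list[bool], labels: list[bool]) -> float:
    """Share of predictions that match the observed outcome."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Hypothetical outcomes on the bank's own held-out data.
labels = [True, True, False, False, True, False, True, False]
vendor_preds = [True, True, False, True, True, False, True, False]
challenger_preds = [True, False, False, False, True, False, True, True]

gap = accuracy(vendor_preds, labels) - accuracy(challenger_preds, labels)
# A vendor model that does not outperform a transparent challenger is
# itself a validation finding to document.
print(gap > 0)
```

The residual limitation, that the vendor's training data and code were not inspected, is documented alongside the comparison, not silently accepted.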

How does Impetora support MRM-grade AI engagements in banking?

Impetora's TRACE methodology was designed around AI systems that have to survive the SR 11-7 / ECB / EBA validation rhythm. Trust covers the policy and contractual layer including model documentation, vendor-model evidence packages and audit rights aligned with EBA outsourcing guidelines. Readiness covers the data and workflow audit that becomes the model's training-data lineage and bias-analysis baseline. Architecture covers production-grade design with logging, monitoring and explainability hooks that the validation function exercises during outcomes analysis. Citations and Evidence covers the audit trail that the validation team and the supervisor consume during revalidation cycles.

The practical path for a bank's AI engagement: scope the AI system into the model inventory at the right risk tier from day one, structure the documentation so that one technical documentation pack satisfies SR 11-7 / ECB / EBA / AI Act simultaneously, build the ongoing-monitoring runbook with the trigger thresholds the validation team has signed off, and prepare for revalidation on the cadence the bank's policy sets.
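One way to keep a single documentation pack is to index each section by the regime headings it satisfies. The mapping below is a sketch; the section names and regime labels are assumptions for illustration, not quotations from SR 11-7 or AI Act Annex IV.

```python
# Illustrative cross-regime index for one shared documentation pack.
DOC_PACK = {
    "conceptual_soundness": {
        "sr11_7": "development",
        "ai_act": "Annex IV design and development description",
    },
    "training_data_lineage": {
        "sr11_7": "development",
        "ai_act": "Art. 10 data governance",
    },
    "outcomes_analysis": {
        "sr11_7": "validation",
        "ai_act": "Art. 15 accuracy and robustness",
    },
    "monitoring_runbook": {
        "sr11_7": "ongoing monitoring",
        "ai_act": "Art. 72 post-market monitoring",
    },
}

# Every section should satisfy both regimes, so neither file is a
# duplicate of the other.
print(all({"sr11_7", "ai_act"} <= set(v) for v in DOC_PACK.values()))
```

A gap in this index, a section with only one regime heading, is exactly where the two-parallel-documents duplication tends to creep back in.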

Frequently asked questions

Is SR 11-7 still the operative US standard for AI models in banking?
Yes. The Federal Reserve, OCC and FDIC have repeatedly confirmed in supervisory statements, examination handbooks and the 2021 interagency request for information that SR 11-7 governs AI and machine-learning models used in regulated banking decisions. There is no separate AI MRM letter; the same five-activity lifecycle applies with AI-specific application notes on training data, bias analysis, explainability and ongoing monitoring.
Does the EU have a published equivalent of SR 11-7?
Not as a single document. The equivalent expectations are distributed across the EBA Guidelines on internal governance, the ECB Guide to Internal Models, the EBA Guidelines on loan origination and monitoring, the EBA Guidelines on outsourcing arrangements, BCBS 239 and the EBA Report on machine learning for IRB models. Substantively the requirements align: independent validation, documented lifecycle, ongoing monitoring, model inventory, governance, vendor-model treatment.
Are foundation-model-based applications inside the MRM perimeter?
If the application supports a regulated banking decision, yes. The decision-supporting use is what brings the system into the MRM perimeter, regardless of the underlying model architecture. A retrieval-augmented generation system used for AML alert triage is a model. A chatbot that returns regulatory information to relationship managers without affecting customer-facing decisions may be out of scope, but the bank must document that classification rationale.
How does BCBS 239 interact with AI models?
BCBS 239 sets the principles for risk-data aggregation and risk reporting in global systemically important banks. AI models trained on or scoring against firm-wide data inherit BCBS 239 expectations on data lineage, quality, completeness, accuracy and timeliness. The data layer feeding the AI model must be auditable end-to-end, and the model's outputs must be reconcilable to the underlying data sources. The 2023 BCBS progress report on BCBS 239 implementation calls out remaining gaps in many banks' data infrastructures.
What are the typical revalidation cadences for AI models?
Risk-tier-based. Tier-1 (high-impact) models are typically revalidated annually with continuous monitoring; tier-2 every two years; tier-3 every three years. Material changes to the model, the training data or the use case trigger out-of-cycle revalidation regardless of tier. The validation finding's severity rating drives remediation timelines. For AI models with high data drift or non-stationary distributions, banks often shorten the cadence and supplement with monthly drift-monitoring reports.
Do AI vendor models satisfy MRM if the vendor provides validation reports?
Vendor reports are an input to the bank's validation, not a substitute for it. SR 11-7 is explicit that the bank remains responsible for validating any model in use, including vendor models. The bank's validation team must perform outcomes analysis on the bank's own data, evaluate any documented limitations of the vendor validation, use challenger models where feasible, and document the residual model risk explicitly. Vendor reluctance to share documentation is itself a finding that the bank must capture and escalate.

Ready to scope your project? Submit a short brief and we reply within one business day.

Sources cited

  1. Supervisory Letter SR 11-7: Guidance on Model Risk Management. Federal Reserve / OCC, 2011-04-04. https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm
  2. Comptroller's Handbook - Model Risk Management. Office of the Comptroller of the Currency (OCC), 2023. https://www.occ.gov/publications-and-resources/publications/comptrollers-handbook/index-comptrollers-handbook.html
  3. EBA Discussion Paper / Report on machine learning for IRB models. European Banking Authority, 2023. https://www.eba.europa.eu/regulation-and-policy/model-validation
  4. Principles for effective risk data aggregation and risk reporting (BCBS 239). Basel Committee on Banking Supervision (BIS), 2013-01. https://www.bis.org/publ/bcbs239.htm
  5. ECB Guide to Internal Models. European Central Bank, 2024. https://www.bankingsupervision.europa.eu/ecb/pub/pdf/ssm.guidetointernalmodels_consolidated_202402.en.pdf
  6. Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.

Book a discovery call

Tell us what you would like to build. We reply within one business day.

30-minute call. Free of charge. No obligation.