Impetora
For: Chief Risk Officer

Custom AI for the CRO: how we ship AI inside the model risk framework you already operate.

Model risk management is the discipline of identifying, measuring, monitoring, and controlling the risk of model failure across the lifecycle, from data ingestion through validation, deployment, change-control, and retirement. AI does not replace this framework. AI sits inside it. We design every system to fit your existing three-lines-of-defence governance, with documentation that maps to SR 11-7, the EU AI Act risk tiers, and DORA outsourcing obligations before the model goes near a production decision.

An independent validation pack, model inventory entries, change-control logs, and a breach-reporting workflow, delivered as artefacts your second line can sign off without rewrites.

5 SR 11-7 model lifecycle stages: development, implementation, use, validation, governance (Federal Reserve)
4 EU AI Act risk tiers: prohibited, high-risk, limited-risk, minimal-risk (EUR-Lex)
8 Annex III high-risk use case categories under the EU AI Act (EUR-Lex)
5 DORA ICT pillars: risk management, incident reporting, resilience testing, third-party risk, information sharing (EUR-Lex)
What CROs actually care about

The six concerns we hear on every CRO discovery call.

Model risk inventory

Every AI system in production has to be in the inventory with owner, lifecycle stage, validation status, and risk classification. The inventory is the spine of your CRO function.

EU AI Act risk classification

Each system has to be classified against Annex III before it ships. High-risk triggers conformity assessment, data governance, transparency, and human oversight obligations.

DORA outsourcing

If the AI provider is a third-party processor, DORA Article 28 sub-contracting and exit-strategy requirements apply. The contract has to support the regulator's ask.

Independent validation

First line builds, second line validates. The validation pack has to be assembled by people independent of the build team and signed off before promotion.

Change control

A foundation-model upgrade is a model change. The change-control log has to capture the trigger, the impact assessment, the validation, and the sign-off chain.

Breach and incident reporting

When a model produces a wrong, biased, or harmful output that affects a customer, the incident has to flow through your breach-reporting workflow with the evidence chain attached.
TRACE pillar focus

For CROs, the spine is Readiness.

Readiness is a two-week data and workflow audit before any code, with the output mapped to your model risk framework, your three-lines-of-defence governance, and your regulator's posture. We document risk classification under the EU AI Act, lifecycle stage under SR 11-7 logic, and outsourcing posture under DORA where applicable. The Readiness deliverable is a written diagnosis your second line can validate without rewrites.

AI does not replace your model risk framework. AI sits inside it, or it does not ship.
Impetora discovery brief, banking engagement
Engagement model

What the engagement looks like from your seat.

Risk audit (Discovery) → AI Act tiering (Annex III) → Build + validate (independent) → Inventory entry (MRM) → Operate + report (DORA)
How a CRO engagement runs end to end.
Deliverables

What CROs need from a partner, and what we ship.

Risk classification memo

Each system classified against EU AI Act Annex III, GDPR Article 22, and your sectoral framework (SR 11-7, EBA, EIOPA, EMA, BCBS as applicable).

Independent validation pack

A document set assembled for second-line review: data lineage, evaluation set, performance metrics, refusal cases, edge-case behaviour, and known limitations.

Model inventory entry

A structured record fit for your model risk inventory: owner, version, lifecycle stage, validation status, residency, sub-processors, change history.
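
As a concrete sketch, here is what that record can look like as a plain Python dataclass. This is illustrative only: the field names mirror the list above and the lifecycle-stage values follow the SR 11-7 framing, but nothing here is a schema we prescribe.

from dataclasses import dataclass, field

# Lifecycle stages following the SR 11-7 framing used above (illustrative values).
LIFECYCLE_STAGES = {"development", "implementation", "use", "validation", "governance"}

@dataclass
class ModelInventoryEntry:
    """One row in the model risk inventory; fields mirror the deliverable above."""
    model_id: str
    owner: str                      # accountable first-line owner
    version: str                    # pinned model, prompt, and pipeline versions
    lifecycle_stage: str            # one of LIFECYCLE_STAGES
    validation_status: str          # e.g. "pending", "validated", "expired"
    risk_classification: str        # EU AI Act tier, e.g. "limited-risk"
    residency: str                  # where data and inference live, e.g. "EU"
    sub_processors: list[str] = field(default_factory=list)
    change_history: list[str] = field(default_factory=list)  # change-control log refs

    def __post_init__(self) -> None:
        if self.lifecycle_stage not in LIFECYCLE_STAGES:
            raise ValueError(f"unknown lifecycle stage: {self.lifecycle_stage!r}")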

Change-control workflow

Every model upgrade, prompt change, or retrieval-pipeline change goes through a documented impact assessment and sign-off chain. Captured in the audit log.

Breach-reporting hooks

When the model misclassifies, hallucinates, or violates a policy in a way that affects a customer, the incident routes through your existing breach-reporting workflow with the evidence chain attached.
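
A minimal sketch of the hook itself, assuming your breach workflow exposes an intake API (breach_intake.submit below is a stand-in, not a real interface). The point it illustrates: the evidence chain travels with the incident.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModelIncident:
    model_id: str
    customer_impacted: bool
    category: str                # "misclassification", "hallucination", "policy-violation"
    evidence_chain: list[str]    # audit-log and citation references for the output
    occurred_at: datetime

def route_incident(incident: ModelIncident, breach_intake) -> None:
    """Route a customer-affecting model incident into the existing breach workflow."""
    if not incident.customer_impacted:
        return                   # still logged elsewhere, but not a breach event
    # breach_intake is a stand-in for your workflow's intake API (an assumption).
    breach_intake.submit(
        source="model-risk",
        category=incident.category,
        evidence=incident.evidence_chain,  # the evidence chain travels with the incident
        reported_at=datetime.now(timezone.utc),
    )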

DORA outsourcing pack

If we are your third-party processor under DORA, you get the sub-contracting register, exit strategy, and incident-notification SLAs the regulation expects.

CRO questions, answered.

Does TRACE map to SR 11-7?

Yes. SR 11-7 organises model risk around development, implementation, use, validation, and governance. TRACE-Readiness covers development and implementation through the workflow audit and architecture spec. TRACE-Architecture covers use through the production-grade build with versioning, observability, and rollback. TRACE-Citations covers validation through the evidence chain on every output, which is what an independent validator uses to replay and assess the model. TRACE-Trust covers governance through the residency, sub-processor, and audit-log posture. The mapping is documented per engagement and signed off before Build phase.
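
In code terms the mapping reduces to a small lookup. The sketch below restates the pillar-to-stage pairs from the paragraph above; only the helper function around them is illustrative.

# TRACE pillar -> SR 11-7 lifecycle stages it evidences, per the mapping above.
TRACE_TO_SR_11_7: dict[str, list[str]] = {
    "Readiness":    ["development", "implementation"],  # workflow audit, architecture spec
    "Architecture": ["use"],                            # versioning, observability, rollback
    "Citations":    ["validation"],                     # evidence chain for the validator
    "Trust":        ["governance"],                     # residency, sub-processors, audit log
}

def stages_covered(pillars: list[str]) -> set[str]:
    """SR 11-7 stages evidenced by a given set of TRACE pillars (illustrative helper)."""
    return {stage for pillar in pillars for stage in TRACE_TO_SR_11_7.get(pillar, [])}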

How do you handle model change-control?

Every change to the production system is a controlled event. Foundation-model upgrades, prompt changes, retrieval-pipeline changes, and tool-schema changes go through an impact assessment, an evaluation-suite rerun, and a sign-off chain that matches your three-lines-of-defence framework. The change is logged with the trigger, the diff, the test result, and the approver. Promotion to production is gated by a passing regression suite. The full chain is reviewable in the audit log.
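
A sketch of that gate, assuming the evaluation suite can be rerun from the deployment pipeline. The record fields follow the paragraph above; the function and its two-approver floor are illustrative, not your sign-off policy.

from dataclasses import dataclass, field

@dataclass
class ChangeRecord:
    trigger: str                   # e.g. "foundation-model upgrade"
    diff: str                      # what changed: model, prompt, retrieval, tool schema
    impact_assessment: str         # written assessment, referenced by the log
    regression_passed: bool        # result of the evaluation-suite rerun
    approvers: list[str] = field(default_factory=list)  # sign-off chain, in order

def promote(change: ChangeRecord, audit_log: list[ChangeRecord]) -> bool:
    """Gate promotion on a passing regression suite and a complete sign-off chain."""
    audit_log.append(change)       # every attempt is logged, approved or not
    # The two-approver floor is an illustrative stand-in for your sign-off policy.
    return change.regression_passed and len(change.approvers) >= 2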

What about DORA outsourcing requirements?

If we are processing data on your behalf under DORA, the contract includes the elements DORA Article 28 expects: written agreement, sub-contracting register, exit strategy, security and audit rights, incident-notification SLAs, and the regulator's right to access. We disclose every sub-processor that touches inference, retrieval, or storage, and we keep the list current. If your environment requires a critical-third-party-provider posture under DORA, we structure the engagement to support it.

How is AI risk-classified under the EU AI Act?

We classify every system against the four-tier risk taxonomy in the EU AI Act before Build phase. Most enterprise systems land in limited-risk (transparency obligations) or high-risk under one of the eight Annex III categories (employment, education, essential private and public services, law enforcement, justice, migration, biometric identification, critical infrastructure). For high-risk systems, we ship the conformity-assessment scaffolding the regulation requires: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. The classification memo is delivered in Discovery and refreshed if the use case changes.
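
As a sketch of the first-pass tiering, using the four tiers and eight Annex III categories summarised above: real classification is a documented legal judgement, and the function below is illustrative, not the method.

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# The eight Annex III high-risk categories as summarised above (short labels).
ANNEX_III = {
    "employment", "education", "essential-services", "law-enforcement",
    "justice", "migration", "biometric-identification", "critical-infrastructure",
}

def provisional_tier(use_case_category: str, transparency_obligations: bool) -> RiskTier:
    """First-pass tiering only; the classification memo records the final judgement."""
    # Prohibited practices are screened out before tiering, so not modelled here.
    if use_case_category in ANNEX_III:
        return RiskTier.HIGH
    return RiskTier.LIMITED if transparency_obligations else RiskTier.MINIMAL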

Who does the independent validation?

Independent validation is, by definition, performed by people independent of the build team. We support your second line by delivering the validation pack as a structured artefact, but the sign-off is theirs. Where you do not have an internal validation function, we work with a partner network of independent model validators we can introduce, with the engagement scoped and contracted directly between you and them.

Bring us the CRO mandate. We bring the audit-ready system.

Discovery starts with a scoped audit. The deliverable is yours either way. We respond within two business days at info@ainora.lt.

Book a discovery call

Tell us what you would like to build. We reply within one business day. 30-minute call, free of charge, no obligation.