AI operations layer: keep production AI auditable
An ongoing retainer for organisations operating one or more production AI systems. We monitor the models, detect input and output drift, refresh evaluation suites on a fixed cadence, run incident response with a named on-call, and maintain the regulator pack as the EU AI Act application timeline lands through 2027. For buyers who need the AI system to remain defensible a year after launch, not only on launch day.
01.What does the operations layer cover?
02.Who is the operations layer for?
03.What is not included?
04.How does it differ from a typical managed AI service?
05.How does it integrate with the Build phase?
06.How this SKU sits inside the TRACE methodology
This engagement is the Operate phase of the Impetora delivery model. The Operate phase is where most enterprise AI systems quietly fail. The operations layer is the SKU built to prevent that.
Trust
Readiness
Architecture
Citations and evidence
07.What happens, week by week
- 01 Month 0
Onboarding (one-off)
Two to three weeks of inventory, eval-suite import or rebuild, dashboard wiring, runbook authoring, on-call introduction.
- 02 Monthly
Steady-state monthly cadence
Weekly drift and incident review, monthly written status to the steering group, eval suite runs on every change.
- 03 Quarterly
Quarterly evidence pack and re-baseline
Regulator-grade evidence pack: eval results, drift report, incident log, model and prompt change history. Annual eval re-baseline.
08.Inputs we need from you. Outputs we ship.
From your team
- Code, prompts, evaluation harness, and observability access for the AI systems in scope
- A risk or operations sponsor available for the monthly steering call
- An internal SRE counterpart for the on-call handshake (we do not replace your SRE team; we partner with it)
- Clarity on which incidents page the on-call and which incidents wait for the next business day
- A change-window calendar for prompt and model upgrades
Concrete deliverables
- Live observability dashboards (latency, cost, accuracy drift, refusal rate, exception traffic)
- Written monthly steering update with the eval-suite trend and the open-incident list
- Quarterly evidence pack ready for an internal audit committee or a regulator submission
- Annual eval-suite re-baseline against your current operating reality
- On-call rotation, post-mortem template, and signed incident-response procedure
- Maintained regulator pack as EU AI Act and ISO 42001 expectations evolve
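To make the drift-monitoring deliverable concrete: a common way to quantify "accuracy drift" on a dashboard is the population stability index (PSI) between a baselined metric distribution and the current week's. The sketch below is illustrative only — the function names and the 0.25 alert threshold are our assumptions for this example, not Impetora's production tooling.

```python
import math

def psi(baseline, current, bins=10, eps=1e-6):
    """Population Stability Index between two samples of a numeric metric.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 alert."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # eps avoids log(0) when a bin is empty in one sample
        return [c / len(sample) + eps for c in counts]

    b, c = bin_fractions(baseline), bin_fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

def drift_alert(baseline, current, threshold=0.25):
    """Hypothetical gate: flag the week's scores once PSI passes the bar."""
    return psi(baseline, current) > threshold
```

A dashboard panel would run a check like this per tracked metric (accuracy, refusal rate, and so on) and route any `drift_alert` hit into the weekly drift review.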
09.Who this is not for
We turn engagements down when the fit is wrong. If any of these match, a different SKU, or a different partner, will serve you better.
See the full list of fit signals we screen against
- Pre-production systems; the operations layer is for systems already serving real users
- Non-AI infrastructure; we do not run general SRE for systems outside the AI perimeter
- Buyers who want a one-off audit; that is the AI readiness audit SKU
- Engagements without a named risk or operations sponsor on your side
Frequently asked questions
Can you operate AI systems we did not build?
Yes. We run a two-to-three-week onboarding sprint at the start of month one to import the system into our operating cadence: eval suite review or rebuild, dashboard wiring, runbook authoring, on-call introduction. Onboarding is in scope from month one; there is no separate onboarding fee.
How is the on-call rotation structured?
EU business hours by default, with named on-call engineers on our side and a partner on yours. Out-of-hours pager coverage is available as a scoped add-on; many buyers find that AI-system incidents do not require 24/7 paging, while others (financial services, healthcare) do.
Do you cover model upgrades and prompt changes?
Yes. Both run through the eval suite and a written change window. We do not push a model or prompt change to production without the eval suite clearing the agreed bar. Model deprecations from upstream providers are handled inside the retainer scope.
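In practice, "the eval suite clearing the agreed bar" reduces to a small gate check in the deployment pipeline. This is a minimal sketch under assumed names — the suite names, scores, and bars are illustrative, not our actual harness or a client's real thresholds.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    name: str     # which eval suite produced this score
    score: float  # 0.0-1.0 pass rate on the suite's case set
    bar: float    # agreed minimum for this suite

def gate(results: list[EvalResult]) -> tuple[bool, list[str]]:
    """A prompt or model change ships only when every suite clears its bar.
    Returns (ship?, names of failing suites)."""
    failures = [r.name for r in results if r.score < r.bar]
    return (not failures, failures)

# Hypothetical run: one suite clears its bar, one does not,
# so the change is held back and the failure is reported.
run = [
    EvalResult("faithfulness", 0.94, bar=0.90),
    EvalResult("refusal-correctness", 0.88, bar=0.92),
]
ok, failed = gate(run)
```

The same gate runs for upstream model deprecations: the replacement model's eval run has to clear every bar before it is promoted inside the change window.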
How is the regulator pack maintained as the AI Act timeline lands?
The pack is reviewed quarterly against the AI Act application calendar and any newly published EDPB or sector-regulator guidance. Material changes trigger an interim update outside the quarterly cycle, on the schedule the legislation imposes.
What is the contract term?
Twelve months minimum, with quarterly review gates. Either party can exit at the end of any quarter for any reason, with a 30-day handover window. We aim to make ourselves replaceable at every gate by maintaining clean documentation, on the principle that an operations partner should be earning the next quarter, not capturing it.
Do you transfer learning across clients?
We carry methodology lessons (not data, not prompts) across engagements. Every client benefits from the operations playbook accumulated across the others. No data, no prompts, no model fine-tunes, and no evaluation suites cross client boundaries.
Book a discovery call for a fixed-scope plan.
One form. We reply within two working days with a written scope, a delivery plan, and the team you would work with.