AI Sprint: ship one workflow to a controlled cohort
A time-boxed, 4-to-6-week proof-of-concept build for one workflow. We ship to a controlled cohort, in shadow mode first and assist mode second, and exit with a working system, an evaluation harness with real numbers on it, and a written decision memo on whether the workflow earns a full production deployment. For organisations that need evidence, not a slide deck, before committing capital.
01. What does the AI Sprint deliver?
02. Who is the AI Sprint for?
03. What is not included in a Sprint?
04. How does the Sprint differ from a typical AI pilot?
05. What turns a successful Sprint into a Lighthouse Build?
06. How this SKU sits inside the TRACE methodology
This engagement is the Build phase of the Impetora delivery model. The Sprint is the smallest viable Build engagement. Its job is to convert a hypothesis into evidence on real volume.
Trust
Readiness
Architecture
Citations
Evidence
07. What happens, week by week
- 01 Wk 1
Readiness gate
One-week refresh of the baseline, scope confirmation, cohort selection, and evaluation criteria locked in writing.
- 02 Wk 2-4
Shadow mode build
We build the system end to end, run it against historical and live volume in shadow mode, and tune until the eval suite clears the agreed bar.
- 03 Wk 5-6
Assist mode and decision memo
Cohort runs the system in assist mode for the final week. We write the decision memo, hand over the eval harness, and close the sprint.
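The "agreed bar" from the shadow-mode weeks is a pass/fail gate over the eval suite: the system must clear an absolute accuracy threshold and beat the human baseline on the same cases. A minimal sketch of what such a gate might look like (the names, the 90% bar, and the uplift term are illustrative assumptions, not Impetora's actual harness):

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    """One scored case from the shadow-mode run (hypothetical shape)."""
    case_id: str
    ai_correct: bool        # did the AI output clear the rubric?
    baseline_correct: bool  # did the existing human process clear it?

def clears_bar(results: list[EvalResult],
               bar: float = 0.90,
               min_uplift: float = 0.0) -> bool:
    """Pass only if AI accuracy meets the agreed bar AND is no worse
    than the measured baseline on the same cases."""
    n = len(results)
    ai_acc = sum(r.ai_correct for r in results) / n
    base_acc = sum(r.baseline_correct for r in results) / n
    return ai_acc >= bar and (ai_acc - base_acc) >= min_uplift

# Three shadow-mode cases: AI scores 3/3 against a 2/3 baseline.
results = [
    EvalResult("T-001", True, True),
    EvalResult("T-002", True, False),
    EvalResult("T-003", True, True),
]
print(clears_bar(results, bar=0.90))  # True
```

The point of locking this in writing during week one is that the go/no-go decision in week six reduces to running this function, not to a negotiation.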
08. Inputs we need from you. Outputs we ship.
From your team
- One bounded workflow with a measurable baseline
- A controlled cohort of 5 to 25 users willing to engage in shadow and assist modes
- Read and write API access to the systems the workflow touches (ticketing, CRM, document store, ERP)
- An operations-side counterpart with the calendar space to attend three weekly check-ins
- Existing evaluation criteria where they exist, or our help drafting them in week one
Concrete deliverables
- Working AI system for one workflow, deployed to the agreed cohort
- Evaluation harness with shadow-mode and assist-mode results vs the baseline
- Versioned prompts, retrieval indexes, and configuration, owned by you
- Operations runbook covering the cohort phase, including rollback procedure
- Written decision memo recommending Lighthouse Build, sprint iteration, or close
09. Who this is not for
We turn engagements down when the fit is wrong. If any of these match, a different SKU, or a different partner, will serve you better.
See the full list of fit signals we screen against
- Workflows that lack a measurable baseline; the eval bar cannot be set without one
- Organisations without basic data infrastructure (no API access, no logs, no document store the AI can read)
- Multi-workflow scopes; the Sprint is a one-workflow SKU by design
- Buyers who need certainty before week one; Sprints can exit with a no-go memo and that is a valid outcome
Frequently asked questions
What happens if the Sprint exits with a no-go memo?
The no-go memo is the deliverable. It states why the workflow did not clear the eval bar, what would need to be true for it to do so on a re-run, and whether a different workflow inside the same operation is more promising. We do not extend the sprint to manufacture a positive answer. About one in four sprints in this kind of work does not clear the bar on the first pass; the buyers who pay for an honest no-go save a quarter and a budget cycle.
Do you need TRACE Discovery before an AI Sprint?
Discovery is the safer path. It is not always required: if the workflow is already well-understood internally, the regulatory frame is straightforward, and the operations counterpart is committed, the Sprint can run without Discovery. We will tell you in the kick-off call which path fits.
Where is the data hosted during the sprint?
Inside your environment by default, with EU residency where the regulatory frame requires it. Where a cloud foundation-model layer is needed, we use EU-resident endpoints. We do not move data into our own infrastructure. The Data Processing Agreement is signed before any data moves.
How disruptive is the assist-mode week to the cohort?
Light. Assist mode adds an AI-drafted output next to the human worker's existing tool; the human still acts. The cohort feedback loop in the final week is what we measure, and the operational disruption is on the order of a tool upgrade, not a process change.
Can we run two Sprints in parallel on different workflows?
Yes, with two separate engagement leads on our side. Each sprint keeps its own cohort, eval harness, and decision memo. Buyers running parallel sprints typically have a portfolio governance need: they want comparable outputs against three or four candidate workflows before committing to a single Lighthouse Build.
Who owns the code and prompts at the end of the sprint?
You do. The code, prompts, retrieval indexes, configuration, and evaluation harness are delivered to your repository under your account. We retain the right to use anonymised lessons learned in our methodology, not your data and not your prompts.
Book a discovery call for a fixed-scope plan.
One form. We reply within two working days with a written scope, a delivery plan, and the team you would work with.