Impetora
Service: Build phase

AI Sprint: ship one workflow to a controlled cohort

A time-boxed 4-to-6-week proof-of-concept build for one workflow. We ship to a controlled cohort, in shadow mode first and assist mode second, and exit with a working system, an evaluation harness with real numbers in it, and a written decision memo on whether the workflow earns a full production deployment. For organisations that need evidence, not a slide deck, before committing capital.

  • 4-6 weeks: sprint duration
  • 1 workflow shipped to a controlled cohort
  • 2 rollout modes proven (shadow, then assist)
  • 1 evaluation harness with real-baseline results

01. What does the AI Sprint deliver?

A working AI system on one workflow, deployed to a defined cohort of users, with a measured baseline and an evaluation harness that any future engineer can re-run. Shadow mode runs first: the AI generates outputs that are not user-visible, and the team compares them against the human baseline. Assist mode runs second: the AI's output sits next to the human worker, with the human signing off before the action ships. We exit with a written decision memo: continue to a Lighthouse Build, iterate the sprint, or close the use case.
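The shadow-then-assist loop above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Impetora's actual tooling: the `Case` fields and the exact-match scorer are stand-ins for whatever baseline metric the readiness check locks in.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One workflow item with the human's actual output (the baseline)."""
    case_id: str
    human_output: str

def exact_match(ai_output: str, human_output: str) -> float:
    """Toy scoring function: 1.0 on an exact match, else 0.0.
    A real harness would use a task-specific metric."""
    return 1.0 if ai_output.strip() == human_output.strip() else 0.0

def shadow_run(cases, model, score=exact_match):
    """Run the model on each case without surfacing results to users,
    and report mean agreement with the human baseline."""
    results = []
    for case in cases:
        ai_output = model(case.case_id)  # stays invisible to users in shadow mode
        results.append(score(ai_output, case.human_output))
    return sum(results) / len(results)

# Example: a stub model that always answers "approve"
cases = [Case("t-1", "approve"), Case("t-2", "reject"), Case("t-3", "approve")]
agreement = shadow_run(cases, model=lambda _id: "approve")
print(f"shadow-mode agreement vs baseline: {agreement:.2f}")  # 2 of 3 cases match
```

Because the harness is a plain function over recorded cases, any future engineer can re-run it against the same baseline, which is the property the deliverable depends on.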

02. Who is the AI Sprint for?

Operations leaders, line-of-business owners, and CIOs in regulated industries who have a single bounded workflow they want to test with AI before signing a multi-quarter Build engagement. It works best when the workflow is high-volume, repeatable, and has a measurable baseline. It works less well when the workflow is bespoke, low-volume, or contested between functions; that situation needs Discovery first.

03. What is not included in a Sprint?

Production-grade rollout to all users. The Sprint is bounded to a controlled cohort and exits before broad rollout. Multi-workflow scope: one Sprint covers one workflow. Procurement-grade vendor selection: we deliberately use a defensible default architecture during the sprint and reserve full vendor evaluation for the subsequent Lighthouse Build. Long-running operations: handover to internal teams or to the Operations Layer happens at the end of the sprint, not during it.

04. How does the Sprint differ from a typical AI pilot?

Typical pilots run 8 to 16 weeks, ship into a friendly demo environment, and produce a positive-sounding write-up regardless of the underlying numbers. The AI Sprint runs 4 to 6 weeks, ships to a real user cohort against a measured baseline, and exits with a written go or no-go memo. We commit in writing to a calendar exit date and to a baseline-comparison metric chosen during the readiness check.

05. What turns a successful Sprint into a Lighthouse Build?

A successful sprint produces a workflow whose evaluation results clear the bar set during readiness, plus a cohort that wants the system extended to their full team. The Lighthouse Build turns that into a production deployment: full rollout, integration depth, runbook, and observability. The sprint code, evaluation harness, and prompts are reused directly; we do not start over.
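The gate described here reduces to a simple rule: eval results clear the bar set during readiness, and the cohort wants the rollout. The sketch below is illustrative only; the threshold values and the `0.8` near-miss margin are placeholders, not Impetora's real criteria.

```python
def decision_memo(shadow_score: float, assist_score: float,
                 eval_bar: float, cohort_wants_rollout: bool) -> str:
    """Toy decision rule for the end-of-sprint memo.
    eval_bar and the cohort signal are both fixed during readiness;
    the numbers used here are placeholders."""
    if shadow_score >= eval_bar and assist_score >= eval_bar:
        return "lighthouse-build" if cohort_wants_rollout else "iterate-sprint"
    if max(shadow_score, assist_score) >= 0.8 * eval_bar:
        return "iterate-sprint"  # close to the bar: a re-run may clear it
    return "close"  # an honest no-go is a valid outcome

print(decision_memo(0.92, 0.88, eval_bar=0.85, cohort_wants_rollout=True))
```

Encoding the rule before the sprint starts is what makes the exit memo a mechanical readout rather than a negotiation.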
Methodology mapping

06. How this SKU sits inside the TRACE methodology

This engagement is the Build phase of the Impetora delivery model. The Sprint is the smallest viable Build engagement. Its job is to convert a hypothesis into evidence on real volume.

T: Trust
Data residency and access controls fixed before code ships. The risk classification from Discovery is the input, not an afterthought.

R: Readiness
We refresh the baseline against the production environment, then ship to a controlled cohort in shadow mode before any user-visible behaviour.

A: Architecture
Versioned prompts, retrieval indexes, evaluation harness, observability, and runbook. Only what passes the eval suite reaches assist mode.

C: Citations and evidence
Every output the system produces is traceable to its inputs, prompt version, and confidence score. The audit log is built in, not bolted on.
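The traceability claim above (output linked to inputs, prompt version, and confidence) can be pictured as one audit record per AI output. The field names and hashing choice below are illustrative assumptions, not the production schema; hashing the input lets an auditor verify what the model saw without storing sensitive payloads in the log itself.

```python
import json, hashlib
from datetime import datetime, timezone

def audit_record(case_id, input_payload, prompt_version, output, confidence):
    """Build one traceable audit entry per AI output (illustrative schema)."""
    return {
        "case_id": case_id,
        # hash of the exact input, so the log proves what the model saw
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "prompt_version": prompt_version,
        "output": output,
        "confidence": confidence,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

record = audit_record(
    case_id="t-42",
    input_payload={"ticket": "refund request"},
    prompt_version="v3.1",
    output="approve",
    confidence=0.91,
)
print(json.dumps(record, indent=2))
```

Writing the record at generation time, inside the same code path that produces the output, is what "built in, not bolted on" amounts to in practice.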
Engagement model

07. What happens, week by week

  1. Week 1: Readiness gate

     One-week refresh of the baseline, scope confirmation, cohort selection, evaluation criteria locked in writing.

  2. Weeks 2-4: Shadow mode build

     We build the system end to end, run it against historical and live volume in shadow mode, and tune until the eval suite clears the agreed bar.

  3. Weeks 5-6: Assist mode and decision memo

     The cohort runs the system in assist mode for the final week. We write the decision memo, hand over the eval harness, and close the sprint.

Scope of work

08. Inputs we need from you. Outputs we ship.

Inputs we need

From your team

  • One bounded workflow with a measurable baseline
  • A controlled cohort of 5 to 25 users willing to engage in shadow and assist modes
  • Read and write API access to the systems the workflow touches (ticketing, CRM, document store, ERP)
  • An operations-side counterpart with the calendar to attend three weekly check-ins
  • Existing evaluation criteria where they exist, or our help drafting them in week one
Outputs we ship

Concrete deliverables

  • Working AI system for one workflow, deployed to the agreed cohort
  • Evaluation harness with shadow-mode and assist-mode results vs the baseline
  • Versioned prompts, retrieval indexes, and configuration, owned by you
  • Operations runbook covering the cohort phase, including rollback procedure
  • Written decision memo recommending Lighthouse Build, sprint iteration, or close
Honest scoping

09. Who this is not for

We turn engagements down when the fit is wrong. If any of these match, a different SKU, or a different partner, will serve you better.

The fit signals we screen against:
  • Workflows that lack a measurable baseline; the eval bar cannot be set without one
  • Organisations without basic data infrastructure (no API access, no logs, no document store the AI can read)
  • Multi-workflow scopes; the Sprint is a one-workflow SKU by design
  • Buyers who need certainty before week one; Sprints can exit with a no-go memo and that is a valid outcome

Frequently asked questions

What happens if the Sprint exits with a no-go memo?

The no-go memo is the deliverable. It states why the workflow did not clear the eval bar, what would need to be true for it to do so on a re-run, and whether a different workflow inside the same operation is more promising. We do not extend the sprint to manufacture a positive answer. About one in four sprints in this kind of work does not clear the bar on the first pass; the buyers who pay for an honest no-go save a quarter and a budget cycle.

Do you need TRACE Discovery before an AI Sprint?

Discovery is the safer path. It is not always required: if the workflow is already well-understood internally, the regulatory frame is straightforward, and the operations counterpart is committed, the Sprint can run without Discovery. We will tell you in the kick-off call which path fits.

Where is the data hosted during the sprint?

Inside your environment by default, with EU residency where the regulatory frame requires it. Where a cloud foundation-model layer is needed, we use EU-resident endpoints. We do not move data into our own infrastructure. The Data Processing Agreement is signed before any data moves.

How disruptive is the assist-mode week to the cohort?

Light. Assist mode adds an AI-drafted output next to the human worker's existing tool; the human still acts. The cohort feedback loop in the final week is what we measure, and the operational disruption is on the order of a tool upgrade, not a process change.

Can we run two Sprints in parallel on different workflows?

Yes, with two separate engagement leads on our side. Each sprint keeps its own cohort, eval harness, and decision memo. Buyers running parallel sprints typically have a portfolio governance need: they want comparable outputs against three or four candidate workflows before committing to a single Lighthouse Build.

Who owns the code and prompts at the end of the sprint?

You do. The code, prompts, retrieval indexes, configuration, and evaluation harness are delivered to your repository under your account. We retain the right to use anonymised lessons learned in our methodology, not your data and not your prompts.

Book a discovery call for a fixed-scope plan.

One form. We reply within two working days with a written scope, a delivery plan, and the team you would work with.


30-minute call. Free of charge. No obligation.