AI Lighthouse Build: production system in 8 to 12 weeks
An 8 to 12 week production AI delivery for one workflow, with the architecture, integrations, evaluation harness, observability, runbook, and handover that let the system survive its first audit. For enterprises that have either completed an AI Sprint or have a sufficiently clean internal scope to skip the sprint and ship straight to production. One workflow in, one workflow live.
01.What does a Lighthouse Build deliver?
02.Who is a Lighthouse Build for?
03.What is not included?
04.How does it differ from a typical AI build engagement?
05.What does handover look like?
06.How this SKU sits inside the TRACE methodology
The Lighthouse Build is the full Build phase of the Impetora delivery model. Its job is to ship a system that survives audit, not a demo.
TRACE: Trust, Readiness, Architecture, Citations and evidence.
07.What happens, week by week
- 01. Weeks 1-2: Architecture and readiness lock
Architecture review, integration mapping, evaluation criteria locked in writing, regulator-pack scaffolded.
- 02. Weeks 3-7: Shadow build and integration
Build the ingestion, retrieval, model layer, and integration surface end to end. The eval suite runs on every change; the system runs in shadow mode against live volume (a sketch of the eval gate follows this plan).
- 03. Weeks 8-12: Assist rollout, handover, parallel run
Assist-mode rollout to the full user team. Runbook authored, dashboards live, named on-call on both sides. Two-week parallel run, then exit.
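To make "the eval suite runs on every change" concrete, the sketch below shows a minimal eval gate: a scored case set, a threshold agreed in writing during weeks 1-2, and a hard failure that blocks a change when it regresses. The case file path, the run_workflow entry point, the exact-match scorer, and the threshold value are illustrative assumptions, not the harness we ship; the real suite is built against your workflow's measured baseline and scoring criteria.

```python
# Minimal sketch of an eval gate run on every change (illustrative, not the shipped harness).
# Assumes a JSONL case file with {"input": ..., "expected": ...} records and a
# hypothetical run_workflow() entry point for the system under test.
import json
import sys
from pathlib import Path

ACCURACY_THRESHOLD = 0.92               # agreed in writing during weeks 1-2 (illustrative value)
CASES_PATH = Path("evals/cases.jsonl")  # hypothetical location of the case set

def run_workflow(case_input: str) -> str:
    """Placeholder for the production entry point under evaluation."""
    raise NotImplementedError("wire this to the system under test")

def score(expected: str, actual: str) -> float:
    """Simplest possible scorer: exact match. Real scorers are workflow-specific."""
    return 1.0 if expected.strip() == actual.strip() else 0.0

def main() -> int:
    lines = [line for line in CASES_PATH.read_text().splitlines() if line.strip()]
    cases = [json.loads(line) for line in lines]
    if not cases:
        print("no eval cases found; refusing to pass an empty suite")
        return 1
    scores = [score(c["expected"], run_workflow(c["input"])) for c in cases]
    accuracy = sum(scores) / len(scores)
    print(f"eval accuracy: {accuracy:.3f} over {len(cases)} cases")
    # Fail the change if it drops below the agreed threshold, so CI blocks the merge.
    return 0 if accuracy >= ACCURACY_THRESHOLD else 1

if __name__ == "__main__":
    sys.exit(main())
```

In shadow mode, the same gate runs against live volume with outputs recorded but never shown to users, so regressions surface before the assist-mode rollout in weeks 8-12.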
08.Inputs we need from you. Outputs we ship.
From your team
- A workflow that has cleared a Sprint or carries a high Discovery readiness score
- API access (read and write) to the systems of record involved in the workflow
- An operations-side product owner and an engineering counterpart for the duration of the build
- A change-management plan inside your team for the assist-mode rollout (we contribute, we do not run it)
- Sign-off from risk and legal on the rollout perimeter at the end of week one
Concrete deliverables
- Production AI system for one workflow, integrated end to end into your stack
- Evaluation harness running on every change, with shadow and assist mode results in version control
- Observability stack: latency, cost, accuracy drift, and exception monitoring
- Audit log and regulator pack: every input, prompt, retrieval, output, and review is captured and queryable (a sketch of the record shape follows this list)
- Runbook with named on-call rotation, rollback procedure, and incident-response playbook
- Two-week parallel run with our team alongside yours before final exit
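To make the audit-log deliverable concrete, the sketch below shows one plausible shape for a single queryable record, tying together the input, rendered prompt, retrieved sources, model output, and human review. The field names, the dataclass, and the example values are illustrative assumptions; the delivered schema is fixed during the build against your regulator pack and systems of record.

```python
# Illustrative shape of one audit-log record (field names are assumptions, not the delivered schema).
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditRecord:
    request_id: str                      # correlates input, retrieval, output, and review
    timestamp: str                       # UTC, ISO 8601
    workflow: str                        # the single workflow covered by the Lighthouse
    user_input: str                      # raw input as received
    prompt: str                          # fully rendered prompt sent to the model
    retrieved_sources: list[str] = field(default_factory=list)  # document ids used for grounding
    model_output: str = ""
    reviewer: str | None = None          # set once a human review lands in assist mode
    review_decision: str | None = None   # e.g. "accepted", "edited", "rejected"

# Hypothetical example record; in production every request appends one of these
# to an append-only store so any decision can be reconstructed for an audit.
record = AuditRecord(
    request_id="req-0001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    workflow="claims-triage",            # hypothetical workflow name
    user_input="example input",
    prompt="example rendered prompt",
    retrieved_sources=["doc-17", "doc-42"],
    model_output="example output",
)
print(json.dumps(asdict(record), indent=2))
```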
09.Who this is not for
We turn engagements down when the fit is wrong. If any of these match, a different SKU, or a different partner, will serve you better.
See the full list of fit signals we screen against
- Workflows without a measured baseline or without a clean regulatory frame; Discovery first
- Multi-workflow ambitions inside a single Build window; one workflow per Lighthouse
- Operations that cannot commit a product owner and an engineering counterpart for the full window
- Buyers needing a one-week, no-handover demo; the Lighthouse is a production SKU, not a demo SKU
Frequently asked questions
Do we need an AI Sprint before a Lighthouse Build?
A Sprint first is the safer path on workflows we have not seen before. Where Discovery shows a high readiness score, a clean regulatory frame, and a workflow with a measured baseline, the Sprint can be skipped and the Lighthouse run directly. We will say so in writing during the scoping conversation; we do not push a Sprint that the readiness score does not justify.
How is the regulatory pack maintained during the build?
The regulator pack is scaffolded in week one and updated through every workstream. By the end of the build, it includes the EU AI Act risk classification, the Annex IV technical documentation, the GDPR Article 22 review where relevant, the evaluation results, and the audit-log architecture. The pack is yours and is structured for a regulator submission or an internal audit committee.
Where does the system run?
Inside your environment, with EU residency by default for EU-perimeter operations and with whatever regional residency the regulatory frame requires for non-EU operations. Foundation-model and retrieval layers are described here in vendor-neutral terms; the specific selection is locked in week two of the build, against your procurement preferences and the architecture's needs.
What happens if scope changes during the build?
A written change order, a fresh calendar, a fresh exit date. We do not absorb scope creep silently. The change order names the impact on cost, the impact on calendar, and the impact on the eval suite, and it goes through the same sponsor sign-off as the original scope.
Who owns the code, prompts, and configuration at exit?
You do. Everything is delivered to your repository under your account, with the version history intact. We retain the right to use anonymised methodology lessons (not data, not prompts) in our internal playbooks.
What is the difference between a Lighthouse Build and an AI platform deployment?
Lighthouse is workflow-first. A platform deployment is infrastructure-first, with the workflow added later. In regulated industries we see more value in shipping one workflow that survives audit and then composing across workflows than in building infrastructure ahead of demand. Buyers who genuinely need infrastructure ahead of demand take TRACE Discovery first to size that workstream properly.
Book a discovery call for a fixed-scope plan.
One form. We reply within two working days with a written scope, a delivery plan, and the team you would work with.