---
title: "AI Lighthouse Build: production AI in 8-12 weeks | Impetora"
description: "An 8 to 12 week production AI delivery for one workflow: architecture, integrations, evaluation harness, observability, and a runbook handover to operate."
url: https://impetora.com/services/ai-lighthouse-build
locale: en
dateModified: 2026-04-30
author: Impetora
---

# AI Lighthouse Build: production system in 8 to 12 weeks

> An 8 to 12 week production AI delivery for one workflow, with the architecture, integrations, evaluation harness, observability, runbook, and handover that let the system survive its first audit. For enterprises that have either completed an AI Sprint or have an internal scope clean enough to skip the sprint and ship to production directly. One workflow in, one workflow live.

*Updated 2026-04-30. By Impetora. Email info@ainora.lt to discuss this service.*

## Anchor metrics

- **8-12 wk** - Build duration
- **1** - Production workflow live, end to end
- **1** - Evaluation harness on a CI cadence
- **1** - Runbook handover, named on-call rotation

## What does a Lighthouse Build deliver?

A production AI system for one workflow, integrated into your systems of record. It ships with full observability, an evaluation harness that runs on every change, an audit log that captures every input, prompt, retrieval, and output, and a runbook that names the on-call rotation and the rollback procedure. The system moves through shadow mode, then assist mode, and, where the numbers earn it, autonomous mode on the categories that pass the eval suite. The handover artefact is a working system that the Operations Layer SKU, or your internal SRE team, can run from day one.
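
To make the gating concrete, here is a minimal sketch of per-category mode promotion; the category name, thresholds, and `EvalResult` shape are hypothetical, and the real promotion criteria are locked in writing during scoping, not hard-coded like this.

```python
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    SHADOW = "shadow"          # runs silently alongside the human workflow
    ASSIST = "assist"          # drafts output, a human reviews before it ships
    AUTONOMOUS = "autonomous"  # acts alone, only on categories that earned it


@dataclass
class EvalResult:
    category: str     # e.g. "billing_dispute" -- a hypothetical category name
    pass_rate: float  # share of eval cases passed on the latest harness run
    sample_size: int  # number of eval cases behind that pass rate


def promote(result: EvalResult,
            assist_bar: float = 0.95,
            autonomous_bar: float = 0.99,
            min_samples: int = 200) -> Mode:
    """Return the highest rollout mode the eval numbers earn for one category."""
    if result.sample_size < min_samples:
        return Mode.SHADOW  # not enough evidence yet: stay invisible
    if result.pass_rate >= autonomous_bar:
        return Mode.AUTONOMOUS
    if result.pass_rate >= assist_bar:
        return Mode.ASSIST
    return Mode.SHADOW


print(promote(EvalResult("billing_dispute", pass_rate=0.97, sample_size=500)))
# Mode.ASSIST -- good enough to draft, not yet good enough to act alone
```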

## Who is a Lighthouse Build for?

Heads of operations, CIOs, and engineering leaders who have a workflow ready for production and the integration surface to land it. The build is the right SKU after a successful AI Sprint, or for organisations whose Discovery readiness score is high enough to skip the sprint. It is also the right SKU when the workflow has a measured baseline, a defined regulatory frame, and a sponsor who can carry the change-management work that goes with shipping AI to a real team.

## What is not included?

- Multiple workflows: one Lighthouse Build covers one workflow.
- Long-running operations: the build ends with handover; ongoing operations are the Operations Layer SKU.
- Process redesign beyond what the AI workflow itself touches: we do not do general operating-model redesign during a build.
- Vendor relationship management for tools we did not select: where you have an existing vendor in the stack, we integrate against it, but we do not become the contract owner.

## How does it differ from a typical AI build engagement?

Typical builds underweight evaluation and observability, then ship a system that works in the demo and not on Tuesday morning. The Lighthouse Build budgets evaluation and observability as first-class workstreams, with the eval harness running on every change before any code reaches assist mode. We also commit to a fixed scope and a fixed exit date in writing. Scope expansion during the build is handled as a written change order against a calendar reset, not as silent timeline drift.
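
As an illustration of that gate, here is a minimal sketch of a check that fails a CI job when a locked metric regresses; the results file, metric names, and thresholds are stand-ins, not our harness's actual interface.

```python
import json
import sys
from pathlib import Path

# Hypothetical thresholds: real values come out of the week-one evaluation
# criteria lock, in writing, not out of a source file.
THRESHOLDS = {"accuracy": 0.95, "grounding": 0.98, "refusal_correctness": 0.99}


def gate(results_path: str = "eval_results.json") -> int:
    """Fail the CI job if any locked metric fell below its threshold."""
    results = json.loads(Path(results_path).read_text())
    failures = [
        f"{metric}: {results.get(metric, 0.0):.3f} < {bar:.3f}"
        for metric, bar in THRESHOLDS.items()
        if results.get(metric, 0.0) < bar
    ]
    for line in failures:
        print(f"EVAL GATE FAILED  {line}", file=sys.stderr)
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(gate())
```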

## What does handover look like?

Code, prompts, retrieval indexes, configuration, evaluation harness, observability dashboards, runbook, and audit-log access, all delivered to your repository and your environments. A two-week parallel-run period where we operate the system alongside your team. A written incident-response procedure with named owners on both sides. From there, you continue with internal operations, with the Operations Layer SKU, or with a different operator.

## TRACE methodology mapping

This SKU is the full **Build** phase of the Impetora delivery model. Its job is to ship a system that survives audit, not a demo.

### T - Trust

Data residency and access controls fixed before code ships. The risk classification from Discovery is the input, not an afterthought.

### R - Readiness

We refresh the baseline against the production environment, then ship to a controlled cohort in shadow mode before any user-visible behaviour.

### A - Architecture

Versioned prompts, retrieval indexes, evaluation harness, observability, and runbook. Only what passes the eval suite reaches assist mode.
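
Here is a minimal sketch of what versioned prompts mean in practice, built around a hypothetical `PromptVersion` record; the point is that the audit log can name the exact template text that ran, not that this is the delivered schema.

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptVersion:
    name: str      # which prompt in the workflow, e.g. "triage" (hypothetical)
    version: str   # bumped on every reviewed change, kept in version control
    template: str  # the full template text that actually runs

    @property
    def fingerprint(self) -> str:
        """Content hash, so the audit log can prove which text really ran."""
        digest = hashlib.sha256(self.template.encode("utf-8")).hexdigest()
        return f"{self.name}@{self.version}#{digest[:12]}"


triage_v3 = PromptVersion(
    name="triage",
    version="3.1.0",
    template="Classify the case into one of: {categories}\n\nCase:\n{case}",
)
print(triage_v3.fingerprint)  # e.g. triage@3.1.0#<first 12 hex chars>
```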

### C - Citations and evidence

Every output the system produces is traceable to its inputs, prompt version, and confidence score. The audit log is built in, not bolted on.
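
For illustration, here is a sketch of the shape one audit-log row might take, with hypothetical field names and values; the delivered schema is designed against your systems of record and your retention rules.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    """One queryable row per model call: input, prompt, retrieval, output."""
    request_id: str                # correlates with upstream system logs
    prompt_fingerprint: str        # e.g. "triage@3.1.0#ab12cd34ef56"
    input_text: str
    retrieved_doc_ids: list[str] = field(default_factory=list)
    output_text: str = ""
    confidence: float = 0.0
    mode: str = "shadow"           # shadow / assist / autonomous
    reviewer: str | None = None    # named human reviewer in assist mode
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = AuditRecord(
    request_id="req-000123",
    prompt_fingerprint="triage@3.1.0#ab12cd34ef56",
    input_text="Customer disputes the June invoice ...",
    retrieved_doc_ids=["kb-88", "kb-412"],
    output_text="Category: billing_dispute",
    confidence=0.93,
    mode="assist",
    reviewer="j.ops",
)
print(json.dumps(asdict(record), indent=2))  # destination: append-only storage
```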

## Engagement model, week by week

1. **Architecture and readiness lock** (Wk 1-2). Architecture review, integration mapping, evaluation criteria locked in writing, regulator-pack scaffolded.
2. **Shadow build and integration** (Wk 3-7). Build the ingestion, retrieval, model layer, and integration surface end to end. Eval suite runs on every change. Shadow mode against live volume (sketched after this list).
3. **Assist rollout, handover, parallel run** (Wk 8-12). Assist-mode rollout to the full user team. Runbook authored, dashboards live, named on-call on both sides. Two-week parallel run, then exit.
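
The shadow-mode pattern in step 2 is simple enough to sketch. Everything below is illustrative: `route_to_human` and `model_predict` are stand-ins for your workflow and the model layer, and the in-memory list stands in for the eval store.

```python
import random

SHADOW_LOG: list[dict] = []  # stand-in for the real eval store


def route_to_human(case: dict) -> str:
    """Stand-in for the existing workflow, which stays authoritative."""
    return case["human_decision"]


def model_predict(case: dict) -> str:
    """Stand-in for the model layer behind the integration surface."""
    return random.choice(["approve", "escalate"])


def handle_case(case: dict) -> str:
    human_decision = route_to_human(case)
    # The model sees the same live input, but its answer goes only to the
    # shadow log for evaluation -- never to the user, never into the workflow.
    try:
        model_decision = model_predict(case)
        SHADOW_LOG.append({"case_id": case["id"],
                           "human": human_decision,
                           "model": model_decision,
                           "agree": human_decision == model_decision})
    except Exception as exc:
        SHADOW_LOG.append({"case_id": case["id"], "error": repr(exc)})
    return human_decision  # user-visible behaviour is unchanged


handle_case({"id": "c-1", "human_decision": "approve"})
print(SHADOW_LOG)
```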

## Inputs we need from you

- A workflow that has cleared a Sprint or carries a high Discovery readiness score
- API access (read and write) to the systems of record involved in the workflow
- An operations-side product owner and an engineering counterpart for the duration of the build
- A change-management plan inside your team for the assist-mode rollout (we contribute, we do not run it)
- Sign-off from risk and legal on the rollout perimeter at the end of week one

## Outputs we ship

- Production AI system for one workflow, integrated end to end into your stack
- Evaluation harness running on every change, with shadow and assist mode results in version control
- Observability stack: latency, cost, accuracy drift, and exception monitoring (a minimal sketch follows this list)
- Audit log and regulator pack: every input, prompt, retrieval, output, and review is captured and queryable
- Runbook with named on-call rotation, rollback procedure, and incident-response playbook
- Two-week parallel run with our team alongside yours before final exit
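
Below is a sketch of the kind of per-call numbers the observability stack tracks, assuming hypothetical metric names and an in-memory rolling window; the delivered stack plugs into your existing monitoring tooling.

```python
import time
from collections import deque
from statistics import mean


class CallMetrics:
    """Rolling per-call metrics: latency, cost, and an accuracy-drift signal."""

    def __init__(self, window: int = 500):
        self.latency_ms: deque[float] = deque(maxlen=window)
        self.cost_eur: deque[float] = deque(maxlen=window)
        self.agreement: deque[bool] = deque(maxlen=window)  # vs. human review

    def observe(self, started: float, cost_eur: float, agreed: bool) -> None:
        self.latency_ms.append((time.monotonic() - started) * 1000)
        self.cost_eur.append(cost_eur)
        self.agreement.append(agreed)

    def snapshot(self) -> dict:
        """Numbers the dashboards plot; alerts fire on threshold breaches."""
        return {
            "avg_latency_ms": mean(self.latency_ms) if self.latency_ms else None,
            "avg_cost_eur": mean(self.cost_eur) if self.cost_eur else None,
            "agreement_rate": (sum(self.agreement) / len(self.agreement)
                               if self.agreement else None),
        }


metrics = CallMetrics()
start = time.monotonic()
metrics.observe(started=start, cost_eur=0.004, agreed=True)
print(metrics.snapshot())
```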

## Who this is not for

- Workflows without a measured baseline or without a clean regulatory frame; Discovery first
- Multi-workflow ambitions inside a single Build window; one workflow per Lighthouse
- Operations that cannot commit a product owner and an engineering counterpart for the full window
- Buyers needing a one-week, no-handover demo; the Lighthouse is a production SKU, not a demo SKU

## Frequently asked questions

### Do we need an AI Sprint before a Lighthouse Build?

A Sprint first is the safer path on workflows we have not seen before. Where Discovery shows a high readiness score, a clean regulatory frame, and a workflow with a measured baseline, the Sprint can be skipped and the Lighthouse run directly. We will say so in writing during the scoping conversation; we do not push a sprint that the readiness score does not justify.

### How is the regulator pack maintained during the build?

The regulator pack is scaffolded in week one and updated through every workstream. By the end of the build, it includes the EU AI Act risk classification, the Annex IV technical documentation, the GDPR Article 22 review where relevant, the evaluation results, and the audit-log architecture. The pack is yours and is structured for a regulator submission or an internal audit committee.

### Where does the system run?

Inside your environment, with EU residency by default for EU-perimeter operations, and with whatever regional residency the regulatory frame requires for non-EU operations. Foundation-model and retrieval layers are described here in vendor-neutral terms; the specific selection is locked in week two of the build, against your procurement preferences and the architecture's needs.

### What happens if scope changes during the build?

A written change order, a fresh calendar, a fresh exit date. We do not absorb scope creep silently. The change order names the impact on cost, the impact on calendar, and the impact on the eval suite, and it goes through the same sponsor sign-off as the original scope.

### Who owns the code, prompts, and configuration at exit?

You do. Everything is delivered to your repository under your account, with the version history intact. We retain the right to use anonymised methodology lessons (not data, not prompts) in our internal playbooks.

### What is the difference between a Lighthouse Build and an AI platform deployment?

Lighthouse is workflow-first. A platform deployment is infrastructure-first, with the workflow added later. In regulated industries, we see more value in shipping one workflow that survives audit and then composing across workflows than in building infrastructure ahead of demand. Buyers who genuinely need infrastructure ahead of demand take TRACE Discovery first to size that workstream properly.

## Related

- [AI Sprint: precondition for high-confidence builds](https://impetora.com/services/ai-sprint)
- [AI operations layer: handover destination](https://impetora.com/services/ai-operations-layer)
- [AI for debt collection](https://impetora.com/industries/debt-collection)
- [Decision support systems](https://impetora.com/capabilities/decision-support-systems)

## About this service

**AI Lighthouse Build** - 8 to 12 week production AI system delivery for one workflow. Architecture, integrations, evaluation harness, observability, audit log, and runbook handover. The system survives audit on day one.

Submit a project: https://impetora.com/?service=ai-lighthouse-build#discovery-call
