Discovery is the first two to four weeks of the engagement. Its output is a written package that lets the organisation decide, with full information, whether to proceed to pilot. Discovery is paid work. It is not a sales motion dressed up as scoping. The deliverables matter on their own even if the programme stops at the end of Discovery.
The package contains five artefacts. First, a data-source map: every system the AI will read, the data it contains, the lawful basis for processing under GDPR, the retention policy, and the integration approach. Second, a workload diagram: the end-to-end process the AI will participate in, the human handoffs, the failure modes, and the escalation paths. Third, a target architecture: the components, the model selection rationale, the retrieval and grounding approach, the logging schema, and the human-oversight surface. Fourth, a risk classification: where the workload sits under the EU AI Act and what obligations apply. Fifth, a delivery plan: the pilot scope, the production-acceptance criteria, the timeline, the team, and the cost estimate.
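To make the first of these concrete, here is a minimal sketch of what one data-source map entry could look like if captured as structured data rather than prose. The field names and the `DataSourceEntry` type are illustrative assumptions, not a prescribed schema; the lawful bases are the six from Article 6(1) GDPR. The value of the structure is that every source is forced to answer the same questions, and a gap such as a missing lawful basis or retention policy is visible at a glance.

```python
from dataclasses import dataclass
from enum import Enum


class LawfulBasis(Enum):
    """The six lawful bases for processing under Article 6(1) GDPR."""
    CONSENT = "consent"
    CONTRACT = "contract"
    LEGAL_OBLIGATION = "legal obligation"
    VITAL_INTERESTS = "vital interests"
    PUBLIC_TASK = "public task"
    LEGITIMATE_INTERESTS = "legitimate interests"


@dataclass
class DataSourceEntry:
    """One row of the data-source map (illustrative field names)."""
    system: str                 # the system the AI will read
    data_contents: str          # what the data contains
    lawful_basis: LawfulBasis   # basis for processing under GDPR
    retention_policy: str       # how long the data is kept, and why
    integration_approach: str   # how the AI system connects to it


# Hypothetical example entry for a CRM source.
crm = DataSourceEntry(
    system="CRM (customer records)",
    data_contents="Names, contact details, support-ticket history",
    lawful_basis=LawfulBasis.LEGITIMATE_INTERESTS,
    retention_policy="Delete 24 months after contract end",
    integration_approach="Read-only API; the AI system never writes back",
)
```

The same discipline applies to the other four artefacts: a risk classification, for instance, is only useful if it names where the workload sits under the EU AI Act and lists the obligations that follow, not if it says "low risk" on a slide.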
The artefacts should be reviewable by four functions: the business owner, the DPO, the security lead, and the operations team that will run the system. If any of those four cannot make a decision from the package alone, the package is incomplete and Discovery is not finished. The biggest mistake we see is treating Discovery as a one-week analyst exercise that produces slides for the steering committee. Slides are not artefacts, and a decision cannot be defended from a slide deck three years later when the auditor arrives.
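One way to make that completeness test auditable is to record each function's decision against a specific revision of the package, so that what the auditor finds later is decisions, not decks. The sketch below assumes a simple sign-off record; the names and structure are illustrative, not a prescribed tool.

```python
from dataclasses import dataclass
from datetime import date

# The four functions named above; Discovery gates on all of them.
REQUIRED_REVIEWERS = ("business owner", "DPO", "security lead", "operations")


@dataclass(frozen=True)
class SignOff:
    """A recorded review decision: the thing an auditor can inspect later."""
    reviewer: str         # one of the four functions
    decision: str         # e.g. "proceed", "proceed with conditions", "stop"
    package_version: str  # which revision of the artefacts was reviewed
    signed_on: date


def discovery_complete(signoffs: list[SignOff]) -> bool:
    """Discovery is finished only when every required function has decided."""
    decided = {s.reviewer for s in signoffs}
    return all(r in decided for r in REQUIRED_REVIEWERS)


# Two of the four have decided, so the package is not yet complete.
signoffs = [
    SignOff("business owner", "proceed", "v0.3", date(2025, 3, 14)),
    SignOff("DPO", "proceed with conditions", "v0.3", date(2025, 3, 17)),
]
assert not discovery_complete(signoffs)
```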
MIT CISR's research on AI implementation maturity makes the same point in a different vocabulary: the highest-performing organisations treat AI projects as enterprise-architecture decisions, not as individual workload decisions, and they invest disproportionately in the discovery and design phases [3]. The cost of a thorough Discovery is recovered many times over by the time the system is in production.