Hybrid is the median outcome, not a fence-sitting compromise. In a hybrid posture, the platform handles the parts of the workload it does well - identity, infrastructure, model hosting, generic capabilities - and a custom layer handles the parts the platform cannot: domain retrieval, schema-constrained outputs, regulatory logging, human-oversight surfaces, integration with internal systems of record.
The architectural pattern usually has the custom layer on top of, not parallel to, the platform. The platform is treated as a managed primitive: a model endpoint, an orchestration runtime, a vector database. The custom layer wraps that primitive with the policies, data flows, and oversight surfaces the workload requires. This shape gives the enterprise platform-grade reliability on the parts that are commodity, and custom-grade control on the parts that are differentiating.
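The wrap-the-primitive shape can be made concrete with a small sketch. Everything here is illustrative: `platform_endpoint` stands in for whatever managed model API the platform exposes, and the schema fields and logger names are assumptions, not any vendor's actual interface.

```python
import json
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("regulatory-audit")  # hypothetical audit channel

# Stand-in for the managed primitive: any callable taking a prompt
# and returning raw model text. A real deployment would call the
# vendor's hosted endpoint here.
def platform_endpoint(prompt: str) -> str:
    return json.dumps({"answer": "42", "source": "internal-kb"})

REQUIRED_FIELDS = {"answer", "source"}  # assumed schema for constrained outputs

def governed_completion(prompt: str,
                        endpoint: Callable[[str], str] = platform_endpoint) -> dict:
    """Custom layer: wraps the platform primitive with audit logging
    and schema enforcement. The primitive itself is left untouched."""
    raw = endpoint(prompt)
    audit_log.info("prompt=%r raw=%r", prompt, raw)  # regulatory trail
    payload = json.loads(raw)
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"schema violation, missing fields: {missing}")
    return payload

result = governed_completion("What is the uptime SLA?")
```

The design point is that the custom layer sits on top: swapping the vendor means swapping `endpoint`, while the logging and schema policy, the differentiating parts, stay in enterprise-owned code.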
The hybrid model also matches how AI procurement actually evolves. The platform is rarely a single vendor decision; most enterprises end up with two or three (a hyperscaler for model hosting, a SaaS vendor for embedded AI in the workforce stack, occasionally a specialist for a vertical capability). The custom layer becomes the integration fabric across these vendors. MIT CISR's research on AI implementation maturity describes this as the "AI-savvy" operating model, where a thin internal capability orchestrates a deeper external supply [3].
The pitfall in hybrid is letting the boundary between custom and platform drift. When ownership is unclear, the same component gets built on both sides and neither team owns its failures. The discipline is to write down the boundary in an architecture decision record, name the owner on each side, and update the record whenever the boundary moves.
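Such a boundary record need not be elaborate. A minimal sketch, with every component name, team name, and trigger invented for illustration, might look like:

```text
ADR-017: Boundary between platform and custom layers
Status: Accepted (supersedes ADR-012)

Platform owns: model hosting, embeddings, orchestration runtime
  Owner: Cloud Platform team
Custom owns:   domain retrieval index, schema validation, audit logging,
               human-oversight surfaces
  Owner: Applied AI team

Revisit when: the vendor ships native schema-constrained output,
or a component above changes hands.
```

The "revisit when" line is what keeps the record honest: it names the conditions under which the boundary is expected to move, so the update is a planned event rather than a drift discovered after the fact.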