Impetora

Build vs buy enterprise AI in 2026: a decision framework

By Impetora

The build-vs-buy decision for enterprise AI in 2026 is the choice between commissioning a custom system from a consulting partner and licensing a productised platform, made along five dimensions: workload specificity, data sensitivity, EU AI Act risk classification, integration surface and total cost of ownership over a three-year horizon. Most enterprises now run a portfolio of both, with the build side concentrated in regulated, decision-support and competitive-advantage workloads [1][3].

What does build-vs-buy actually mean for AI in 2026?

Three categories matter. Buy means licensing a productised AI platform, typically SaaS, with feature parity across customers. Examples include Microsoft Copilot, Salesforce Einstein, ServiceNow Now Assist, Glean, Harvey, Luminance, Quantexa Decision Intelligence Platform. Build means commissioning a custom system from a consulting partner or in-house team, usually integrating one or more foundation models with proprietary data, workflows and policy logic. Configure sits between the two: licensing a platform and paying a partner to customise it heavily, which is operationally a hybrid.

McKinsey's 2024 state-of-AI survey found that 65% of organisations now use generative AI in at least one function, up from 33% the prior year, and that the typical enterprise had deployed three to five distinct AI use cases [3]. The portfolio mix matters more than any single vendor choice.

When does buying a productised AI platform win?

Three patterns favour buying. The first is when the workload is generic across customers and the platform is ahead of the in-house build curve, for example knowledge search inside Microsoft 365 with Copilot or developer assistance with GitHub Copilot. The second is when the buyer needs feature velocity and the platform vendor is investing more than any in-house team can match. The third is when the regulatory or assurance work has been done by the platform vendor and inheriting that work is faster than recreating it.

The trade-offs are vendor lock-in, data-residency constraints, customisation limits and the share of value that accrues to the platform vendor rather than the buyer. IDC forecasts the worldwide AI software market to grow at 27% CAGR through 2028, with the platform tier capturing the majority of that growth [6].

When does building custom AI win?

Four patterns favour building. First, when the workload is specific to your business and the platform vendors do not have a fit, for example a contract triage system tuned to a particular legal team's clauses, or a recoveries automation system tuned to a specific Baltic or Nordic language and policy band. Second, when the data is sensitive enough that residency, sub-processor and audit constraints rule out a multi-tenant platform. Third, when the EU AI Act risk classification is high and the buyer wants to be the provider on the conformity assessment rather than rely on a platform vendor's. Fourth, when the workload is a competitive differentiator and outsourcing it to a platform that competitors also use is strategically wrong.

The trade-offs are time-to-value (a custom build typically lands in production three to nine months later than a platform pilot), engineering and run cost over time, and the dependency on a partner's continued availability. The Forrester Wave for Generative AI Services in Q4 2024 noted that fewer than 20% of enterprise GenAI engagements were running in production at scale [2]; that is the gap a build partner must clear.

How do the two paths shape cost differently?

Productised platforms scope at the per-seat or per-call level, with pricing that scales with the user base and a tier jump from general productivity AI to specialist platforms (legal, financial crime, customer support) to enterprise-tier deals with custom data ingestion. Annual licence cost for a large deployment is dominated by seat count plus implementation.

Custom builds scope at the engagement level. A discovery and pilot phase runs over weeks. A production build with an EU AI Act conformity track runs over months. Run cost depends on volume, document throughput and inference choice rather than seat count.

The honest framing is that the cost of a single productised platform is usually lower than the cost of a single custom build, but the cost of a portfolio of platforms is often higher than the cost of a portfolio of builds, because platform pricing scales with seats while build run-cost scales with usage. We quote engagements after a discovery call.
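The scaling argument can be made concrete with a toy model. Every figure below is an illustrative placeholder, not a quote, a benchmark or Impetora pricing: seat-priced licences repeat their full cost per platform, while later builds reuse architecture and their run cost tracks usage rather than headcount.

```python
def platform_tco(seats: int, per_seat_month: float,
                 implementation: float, months: int = 36) -> float:
    """Per-seat licensing: cost scales with headcount."""
    return seats * per_seat_month * months + implementation

def build_tco(build_cost: float, monthly_run: float,
              months: int = 36) -> float:
    """Custom build: one-off engineering plus usage-driven run cost."""
    return build_cost + monthly_run * months

# One workload: the platform usually wins on cost.
single_platform = platform_tco(1_000, 25, 100_000)   # 1.00M over 3 years
single_build = build_tco(900_000, 10_000)            # 1.26M over 3 years

# Four workloads: each platform repeats its full seat bill, while
# later builds reuse architecture (lower one-off cost) and run cost
# stays tied to usage, not seats.
portfolio_platforms = 4 * platform_tco(1_000, 25, 100_000)
portfolio_builds = build_tco(900_000, 10_000) + 3 * build_tco(300_000, 10_000)
```

With these placeholder numbers the single platform is cheaper than the single build, but the four-platform portfolio costs more than the four-build portfolio, which is the crossover the paragraph above describes.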

How does the EU AI Act change the build-vs-buy calculation?

It pulls some workloads towards build and some towards buy, in opposite directions. For high-risk systems under Annex III, the buyer wants control of the conformity assessment, the technical documentation and the post-market monitoring [5]. A custom build with a partner that signs the assessment is often the cleanest path. For general-productivity AI with limited risk classification, a platform vendor's compliance pack saves the buyer substantial work and is the rational choice.

The implication is that buyers should not run a single build-vs-buy decision across the portfolio. Each workload needs its own assessment, with the AI Act risk classification as an explicit input. The European Commission's AI Office is publishing successive guidance and that material is the canonical reference rather than vendor marketing pages [5].

A practical decision framework

Five questions, scored honestly, sort most workloads. Is the workload generic or specific to your business? Is the data sensitive enough to rule out multi-tenant hosting? Is the workload high-risk under Annex III? Is the workload a competitive differentiator? Is there a credible platform with a fit for the workload?

If the workload is generic, the data is not sensitive, the risk classification is limited, the workload is not differentiating, and a credible platform exists, buy. If the workload is specific, the data is sensitive, the risk classification is high, the workload is differentiating, or the platform fit is weak, build. Most workloads land in between, and the decision becomes a question of which trade-offs the buyer is willing to accept.
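The five-question screen can be sketched as a simple scorer. The field names and the thresholds are illustrative assumptions for this sketch, not a formal methodology; in practice each answer carries workload-specific weight.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    is_generic: bool             # generic across companies vs specific to yours
    data_is_sensitive: bool      # residency/audit rules out multi-tenant hosting
    high_risk_annex_iii: bool    # EU AI Act high-risk classification
    is_differentiator: bool      # competitive-advantage workload
    credible_platform_fit: bool  # a productised platform covers it well

def recommend(w: Workload) -> str:
    """Count the build signals from the five questions; the
    zero/three thresholds are placeholder assumptions."""
    build_signals = sum([
        not w.is_generic,
        w.data_is_sensitive,
        w.high_risk_annex_iii,
        w.is_differentiator,
        not w.credible_platform_fit,
    ])
    if build_signals == 0:
        return "buy"
    if build_signals >= 3:
        return "build"
    return "assess trade-offs"   # most workloads land here

knowledge_search = Workload("knowledge search", True, False, False, False, True)
contract_triage = Workload("contract triage", False, True, True, True, False)
```

Scoring the two examples from the article, knowledge search comes out as buy and a clause-specific contract triage system comes out as build; anything in between forces the trade-off discussion described above.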

Where does Impetora fit in this picture?

Impetora is an enterprise AI consultancy and solutions partner on the build side. We do not sell a productised platform. We design, build and deploy custom AI systems in five workloads (document processing, customer support automation, internal knowledge AI, decision support, process orchestration) for enterprises in regulated industries. That makes us the right fit for the build side of a portfolio decision, particularly when the workload is specific, the data is sensitive, the AI Act classification is high, or the workload is a competitive differentiator.

For the buy-side of a portfolio, we routinely advise clients on platform selection without taking a fee from the platform vendor. The decision should be made on the workload, not on which side of the table the consultant sits.

Frequently asked questions

Should we start with a build or a platform pilot?
Start with whichever path is faster to a real production deployment for a real workload. For most enterprises in 2026, that means a productised platform on a generic workload to build muscle, in parallel with a small custom build on a high-value specific workload to learn what the architecture should look like. The wrong answer is to spend twelve months on a strategy exercise before deploying anything. McKinsey's 2024 survey found that the organisations capturing the most value were those running multiple deployments in parallel, not those running a single perfect pilot [3].
Are foundation models like GPT-4 or Claude considered build or buy?
They are infrastructure inputs. Whether your AI workload is build or buy depends on what you do on top of them. Calling an OpenAI or Anthropic API directly from your own application is build. Using Microsoft Copilot, which calls those same models inside Microsoft's platform, is buy. The distinction matters for allocating responsibilities under the AI Act: in build, your application is the high-risk system if classified as such, and you are the provider on the conformity assessment.
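The build side of that distinction can be sketched: your own application code wraps the model call with your own policy logic. The redaction rule, payload shape and model name below are illustrative assumptions for the sketch, not any specific vendor's API schema.

```python
import re

def redact(text: str) -> str:
    """Toy policy logic you own on the build side: mask e-mail
    addresses before anything leaves your application."""
    return re.sub(r"\S+@\S+", "[EMAIL]", text)

def build_model_request(user_prompt: str, model: str = "example-model") -> dict:
    """Assemble a chat-style request payload around your policy layer.
    The payload shape and model name are illustrative placeholders."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": redact(user_prompt)}],
    }

request = build_model_request("Summarise the NDA and reply to jane@example.com")
```

In the buy case, this wrapping layer belongs to the platform vendor; in the build case it is yours, which is exactly why the conformity assessment sits with you.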
How do we avoid vendor lock-in when buying an AI platform?
Three contractual provisions reduce lock-in risk. A data-portability clause that names the export format and the timeline. A model-portability clause that names the foundation models the platform supports and commits the vendor to documenting model swaps. A run-it-elsewhere clause that gives the buyer the right to migrate the workload to another platform or to an in-house build, with the vendor obliged to support a reasonable handover. Few platform vendors give all three willingly. The right ones do.
When does it make sense to build for a workload that has a credible platform option?
When the workload is a strategic differentiator that you do not want competitors to access through the same vendor, when the data is too sensitive for multi-tenant hosting and the platform does not offer a single-tenant tier, when the AI Act classification is high and the platform's compliance pack does not match your risk appetite, or when the workload integration surface is so deep into your proprietary systems that the platform's connectors will never catch up.
What is the typical build-vs-buy mix in European enterprise AI portfolios?
There is no single benchmark, but the working pattern across regulated-industry enterprises in 2026 is roughly 70% buy and 30% build by spend, with the build share concentrated in three to five high-value workloads and the buy share spread across general productivity, sales and marketing, and IT-operations AI. The build share is materially higher in financial services, defence, and parts of healthcare where the regulatory bar pushes more workloads onto the build side.
Should small and mid-sized European enterprises build AI at all?
Most should not, on most workloads. Productised platforms are usually the rational choice for SMEs because the engineering and compliance overhead of a custom build does not amortise across a small user base. The exceptions are workloads that are core to the business, where a small custom build with a senior specialist partner can deliver a competitive system at a fraction of an enterprise budget. Boutique AI consultancies exist to deliver exactly this scope and a serious one will tell you when buying is the better answer.

Ready to scope your project? Submit a short brief and we reply within one business day.

Sources cited

  1. Magic Quadrant for Data and Analytics Service Providers. Gartner, 2024-09. https://www.gartner.com/en/documents/5378763
  2. The Forrester Wave: Generative AI Services, Q4 2024. Forrester, 2024-11. https://www.forrester.com/report/the-forrester-wave-generative-ai-services-q4-2024/RES181225
  3. The state of AI in early 2024. McKinsey & Company, 2024-05. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
  4. AI Index Report 2024. Stanford HAI, 2024-04. https://aiindex.stanford.edu/report/
  5. Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
  6. Worldwide Artificial Intelligence Spending Guide. IDC, 2024-08. https://www.idc.com/getdoc.jsp?containerId=prUS52320524
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.