Impetora

EU AI Act compliant AI vendors: how to evaluate readiness in 2026

By Impetora

An EU AI Act compliant AI vendor in 2026 is one that can produce, on request, a written conformity assessment plan, a data-governance description, technical documentation aligned with Annex IV, a logging and post-market monitoring approach, and a record of human oversight design choices for any system classified as high-risk under Annex III of Regulation (EU) 2024/1689 [1]. Most vendors cannot. The list of those that can is shorter than buyers expect.

What does the EU AI Act actually require from vendors?

The Act, in force since 1 August 2024, distinguishes four risk categories: prohibited, high-risk, limited-risk and minimal-risk. The vendor obligations cluster around the high-risk and general-purpose AI categories. For high-risk systems, the provider must establish a risk-management process across the lifecycle, ensure data and data-governance practices meet Article 10, produce technical documentation aligned with Annex IV, build automatic logging, design appropriate human oversight, achieve appropriate accuracy, robustness and cybersecurity, register the system in the EU database, and operate post-market monitoring [1].

Application is staggered. Prohibited practices applied from February 2025. General-purpose AI obligations applied from August 2025. Most high-risk system obligations apply from August 2026. The European Commission's AI Office is publishing successive guidance and standardisation requests, and that material is the canonical reference rather than vendor marketing pages [1].

What does compliance evidence actually look like in a vendor proposal?

A credible vendor produces five artefacts, named and indexable, before contract:
  1. A conformity assessment plan that names the assessment route (internal control or notified body) and the timeline.
  2. A data-governance description aligned with Article 10 covering training, validation and testing data, including provenance, bias mitigation and the handling of special categories.
  3. A draft technical documentation pack aligned with Annex IV.
  4. A human-oversight design that names the specific intervention and override points.
  5. A post-market monitoring plan with named metrics, owners and reporting cadence.

If a vendor cannot produce these as templated artefacts customised to the buyer's workload, the conformity work has not been done. ISO/IEC 42001:2023, the AI management system standard, is increasingly used as a procurement reference because the documentation it requires overlaps substantially with the Act's expectations [2].
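For buyers who want to track these deliverables mechanically during procurement, the artefact list can be expressed as a simple checklist. This is an illustrative sketch only; the artefact names below are shorthand labels chosen for this example, not a standard schema.

```python
# Shorthand labels for the five pre-contract artefacts discussed above.
# These names are illustrative, not an official taxonomy.
ARTEFACTS = [
    "conformity assessment plan",
    "data-governance description (Article 10)",
    "technical documentation pack (Annex IV)",
    "human-oversight design",
    "post-market monitoring plan",
]

def missing_artefacts(received: set[str]) -> list[str]:
    """Return the artefacts a vendor has not yet supplied, in checklist order."""
    return [a for a in ARTEFACTS if a not in received]
```

A buyer running several vendor evaluations in parallel can feed each vendor's submitted documents into `missing_artefacts` and compare gap lists side by side.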

Which scaled integrators have credible AI Act readiness?

Accenture, Deloitte, Capgemini, IBM Consulting and PwC all publish AI governance frameworks and have substantial in-house regulatory and assurance practices. Public commentary, analyst reports and the volume of regulated-sector engagements suggest these firms can deliver Act-compliant builds at scale, particularly when paired with their respective audit or risk advisory practices [3]. The trade-off is the standard one for scaled engagements: the documentation will be thorough and the seniority ratio on day-to-day delivery will be lower than at a specialist boutique.

For buyers running a multi-jurisdiction programme that touches several regulators, the scaled integrators remain the natural choice because the documentation and assurance machinery is already built.

Which specialists have credible AI Act readiness?

Faculty AI publishes safety-evaluation methodology that maps cleanly onto the Act's robustness and accuracy obligations. Quantexa's productised decision intelligence platform ships with extensive logging, lineage and explainability features that overlap with Article 12 and Article 13 obligations. ML6 publishes responsible-AI material focused on bias evaluation and human oversight design. Luminance, in the legal-AI vertical, ships explainability and audit features as a core product property. Artefact and NFQ Technologies have growing governance practices, with NFQ benefiting from the Lithuanian regulator's early Act guidance.

Impetora is a smaller enterprise AI consultancy and solutions partner focused on auditable, production-grade AI for regulated industries. Every system we ship includes the five artefacts above as a deliverable, not as an upsell, because the regulated-industry compliance bar is the design point.

What does an AI Act compliance gap actually look like?

Five patterns are red flags:
  1. A vendor that says the system is "not high-risk" without producing a written risk classification analysis.
  2. A vendor that proposes hosting in a non-EU region without naming a Standard Contractual Clauses framework and a Transfer Impact Assessment.
  3. A vendor whose model-evaluation evidence consists solely of accuracy metrics on a held-out test set, with no robustness, fairness or adversarial work.
  4. A vendor that treats human oversight as a UI overlay rather than a design constraint that shapes the architecture.
  5. A vendor whose monitoring plan ends at deployment.

The Bank for International Settlements' 2024 paper on generative AI in finance describes similar patterns from a different angle and is a useful cross-reference for buyers in financial services [4].

How does Impetora ship AI Act ready by default?

Impetora's TRACE methodology was designed around the Act's expectations. Trust covers EU data residency, audit trails and AI Act alignment as deliverables. Readiness covers the data and workflow audit before any code, including the risk classification analysis. Architecture covers production-grade design with logging, observability and recoverability built in. Citations and Evidence covers the traceability of every output to its source document, prompt and decision step.
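The traceability idea behind the Citations and Evidence step can be illustrated with a minimal audit-record sketch. The field names and function below are assumptions made for this example; they are not Impetora's internal schema and not a format prescribed by the Act.

```python
import hashlib
from datetime import datetime, timezone

def trace_record(output: str, source_doc: str, prompt: str, step: str) -> dict:
    """Build a minimal audit record linking a model output to its source
    document, prompt and decision step. Illustrative sketch only: field
    names are assumptions, not a prescribed or proprietary schema."""
    return {
        # Timezone-aware timestamp for ordering records across services.
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash of the output so the record can be matched to it later
        # without storing potentially sensitive content in the log.
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "source_document": source_doc,
        "prompt": prompt,
        "decision_step": step,
    }
```

Records of this shape, appended to an immutable store, give an auditor a path from any output back to the evidence and decision step that produced it, which is the property the Article 12 logging obligation is driving at.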

For buyers procuring a high-risk system in 2026, the practical proof is in the artefact list above. Ask for the conformity assessment plan, the data-governance description, the technical documentation pack, the human-oversight design and the post-market monitoring plan as named deliverables in the master services agreement, then judge the vendor on what they send back.

Frequently asked questions

When does the EU AI Act actually apply?
The Act entered into force on 1 August 2024, with staggered application. Prohibited practices applied from 2 February 2025. General-purpose AI obligations applied from 2 August 2025. Most high-risk system obligations apply from 2 August 2026. A small set of obligations on high-risk systems already in regulated products applies from 2 August 2027. Buyers signing in 2026 should assume the full high-risk obligations apply by go-live.
Is a vendor's ISO/IEC 42001 certification proof of AI Act compliance?
It is a strong indicator, not proof. ISO/IEC 42001:2023 is the AI management system standard and the documentation it requires overlaps substantially with the Act's process and governance expectations. A vendor that holds 42001 certification has built the management system that the Act expects to see. The Act's product-level obligations - conformity assessment, technical documentation, registration in the EU database - are still separate work that 42001 does not certify on a system-by-system basis.
Can a non-EU vendor be EU AI Act compliant?
Yes. The Act applies based on where the AI system is placed on the market or used, not where the vendor is headquartered. Non-EU vendors that place high-risk systems on the EU market must appoint an EU-resident authorised representative and meet the same obligations as EU vendors. The practical question for buyers is whether the non-EU vendor has done this work, named the representative, and built the documentation. Many global vendors have. Many have not, which is why a written readiness statement should be requested before contract.
Who signs the conformity assessment, the buyer or the vendor?
It depends on which party is the provider under the Act. The provider is the legal entity that places the system on the market under its own name or trademark. For a vendor-built system delivered to an enterprise buyer, the vendor is normally the provider and signs the assessment. For an in-house build using a vendor's components, the buyer is normally the provider and signs the assessment. The contract should make this allocation explicit, name the conformity assessment route, and specify which party maintains the technical documentation and the post-market monitoring.
Are LLM-based vendors automatically subject to the general-purpose AI obligations?
Only if the vendor itself is a provider of a general-purpose AI model. Most enterprise AI vendors are not - they build applications on top of models supplied by OpenAI, Anthropic, Google, Mistral, Meta or Aleph Alpha. The general-purpose AI obligations apply primarily to the model provider, with downstream obligations on the application provider for documentation and risk management. The August 2025 application date means the model providers' compliance status is now testable evidence, and downstream vendors should be able to point to which model they use and which compliance pack they rely on.
What happens if a vendor turns out to be non-compliant after go-live?
The Act provides for fines up to €35 million or 7% of global annual turnover for the most serious infringements, with lower bands for documentation failures and other obligations. The buyer's exposure depends on how the contract allocates Act responsibilities. A well-drafted master services agreement names the provider, names the responsible party for each Annex IV section, and includes audit rights, indemnification and termination triggers tied to compliance status. Buyers signing in 2026 should not accept generic warranty language in place of these clauses.
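The fine ceiling described above is a simple maximum: EUR 35 million or 7% of global annual turnover, whichever is higher. A one-line sketch makes the arithmetic concrete (the function name is ours, for illustration):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious infringements under the Act:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)
```

For a vendor with EUR 1 billion turnover the 7% band governs; for a vendor with EUR 100 million turnover the EUR 35 million floor governs.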

Sources cited

  1. Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
  2. ISO/IEC 42001:2023 - AI management systems. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
  3. Magic Quadrant for Data and Analytics Service Providers. Gartner, 2024-09. https://www.gartner.com/en/documents/5378763
  4. Generative artificial intelligence in finance. Bank for International Settlements, 2024-08. https://www.bis.org/fsi/publ/insights63.htm
  5. AI Index Report 2024. Stanford HAI, 2024-04. https://aiindex.stanford.edu/report/
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.