
TRACE: a four-pillar methodology for shipping AI in regulated industries

By Impetora / 18 min read

Trust, Readiness, Architecture, Citations and Evidence. Why methodology matters more than model choice when you ship AI under the EU AI Act and GDPR. TRACE is the published methodology Impetora uses on every regulated build, and this essay is the long-form definition of what each pillar requires, what it produces, and why we publish it instead of treating it as proprietary.

<20% - enterprise GenAI in production at scale (Forrester Wave, Q4 2024)
8 weeks - typical TRACE engagement to assist-mode go-live
4 pillars - Trust, Readiness, Architecture, Citations and Evidence
2024/1689 - EU AI Act regulation, aligned from week one (EUR-Lex)

Why methodology matters when the model doesn't

Vendor selection is a three-week decision. How the system gets built and operated is a three-year decision. Buyers and engineering leaders spend disproportionate energy on the first and almost none on the second, then wonder why their pilots stall.

The Forrester Wave on Generative AI Services for Q4 2024 noted that fewer than 20% of enterprise generative AI initiatives have actually moved into production at scale, even though most large enterprises have been running pilots for over eighteen months [1]. The Stanford AI Index 2024 reports the same shape from a different angle: investment and capability have grown sharply, while organisational readiness to deploy AI under regulatory and audit pressure has grown slowly [2]. The bottleneck is not capability. It is shipping discipline.

<20% of enterprise GenAI is actually in production at scale (Forrester Wave, Q4 2024)

The shipping-discipline gap shows up in predictable ways. The pilot demonstrates that the LLM endpoint can answer a question. The production review then asks who picked the model, who reviews its outputs, who classifies the workload under the EU AI Act, who keeps the audit log, who decides when a prompt change qualifies as a model change, and who is on call when the system makes a wrong call against a regulated customer. None of those answers come from picking a better model. They come from a methodology.

TRACE is our answer to that gap, specifically tuned for the EU regulatory perimeter. Each letter is a discipline that has to be present before we will ship anything into a regulated workflow. Trust is alignment with the EU AI Act, GDPR, and EU residency expectations from day one. Readiness is the data and workflow audit before any code is written. Architecture is production-grade engineering, versioned, observable, recoverable. Citations and Evidence is the audit trail that turns every output into a defensible record. The pillars are not a sequence; they are constraints applied in parallel. Skipping any of them produces a system that demos well and breaks at the conformity assessment.


This essay is the long form. The short form lives at /methodology. The free risk classifier that implements pillar T lives at /tools/eu-ai-act-classifier. The author behind TRACE is described at /about/founder. Everything is public for a reason; we make that argument later in the piece.

The four pillars in one paragraph

Trust means the EU residency posture, the audit-trail discipline, and the EU AI Act and GDPR alignment are baked into the architecture before a line of code ships. Readiness means the data audit and workflow audit happen before any model is wired up, with a written brief that is signed before the build phase begins. Architecture means the system is built as a production-grade software system - versioned, observable, recoverable, with shadow-then-assist-then-autonomous rollout - rather than as a research notebook with a frontend. Citations and Evidence means every output the system produces is traceable to its source document, its prompt version, its model version, its policy rule, and the reviewer who signed off any override. None of these are optional. None are bolt-on. The pillars are how the engagement is gated, how the deliverables are structured, and how the system is defended once it is live.

T - Trust: regulator alignment from week one

Trust is the discipline of meeting the regulator before the regulator meets the system. In practical terms, the first artefact every TRACE engagement produces is a risk classification under the EU AI Act, regardless of whether the client expects to need one. Regulation (EU) 2024/1689 sets a tiered model: prohibited practices under Article 5, high-risk systems under Article 6 with the use-case list at Annex III, transparency obligations under Article 50 for systems that interact with people or generate synthetic content, and a residual category of minimal-risk systems with voluntary codes of conduct [3]. We map the proposed workflow against that taxonomy in week one, before the architecture diagram is drawn.

The classification matters because it changes what we are building, not just what we are documenting. A worked example: a credit-decisioning workflow auto-classifies as high-risk under Annex III point 5(b), which covers AI systems intended to evaluate the creditworthiness of natural persons or to establish their credit score. From that moment, the build inherits the obligations of Chapter III Section 2 of the Act: a risk management system under Article 9, data and data governance under Article 10, technical documentation under Article 11 with the content list at Annex IV, automatic event logging under Article 12, transparency to deployers under Article 13, human oversight under Article 14, and accuracy, robustness and cybersecurity under Article 15 [3]. We ship the conformity-assessment scaffolding from day one rather than retrofitting at audit time. Retrofitting is more expensive than building correctly; we have priced both, and the ratio is about 3.4 to 1 once an external assessor is in the room.
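To make the tiering concrete, here is a minimal sketch of how the week-one classification can be expressed as code. It is an illustrative reduction, not the logic of the published classifier at /tools/eu-ai-act-classifier; the enum values, dataclass names, and obligation strings are ours for exposition.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited practice (Article 5)"
    HIGH_RISK = "high-risk (Article 6 / Annex III)"
    TRANSPARENCY = "transparency obligations (Article 50)"
    MINIMAL = "minimal risk (voluntary codes of conduct)"


# Obligations every high-risk system inherits under Chapter III Section 2.
HIGH_RISK_OBLIGATIONS = [
    "Art. 9 risk management system",
    "Art. 10 data and data governance",
    "Art. 11 technical documentation (content list at Annex IV)",
    "Art. 12 automatic event logging",
    "Art. 13 transparency to deployers",
    "Art. 14 human oversight",
    "Art. 15 accuracy, robustness, cybersecurity",
]


@dataclass
class Classification:
    tier: RiskTier
    basis: str
    obligations: list[str] = field(default_factory=list)


def classify(annex_iii_match: str | None) -> Classification:
    """Toy tiering: a real classifier walks Articles 5, 6 and 50 in order."""
    if annex_iii_match is not None:
        return Classification(RiskTier.HIGH_RISK, annex_iii_match,
                              HIGH_RISK_OBLIGATIONS)
    return Classification(RiskTier.MINIMAL, "no Annex III match")


print(classify("Annex III point 5(b): creditworthiness of natural persons"))
```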

Annex III - risk classification done in week one, not at audit (EU AI Act, EUR-Lex)

The Trust pillar also covers EU residency. Where the system processes EU personal data, we default to EU regional deployments for the LLM endpoint, the embedding model, the vector store, and the orchestration layer. We document the sub-processor register on day one and keep it under change control for the life of the engagement. Where the client is bound by sectoral expectations - the EBA Guidelines on internal governance for banks, for example, or the European Insurance and Occupational Pensions Authority (EIOPA) guidance for insurers - we align the residency posture with those expectations and make the alignment explicit in the technical documentation [4]. We also align with the EU Cloud Code of Conduct as a baseline for the cloud layer beneath the AI components.

None of this is optional in our delivery. The Trust pillar is the gate that lets us refuse projects we believe will not survive a conformity assessment. It is also the artefact that lets the client's compliance function evaluate the engagement on its own terms, before the model is chosen, before the prompt is written, before any code reaches a repository.

R - Readiness: the workflow audit before any code

Readiness is the one- to two-week paid audit that happens before architecture work begins. It does four things, in this order, and the brief is not signed off until all four are complete.

First, we sample at least thirty days of real cases from the workflow the AI will sit inside. Applications, alerts, exceptions, queues, decisions - whatever the unit of work is. The sample is anonymised where required and accessed under a written data-processing agreement, but it is real. We refuse to scope against synthetic data or against an idealised description of the workflow. The reason is mechanical: synthetic data hides the long tail of edge cases, and the long tail is where regulated AI actually lives.

Second, we baseline the current system. Current handle time per case, current error rate against the ground truth the client already has, current exception ratio, current reviewer-load distribution. These numbers become the baseline against which the AI system will be measured. Without a baseline, claims of improvement are unfalsifiable, and unfalsifiable claims do not survive a conformity assessment under Article 15 of the EU AI Act.

Third, we document the workflow the AI will participate in, end to end. Inputs, decision points, escalation paths, downstream actions, regulatory triggers. We identify which decisions are GDPR Article 22 automated decisions producing legal or similarly significant effects, and which are not. The distinction is not academic; Article 22 imposes specific rights of human intervention and contestation that change the architecture of the human-oversight layer. The Court of Justice of the European Union in SCHUFA (C-634/21) confirmed in December 2023 that automated credit scoring squarely engages Article 22, even when a downstream human formally signs off, if the human's discretion is materially constrained by the score [5]. We classify each candidate decision against that test in writing.


Fourth, we build the evaluation set before the model. The eval set is the set of cases against which the system will be regression-tested for the rest of its life. It is built from the thirty-day sample, augmented with edge cases the client's reviewers flag, and signed off as the acceptance gate. Building the eval set first is the single largest predictor of whether the engagement will ship on time, because it forces the team to define success in operational terms before the temptation to optimise for impressive demos sets in.
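A minimal sketch of what the eval gate looks like in code. The names and the 0.95 threshold are illustrative placeholders; the real acceptance gates are set per engagement in the readiness brief.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class EvalCase:
    case_id: str    # traceable back to the thirty-day sample
    inputs: dict
    expected: str   # ground truth signed off by the client's reviewers


def run_eval(cases: list[EvalCase], system: Callable[[dict], str],
             accuracy_gate: float = 0.95) -> bool:
    """Regression gate: every release must clear this before it ships."""
    hits = sum(1 for c in cases if system(c.inputs) == c.expected)
    accuracy = hits / len(cases)
    print(f"accuracy {accuracy:.3f} against gate {accuracy_gate:.3f}")
    return accuracy >= accuracy_gate


demo = [EvalCase("c-001", {"amount": 1_200}, "refer"),
        EvalCase("c-002", {"amount": 90_000}, "escalate")]
assert run_eval(demo, lambda x: "refer" if x["amount"] < 10_000 else "escalate")
```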

Readiness ends with a written brief. The brief covers scope, data dictionary, success criteria, baseline numbers, target deltas, ROI model, an explicit out-of-scope list, the AI Act risk classification from the Trust pillar, and a list of the artefacts each subsequent phase will produce. The brief is the contract for the build. If the audit shows the workflow is unsuitable for AI - because the data is unstable, because the decision is too sparse for evaluation, because the regulatory exposure exceeds the available oversight capacity - we say so and decline the build. We have killed projects in week three for each of those reasons. The discipline is what makes the rest of the methodology credible.

Vendor selection is a three-week decision. How the system gets built and operated is a three-year decision. TRACE is the answer to the second.
Justas Butkus, Impetora

A - Architecture: production patterns specific to regulated work

Architecture is where TRACE diverges most visibly from the way most pilots are built. The systems we ship look, from the outside, like ordinary regulated software systems with AI components inside, rather than like AI products with regulated wrapping. That framing produces a specific set of patterns, each of which we apply on every engagement.

Versioning policy. Prompts, system messages, retrieval queries, evaluation suites, feature definitions, and policy rules are all versioned in the same repository as the rest of the application code, with a deliberate change-control process for each. The model version is pinned per environment and upgraded through the same release process as application code. A prompt change is treated as a code change, with a code review, an evaluation run, and a release note. There is no live editing of production prompts. Teams that allow live prompt edits cannot reconstruct the system that ran last Tuesday, which means they cannot answer the regulator's questions about a decision made last Tuesday.
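As a sketch, the pinning can take the form of a release manifest checked into the same repository as the application code. Every field below is a hypothetical placeholder; the point is that none of them change outside the release process.

```python
# Hypothetical release manifest, versioned alongside the application code.
# A prompt change bumps prompt_version and triggers an eval run before release.
RELEASE_MANIFEST = {
    "app_version": "2.7.1",
    "model_id": "provider/model",       # the client's chosen endpoint, vendor-agnostic
    "model_version": "2025-01-15",      # pinned per environment, never "latest"
    "prompt_template_id": "loan-rationale",
    "prompt_version": "v14",
    "eval_suite_version": "v14",        # prompts and eval suites move together
    "policy_rules_version": "v9",
}
```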

Observability. Every call to the LLM endpoint emits a structured log entry with the immutable event ID, the user or system that triggered it, the input identifiers, the prompt-template ID and version, the model and model-version ID, the retrieval-set identifiers and snapshot version, the structured output, the validation status, and a timestamp. Latency, cost, and accuracy metrics are tracked per call. We treat the log as a first-class deliverable, not as an afterthought. The minimum standard is that any production output can be reconstructed from the log alone, with no recourse to a re-run of today's index.
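A minimal sketch of such a log entry as a typed record. The field names are illustrative, but the field list follows the paragraph above, and the test is that the entry alone reconstructs the call.

```python
from dataclasses import dataclass, asdict
import json
import time
import uuid


@dataclass(frozen=True)
class LLMCallLog:
    event_id: str                    # immutable, assigned once per call
    triggered_by: str                # user or system principal
    input_ids: list[str]             # identifiers, not raw payloads
    prompt_template_id: str
    prompt_version: str
    model_id: str
    model_version: str
    retrieval_set_ids: list[str]
    retrieval_snapshot_version: str  # what the index looked like at inference time
    output: dict                     # the structured output itself
    validation_status: str           # e.g. "passed_schema" or "refused"
    timestamp_utc: float


entry = LLMCallLog(str(uuid.uuid4()), "reviewer:jdoe", ["case-123"],
                   "loan-rationale", "v14", "provider/model", "2025-01-15",
                   ["doc-9", "doc-44"], "snap-2025-02-01",
                   {"decision": "refer"}, "passed_schema", time.time())
print(json.dumps(asdict(entry)))
```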

Rollout discipline. Every system goes live in shadow mode first. The AI runs alongside the human reviewer, its outputs are logged, and the human's decision is the one that takes effect. Shadow mode runs until the agreed accuracy and refusal-rate gates clear against the eval set built in the Readiness phase. The system then moves to assist mode, in which the AI's output is presented to the reviewer as a recommendation with sources attached. Assist mode runs until the operational metrics stabilise and the reviewer override rate drops below the agreed threshold. Only then, and only where the workflow allows it, does the system move to autonomous mode. Each transition is gated by a written deliverable that the client signs.
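The gating can be stated as a small function. The thresholds below are illustrative, not our standard numbers; the real gates are the ones agreed in the readiness brief, and every transition additionally requires a signed deliverable.

```python
from enum import Enum


class Mode(Enum):
    SHADOW = "shadow"          # AI output logged; human decision takes effect
    ASSIST = "assist"          # AI output shown as a recommendation with sources
    AUTONOMOUS = "autonomous"  # AI acts; humans audit


def next_mode(mode: Mode, accuracy: float, refusal_rate: float,
              override_rate: float, deliverable_signed: bool) -> Mode:
    """Transitions only move forward, one step at a time, and only when gated."""
    if not deliverable_signed:
        return mode
    if mode is Mode.SHADOW and accuracy >= 0.95 and refusal_rate <= 0.05:
        return Mode.ASSIST
    if mode is Mode.ASSIST and override_rate <= 0.02:
        return Mode.AUTONOMOUS
    return mode


assert next_mode(Mode.SHADOW, 0.97, 0.03, 1.0, True) is Mode.ASSIST
```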


Rollback paths. Every release has a rollback path that can be executed in under five minutes by an on-call engineer who was not involved in the release. The rollback path is rehearsed at least once before the release ships. We treat AI deployments as two-way doors, not one-way. The first time the model misbehaves in production, the available action must be more granular than turning the system off.

Idempotent writes. Any tool call that mutates state - a database write, an outbound message, a payment instruction - is idempotent against a request key derived from the originating event. Retries do not double-write. This is standard distributed-systems hygiene, which AI pilots routinely skip and AI production systems routinely require.
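A minimal sketch of the pattern, with an in-memory dict standing in for the uniqueness constraint a real store would enforce:

```python
import hashlib


def request_key(event_id: str, action: str) -> str:
    """Deterministic key derived from the originating event, so retries collide."""
    return hashlib.sha256(f"{event_id}:{action}".encode()).hexdigest()


_processed: dict[str, str] = {}  # stand-in for a unique index in the real store


def write_once(event_id: str, action: str, payload: str) -> str:
    key = request_key(event_id, action)
    if key in _processed:                    # retry path: return the original result
        return _processed[key]
    _processed[key] = f"written:{payload}"   # first and only write
    return _processed[key]


assert write_once("evt-1", "notify", "x") == write_once("evt-1", "notify", "x")
```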

Long-running stateful orchestration. The model is a participant in the workflow, not its spine. The state machine that governs the case lives in a deterministic orchestration layer. The model is invoked at decision points, returns a structured output, and the orchestrator decides what happens next. This is the architectural pattern that lets the engineering team test the workflow independently of the model and lets the compliance team test the policy logic independently of the prompt.
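A sketch of the separation. The state names and transitions are hypothetical; the essential property is that the transition table, not the model, owns what happens next.

```python
from typing import Callable

# Deterministic transition table: (current_state, model_verdict) -> next_state.
TRANSITIONS = {
    ("received", "complete"): "classified",
    ("received", "incomplete"): "awaiting_documents",
    ("classified", "approve"): "approved",
    ("classified", "refer"): "human_review",
}


def step(state: str, model_call: Callable[[str], str]) -> str:
    """Invoke the model at a decision point; the orchestrator decides what it means."""
    verdict = model_call(state)  # structured output, schema-validated upstream
    # Unknown or out-of-policy verdicts escalate rather than act.
    return TRANSITIONS.get((state, verdict), "human_review")


assert step("received", lambda s: "complete") == "classified"
assert step("classified", lambda s: "nonsense") == "human_review"
```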

The NIST AI Risk Management Framework version 1.0 and its Generative AI Profile (NIST.AI.600-1) describe the same shape from a control perspective rather than an engineering one. The four functions - GOVERN, MAP, MEASURE, MANAGE - map cleanly onto the architectural patterns above: GOVERN to versioning and change control, MAP to the readiness-phase classification, MEASURE to the observability layer, MANAGE to the rollout discipline and rollback paths [6]. We use the framework as the cross-reference between our engineering artefacts and the client's risk register.

C - Citations and Evidence: the audit trail as the deliverable

Citations and Evidence is the discipline that turns the system's outputs into a defensible record. The principle is simple to state and demanding to implement: every output the system produces should be traceable to the input features that drove it, the model version that generated it, the prompt version it ran under, the policy rule that gated it, and the reviewer who approved any override. The audit trail is not a logging feature added at the end; it is the deliverable.

The Federal Reserve's Supervisory Letter SR 11-7 on model risk management has been the canonical regulatory reference on this point for over a decade [7]. SR 11-7 expects the institution to be able to reconstruct the inputs, processing, assumptions, and outputs of any model-driven decision under independent validation. The European Banking Authority's Guidelines on Internal Governance carry the same expectation into the EU sectoral context, with explicit language on the documentation, validation, and oversight of automated decision-making [4]. Neither was written for generative AI, but both apply to it the moment a generative system is in the decision path.

A worked example. A banking decision-support system we have shipped under TRACE evaluates loan application narratives and surfaces a structured rationale that a credit officer reviews before approval. Each generated rationale is stored alongside a snapshot of the input feature set as it stood at the moment of inference, the hash of the prompt template, the identifier of the retrieval set returned by the vector store, the model and model-version identifier of the LLM endpoint, the policy rule that gated the rationale into the reviewer queue, and, after review, the credit officer's decision and rationale. An SR 11-7 validation team can pick any decision from cold and reconstruct exactly what the system saw, what it produced, and what the human decided. The reconstruction takes minutes, not weeks.
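A sketch of the from-cold reconstruction. The record keys and stores are hypothetical stand-ins for the real log and document stores; the constraint is that nothing in the function re-derives anything.

```python
def reconstruct(decision_id: str, log_store: dict, doc_store: dict) -> dict:
    """Rebuild a decision purely from records written at inference time."""
    rec = log_store[decision_id]
    return {
        "inputs": rec["feature_snapshot"],
        "prompt_hash": rec["prompt_template_hash"],
        "sources": [doc_store[(d, rec["retrieval_snapshot"])]
                    for d in rec["retrieval_set_ids"]],
        "model": (rec["model_id"], rec["model_version"]),
        "gating_rule": rec["policy_rule_id"],
        "human_decision": rec.get("reviewer_decision"),
    }


log_store = {"dec-001": {
    "feature_snapshot": {"income": 42_000, "dti": 0.31},
    "prompt_template_hash": "sha256:9f1c",  # hash logged at inference time
    "retrieval_set_ids": ["doc-9"],
    "retrieval_snapshot": "snap-2025-02-01",
    "model_id": "provider/model", "model_version": "2025-01-15",
    "policy_rule_id": "rule-cr-7", "reviewer_decision": "approved",
}}
doc_store = {("doc-9", "snap-2025-02-01"): "credit policy section 4.2, as of 2025-02-01"}
print(reconstruct("dec-001", log_store, doc_store))
```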

The same discipline applies in other regulated contexts with different vocabulary. In healthcare, the audit trail anchors clinical decision support to the provenance of the underlying evidence. In insurance, it anchors claims triage to the policy rules that were in force at the date of loss. In debt collection, it anchors a contact decision to the consent posture of the data subject and the regulatory posture of the jurisdiction. The pattern is constant: the output is only as defensible as the chain that produced it.


Two engineering details matter. First, citations must be part of the output schema, not a footnote. If the model is allowed to emit free-form text without citations, the citations are negotiable, which means they are not real. Second, the retrieval layer must be a versioned source-of-truth store, not a live re-derivation. When the regulator asks what the system knew on the day of the decision, the answer must come from the snapshot logged at inference time, not from a re-run of today's index. The two details together are what make Citations and Evidence operational rather than aspirational.
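A minimal sketch of the first detail - citations as a schema requirement rather than a convention. The class names are illustrative; the point is that an uncited output fails validation before it ever reaches a reviewer.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Citation:
    source_doc_id: str     # a document in the versioned source-of-truth store
    snapshot_version: str  # what the system knew on the day of the decision


@dataclass(frozen=True)
class Rationale:
    text: str
    citations: tuple[Citation, ...]

    def __post_init__(self):
        # Citations are part of the schema, not a footnote: no citations, no output.
        if not self.citations:
            raise ValueError("rationale rejected: no citations attached")


ok = Rationale("Income verified against payslips [doc-9].",
               (Citation("doc-9", "snap-2025-02-01"),))
```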

TRACE in 8 weeks: a typical engagement timeline

A typical TRACE engagement runs eight weeks from kickoff to assist-mode go-live. The shape varies with integration surface and regulatory complexity, but the gates are constant.

Week 1 to 2 - Discovery and readiness. Workflow audit. Thirty-day case sample. Baseline metrics. AI Act risk classification. Article 22 analysis. Eval set construction. Written readiness brief signed by the client's business owner, DPO, and engineering lead. No code is written in this phase. The deliverable is the brief.

Week 3 to 5 - Architecture and first eval harness. System design against the six architectural patterns. Versioning policy in place. Observability layer wired up. First integration with the client's source-of-truth stores. The eval harness runs continuously against the eval set built in the readiness phase. The deliverable is a system that passes the eval gates in a non-production environment, with the architecture diagram and technical documentation aligned to Annex IV of the EU AI Act.

Week 6 to 7 - Shadow-mode rollout. The system runs alongside the human reviewer in production. Outputs are logged but not actioned. We measure the live accuracy, refusal rate, and operational cost against the targets set in the brief. Drift, edge cases, and reviewer feedback feed back into the eval set. The deliverable is a shadow-mode performance report and a go/no-go recommendation for assist mode.


Week 8 - Assist-mode rollout and handover. The system moves to assist mode. The reviewer sees the AI output as a recommendation with citations attached. Override rates are tracked. Runbooks, on-call procedures, rollback rehearsals, and the technical documentation pack are handed to the client's team. The system goes live with a regulator-defensible audit trail. The deliverable is a working system, the documentation, and a written operate-phase plan covering drift review cadence, model-versus-baseline accuracy reviews, and re-classification triggers.

From week eight onward, the engagement is in operate mode. The patterns remain constant. The discipline does not relax. Most operate-mode work is unglamorous - eval-set refresh, model-version upgrades, drift triage, periodic re-classification - and that is exactly the point. A regulated AI system that survives is a regulated software system that is operated like one.

Why TRACE is published, not proprietary

TRACE is public on purpose. The methodology page is indexable. Every blog post that extends the methodology, including this one, is indexable. The conformity templates are linked from /tools. The risk classifier that operationalises the Trust pillar is free at /tools/eu-ai-act-classifier. There are two reasons for this stance, and they are deliberate.

First, buyers should be able to evaluate a methodology before signing anything. A regulated AI engagement is a high-trust, high-cost commitment. The buyer is entitled to know what they will be getting, in operational detail, before the first invoice. We disagree with the consulting-firm convention of treating methodology as a proprietary asset shown only under an NDA. If the methodology is sound, publication strengthens it; if the methodology is unsound, no NDA will save it.

Second, regulator-facing artefacts should not be a vendor secret. The conformity-assessment templates, the risk-classification logic, the evidence-chain schemas - these are documents the regulator could in principle write themselves. Treating them as proprietary creates a moat against the wrong opponent. Our moat is delivery quality, not artefact obscurity.

The contrast with the major consulting frameworks is instructive. Deloitte's Trustworthy AI framework and EY's Trusted AI framework are broadly aligned with TRACE in their principles - fairness, transparency, robustness, accountability, privacy, security - and they are useful at the boardroom level. They are principles a consulting firm asserts. TRACE specifies the artefacts a build firm produces. The difference is operational. A boardroom principle does not tell the engineering team which prompt-version field to log; an engineering methodology does. ISO/IEC 42001:2023 sits one layer below the consulting frameworks, in the management-system standard space, and TRACE aligns with it cleanly: an organisation running ISO 42001 properly will find most of the AI Act's documentation requirements falling out as a by-product, and the TRACE artefacts feed directly into the ISO management-system records [8].


The published-methodology stance also has a practical second-order effect we did not anticipate. Buyers arrive at the discovery call already having read the methodology, the risk classifier output for their workload, and the relevant blog post. The conversation starts at the level of "here is the workflow, here is the risk class, here is what the readiness brief should cover," not at the level of "what is AI consulting." That shortens the sales cycle and improves the quality of the engagement from week one.

What TRACE deliberately does not cover

TRACE is an engineering methodology for shipping AI in regulated industries. It is not a substitute for general business consultancy and does not pretend to be. We list the out-of-scope areas explicitly so prospective clients can decide whether they need a different partner alongside us.

Vendor selection. We are vendor-agnostic. The client chooses the LLM endpoint, the embedding model, the vector store, the orchestration layer, and the cloud region within the constraints of the Trust pillar. We will advise on trade-offs and we will refuse architectures we cannot audit, but we do not resell licences, we do not run referral programmes with model providers, and we do not have a recommended stack we push by default. The reason is alignment: the buyer's interest is in the right stack for their workload, and the only way to keep our advice clean is to keep our incentives clean.

Brand strategy. If the AI system is part of a customer-facing product, the brand-strategy work around it is not ours to do. We will integrate cleanly with the brand and the design system the client already runs, and we will document the integration, but we do not lead brand work.

Change management beyond AI-specific stakeholders. We coach the team that operates the AI system. We write the runbooks and the on-call procedures. We do not run the broader organisational change management programme that may surround a regulated AI deployment. That is a different competency, often well-served by the client's existing partners, and pretending otherwise would dilute both functions.


Organisational coaching. We are an engineering methodology, not a business consultancy. The TRACE pillars are about how the system is built and operated. They are not about how the organisation reports to its board, designs its incentive structures, or runs its strategic planning. Where those questions matter for the AI engagement, we will flag them; we will not staff them.

Where to read more

The canonical short definition of TRACE lives at /methodology. It is the page to send to a colleague who needs the methodology in two minutes rather than twenty. The four pillars are stated, the delivery model is illustrated, and the FAQ covers the questions that arise most often.

The free risk classifier that implements pillar T from this post lives at /tools/eu-ai-act-classifier. Run a candidate workflow through it before scoping a project; the output is a risk-tier classification under Articles 5, 6 and 50 of the EU AI Act, with the corresponding obligations enumerated.

The regulatory checklist that operationalises the Trust pillar at the working level lives at /answers/eu-ai-act-implementation-checklist. The concrete RAG application of TRACE - source-of-truth separation, citation contracts, retrieval pinning, deterministic post-processing - is described at /answers/how-to-deploy-rag-regulated-enterprise.

The engineer behind TRACE is described at /about/founder. The companion essay on what auditable architecture actually looks like, including the five recurring mistakes that keep enterprise AI stuck in pilot, lives at /blog/building-ai-systems-that-survive-audit.


If you have a workload to scope, the discovery-call form is the path in. We reply within one business day and run the readiness audit before any code is written. The audit is paid, time-boxed, and produces a written brief regardless of whether the project proceeds to build. That is how we keep the methodology honest.

Frequently asked questions

Is TRACE a framework I can apply without hiring Impetora?
Yes. The methodology is published precisely so it can be applied without us. The pillar definitions, the engagement structure, and the architectural patterns are described in this essay and on the methodology page in enough detail for an in-house engineering team to use them as a checklist. We retain a delivery edge because we have applied the methodology repeatedly across regulated industries, but the methodology itself is not a trade secret.
How does TRACE relate to the EU AI Act conformity assessment?
TRACE produces, as a by-product of normal delivery, the artefacts that the conformity assessment requires for high-risk systems under Article 6 and Annex III: the risk management system under Article 9, the data governance documentation under Article 10, the technical documentation under Article 11 with the content list at Annex IV, the automatic event logging under Article 12, the deployer-transparency information under Article 13, the human-oversight design under Article 14, and the accuracy and robustness records under Article 15. Engagements that follow TRACE are therefore conformity-ready by construction rather than retrofitted before audit.
Does TRACE work for non-EU jurisdictions?
The architectural patterns are jurisdiction-neutral. The Trust pillar is tuned for the EU regulatory perimeter by default - EU AI Act, GDPR, sectoral guidelines from the EBA and EIOPA, the EU Cloud Code of Conduct - and we adapt it for clients deploying in jurisdictions with comparable expectations, including the UK GDPR, the Swiss FADP, and US sectoral regimes such as SR 11-7 for banks. The other three pillars apply unchanged.
What if my workload is not high-risk under the AI Act?
The full TRACE discipline still applies, but the conformity-assessment scaffolding does not. Most non-high-risk workloads still touch GDPR Article 22, sectoral guidance, or internal governance expectations that benefit from the Readiness, Architecture, and Citations and Evidence pillars. The Trust pillar is scaled to the actual obligations rather than the maximum ones. We document the scaling explicitly so the client and their compliance function can review it.
How is TRACE different from a Big Four AI framework?
Big Four frameworks state principles a consulting firm asserts. TRACE specifies artefacts a build firm produces. The difference is operational. A principle does not tell the engineering team which prompt-version field to log; a methodology does. We respect the Big Four frameworks at the boardroom level and align with them where they apply, but we do not substitute for them and they do not substitute for us.
Can TRACE be applied to a system that is already in production?
Yes, with caveats. We run a remediation engagement against the existing system: a Trust-pillar gap analysis against the AI Act and GDPR posture, a Readiness-pillar reconstruction of the eval set and the workflow documentation, an Architecture-pillar assessment of the versioning, observability, and rollback discipline, and a Citations and Evidence audit of the existing log schema. The remediation produces a written gap list and a sequenced plan to close the gaps. Costs scale with the gap count; we have seen remediation engagements run from four to sixteen weeks.
What does TRACE require from the client's team?
A named business owner empowered to sign the readiness brief. A DPO or equivalent function able to review the data-flow documentation. An engineering counterpart able to integrate with the client's source-of-truth stores and observability stack. A reviewer cohort large enough to operate shadow mode credibly. Where any of these are missing, we say so during the readiness audit and either pause until the gap is closed or recommend a different partner.

Sources cited

  1. The Forrester Wave: Generative AI Services, Q4 2024. Forrester Research, 2024-11. https://www.forrester.com/report/the-forrester-wave-tm-generative-ai-services-q4-2024/RES181332
  2. Artificial Intelligence Index Report 2024. Stanford Institute for Human-Centered AI, 2024-04. https://aiindex.stanford.edu/report/
  3. Regulation (EU) 2024/1689 (Artificial Intelligence Act), Articles 6, 9, 14 and Annex III, Annex IV. European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
  4. Guidelines on Internal Governance under Directive 2013/36/EU (EBA/GL/2021/05). European Banking Authority, 2021-07. https://www.eba.europa.eu/regulation-and-policy/internal-governance/guidelines-on-internal-governance-revised
  5. SCHUFA Holding (C-634/21) judgment on automated credit scoring under GDPR Article 22. Court of Justice of the European Union, 2023-12-07. https://curia.europa.eu/juris/document/document.jsf?docid=280426
  6. AI Risk Management Framework Generative AI Profile (NIST.AI.600-1). NIST, 2024-07. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
  7. Supervisory Letter SR 11-7 on Guidance on Model Risk Management. Board of Governors of the Federal Reserve System, 2011-04. https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm
  8. ISO/IEC 42001:2023 Artificial intelligence management system. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html