Impetora
Healthcare - Decision support

Decision-support AI for healthcare

Decision-support AI for healthcare is the practice of using AI to score, rank, and recommend inside a regulated decisioning workflow with human-in-the-loop sign-off - built for the regulatory shape healthcare actually operates under. Healthcare AI sits between EU MDR (when the software qualifies as a medical device), GDPR Article 9 special-category data rules, and WHO ethics guidance that defaults to assistive-only positioning when the system could influence a clinical decision. Every output Impetora ships in this category carries a citation back to its source, so a reviewer can rebuild any decision in seconds.

Article 9 - GDPR special-category data controls for health
100% - Decisions with reason codes attached
100% - Recommendations signed by a regulated person
6 wk - First-pilot deployment window (incl. shadow mode)
Citation-grounded decision support, scoped to the regulatory shape healthcare actually operates under.
Section 01

What does decision support in healthcare actually look like?

Decision-support AI scores, ranks, and recommends inside a regulated workflow without ever taking the final decision automatically. The architectural guarantee is that the human who signs the decision sees the reason codes, the evidence chain, and the model version that produced the recommendation, before they sign.

Healthcare AI sits between EU MDR (when the software qualifies as a medical device), GDPR Article 9 special-category data rules, and WHO ethics guidance that defaults to assistive-only positioning when the system could influence a clinical decision.

The pipeline is the same shape across every Impetora decision support build: Case ingest -> Feature extraction -> Model scoring -> Evidence assembly -> Reason codes -> Human-in-the-loop sign-off -> Audit trail. Each stage is observable, each stage writes to the audit log, and each stage has a measurable failure mode the readiness sprint defines before any model is selected.
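As an illustration only - with invented stage names and a dict-based log, not Impetora's production code - the idea that each stage is observable and writes to the audit trail can be sketched in a few lines of Python:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Case:
    case_id: str
    raw: dict
    audit: list = field(default_factory=list)

def log(case, stage, detail):
    # Every stage writes an observable event to the per-case audit trail.
    case.audit.append({"stage": stage, "detail": detail,
                       "at": datetime.now(timezone.utc).isoformat()})

def extract_features(case):
    features = {k: v for k, v in case.raw.items() if v is not None}
    log(case, "feature_extraction", {"features": sorted(features)})
    return features

def score(case, features, model_version="demo-0.1"):
    # Placeholder rule standing in for a real model.
    value = min(1.0, 0.1 * len(features))
    log(case, "model_scoring", {"score": value,
                                "model_version": model_version})
    return value

case = Case("C-001", {"age": 61, "hb": 9.2, "notes": None})
features = extract_features(case)
s = score(case, features)
```

Each later stage (evidence assembly, reason codes, sign-off) would append to the same trail, which is what makes every stage's failure mode measurable.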

Section 02

What regulations apply?

EU AI Act Article 6 plus EU MDR where the output influences diagnosis or treatment - clinical decision support is typically a Class IIa or IIb medical device; GDPR Article 9; and the WHO's six ethics principles on autonomy and human oversight. [1]

Where it influences diagnosis or treatment, the build is most often a regulated medical device under EU MDR. This is rarely the right starting point. We typically build the assistive scaffolding first and refer the regulated component to specialised medical-device partners.

Every system Impetora ships carries the AI register entry, the risk classification, and the underlying analysis with it. A regulator or an internal audit team sees the full chain on a single page.

Section 03

What does TRACE require here?

Trust. EU data residency, a documented EU AI Act risk classification, GDPR compliance by default, and the sectoral regulator framing recorded inside the AI register.

Readiness. Healthcare workflows are sampled for at least 30 days before a model is selected. We baseline current handle time, error rate, and escalation patterns, and document the workflow the AI sits inside.

Architecture. Versioned prompts, evaluation suites, shadow-mode rollout. Only what passes evaluation reaches production. ISO/IEC 42001-aligned governance scaffolding [5].

Citations. Every output - extracted field, drafted response, retrieved passage, decision recommendation - links back to the source it came from, the model version that produced it, and the timestamp. The audit trail rebuilds in seconds.
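A minimal sketch of what such a provenance record could look like - the field names and URIs here are hypothetical, not the shipped schema:

```python
from datetime import datetime, timezone

def record_output(output, source_uri, model_version):
    # Provenance attached to every output: source, model version, timestamp.
    return {"output": output, "source": source_uri,
            "model_version": model_version,
            "at": datetime.now(timezone.utc).isoformat()}

def rebuild(trail, source_uri):
    # A reviewer filters the trail by source to rebuild a decision.
    return [r for r in trail if r["source"] == source_uri]

trail = [
    record_output("eGFR below threshold", "doc://labs/123#egfr", "demo-0.1"),
    record_output("No contraindication found", "doc://history/9", "demo-0.1"),
]
hits = rebuild(trail, "doc://labs/123#egfr")
```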

Section 04

What can go wrong and how do we prevent it?

The happy path first: each case lands with its raw features. The model scores it and produces reason codes. The evidence assembly step pulls the citations behind each contributing feature. The regulated person reviews the full package and signs off (or rejects). The audit log captures the model version, prompt, retrieval source, reason codes, and signer at the moment the decision was taken.

The failure modes we engineer against on every healthcare build: hallucinated content surfaces (mitigated by grounded retrieval and a "no source, no answer" fallback), drift over time (mitigated by quarterly drift reports against the eval set), permission leakage (mitigated by ACL-aware retrieval), and silent regression after a model swap (mitigated by shadow-mode redeploys with eval delta sign-off).
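The "no source, no answer" fallback can be illustrated with a toy retriever - the corpus, URIs, and keyword matching below are invented for the example:

```python
def grounded_answer(question, retrieve):
    # "No source, no answer": if retrieval returns nothing to ground
    # the response on, refuse rather than let the model guess.
    passages = retrieve(question)
    if not passages:
        return {"answer": None, "reason": "no_source"}
    return {"answer": passages[0]["text"],
            "citations": [p["uri"] for p in passages]}

# Hypothetical in-memory corpus and naive keyword retriever.
corpus = [{"uri": "doc://guide#dosing",
           "text": "Adjust dose for renal function."}]

def retrieve(question):
    return [p for p in corpus if "dose" in question.lower()]

grounded = grounded_answer("What about the dose?", retrieve)
refused = grounded_answer("Unrelated question", retrieve)
```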

Case ingest -> Feature extraction -> Model scoring -> Evidence assembly -> Reason codes -> Human-in-the-loop sign-off -> Audit trail
The decision support pipeline we ship in production.
Section 05

What gets shipped in a Lighthouse build?

Phase one (weeks 1-2) is the readiness sprint: data sampling, baseline measurement, AI Act risk classification, scope sign-off. Phase two (weeks 3-4) is the build and shadow-mode rollout, where the system runs alongside the healthcare team with output logged but not actioned. Phase three (from week 5) extends to production, additional document categories or channels or knowledge domains, and the recurring drift and accuracy review that keeps the system honest.

Pilot engagements at this scope start at EUR 25,000 for a single, well-scoped category. Full production deployments typically land between EUR 60,000 and EUR 150,000 depending on integration complexity, evaluation-set breadth, and the regulatory documentation depth your team requires. Submit a project for a custom estimate.

Section 06

How does this compare to off-the-shelf decision support tools?

Off-the-shelf platforms (UiPath, Salesforce Einstein, ServiceNow Now Assist, Glean, and Microsoft Copilot's healthcare variants) work well when your workflow is close to their reference customer. They break down when healthcare regulatory documentation has to be produced for the specific decision the system took, on the specific document or interaction it took it on, against the specific model version that was running at the time. The combination of EU AI Act risk classification, sectoral requirements (EU MDR, WHO guidance), and your own internal control framework rarely fits a vendor template. Custom builds are how that fit is achieved.

Honesty

What we don't build

We will not let the system take the final decision

The architecture is designed so the model cannot be the signer. The regulated person in your healthcare workflow signs every decision, and the audit log records who signed, on what evidence, and against which model version.

We will not ship a model we cannot explain

Reason codes for every recommendation are a hard requirement. If a model variant scores higher on the eval set but cannot produce reason codes the reviewer trusts, we ship the explainable variant.

We will not skip the shadow-mode rollout

The system runs alongside the human team for at least four weeks with output logged but not actioned. We measure the disagreement rate and the underlying reasons before any decision is automated.
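Measuring the disagreement rate during shadow mode is conceptually simple; a sketch, assuming recommendations and human decisions are logged as pairs:

```python
def disagreement_rate(pairs):
    # pairs: (model_recommendation, human_decision) tuples collected
    # during shadow mode, where output is logged but never actioned.
    if not pairs:
        return 0.0
    return sum(1 for model, human in pairs if model != human) / len(pairs)

shadow_log = [
    ("approve", "approve"),
    ("escalate", "approve"),  # the one disagreement in this toy log
    ("approve", "approve"),
    ("reject", "reject"),
]
rate = disagreement_rate(shadow_log)
```

The interesting work is not the arithmetic but the review of the underlying reasons behind each disagreement before anything is automated.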

Frequently asked questions

Is decision support for healthcare high-risk under the EU AI Act?

Where it influences diagnosis or treatment, typically yes - and the build is most often also a regulated medical device under EU MDR. This is rarely the right starting point. We typically build the assistive scaffolding first and refer the regulated component to specialised medical-device partners.

Where is the data processed and stored?

By default, processing and storage runs in EU regions on infrastructure under EU jurisdiction. We support specific regional pinning when a regulator or contract requires it. Original documents and interaction logs land in immutable EU object storage with hashes recorded in the audit log. We do not train any model on your data unless you ask us to and the contract permits it.

How do you handle the regulator audit trail?

Every output the system produces - extracted field, drafted response, retrieved passage, decision recommendation - writes a structured event to a queryable, append-only audit log with the model version, prompt, retrieval source, confidence, and the human signer (where one exists) at the moment the action was taken. GDPR Article 9 special-category controls and pseudonymisation are applied on top of that log shape. The trail rebuilds any decision in under 10 seconds.
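One way an append-only log can be made tamper-evident is hash chaining, where each entry carries the hash of its predecessor. A toy sketch, not the production log format:

```python
import hashlib
import json

class AuditLog:
    # Append-only log made tamper-evident by hash chaining: editing any
    # past event breaks every hash after it.
    def __init__(self):
        self.events = []

    def append(self, event):
        prev = self.events[-1]["hash"] if self.events else "genesis"
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.events.append({"event": event, "prev": prev, "hash": digest})

    def verify(self):
        prev = "genesis"
        for entry in self.events:
            payload = json.dumps(entry["event"], sort_keys=True)
            digest = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append({"action": "sign_off", "signer": "reviewer-1",
            "model_version": "demo-0.1"})
log.append({"action": "draft", "signer": None,
            "model_version": "demo-0.1"})
```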

Can it work with our existing systems?

Yes. The delivery layer sits in front of the system of record you already use - case management, claims platform, EHR, PACS, hospital information system, ticketing, document repository, contract lifecycle - and writes back through documented APIs or queue-based bridges with idempotent writes. The audit log writes regardless of where the data lands.
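Idempotent write-back can be sketched with a stand-in system of record keyed by an idempotency key, so a queue redelivery never double-applies a write - all names here are illustrative:

```python
class SystemOfRecord:
    # Stand-in for an EHR / case-management API behind a queue bridge.
    # Writes carry an idempotency key, so a redelivered message is
    # recognised and ignored instead of applied twice.
    def __init__(self):
        self.records = {}
        self.applied = set()

    def write(self, idempotency_key, record_id, payload):
        if idempotency_key in self.applied:
            return "duplicate_ignored"
        self.records[record_id] = payload
        self.applied.add(idempotency_key)
        return "written"

sor = SystemOfRecord()
first = sor.write("evt-42", "case-7", {"status": "signed"})
retry = sor.write("evt-42", "case-7", {"status": "signed"})  # redelivery
```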

What does this cost?

Pilot engagements at this scope start at EUR 25,000 for a single, well-scoped category. Full production deployments typically land between EUR 60,000 and EUR 150,000 depending on integration complexity, evaluation-set breadth, and the regulatory documentation depth your team requires. We quote against your specific scope before any code is written.

How long does a deployment take?

A first pilot reaches production-grade behaviour in 6 weeks: 1-2 weeks readiness, 1-2 weeks build, then a minimum 4-week shadow-mode period before any decision is automated. Subsequent decision categories add 2-3 weeks each.

Book a discovery call

Submit a project for a custom estimate. We will quote against your specific healthcare decision support scope before any code is written.


Tell us what you would like to build. We reply within one business day.

30-minute call. Free of charge. No obligation.