Impetora
Industry: Healthcare

AI for healthcare teams, from clinical document structuring to assistive decision support.

AI for healthcare teams is the design and deployment of custom systems that extract clinical data, triage patient communications, support documentation and coding, and surface decision-support evidence while preserving the audit trail that clinicians, regulators and data-protection authorities require. Impetora builds these systems for hospitals, clinics, payers and digital health platforms, with classification against EU AI Act risk tiers, alignment to the WHO ethics and governance guidance for AI for health, and GDPR Article 9 controls for special-category data.

Annex III: EU AI Act high-risk classification for biometric and healthcare AI
Article 9: GDPR special-category controls embedded by default
11 days: Median pilot deployment for non-SaMD scopes
100%: Outputs with reviewer-traceable audit pointers
01

How AI is reshaping healthcare operations in 2026

The bottleneck in healthcare AI is not capability. It is the boundary between assistive and autonomous, and the documentation that proves which side a system sits on.

Healthcare organisations sit on the largest unstructured-text problem in any regulated industry: discharge letters, referral notes, lab reports, prior-authorisation forms, consent paperwork, payer correspondence. Most of it never reaches a structured field, and the cost of that gap shows up as documentation burden, coding error, and avoidable delay in care.

The WHO 2024 guidance on the ethics and governance of AI for health sets out six core principles - protecting autonomy, promoting safety, ensuring transparency, fostering responsibility, ensuring inclusiveness, promoting responsive AI - that any production deployment is expected to evidence. The FDA AI/ML SaMD action plan and the EMA reflection paper on AI in the medicines lifecycle draw the same boundary on the regulated side: anything that contributes to a clinical decision sits on a regulated device pathway; anything that supports operations around the decision does not.

Impetora ships systems that sit firmly on the assistive side of that boundary, with explicit human sign-off, special-category data controls under GDPR Article 9, and the documentation a Notified Body or hospital information-governance committee will ask to see before go-live.

AI in health holds great promise, but its deployment must be governed by ethics and human rights. Transparency, accountability and human oversight are non-negotiable.
World Health Organization, Ethics and governance of AI for health (2024)
02

Use cases we deliver for healthcare teams at hospitals, clinics, payers and digital health platforms

Clinical document extraction and structuring

Discharge letters, referrals, and lab reports arrive as PDFs and scans. Clinicians and admin staff re-key the same fields into the EHR, which drives documentation burden and introduces transcription error.

70%: Reduction in re-keying time, with field-level source pointers preserved

Patient triage automation across digital channels

Inbound patient messages, portal forms, and email queues mix urgent clinical questions with admin requests. Triage staff spend most of the day routing rather than resolving.

5x: Faster routing to the right clinician or admin queue, with clinician override surfaced first

Clinical decision support with explainability

Guideline lookups, drug-interaction checks, and protocol references are scattered across PDFs and intranet pages. Clinicians spend cognitive cycles finding the source rather than weighing the decision.

Assistive only: Reviewer-traceable evidence surfaced beside the clinician, never autonomous

Medical coding and billing automation

ICD-10, CPT, and DRG coding from physician notes is high-volume, error-sensitive, and a frequent source of payer denials and audit exposure. Coder time scales linearly with case volume.

0.5%: Code-level error rate after evaluation tuning, with note-level audit pointers

Compliance and consent-tracking automation

Consent forms, DPIAs, and information-governance approvals live across SharePoint, email and paper. Demonstrating that a specific dataset has the right consent for a specific use is slow and error-prone at audit time.

Audit-ready: Consent and lawful-basis chain reproducible per record, with citation to source

Predictive care utilisation forecasting

Bed occupancy, theatre scheduling, and outpatient demand fluctuate weekly. Operations teams forecast in spreadsheets that lag reality, which drives idle capacity and cancellation.

Weekly: Operational forecasts with reasoning traces and named confidence bounds
03

How TRACE applies to healthcare AI

T: Trust

We classify every system against GDPR Article 9 special-category controls and the EU AI Act Annex III high-risk scope for biometric and healthcare-adjacent AI. Anything contributing to a clinical decision is treated as SaMD on a regulated pathway from day one, never bolted on after build.
R: Readiness

Before any model is selected, we run a 1 to 2 week workflow audit. We sample 30 days of real records, baseline current handle time and error rate, map data flows for DPIA and information-governance review, and document the workflow the AI will sit inside.
A: Architecture

FHIR-native data exchange where the source system supports it, immutable storage of source documents in EU regions, versioned prompts with eval suites tied to clinician-reviewed gold sets, shadow-mode rollouts where the AI runs alongside the clinician with output logged but not actioned, and pseudonymisation at the boundary by default.
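
As a minimal sketch of the shadow-mode pattern described above: the model runs on real documents, every output is logged with its prompt version, and nothing is written to the EHR. The model client, document object, and field names here are illustrative assumptions, not a description of any specific deployment.

```python
from datetime import datetime, timezone

PROMPT_VERSION = "discharge-extract-v3"  # versioned prompt, pinned per run

def shadow_mode_run(document, model_client, audit_log):
    """Run the model alongside the clinician: log everything, action nothing."""
    suggestion = model_client.extract_fields(document.text, prompt=PROMPT_VERSION)
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "document_id": document.pseudonym_id,  # pseudonymised at the boundary
        "prompt_version": PROMPT_VERSION,
        "suggestion": suggestion,
        "actioned": False,  # shadow mode: nothing is written to the EHR
    })
    return suggestion  # surfaced for review only, never auto-applied
```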
C: Citations and evidence

Every output links to the source document, the page, the prompt version, and the model run. A clinician or coder signing off on an exception can trace the suggestion to its cause in under 10 seconds, and the audit log is the artefact a Notified Body or DPA can inspect.
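
One way to make those pointers concrete: an append-only record per model output, where each entry hashes the previous one so tampering is detectable on inspection. The field names are illustrative, assuming the pointers listed above.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AuditRecord:
    document_id: str     # pseudonymised pointer to the source document
    page: int            # page the evidence was drawn from
    prompt_version: str  # exact prompt version used for this run
    model_run_id: str    # identifier of the specific model invocation
    prev_hash: str       # hash of the previous record: append-only chain

    def record_hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```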
04

Regulatory considerations for healthcare AI

Healthcare AI sits inside multiple overlapping regulatory frameworks, including GDPR Article 9, the EU AI Act, the Medical Device Regulation, FDA SaMD guidance, and WHO ethics principles. We map every engagement to the relevant authority before code is written.

  1. EU AI Act Annex III - high-risk classification

    AI systems used for biometric categorisation (Annex III point 1) and certain healthcare-adjacent uses, such as emergency patient triage (Annex III point 5), are high-risk under the EU AI Act, triggering mandatory conformity assessment, risk management, data governance, transparency, and human-oversight controls.
    Source: EUR-Lex

  2. EU MDR - regulated SaMD pathway

    Software intended for diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease falls under the Medical Device Regulation. We scope conformity-assessment work into engagements where this applies.
    Source: EUR-Lex

  3. GDPR Article 9 - special-category health data

    Processing of health data is prohibited unless an Article 9(2) condition applies. Explicit consent, public-interest health, and the provision of healthcare are the common lawful bases. We embed these controls into the data flow at the boundary.
    Source: GDPR-Info

  4. WHO - Ethics and governance of AI for health (2024)

    Six core principles: protecting autonomy, promoting safety, ensuring transparency, fostering responsibility, ensuring inclusiveness, promoting responsive AI. Production deployments are expected to evidence each principle.
    Source: World Health Organization

  5. FDA - AI/ML SaMD action plan

    Regulatory direction for software functions that meet the device definition: a Total Product Lifecycle approach, Good Machine Learning Practice principles, and predetermined change control plans.
    Source: FDA

  6. NICE - evidence standards framework

    UK procurement-aligned evidence expectations for digital health technologies, including AI components. We use the framework to scope evaluation rigour proportionate to clinical risk tier.
    Source: NICE
05

How we typically engage

Three phases. The discovery sprint always comes first, and the cost of doing it is recovered the moment scope is locked correctly and the regulated boundary is named explicitly.

  1. Discovery (1 to 2 weeks)

    Workflow audit, DPIA inputs and an information-governance baseline, a 30-day sample of real records, and scope sign-off with named success metrics. Output is a written diagnosis with risk classification under the EU AI Act and an explicit determination of whether the system falls under MDR.

  2. Build (4 to 12 weeks)

    Production architecture, an eval suite tied to clinician-reviewed gold sets (see the sketch after this list), FHIR-native data exchange where supported, shadow-mode rollout where the AI runs alongside the clinician or coder with output logged but not actioned, and audit-log delivery aligned to the WHO transparency principle.

  3. Operate (ongoing)

    Quarterly drift reports, eval-set growth from real human corrections, model-version upgrades behind a regression suite, and regulatory-update tracking across the EU AI Act, MDR, GDPR, FDA and NICE. The system stays accurate as the case mix and the regulation evolve.
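
A minimal sketch of the regression gate implied by the Build and Operate phases, assuming a clinician-reviewed gold set of (note, expected codes) pairs; the model interface and the 0.5% threshold are illustrative assumptions.

```python
def regression_gate(model, gold_set, max_error_rate=0.005):
    """Block a model-version upgrade that regresses against the gold set."""
    errors = sum(
        1 for example in gold_set
        if model.suggest_codes(example["note"]) != example["expected_codes"]
    )
    error_rate = errors / len(gold_set)
    if error_rate > max_error_rate:
        raise RuntimeError(
            f"upgrade blocked: {error_rate:.2%} exceeds gate of {max_error_rate:.2%}"
        )
    return error_rate
```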

06

Frequently asked questions

Is AI for healthcare data safe under GDPR Article 9?

Yes, when the system is designed correctly. Health data is special-category and prohibited from processing unless an Article 9(2) condition applies, most commonly explicit consent, public-interest health, or the provision of healthcare under a contract with a regulated professional. We deploy in EU regions by default, sign DPAs that include zero-retention and no-training clauses for inference traffic, pseudonymise at the boundary, and produce a DPIA-ready data-flow diagram before any system goes live. Article 9 is met when the technical and contractual stack is built around it, not bolted on. Where your information-governance committee or DPO needs additional artefacts, we ship them as part of the build, not as an afterthought.
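
As an illustration of pseudonymisation at the boundary, a keyed hash can replace direct identifiers before inference traffic leaves the controller's estate; the key never travels with the data. This is a sketch under those assumptions, not a complete Article 9 control.

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Swap a direct identifier for a stable keyed token before inference.
    The mapping can only be reproduced where the key is held, so the
    token is meaningless to the model provider."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

# The same patient identifier always yields the same token, so records
# can still be joined downstream without exposing the identifier itself.
```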

Does Impetora build clinical-decision-making AI?

No. Impetora ships assistive systems that surface evidence, structure documents, and accelerate operations around the clinical decision. Anything that performs a function intended for diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease falls under the Medical Device Regulation in the EU and the FDA SaMD framework in the US, and requires a conformity-assessment pathway with a Notified Body or a pre-market submission. Where a client engagement requires that pathway, we say so explicitly during discovery, scope the regulated work into the proposal, and bring in an appropriate quality-management partner. We never ship clinical-decision-making AI on a non-regulated route.

How do you handle EU AI Act high-risk classification for healthcare AI?

The EU AI Act classifies a number of healthcare-adjacent uses as high-risk under Annex III, which triggers obligations on risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. We build conformity-assessment scaffolding into the system from week one: an ISO 42001-aligned governance memo, the technical documentation pack the regulation requires, an append-only audit log, and a documented human-in-the-loop step for any output that affects a patient or care decision. If your specific use case is limited-risk rather than high-risk, we ship the proportionate controls, but we never default to less than the regulation requires.
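
A minimal sketch of that human-in-the-loop step, assuming a simple suggestion object with hypothetical field names: nothing care-affecting is actioned without a named reviewer, and the sign-off is what lands in the audit log.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    content: str
    affects_care_decision: bool
    signed_off_by: Optional[str] = None  # named reviewer, written to the audit log

def action_suggestion(suggestion: Suggestion) -> str:
    """Refuse to action any care-affecting output without human sign-off."""
    if suggestion.affects_care_decision and suggestion.signed_off_by is None:
        raise PermissionError("human-oversight step not completed")
    return f"actioned (signed off by {suggestion.signed_off_by or 'n/a'})"
```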

What is the typical scope for a healthcare AI engagement?

A first engagement targets one workflow with a measurable baseline, runs 4 to 12 weeks to production, and lands as a single signed-off system inside one EHR, RIS, LIS, or operations surface. Common scopes are: clinical document extraction across one or two document types; patient triage automation across one or two digital channels; medical coding automation across one or two specialties; consent and audit-readiness automation. Submit a project with the workflow you have in mind and the rough volume, and we scope and price the discovery phase before any code is written.

Can the system integrate with EHRs and digital health platforms?

Yes. The delivery layer is built around your data surface, not the other way around. We ship FHIR-native integrations where the source system supports them, HL7 v2 bridges where it does not, and queue-based bridges with idempotent writes for legacy systems. We integrate with the major hospital information systems and with the digital health platforms our clients build on. The audit log writes regardless of where the data lands, so you can prove lineage even when the downstream system cannot.
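
A sketch of the idempotent-write pattern used for legacy bridges: a deterministic key derived from the record means a redelivered or retried message can never double-write. The queue and dedup store are assumed interfaces, not a specific product's API.

```python
import hashlib

def idempotency_key(record_id: str, payload: str) -> str:
    """Same record and content always map to the same key."""
    return hashlib.sha256(f"{record_id}:{payload}".encode()).hexdigest()

def bridge_write(queue, dedup_store, record_id: str, payload: str) -> str:
    key = idempotency_key(record_id, payload)
    if dedup_store.seen(key):   # hypothetical keyed table of completed writes
        return "skipped"        # redelivery or retry: no duplicate write
    queue.publish(payload, key=key)
    dedup_store.mark(key)
    return "written"
```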

How accurate is medical coding automation in production?

Production-grade deployments see code-level error rates of 0.4 to 0.7% on routine specialties after the first three weeks of evaluation tuning, against typical human-only baselines reported in industry studies. Accuracy depends on specialty, documentation quality, and the breadth of the evaluation set. We do not claim a single accuracy number across all specialties. We baseline first, target a specific delta against your current process, and report against it weekly through the pilot. A coder always signs off; the AI structures and proposes, the human decides.

Where is the data processed, and do you train on our records?

By default, all processing and storage runs in EU regions on infrastructure under EU jurisdiction. We support regional pinning when a regulator or contract requires it (Germany-only, France-only, Lithuania-only, US-only). Original documents land in immutable EU object storage with hashes recorded in the audit log. We do not train any model on your records, full stop. If your contract requires US-resident processing for a US-only deployment, we expose that as an explicit configuration toggle, never a default.
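
A sketch of the ingest step described above, assuming an object store with an immutability option: the original document is hashed at ingest and the hash recorded alongside the region, so any later copy can be verified against the audit log. The store interface and flag names are illustrative.

```python
import hashlib

def ingest_original(document_bytes: bytes, object_store, audit_log,
                    region: str = "eu-central-1") -> str:
    """Store the original immutably in an EU region and record its hash."""
    digest = hashlib.sha256(document_bytes).hexdigest()
    object_store.put(key=digest, body=document_bytes,
                     region=region, immutable=True)  # hypothetical flags
    audit_log.append({"sha256": digest, "region": region})
    return digest
```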

What does a healthcare AI engagement cost?

Pricing is set after the discovery sprint, against your specific workflow, integration surface, and regulatory tier. We do not publish a flat rate because the scope variation across healthcare AI is wide: a non-clinical document extraction system on a uniform corpus is a different build from a system that touches the regulated SaMD boundary. Submit a project with the workflow and rough volume, and we come back with a discovery proposal within one business day. Production deployments that sit on the regulated SaMD boundary include a conformity-assessment work package which is scoped explicitly in the proposal.

Considering AI for your healthcare team?

Tell us the workflow you have in mind and we come back within one business day with a discovery proposal.