Impetora
Industry: Public sector

AI for the public sector, from citizen-facing service intake to procurement, internal knowledge and regulatory monitoring.

AI for the public sector is the design and deployment of custom systems for ministries, municipalities, regulatory authorities, tax administrations, healthcare-system operators, education ministries, public-procurement bodies and EU institutions themselves. It is distinct from private-sector AI in three concrete ways: heavier transparency obligations under EU AI Act Article 49 (public-sector deployment register), stricter procurement constraints, and citizen-facing accountability that ties every output back to the source policy or law.

Art. 49 - EU AI Act public-sector deployment register obligation
Art. 27 - Fundamental-rights impact assessment scope
1-2 wk - Procurement-aware discovery sprint
100% - Outputs traceable to source policy or law

How AI is reshaping the public sector in 2026

Capability is no longer the bottleneck. Accountability is. The public bodies winning with AI treat the Article 49 register entry, the fundamental-rights impact assessment and the procurement file as the deliverable, not the afterthought.

The public sector covers ministries, municipalities, regulatory authorities, tax administration, healthcare-system operators, education ministries, public-procurement bodies, and the EU institutions themselves. The AI surface is the same as in the private sector - intake, classification, drafting, monitoring - but the obligations are not. Public deployments are listed in a public register, the underlying decisions are subject to a fundamental-rights impact assessment, and the procurement path is governed end to end.

The European Commission AI in public administration agenda and the OECD AI in government work both call out citation traceability, multilingual coverage and procurement discipline as the load-bearing parts of any deployment. The ENISA cybersecurity guidance for public administration and the NIST AI Risk Management Framework set the technical bar for resilience and governance.

The unsolved problem is not capability; it is accountability. Citizens, oversight bodies and journalists all want the same artefact: a verifiable record of what the system saw, what it produced, which version of the prompt and weights ran, and which civil servant approved any non-trivial decision. We treat that record as the deliverable.
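A minimal sketch of what that record could look like, assuming a simple hash-and-sign scheme; every field name here is illustrative, not a mandated schema:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AuditRecord:
    input_digest: str    # hash of what the system saw
    output_text: str     # what it produced
    prompt_version: str  # which version of the prompt ran
    model_version: str   # which model weights ran
    approved_by: str     # civil servant who signed the step

def record_step(raw_input: str, output_text: str,
                prompt_version: str, model_version: str,
                approved_by: str) -> AuditRecord:
    """Build one verifiable record for a single AI-assisted step."""
    digest = hashlib.sha256(raw_input.encode("utf-8")).hexdigest()
    return AuditRecord(digest, output_text, prompt_version,
                       model_version, approved_by)

rec = record_step("citizen request, case 4711",
                  "Draft reply citing the applicable regulation",
                  prompt_version="intake-v3",
                  model_version="2026-01-a",
                  approved_by="clerk-017")
print(json.dumps(asdict(rec), indent=2))
```

Because the input is stored as a digest, the record can be published or disclosed without leaking the citizen's data, while still allowing anyone holding the original input to verify it.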

AI in government must be transparent, accountable and built around human-centred values, with citation back to the source policy at every step.
OECD, AI in government
Use cases we deliver for public-sector innovation, IT, and procurement teams in EU member-state governments, agencies, and EU institutions

Citizen-facing service intake assistant

Front-line public-service intake (benefits, permits, registrations) is dominated by repetitive, multilingual queries. Human agents spend most of their time on routing and policy lookup, and citizens wait. Any answer must be defensible against the source policy or law.

100% - Answers cited back to the source policy chain

Public-procurement document automation

Procurement officers ingest TED notices, eForms, supplier filings and historic award files. Drafting tender packs, evaluating submissions and producing audit-ready award memos is heavy manual work, and every step has a regulator looking over the shoulder.

TED - Native ingestion of TED and eForms with audit-ready evaluation packs

Internal-knowledge AI for civil servants

Civil servants spend hours hunting through legislation databases, internal memos and circulars. The fragmentation across ministries and across language versions amplifies the cost. Any answer must be grounded in the legislation database, not in the model's general knowledge.

10s - Median time to a cited answer grounded in the legislation database

Tax and customs document classification

Tax administrations and customs authorities receive high volumes of declarations, correspondence and supporting documents in many formats. Triage, classification and routing absorb FTE that should be applied to substantive review.

0.4% - Field-level error rate on extraction with audit pointers per field

Multilingual social-services correspondence

Social services, healthcare-system operators and education ministries correspond with citizens in multiple languages. Drafting, translating and reviewing each letter against current policy is slow, and inconsistent letters create appeals.

3x - Faster drafting cycle with cited policy basis on every letter

Regulatory monitoring and impact analysis

Ministries track EU and national regulatory output continuously. Spotting changes, mapping them to in-force national rules and producing an impact memo for the responsible directorate is a slow, manual loop.

Daily - Regulatory-change surfacing with cited impact analysis per item

How TRACE applies to public-sector AI

T - Trust

We classify every system against EU AI Act Article 49 (public-sector deployment register) and Article 27 (fundamental-rights impact assessment), GDPR Articles 22, 35 and 36, and the relevant national regulator (BSI in Germany, ANSSI in France, ENS in Spain, INCIBE for cybersecurity).
R - Readiness

Procurement-aware discovery. Before any model is selected, a 1 to 2 week audit that walks the workflow, baselines current handle time and error rate, and aligns on the procurement path (framework agreement, dynamic purchasing system, innovation partnership, design contest) so the build phase does not stall on tendering.
A - Architecture

Sovereign EU residency by default, open formats, no vendor lock to a proprietary file or schema. Audit logs, model versions and prompt versions are exportable. The public body can re-procure the operational layer without losing the asset.
C - Citations and lineage

Every output traces back to the source policy, regulation or law. A citizen, a journalist or an oversight body can follow the chain from the AI-generated text to the authoritative source in under 10 seconds. Article 49 register entries are kept current.
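One way that chain could be represented, assuming every generated span carries a citation id that resolves to the authoritative source (the ids and spans below are hypothetical):

```python
# Map of citation ids to authoritative sources (illustrative entries).
sources = {
    "reg-2024-1689-art-49": "https://eur-lex.europa.eu/eli/reg/2024/1689/oj",
}

span = {
    "text": "High-risk public-sector deployments are entered in the EU register.",
    "citation": "reg-2024-1689-art-49",
}

def trace(span: dict, sources: dict) -> str:
    """Follow one output span back to its authoritative source.
    An uncited span is a hard error, not a silent gap."""
    cid = span.get("citation")
    if cid not in sources:
        raise ValueError(f"uncited span: {span['text']!r}")
    return sources[cid]

print(trace(span, sources))
```

The key design choice is that an output without a resolvable citation fails loudly rather than shipping uncited text to a citizen.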

Regulatory considerations for public-sector AI

Public-sector AI sits inside multiple overlapping regulatory frameworks. We map every engagement to the relevant authority before code is written.

  1.

    EU AI Act Article 49 and Annex III - public-services AI is high-risk

    Annex III §5 lists AI used by public authorities to evaluate eligibility for essential public benefits and services as high-risk. Article 49 requires high-risk public-sector deployments to be entered in the EU public register. Article 27 requires a fundamental-rights impact assessment before deployment. We build to that bar by default and keep the register entry current.
    EUR-Lex
  2.

    GDPR Articles 35 and 36 - DPIA and prior consultation

    Public-sector processing of citizen data triggers a Data Protection Impact Assessment under Article 35, and prior consultation with the supervisory authority under Article 36 where residual risk remains high. We deliver the DPIA pack as part of discovery, not as an afterthought.
    GDPR-Info
  3.

    EU Public Procurement Directive 2014/24/EU

    Public-sector AI procurement runs through the Directive's procedures: framework agreements, dynamic purchasing systems, innovation partnerships and design contests. We align the build plan to the procurement path so the contract and the work plan match.
    EUR-Lex
  4.

    eIDAS 2.0 and the EU Digital Identity Wallet

    Citizen-facing services that authenticate users align with the eIDAS 2.0 regulation and the European Digital Identity Wallet. We integrate the wallet flow where the public body has rolled it out, and keep the door open where it has not.
    EUR-Lex
  5.

    ENISA cybersecurity guidance for public administration

    The European Union Agency for Cybersecurity publishes guidance on resilience, supply-chain risk and incident handling for public administration, and national requirements frequently incorporate it by reference. We deliver the architecture and incident playbook against ENISA expectations.
    ENISA
  6.

    EU Cloud Code of Conduct and national cybersecurity authorities

    Sovereign cloud expectations vary by member state. BSI in Germany (C5 catalogue), ANSSI in France (SecNumCloud), ENS in Spain, INCIBE for incident reporting. We map the deployment to the right national cybersecurity authority and the EU Cloud Code of Conduct before infrastructure is selected.
    EU Cloud CoC

How we typically engage

Three phases. The procurement-aware discovery sprint always comes first, and the cost of doing it is recovered the moment scope is locked correctly against the procurement path and the Article 49 register obligation.

  1. 1 to 2 weeks

    Discovery

    Procurement-aware audit, workflow walkthrough, baseline of current handle time and error rate, scope sign-off with named success metrics. Output is a written diagnosis with EU AI Act risk classification, the Article 49 register draft and the fundamental-rights impact assessment scope.

  2. 4 to 12 weeks

    Build

    Pilot under the DPIA. Production architecture on sovereign EU infrastructure, eval suite tied to the case mix, shadow-mode rollout where the AI runs alongside the civil servant with output logged but not actioned, integration into the source-of-truth systems, and the Article 49 register entry filed.

  3. Ongoing

    Operate

    Quarterly drift reports, eval-set growth from real human corrections, model-version upgrades behind a regression suite, regulatory-update tracking, and an annual fundamental-rights impact review.
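The shadow-mode rollout from the build phase can be sketched as follows, assuming a simple in-memory log; case ids and decision values are illustrative:

```python
shadow_log = []

def handle_case(case_id: str, ai_draft: str, human_decision: str) -> str:
    """Shadow mode: the AI draft is logged next to the human decision
    for later comparison, but only the human decision is actioned."""
    shadow_log.append({
        "case": case_id,
        "ai_draft": ai_draft,
        "human_decision": human_decision,
        "agreed": ai_draft == human_decision,
    })
    return human_decision  # the AI output is never actioned in this phase

handle_case("A-001", ai_draft="approve", human_decision="approve")
handle_case("A-002", ai_draft="approve", human_decision="refer")

# The agreement rate feeds the eval suite and the go/no-go decision.
agreement = sum(e["agreed"] for e in shadow_log) / len(shadow_log)
print(f"agreement: {agreement:.0%}")
```

Disagreements like case A-002 are exactly the human corrections that grow the eval set during the operate phase.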

Boundaries

What Impetora does not build

An honest list. We will not build these systems because they breach professional ethics, regulation, or our own risk policy.

Predictive policing or social scoring
We do not build predictive-policing systems or general-purpose social-scoring systems. EU AI Act Article 5 prohibits both, and we treat the prohibition as a hard line, not a negotiation.
Real-time biometric identification in public spaces
We do not build real-time remote biometric identification in publicly accessible spaces outside the narrow exemptions. Where a public body has a lawful, narrow use case, we deliver only with the explicit Article 5 derogation framework, the fundamental-rights impact assessment and the supervisory-authority sign-off in place.
Fully automated benefit denial
We do not deploy systems that deny a citizen a benefit, permit or service without a human reviewer signing the decision. GDPR Article 22 and EU AI Act Article 14 require human oversight, and the workflow is built around that constraint.
Migration or asylum AI without explicit Council authorisation framework
We do not build AI for migration, asylum or border-control case decisions without an explicit national authorisation framework consistent with the Council position on the AI Act. The fundamental-rights stakes are too high for an off-the-shelf engagement.
Architecture

How a public sector AI system flows

The typical value chain from input to audit log. Every node is a reviewable stage with guardrails.

Citizen request → Policy lookup → Cited draft → Civil servant review → Audit log → Article 49 register

Frequently asked questions

How does the EU AI Act Article 49 public-sector register work in practice?

Article 49 of Regulation (EU) 2024/1689 requires public authorities deploying high-risk AI listed in Annex III to register the system in the EU public database before putting it into service. The register entry covers the provider, the deployer, the high-risk classification, the intended purpose, and the relevant fundamental-rights impact assessment. We file the entry as part of the build phase and keep it current as the system evolves. The register is public, so the entry is written to be defensible to a journalist as well as to the supervisor.

How do you handle public-procurement constraints?

Procurement is the first conversation, not the last. In the discovery phase we identify the procurement path that fits: a framework agreement the body already holds, a dynamic purchasing system, an innovation partnership under Directive 2014/24/EU Article 31, or a design contest. The build plan is then aligned to that path so the contract and the work plan match. Where the body needs to run a fresh tender, we provide the technical specifications and evaluation criteria as input.

Can Impetora bid as a sole-trader-style entity in EU public tenders?

Yes, where the tender admits SMEs and individual specialists. We satisfy the standard self-declaration (ESPD), insurance, conflict-of-interest and exclusion-grounds requirements. For large multi-vendor frameworks we partner with prime contractors and take on the AI-specific lots. We do not bid where the tender requires capacity we cannot deliver to the bar.

How do you handle national-language deployment across member states?

Every system ships with first-class support for the official language or languages of the deploying body, including Lithuanian, German, French, Spanish and the other 20 EU official languages. Citation strings, audit logs and the citizen-facing UI are localised and reviewed by a human translator before go-live. Where the body operates across multiple languages, the citation chain points to the language-specific authoritative version of the source.

How do you handle citizen explainability rights?

Every citizen-facing decision the system contributes to carries a cited rationale. The citizen can request the underlying chain - source policy or law, the version of the system that produced the output, the civil servant who signed any non-trivial step. The interface for that disclosure is part of the build, not a manual back-office process.

What about the automated-decisions exemption in GDPR Article 22?

Article 22 prohibits decisions producing legal or similarly significant effects from being made solely on automated processing. In the public sector the safe path is human-in-the-loop on every decision that affects a citizen's rights or benefits. We build the workflow so the AI structures the decision packet (facts, applicable rules, comparable cases, draft rationale) and the responsible civil servant signs the actual decision with the ability to override. Every override is logged with reason codes.
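One way the sign-and-override step could be sketched; the packet fields, reviewer ids and reason codes below are illustrative, not a fixed schema:

```python
from datetime import datetime, timezone

def sign_decision(packet: dict, reviewer: str,
                  override: bool = False, reason_code: str = "") -> dict:
    """The AI structures the decision packet; a named civil servant
    signs the actual decision. Overrides must carry a reason code."""
    if override and not reason_code:
        raise ValueError("override requires a reason code")
    return {
        **packet,
        "signed_by": reviewer,
        "signed_at": datetime.now(timezone.utc).isoformat(),
        "override": override,
        "reason_code": reason_code,
    }

packet = {
    "facts": ["application received 2026-02-01"],
    "applicable_rules": ["rule X"],
    "draft_rationale": "Eligible under rule X.",
}
decision = sign_decision(packet, reviewer="clerk-017",
                         override=True, reason_code="NEW-EVIDENCE")
```

Refusing to accept an override without a reason code is what makes the override log auditable rather than a free-text afterthought.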

Where is the data processed?

By default, all processing and storage runs in EU regions on infrastructure under EU jurisdiction. We support member-state pinning when a regulator or contract requires it (Germany-only, France-only, Spain-only, Lithuania-only). Citizen data lands in immutable EU object storage with hashes recorded in the audit log. We do not train any model on citizen data, full stop.
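A sketch of the hash-on-write step, with a plain list standing in for the immutable object store (all names illustrative):

```python
import hashlib

audit_log = []
bucket = []  # stand-in for immutable EU object storage

def store_object(payload: bytes) -> str:
    """Write the payload and record its SHA-256 digest in the audit log,
    so later tampering is detectable by recomputing the hash."""
    digest = hashlib.sha256(payload).hexdigest()
    bucket.append(payload)
    audit_log.append({"sha256": digest, "size": len(payload)})
    return digest

digest = store_object(b"citizen correspondence, case 4711")

# Verification: recompute the hash and compare with the logged value.
assert hashlib.sha256(bucket[0]).hexdigest() == audit_log[0]["sha256"]
```

Because the log stores only digests and sizes, it can be retained and disclosed far more freely than the citizen data it describes.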

What is the typical scope for a first public-sector engagement?

A first engagement targets one workflow with a measurable baseline, runs 4 to 12 weeks to production, and lands as a single signed-off system inside one source-of-truth platform. Common first scopes are: citizen-intake assistant for one service line, procurement document automation on one category of tenders, internal-knowledge AI for one ministry's legislation database, or regulatory monitoring for one directorate. Submit a project with the workflow you have in mind and the procurement path you can use, and we scope the discovery phase before any code is written.

Considering AI for your public-sector team?

Tell us the workflow you have in mind and the procurement path you can use, and we come back within one business day with a discovery proposal.

Discovery call

Book a discovery call

Tell us what you would like to build. We reply within one business day.

30-minute call. Free of charge. No obligation.