Impetora
For: Data Protection Officer

Custom AI for the DPO: how we ship AI that survives a DPIA, an audit, and a rights request.

A Data Protection Officer reviewing an AI system asks four questions: does it trigger an Article 35 DPIA, what is the lawful basis under the GDPR, where does it sit under the EU AI Act, and can the system support a data-subject rights request without forensic archaeology? We design every build to answer those four questions in writing before the system goes live. The DPIA pack, the ROPA entry, the sub-processor register, and the rights-request playbook are deliverables, not commitments.

DPIA template, sub-processor list, automated-decision exception flow, data-minimisation evidence, and the AI Act Article 10 data-governance documentation. Delivered before launch.

Art 35
GDPR DPIA mandatory for systematic large-scale or sensitive processing
GDPR-Info
Art 22
Right not to be subject to a solely automated decision with legal or significant effects
GDPR-Info
AI Act Article 10
Data and data-governance obligations for high-risk AI systems
EUR-Lex
EUR 20M
Or 4% of global annual turnover, whichever is higher: the GDPR maximum fine under Article 83(5)
GDPR-Info
What DPOs actually care about

The five concerns we hear on every DPO discovery call.

DPIA scope and trigger

AI systems involving systematic large-scale processing, special-category data, or automated decisions almost always trigger Article 35. The DPIA has to be drafted, signed, and supervisory-authority-ready.

Lawful basis for AI processing

Consent, contract, legitimate interest, legal obligation, vital interests, or public task. The lawful basis has to be documented per processing activity and revisited when the workflow changes.

Data minimisation and purpose limitation

AI systems are data-hungry by default. Minimisation has to be built in from the start (only the fields needed, retention bounded, purpose locked); retrofitting it later is far harder.

Sub-processor register

Every party touching personal data has to be on the register, with category, residency, legal basis, and the standard-contractual-clauses position when relevant.

Data-subject rights

Access, rectification, erasure, portability, objection, and the Article 22 right not to be subject to solely automated decisions. Each one has to be supportable through the system, not around it.

Cross-border transfer posture

Where personal data leaves the EEA, transfer impact assessments, SCCs, and, where applicable, adequacy decisions have to be in place. Foundation-model traffic is not exempt.
TRACE pillar focus

For DPOs, the spine is Trust.

The spine is Trust, with Citations close behind. Trust covers data residency, lawful-basis posture, sub-processor visibility, and audit-grade logging. Citations covers the evidence chain that supports a data-subject access request, a rectification, or an Article 22 contestation. We design every system around the assumption that a regulator may open a file and a data subject may exercise their rights; the artefacts have to be in place before either happens.

If your AI system cannot answer a Subject Access Request without forensic archaeology, it is already non-compliant.
Impetora DPIA workshop notes
Engagement model

What the engagement looks like from your seat.

Processing map (Discovery) → DPIA pack (Pre-build) → ROPA entry (Pre-launch) → Build + log (Evidence) → Rights playbook (Operate)
How a DPO engagement runs end to end.
Deliverables

What DPOs need from a partner, and what we ship.

DPIA template and pack

A drafted Data Protection Impact Assessment covering processing description, necessity and proportionality, risks to data subjects, mitigations, and the residual-risk assessment.

ROPA entries

Records of Processing Activities entries for every AI workflow we ship, with controller, processor, categories of data subjects, categories of data, recipients, and retention.

Sub-processor register

A current list of every third party that touches personal data, with category, residency, legal basis, transfer mechanism (SCCs or adequacy), and notification terms.

Automated-decision exception flow

Where the workflow falls under Article 22: the human-review surface, the contestation path, and the EDPB-aligned design pattern. Documented and exposed in the audit log.

Data-minimisation evidence

A documented mapping of fields ingested, retention period, and purpose. Anything outside the minimisation envelope requires a written exception.
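Enforced in code, the envelope reduces to a small allow-list check at ingestion. A minimal sketch, with an invented envelope for a hypothetical claims workflow (field names and values are examples, not a real configuration):

```python
# Hypothetical minimisation envelope: permitted fields, purpose, retention.
ENVELOPE = {
    "purpose": "claims-triage",
    "retention_days": 90,
    "allowed_fields": {"claim_id", "claim_text", "policy_tier"},
}

def enforce_envelope(record: dict) -> dict:
    """Admit only fields inside the envelope; fail loudly on anything else,
    since anything outside the envelope requires a written exception."""
    extra = set(record) - ENVELOPE["allowed_fields"]
    if extra:
        raise ValueError(f"fields outside minimisation envelope: {sorted(extra)}")
    return {k: record[k] for k in record}
```

Failing loudly rather than silently dropping fields is the design choice: a rejected ingest is the trigger for the written-exception process, not something to paper over.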

Rights-request playbook

A documented procedure for SAR, rectification, erasure, portability, and objection requests through the system. Tested with a sample request before launch.

DPO questions, answered.

Will this trigger an Article 35 DPIA?

Almost always, yes. AI systems that involve systematic large-scale processing, special-category data, profiling, or automated decisions with legal or similarly significant effects trigger Article 35 by default. We assume a DPIA is in scope and draft the pack in Discovery, so by the time the Build phase begins you have a DPIA your supervisory authority can review. Where the processing turns out not to trigger Article 35, the same document set serves as a written record of the analysis, which is itself best practice under the EDPB guidelines.

How do you support data-subject rights requests?

Every system we build is designed so that an SAR can be answered from the audit log without forensic archaeology. The log captures every interaction involving personal data, including the input, the retrieved context, the model version, and the output. For erasure, the system supports targeted deletion by data-subject identifier across the storage layers, with the foundation-model provider's no-retention posture documented. For rectification, the upstream system of record is the source of truth, and the AI re-runs against the corrected record. For Article 22 contestation, the human-review surface is exposed as a first-class workflow with the reasoning trail attached.
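The mechanics are deliberately simple. A minimal sketch of the idea, assuming a line-per-interaction JSON log keyed by data-subject identifier (the schema and function names are illustrative, not our production design):

```python
import json
from datetime import datetime, timezone

def log_interaction(log_path, subject_id, user_input, retrieved_context,
                    model_version, output):
    """Append one audit-log entry per interaction touching personal data."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,      # the key a SAR or erasure request resolves against
        "input": user_input,
        "context": retrieved_context,  # what the retrieval layer supplied
        "model_version": model_version,
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def answer_sar(log_path, subject_id):
    """A SAR becomes a filter over the log, not forensic archaeology."""
    with open(log_path) as f:
        entries = [json.loads(line) for line in f]
    return [e for e in entries if e["subject_id"] == subject_id]

def erase_subject(log_path, subject_id):
    """Targeted erasure by data-subject identifier at this storage layer."""
    with open(log_path) as f:
        kept = [line for line in f if json.loads(line)["subject_id"] != subject_id]
    with open(log_path, "w") as f:
        f.writelines(kept)
```

Because every entry carries the subject identifier, access and erasure are single-pass operations over one store rather than a hunt across systems; the same pattern repeats at each storage layer the identifier reaches.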

What is your sub-processor list?

Our current sub-processor list is published at impetora.com/sub-processors and updated when it changes. For any specific engagement, you receive an engagement-specific sub-processor register at DPA signing, covering every party that processes personal data on the project, the data category each one processes, the residency, the legal basis, and the transfer mechanism (SCCs or adequacy). When the list changes during the engagement, you are notified under contract with sufficient lead time to object.
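One register row is a five-field record, and the register stays clean if those fields are validated at entry. A minimal sketch with invented example values (this is not our register software, just the shape of a row):

```python
from dataclasses import dataclass

# Accepted transfer mechanisms for a row; anything else is a data-entry error.
TRANSFER_MECHANISMS = {"SCCs", "adequacy", "n/a (EEA)"}

@dataclass(frozen=True)
class SubProcessor:
    name: str
    data_category: str       # e.g. "support tickets containing personal data"
    residency: str           # e.g. "EU (Frankfurt)"
    legal_basis: str         # e.g. "Art 28 DPA"
    transfer_mechanism: str  # SCCs, adequacy decision, or none needed inside the EEA

    def __post_init__(self):
        if self.transfer_mechanism not in TRANSFER_MECHANISMS:
            raise ValueError(f"unknown transfer mechanism: {self.transfer_mechanism}")
```

Rejecting an unknown transfer mechanism at construction time means the register can never silently hold a party with an undocumented cross-border posture.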

How do you handle cross-border data transfers?

By default, we deploy on EU regions and contract with foundation-model providers on EU-resident inference where the provider supports it. Where personal data has to leave the EEA, we put SCCs in place, run a transfer impact assessment under Schrems II logic, and document the supplementary measures. Where adequacy is available (UK, Switzerland, several others), we use it. Cross-border posture is documented in the DPIA and refreshed when the regulator's guidance changes.

How do you align with AI Act Article 10 data-governance obligations?

AI Act Article 10 requires high-risk systems to use training, validation, and testing datasets that are relevant, representative, free of errors, and complete to the extent possible, with documented data-governance practices covering design choices, data collection, annotation, and bias mitigation. Where we fine-tune or build retrieval pipelines on your data, we maintain the documentation Article 10 expects: provenance, processing operations, assumptions, availability, and known limitations. Where we use a foundation model, we rely on the provider's documented Article 10 posture and capture it in the regulator-pack.

Bring us the DPO mandate. We bring the audit-ready system.

Discovery starts with a scoped audit. The deliverable is yours either way. We respond within two business days at info@ainora.lt.

Discovery call

Book a discovery call

Tell us what you would like to build. We reply within one business day.

30-minute call. Free of charge. No obligation.