Custom AI for the DPO: how we ship AI that survives a DPIA, an audit, and a rights request.
A Data Protection Officer reviewing an AI system asks four questions: does it trigger an Article 35 DPIA, what is the lawful basis under the GDPR, where does it sit under the EU AI Act, and can the system support a data-subject rights request without forensic archaeology. We design every build to answer those four questions in writing before the system goes live. The DPIA pack, the ROPA entry, the sub-processor register, and the rights-request playbook are deliverables, not commitments.
DPIA template, sub-processor list, automated-decision exception flow, data-minimisation evidence, and the AI Act Article 10 data-governance documentation. Delivered before launch.
The six concerns we hear on every DPO discovery call.
DPIA scope and trigger
Lawful basis for AI processing
Data minimisation and purpose limitation
Sub-processor register
Data-subject rights
Cross-border transfer posture
For DPOs, the spine is Trust.
For a DPO, the spine is Trust, with Citations close behind. Trust covers data residency, lawful-basis posture, sub-processor visibility, and audit-grade logging. Citations covers the evidence chain that supports a data-subject access request, a rectification, or an Article 22 contestation. We design every system around the assumption that a regulator may open a file and that a data subject may exercise their rights, and the artefacts have to be in place before either happens.
If your AI system cannot answer a Subject Access Request without forensic archaeology, it is already non-compliant.
Where DPOs typically engage us first.
Customer support automation
Decision support
Internal knowledge AI
Document processing
What the engagement looks like from your seat.
What DPOs need from a partner, and what we ship.
DPIA template and pack
ROPA entries
Sub-processor register
Automated-decision exception flow
Data-minimisation evidence
Rights-request playbook
DPO questions, answered.
Will this trigger an Article 35 DPIA?
Almost always, yes. AI systems that involve systematic large-scale processing, special-category data, profiling, or automated decisions with legal or similarly significant effects trigger Article 35 by default. We assume the DPIA is in scope and draft the pack during Discovery, so by the time the Build phase begins you have a DPIA your supervisory authority can review. Where the processing turns out not to trigger Article 35, the same document set serves as a written record of the analysis, which is itself best practice under the EDPB guidelines.
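The screening logic above reduces to an any-of check across the Article 35 triggers. A minimal sketch (the function and parameter names are ours, for illustration; this is a triage aid, not a legal test):

```python
def dpia_required(large_scale_systematic: bool,
                  special_category_data: bool,
                  profiling: bool,
                  significant_automated_decision: bool) -> bool:
    """Article 35 screening: any single trigger puts the DPIA in scope."""
    return any([large_scale_systematic, special_category_data,
                profiling, significant_automated_decision])
```

Even when every flag is false, the completed checklist is kept as the written record of the analysis.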
How do you support data-subject rights requests?
Every system we build is designed so that an SAR can be answered from the audit log without forensic archaeology. The log captures every interaction involving personal data, including the input, the retrieved context, the model version, and the output. For erasure, the system supports targeted deletion by data-subject identifier across the storage layers, with the foundation-model provider's no-retention posture documented. For rectification, the upstream system of record is the source of truth, and the AI re-runs against the corrected record. For Article 22 contestation, the human-review surface is exposed as a first-class workflow with the reasoning trail attached.
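The log record described above has a small, fixed shape. A minimal in-memory sketch (field and function names are ours; a production system would query a durable log store, not a Python list):

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class AuditLogEntry:
    subject_id: str               # pseudonymous data-subject identifier
    timestamp: str                # ISO 8601, UTC
    input_text: str               # the prompt or query as received
    retrieved_context: List[str]  # document chunks passed to the model
    model_version: str            # pinned model identifier
    output_text: str              # what the system returned

def answer_sar(log: List[AuditLogEntry], subject_id: str) -> List[dict]:
    """Collect every logged interaction for one data subject, ready for export."""
    return [asdict(e) for e in log if e.subject_id == subject_id]

def erase_subject(log: List[AuditLogEntry],
                  subject_id: str) -> List[AuditLogEntry]:
    """Targeted erasure: drop every record keyed to the subject identifier."""
    return [e for e in log if e.subject_id != subject_id]
```

Because every record is keyed to a subject identifier, the SAR export and the erasure pass are the same traversal, which is what makes rights requests answerable without forensic archaeology.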
What is your sub-processor list?
Our current sub-processor list is published at impetora.com/sub-processors and updated when it changes. For any specific engagement, you receive an engagement-specific sub-processor register at DPA signing, covering every party that processes personal data on the project, the data category each one processes, the residency, the legal basis, and the transfer mechanism (SCCs or adequacy). When the list changes during the engagement, you are notified under contract with sufficient lead time to object.
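The register row described above carries the same five fields for every party. A minimal sketch (names and the string conventions are ours, for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubProcessorEntry:
    name: str
    data_category: str       # e.g. "support ticket text"
    residency: str           # hosting region, e.g. "EU (Frankfurt)"
    legal_basis: str         # lawful basis for the processing
    transfer_mechanism: str  # "EEA" (no transfer), "SCCs", or "adequacy"

def needs_transfer_assessment(entry: SubProcessorEntry) -> bool:
    """SCC-based transfers carry a Schrems II transfer impact assessment."""
    return entry.transfer_mechanism == "SCCs"
```

Keeping the register as structured rows rather than prose means the change-notification and objection workflow can diff two versions mechanically.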
How do you handle cross-border data transfers?
By default, we deploy on EU regions and contract with foundation-model providers on EU-resident inference where the provider supports it. Where personal data has to leave the EEA, we put SCCs in place, run a transfer impact assessment under Schrems II logic, and document the supplementary measures. Where adequacy is available (UK, Switzerland, several others), we use it. Cross-border posture is documented in the DPIA and refreshed when the regulator's guidance changes.
How do you align with AI Act Article 10 data-governance obligations?
AI Act Article 10 requires high-risk systems to use training, validation, and testing datasets that are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete, with documented data-governance practices covering design choices, data collection, annotation, and bias mitigation. Where we fine-tune or build retrieval pipelines on your data, we maintain the documentation Article 10 expects: provenance, processing operations, assumptions, availability, and known limitations. Where we use a foundation model, we rely on the provider's documented Article 10 posture and capture it in the regulator pack.
Where to go next.
The full list of third-party services used to operate Impetora, with residency and category.
GDPR-aligned data handling, retention periods, and the rights data subjects can exercise.
The SCHUFA ruling, the EDPB 2024 guidelines, their scope for AI, and a design pattern for compliance.
Bring us the DPO mandate. We bring the audit-ready system.
Discovery starts with a scoped audit. The deliverable is yours either way. We respond within two business days at info@ainora.lt.