
EU AI Act vs GDPR overlap: how the two regimes interact in 2026

By Impetora

The General Data Protection Regulation and the EU AI Act regulate the same systems from different angles. The GDPR governs the processing of personal data; the AI Act governs the placing on the market and use of AI systems. Where an AI system processes personal data - which is most of them - both regimes apply simultaneously, with overlapping but non-identical obligations on documentation, oversight, transparency and risk management [1]. Treating them as one compliance programme rather than two siloed projects is the only practical way through.

Two regimes: GDPR and the AI Act apply concurrently (EUR-Lex)
DPIA + FRIA: two impact assessments now required (GDPR Art. 35)
EUR 20M / EUR 35M: maximum GDPR vs AI Act fine bands (EUR-Lex)

What does each regulation actually govern?

The GDPR (Regulation (EU) 2016/679) regulates the processing of personal data of individuals in the EU, regardless of where the controller or processor is located. Its obligations attach to controllers and processors, are organised around lawful bases, data-subject rights, transparency, security, transfers, accountability and impact assessment, and are enforced by national supervisory authorities under the European Data Protection Board's coordination.

The EU AI Act (Regulation (EU) 2024/1689) regulates AI systems and general-purpose AI models placed on the market or put into service in the EU. Its obligations attach to providers, deployers, importers and distributors, are organised around four risk classes (prohibited, high-risk, limited-risk, minimal-risk) plus general-purpose AI obligations, and are enforced by national competent authorities and the European Commission's AI Office [1].

Where exactly do GDPR and the AI Act overlap?

Six concrete overlaps. First, impact assessments: GDPR Article 35 requires a DPIA for high-risk processing of personal data; AI Act Article 27 requires a Fundamental Rights Impact Assessment for certain high-risk AI deployers. The two assessments share substantial content but are not identical, and Article 27 explicitly says it does not replace the DPIA [2]. Second, transparency: GDPR Articles 13-15 require information to data subjects; AI Act Articles 13 and 50 require information to deployers and to natural persons interacting with AI systems. Third, automated decisions: GDPR Article 22 governs solely automated decisions with significant effects; AI Act Article 14 governs human oversight of high-risk systems.

Fourth, data governance: GDPR Articles 5 and 6 set the lawful-basis and quality principles; AI Act Article 10 sets training-data quality, bias-mitigation and special-category-handling rules. Fifth, accountability and records: GDPR Article 30 records of processing meet AI Act Article 11 technical documentation in practice but use different templates. Sixth, security: GDPR Article 32 meets AI Act Article 15 (accuracy, robustness, cybersecurity) on the security strand.
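
For teams that track these pairings in a compliance register, the six overlaps reduce to a simple crosswalk. A minimal sketch in Python; the structure, field names and lookup helper are illustrative assumptions, and only the article pairings listed above come from the text.

```python
# Illustrative crosswalk of the six GDPR / AI Act overlap areas described above.
# Structure and field names are hypothetical; the pairings mirror the text.
GDPR_AI_ACT_CROSSWALK = [
    {"area": "Impact assessments",    "gdpr": "Art. 35 (DPIA)", "ai_act": "Art. 27 (FRIA)"},
    {"area": "Transparency",          "gdpr": "Arts. 13-15",    "ai_act": "Arts. 13 and 50"},
    {"area": "Automated decisions",   "gdpr": "Art. 22",        "ai_act": "Art. 14 (human oversight)"},
    {"area": "Data governance",       "gdpr": "Arts. 5 and 6",  "ai_act": "Art. 10 (training data)"},
    {"area": "Records/documentation", "gdpr": "Art. 30 (RoPA)", "ai_act": "Art. 11 / Annex IV"},
    {"area": "Security",              "gdpr": "Art. 32",        "ai_act": "Art. 15"},
]

def overlaps_for(gdpr_article: str) -> list[dict]:
    """Return the crosswalk rows that reference a given GDPR article."""
    return [row for row in GDPR_AI_ACT_CROSSWALK if gdpr_article in row["gdpr"]]
```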


Where do GDPR and the AI Act diverge?

The first divergence is scope. GDPR applies to any processing of personal data; the AI Act applies to AI systems regardless of whether they process personal data. A purely synthetic-data AI system can be in scope of the AI Act and out of scope of GDPR. A non-AI personal-data process is in scope of GDPR and out of scope of the AI Act. The intersection is where most enterprise AI sits.

The second divergence is the operative entity. GDPR's central role is the controller; the AI Act's central role is the provider. The same legal entity is often both, but not always. A bank deploying a vendor-built credit-scoring system is the GDPR controller and the AI Act deployer, while the vendor is the processor and the provider. The contract has to allocate both sets of obligations cleanly. Third, supervision. GDPR is supervised by data-protection authorities; the AI Act is supervised by national competent authorities (often different bodies) and the AI Office. Coordination mechanisms exist but are still maturing.

Fourth, fines. GDPR caps at EUR 20 million or 4% of global annual turnover, whichever is higher. The AI Act caps at EUR 35 million or 7% of global annual turnover for prohibited-practice infringements, EUR 15 million or 3% for breaches of most other operator obligations (including the high-risk requirements), and EUR 7.5 million or 1% for supplying incorrect, incomplete or misleading information to authorities. The risk profile for high-impact AI is therefore materially higher under the AI Act than under GDPR alone [1].
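
Because each cap is defined as the higher of a fixed amount and a share of worldwide annual turnover, the binding figure depends on company size. A minimal sketch of that arithmetic, using the band values cited above; the band labels and function name are illustrative, not official terminology.

```python
# Maximum administrative fine is the higher of a fixed cap and a share of
# worldwide annual turnover. Band values are those cited in the text above.
BANDS_EUR = {
    "gdpr_upper_tier":        (20_000_000, 0.04),  # GDPR Art. 83(5)
    "ai_act_prohibited":      (35_000_000, 0.07),  # AI Act prohibited practices
    "ai_act_other":           (15_000_000, 0.03),  # AI Act other operator obligations
    "ai_act_misleading_info": (7_500_000, 0.01),   # AI Act incorrect information
}

def max_fine_eur(band: str, worldwide_turnover_eur: float) -> float:
    """Binding cap for a band: the higher of the fixed amount and the turnover share."""
    fixed_cap, pct = BANDS_EUR[band]
    return max(fixed_cap, pct * worldwide_turnover_eur)

# Example: a group with EUR 2bn turnover faces a EUR 140m cap for prohibited
# practices under the AI Act versus EUR 80m under the GDPR upper tier.
print(max_fine_eur("ai_act_prohibited", 2_000_000_000))  # 140000000.0
print(max_fine_eur("gdpr_upper_tier", 2_000_000_000))    # 80000000.0
```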

How does an enterprise sequence GDPR and AI Act work?

The pragmatic sequence is risk-classify first, then document once for both. Step one: classify the AI system under the AI Act risk taxonomy (prohibited / high-risk / limited / minimal / GPAI). Step two: assess whether personal data is processed and identify the GDPR roles (controller / processor / joint controllers). Step three: run a single combined assessment that produces the Article 35 DPIA and the AI Act Article 27 FRIA in one documented exercise, with the personal-data parts feeding the DPIA and the fundamental-rights parts feeding the FRIA. Step four: build the technical documentation pack to satisfy both Article 30 GDPR and Annex IV of the AI Act, structured so each clause is mapped to one or both regimes.

Step five: design human oversight to meet the higher of Article 14 AI Act and Article 22 GDPR (in practice, Article 14 sets the higher bar). Step six: integrate post-market monitoring (AI Act Article 72) with the GDPR breach-notification and supervisory cooperation processes. Step seven: align the contractual stack - DPA, master services agreement, AI Act provider/deployer obligations - on a single template. The European Data Protection Board's guidelines on Article 22 and the upcoming joint EDPB/AI Office guidance will be the canonical references for the boundary [3].
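
One way to keep the seven steps auditable is a per-system record that carries the risk class, the GDPR role and the combined-assessment status together. A hypothetical sketch in Python, not a prescribed schema; the enum values, field names and helper are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

class AIActRiskClass(Enum):   # Step one: AI Act risk taxonomy
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED = "limited"
    MINIMAL = "minimal"
    GPAI = "gpai"

class GDPRRole(Enum):         # Step two: GDPR role of the organisation
    CONTROLLER = "controller"
    PROCESSOR = "processor"
    JOINT_CONTROLLER = "joint_controller"
    NOT_APPLICABLE = "no_personal_data"

@dataclass
class AISystemRecord:
    name: str
    risk_class: AIActRiskClass
    gdpr_role: GDPRRole
    processes_personal_data: bool
    combined_assessment_done: bool = False                  # Step three: DPIA + FRIA in one exercise
    documentation_refs: dict = field(default_factory=dict)  # Step four: Art. 30 / Annex IV pack

    def required_assessments(self) -> list[str]:
        """Outstanding impact assessments (simplified: Art. 35 mandates a DPIA only for
        high-risk processing; this sketch flags it whenever personal data is processed)."""
        if self.combined_assessment_done:
            return []
        needed = []
        if self.processes_personal_data:
            needed.append("DPIA (GDPR Art. 35)")
        if self.risk_class is AIActRiskClass.HIGH_RISK:
            needed.append("FRIA (AI Act Art. 27)")
        return needed
```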

What are the common traps in dual GDPR + AI Act compliance?

Five recurring traps. First, treating the AI Act as a "GDPR for AI" - it is structured very differently and does not replace any GDPR obligation. Second, running the DPIA and FRIA as separate workstreams instead of one combined assessment, doubling the work and producing inconsistent documentation. Third, missing the role distinction: the GDPR controller is not always the AI Act provider, and the contract has to allocate both. Fourth, defaulting to "consent" as the GDPR lawful basis for AI training when contract performance, legitimate interests, or specific legal basis would be more defensible and more sustainable. Fifth, treating ISO/IEC 42001 as proof of AI Act compliance - it is the management-system layer, not the conformity-assessment layer [4].

The Bank for International Settlements' 2024 paper on generative AI in finance is a useful cross-reference for sequencing under sector regulation, where DORA, MiFID II and similar regimes layer on top of GDPR and the AI Act [5].

How does Impetora deliver under both regimes at once?

Impetora's TRACE methodology was built for the dual-regime reality. Trust covers the GDPR-on-AI baseline: residency, processing register, DPA stack, lawful-basis design, transfer mechanisms. Readiness produces the combined DPIA/FRIA in a single exercise per system. Architecture builds the technical documentation pack to satisfy Annex IV of the AI Act and Article 30 of the GDPR from one source. Citations and Evidence together deliver the per-decision traceability that makes Article 15(1)(h) GDPR access requests and AI Act Article 13 transparency obligations operationally satisfiable.

The practical procurement test for buyers is to ask the vendor for a sample combined DPIA/FRIA from a comparable past project. A vendor that has done dual-regime work can produce one. A vendor that hands back a separate DPIA template and a separate FRIA template has not yet integrated the two regimes operationally.

Frequently asked questions

Does an AI system that does not process personal data still fall under GDPR?
No. GDPR applies only to processing of personal data of identified or identifiable natural persons. A system trained on fully synthetic or fully anonymous data, with no inputs or outputs that identify individuals, sits outside GDPR. It can still fall fully inside the AI Act if it is a high-risk system under Annex III or a prohibited practice under Article 5. The two scopes do not coincide.
Is an AI Act FRIA the same as a GDPR DPIA?
No. The DPIA under GDPR Article 35 focuses on risks to data-protection rights and freedoms arising from the processing of personal data. The FRIA under AI Act Article 27 focuses on risks to fundamental rights more broadly arising from the deployment of a high-risk AI system. There is significant content overlap (purpose, affected individuals, risk identification, mitigation), and Article 27(4) explicitly says the FRIA does not replace the DPIA. In practice, mature programmes run a single combined exercise that produces both deliverables.
Who is responsible when a vendor builds a high-risk AI system for a bank?
Both, in different roles. The vendor is the AI Act provider (responsible for risk management, technical documentation, conformity assessment, post-market monitoring of the system itself) and typically the GDPR processor for the bank's data. The bank is the AI Act deployer (responsible for use within intended purpose, human oversight, monitoring, FRIA) and the GDPR controller. The master services agreement and DPA must allocate both sets of obligations explicitly, including audit rights, incident reporting, indemnification and termination triggers.
Does ISO/IEC 42001 satisfy GDPR or AI Act obligations?
ISO 42001 is the AI management-system standard. It builds the documentation engine that supports compliance with both regimes but does not on its own discharge any specific GDPR or AI Act obligation. A 42001-certified organisation has the policy, lifecycle, impact-assessment and supplier-risk machinery in place, which makes the per-system DPIA, FRIA, conformity-assessment and post-market monitoring work feasible at scale. The product-level obligations remain separate.
Which regulator do you notify in an AI incident that involves personal data?
Both, when the relevant thresholds are met. Personal data breaches with risk to data subjects trigger GDPR Article 33 notification to the data-protection authority within 72 hours. Serious incidents with high-risk AI systems trigger AI Act Article 73 notification to the relevant national competent authority. The two routes are independent. Mature incident-response playbooks identify both notification triggers from a single incident-classification step and run the two notifications in parallel under common case management.
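
The single incident-classification step described above can be sketched as one function that derives both notification routes. The field names and simplified thresholds below are illustrative assumptions, not the legal tests themselves.

```python
from dataclasses import dataclass

@dataclass
class AIIncident:
    involves_personal_data_breach: bool   # personal data compromised?
    risk_to_data_subjects: bool           # GDPR Art. 33 threshold (simplified)
    involves_high_risk_ai_system: bool
    is_serious_incident: bool             # AI Act Art. 73 threshold (simplified)

def notification_routes(incident: AIIncident) -> list[str]:
    """Derive both notification routes from a single classification step."""
    routes = []
    if incident.involves_personal_data_breach and incident.risk_to_data_subjects:
        routes.append("GDPR Art. 33: notify the data-protection authority within 72 hours")
    if incident.involves_high_risk_ai_system and incident.is_serious_incident:
        routes.append("AI Act Art. 73: notify the relevant national competent authority")
    return routes
```
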
Can an AI system rely on legitimate interests under GDPR for training?
Yes, with caveats. Legitimate interests under GDPR Article 6(1)(f) is a lawful basis available for training where the controller can pass the three-part test (interest, necessity, balancing). The 2024 EDPB Opinion 28/2024 on AI models under GDPR provides the current EDPB position on lawful-basis design for AI training and inference. Special-category data under Article 9 requires a separate Article 9 condition. Cross-border transfers under Chapter V still need a transfer mechanism. Legitimate interests is rarely a complete answer on its own.
Does the AI Act override sector-specific regulation?
No. The AI Act is horizontal regulation. Sector regulation - DORA in financial services, MDR/IVDR in medical devices, MiFID II for investment services, the Machinery Regulation for products - continues to apply on top of the AI Act. Where the AI Act and sector regulation both impose conformity-assessment obligations, the AI Act provides for integration with existing sectoral procedures (Article 6(1) and Annex I) rather than parallel duplicate processes. Sector-specific compliance teams need to be at the table for the design.

Ready to scope your project? Submit a short brief and we reply within one business day.

Sources cited

  1. Regulation (EU) 2024/1689 (Artificial Intelligence Act). European Union, Official Journal, 2024-07-12. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
  2. Article 35 GDPR - Data Protection Impact Assessment. European Union (gdpr-info.eu), 2018-05-25. https://gdpr-info.eu/art-35-gdpr/
  3. Guidelines on automated individual decision-making and profiling. European Data Protection Board, 2024. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines
  4. ISO/IEC 42001:2023 - AI management systems. International Organization for Standardization, 2023-12. https://www.iso.org/standard/81230.html
  5. Generative artificial intelligence in finance. Bank for International Settlements, 2024-08. https://www.bis.org/fsi/publ/insights63.htm
  6. Article 22 GDPR - Automated decision-making. European Union (gdpr-info.eu), 2018-05-25. https://gdpr-info.eu/art-22-gdpr/
  7. AI Risk Management Framework (AI RMF 1.0). NIST, 2023-01. https://www.nist.gov/itl/ai-risk-management-framework
About Impetora
Impetora designs, builds, and deploys custom AI systems for enterprises in regulated industries. We operate from Vilnius and Amsterdam and work in five languages.