When GDPR enforcement began in May 2018, most organizations treated the regulation as a data protection exercise: update the privacy policy, appoint a DPO, build a consent mechanism. The fines were theoretical. Four years later, Meta paid EUR 1.2 billion for a single violation. Clearview AI accumulated EUR 65.2 million in penalties across five EU member states for scraping facial images to train AI models [Multiple EU DPAs, 2022-2024]. The compliance teams who built their data protection infrastructure early absorbed the cost over years. The ones who waited absorbed it in months, under enforcement pressure.
The EU AI Act follows the same enforcement arc, eighteen months behind. It references GDPR more than 30 times throughout its recitals and articles [IAPP, “Top 10 Operational Impacts of the EU AI Act,” Nov 2024]. The two regulations share supervisory territory, overlap on transparency and risk assessment requirements, and create dual-enforcement exposure for every organization deploying AI systems that process personal data. GDPR fines have totaled EUR 5.65 billion across 2,245 enforcement actions since 2018, with 2025 alone recording EUR 2.3 billion, a 38% increase over the prior year [GDPR Enforcement Tracker, 2025]. The AI Act raises the ceiling further: up to EUR 35 million or 7% of global annual turnover for prohibited practices [EU AI Act, Article 99].
Seven articles in the AI Act create direct obligations that intersect with GDPR requirements. Organizations building separate compliance programs for each regulation will duplicate effort, create conflicting documentation, and miss the structural connections between the two frameworks. The intersection points are specific, each maps to concrete implementation steps, and the organizations that build a unified program now will spend a fraction of what those retrofitting under enforcement will spend later.
The EU AI Act and GDPR intersect at seven specific articles covering data governance, human oversight, risk assessments, transparency, and incident reporting. Organizations deploying AI systems that process personal data face dual-framework penalties reaching EUR 35 million or 7% of turnover under the AI Act, on top of GDPR enforcement [EU AI Act, Article 99; GDPR].
Where Does the EU AI Act Directly Reference GDPR?
Seven articles in the AI Act create explicit procedural or substantive links to GDPR. The AI Act was drafted with GDPR as its existing baseline, not as a standalone regulation. Understanding these links is a prerequisite to building a compliance program that satisfies both.
Article 10(5): The Bias Detection Exception
GDPR Article 9 prohibits processing special category data: race, ethnicity, health status, biometrics, political opinions, and other sensitive attributes. AI Act Article 10(5) creates a narrow exception. Providers of high-risk AI systems may process special category data when “strictly necessary for bias monitoring, detection and correction” [EU AI Act, Article 10(5)]. The exception applies only when three conditions are met: bias detection cannot be achieved using synthetic or anonymized data, state-of-the-art security measures are applied, and pseudonymization is implemented.
Critical nuance: the AI Act exception alone does not provide a lawful basis under GDPR. Organizations still need a valid GDPR Article 9(2) ground. Recital 70 of the AI Act suggests the “substantial public interest” exception in GDPR Article 9(2)(g) may apply, but no enforcement decision has confirmed this interpretation. Until one does, organizations processing sensitive data for bias detection carry legal risk under both frameworks simultaneously.
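To make the dual gate concrete, here is a minimal sketch of how the two checks could be encoded in a data-processing intake workflow. The `BiasProcessingRequest` record and its field names are hypothetical, not drawn from either regulation; the point is that the AI Act conditions and the GDPR Article 9(2) ground are evaluated independently, and both must pass.

```python
from dataclasses import dataclass

@dataclass
class BiasProcessingRequest:
    """Hypothetical intake record for a proposed use of special
    category data in bias detection for a high-risk AI system."""
    synthetic_data_insufficient: bool  # bias detection cannot be achieved with synthetic/anonymized data
    state_of_art_security: bool        # state-of-the-art security measures applied
    pseudonymized: bool                # pseudonymization implemented
    gdpr_art9_basis: str | None        # documented Article 9(2) ground, e.g. "9(2)(g)", or None

def may_process_for_bias_detection(req: BiasProcessingRequest) -> bool:
    """Both frameworks must be satisfied independently; the AI Act
    Article 10(5) exception alone is not a lawful basis under GDPR."""
    ai_act_conditions_met = (
        req.synthetic_data_insufficient
        and req.state_of_art_security
        and req.pseudonymized
    )
    gdpr_ground_documented = req.gdpr_art9_basis is not None
    return ai_act_conditions_met and gdpr_ground_documented
```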
Article 14: Human Oversight and Automated Decisions
GDPR Article 22(1) gives data subjects the right not to be subject to “solely automated” decisions with legal or similarly significant effects. The AI Act’s Article 14 applies more broadly: it requires all high-risk AI systems to include “human-machine interface tools enabling effective oversight,” regardless of whether the decision is solely automated [EU AI Act, Article 14]. Deployers must assign competent oversight personnel with the authority to override the system.
The gap between the two provisions creates a compliance trap. A low-risk AI system under the AI Act could still make solely automated decisions that trigger GDPR Article 22 obligations. An organization that classifies a system as low-risk under the AI Act and skips human oversight requirements may still violate GDPR if that system makes automated decisions with legal effects on individuals. The AI Act classification does not override GDPR requirements.
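The same point can be sketched in code: the AI Act risk classification and the GDPR Article 22 test are independent gates, and clearing one says nothing about the other. The function and parameter names below are illustrative, assuming a simple system inventory.

```python
def oversight_obligations(ai_act_risk: str,
                          solely_automated: bool,
                          legal_or_significant_effect: bool) -> list[str]:
    """Illustrative dual check: each framework is evaluated on its own terms.
    A system that is not high-risk under the AI Act can still trigger GDPR Article 22."""
    obligations = []
    if ai_act_risk == "high":
        obligations.append("AI Act Art. 14: human oversight interface and competent personnel")
    if solely_automated and legal_or_significant_effect:
        obligations.append("GDPR Art. 22: safeguards for solely automated decisions")
    return obligations

# A system classified below high-risk that still makes solely automated,
# legally significant decisions carries GDPR obligations regardless:
print(oversight_obligations("limited", solely_automated=True, legal_or_significant_effect=True))
```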
Articles 26(8) and 27: The DPIA-FRIA Connection
AI Act Article 26(8) requires deployers to use the information provided by the AI system provider (under Article 13) to conduct their GDPR Data Protection Impact Assessment. This creates a direct procedural link: the provider’s transparency documentation feeds the deployer’s DPIA [EU AI Act, Article 26(8)].
Article 27 introduces the Fundamental Rights Impact Assessment (FRIA), required for deployers of high-risk AI systems before deployment. Article 27(4) explicitly states that where a GDPR Article 35 DPIA has already been conducted, the FRIA “shall complement” it rather than replace it [EU AI Act, Article 27(4)]. Organizations can conduct both within a single assessment document, provided it addresses the FRIA’s broader scope: fundamental rights beyond data protection, including non-discrimination, freedom of expression, and human dignity.
Practical implication: organizations already conducting DPIAs for AI systems are halfway to FRIA compliance. Those that have not started either assessment face a double obligation.
Audit Fix
Build a single integrated assessment template that satisfies both GDPR DPIA (Article 35) and AI Act FRIA (Article 27) requirements. Start with your existing DPIA template. Add sections covering fundamental rights beyond data protection: non-discrimination impact, freedom of expression implications, human dignity considerations, and environmental impact. Map each section to the specific GDPR and AI Act article it satisfies. Conduct the integrated assessment before deploying any high-risk AI system.
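As a starting point, the integrated template can be seeded from a section-to-provision map like the sketch below. The section names are hypothetical; the provision references follow GDPR Article 35(7) and AI Act Article 27(1), but the mapping is illustrative rather than a legal checklist.

```python
# Hypothetical section map for an integrated DPIA/FRIA template.
# Keys are template sections; values are the provisions each section addresses.
INTEGRATED_ASSESSMENT_SECTIONS = {
    "processing_description":        ["GDPR Art. 35(7)(a)"],
    "necessity_and_proportionality": ["GDPR Art. 35(7)(b)"],
    "data_protection_risks":         ["GDPR Art. 35(7)(c)"],
    "mitigation_measures":           ["GDPR Art. 35(7)(d)"],
    "deployment_context_and_period": ["AI Act Art. 27(1)(a)-(b)"],
    "affected_persons_and_groups":   ["AI Act Art. 27(1)(c)"],
    "fundamental_rights_risks":      ["AI Act Art. 27(1)(d)"],  # non-discrimination, expression, dignity
    "human_oversight_measures":      ["AI Act Art. 27(1)(e)"],
    "governance_and_complaints":     ["AI Act Art. 27(1)(f)"],
}

def coverage_gaps(completed_sections: set[str]) -> list[str]:
    """Return the provisions left unaddressed by an assessment draft."""
    return [
        provision
        for section, provisions in INTEGRATED_ASSESSMENT_SECTIONS.items()
        if section not in completed_sections
        for provision in provisions
    ]
```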
How Do AI Act and GDPR Transparency Requirements Differ?
Both GDPR and the AI Act impose transparency obligations, but they target different aspects of AI system operation. Understanding where they overlap and where they diverge determines whether your transparency program satisfies one regulation, both, or neither.
GDPR Transparency (Articles 12-15)
GDPR requires data controllers to provide information about processing purposes, legal basis, retention periods, and data subject rights. For automated decision-making under Article 22, controllers must provide “meaningful information about the logic involved” [GDPR, Article 13(2)(f)]. The standard is the logic of the processing, explained at a level a data subject can understand.
AI Act Transparency (Articles 50 and 86)
AI Act Article 50 requires providers and deployers of limited-risk AI systems to inform individuals when they interact with AI “unless obvious from context” [EU AI Act, Article 50]. This parallels GDPR transparency but applies to a broader category of systems, including chatbots, emotion recognition systems, and deepfake generators, which may not involve personal data processing at all.
Article 86 introduces the right to “clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken” [EU AI Act, Article 86]. This right fills a specific gap. GDPR Article 22 applies only to “solely automated” decisions. Many AI-assisted decisions involve human participation: a hiring manager who uses an AI screening tool but makes the final call, or a loan officer who reviews an AI risk score before approving the application. These decisions fall outside GDPR Article 22’s scope. Article 86 captures them.
Article 86(3) specifies that the right does not apply where GDPR or other EU law already provides a right to explanation for fully automated systems. The two rights are complementary, not duplicative. GDPR covers solely automated decisions. The AI Act covers AI-assisted decisions where a human participates but the AI system plays a material role.
Map every AI system in your inventory to both transparency regimes. Systems making solely automated decisions with legal effects trigger GDPR Article 22 and AI Act Article 86. Systems where AI assists a human decision trigger only Article 86. Systems interacting with individuals trigger Article 50. The transparency obligations differ for each category.
Audit Fix
Create an AI system transparency matrix. For each AI system in your inventory, document: (1) whether it makes solely automated decisions, AI-assisted decisions, or direct interactions; (2) which GDPR transparency articles apply (12-15, 22); (3) which AI Act transparency articles apply (50, 86); (4) the specific explanation content required under each applicable article. Use this matrix as your transparency compliance specification. Update it when systems change or new systems are deployed.
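A sketch of what one row of that matrix could look like as a data structure, with hypothetical field names; the classification helper mirrors the category mapping described above and is illustrative, not legal advice.

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyRow:
    """One row of the hypothetical AI system transparency matrix."""
    system: str
    solely_automated: bool = False           # no human participates in the decision
    legal_effect: bool = False               # legal or similarly significant effect
    ai_assisted: bool = False                # human decides; AI plays a material role
    interacts_with_individuals: bool = False
    gdpr_articles: list[str] = field(default_factory=list)
    ai_act_articles: list[str] = field(default_factory=list)

def classify(row: TransparencyRow) -> TransparencyRow:
    """Apply the category mapping from the section above."""
    row.gdpr_articles = ["12-15"]  # baseline controller transparency where personal data is processed
    if row.solely_automated and row.legal_effect:
        row.gdpr_articles.append("22")
        row.ai_act_articles.append("86")
    if row.ai_assisted:
        row.ai_act_articles.append("86")
    if row.interacts_with_individuals:
        row.ai_act_articles.append("50")
    return row

# Example: an AI screening tool whose output a hiring manager reviews before deciding
print(classify(TransparencyRow("cv-screener", ai_assisted=True)))
```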
Who Enforces What Under Dual-Framework AI Regulation?
The enforcement architecture for AI systems processing personal data involves multiple authorities with overlapping jurisdiction. This is not a theoretical concern. The EDPB announced transparency and information obligations as the topic for its coordinated enforcement focus in 2026, directly intersecting with AI Act transparency requirements [EDPB, 2025].
Regulatory Authority Fragmentation
GDPR is enforced by national Data Protection Authorities (DPAs). The AI Act is enforced by national market surveillance authorities and the EU AI Office. In some member states, the national DPA serves as both the data protection authority and the AI Act market surveillance authority. In others, separate bodies hold each mandate. This creates a real risk: two different authorities within the same jurisdiction may enforce overlapping provisions of the same AI system deployment, potentially reaching conflicting conclusions.
The EDPB Opinion 28/2024, issued December 2024, addressed three questions critical to the intersection [EDPB, “Opinion 28/2024 on AI models,” Dec 2024]. First, when can an AI model be considered “anonymous”? The EDPB set a high bar: it must be “very unlikely” that individuals can be identified or that personal data can be extracted via queries. Second, legitimate interest can serve as a legal basis for AI model training, but only if it passes the three-step legitimate interest test of purpose, necessity, and balancing. Third, unlawful processing during AI model development may taint the model itself, not only the training data.
Penalty Stacking
A single AI system incident involving personal data can trigger penalties under both frameworks. The penalty structures, notification timelines, and enforcement authorities differ across every dimension.
| Dimension | GDPR | EU AI Act | Implication |
|---|---|---|---|
| Maximum penalty | EUR 20M or 4% global turnover | EUR 35M or 7% global turnover | AI Act penalties exceed GDPR by 75% |
| Breach notification | 72 hours (Article 33) | 15 days; 10 days for deaths; 2 days for critical infrastructure disruption (Article 73) | Single incident may trigger both timelines |
| Enforcement authority | National DPA | Market surveillance authority + EU AI Office | Dual enforcement risk in same jurisdiction |
| Impact assessment | DPIA (Article 35) | FRIA (Article 27) | FRIA complements DPIA; single document possible |
| Transparency scope | Solely automated decisions (Article 22) | All AI interactions + AI-assisted decisions (Articles 50, 86) | AI Act covers decisions GDPR misses |
| Right to explanation | “Logic involved” (Article 13/22) | “Role of AI system and main elements” (Article 86) | Article 86 fills gap for human-AI collaboration |
Every dimension in the table differs between the two frameworks: different penalty ceilings, different notification timelines, different enforcement authorities, different assessment requirements. The frameworks do not explicitly address whether penalties can stack for the same underlying conduct [EU AI Act, Article 99]. Until case law clarifies, organizations should assume dual exposure.
Enforcement precedents are already forming. Italy fined OpenAI EUR 15 million in November 2024 for GDPR violations in ChatGPT training data processing. Although the Court of Rome annulled the decision in March 2026, the enforcement theory remains significant: DPAs are actively pursuing AI companies under GDPR for training data practices [Italian Garante, Nov 2024]. When AI Act enforcement begins, the same conduct may face scrutiny under both regimes.
Audit Fix
Identify which authority in each jurisdiction where you operate serves as the GDPR DPA and which serves as the AI Act market surveillance authority. Document whether a single authority holds both mandates or whether separate bodies govern each framework. Build your incident response plan to address notification requirements under both frameworks simultaneously: GDPR Article 33 requires data breach notification within 72 hours, while AI Act Article 73 requires serious incident reporting within 15 days, shortened to 10 days where a death has occurred and to 2 days for widespread infringement or serious and irreversible disruption of critical infrastructure. A single incident may trigger both timelines.
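A minimal sketch of the parallel-deadline calculation, assuming the awareness timestamp and incident flags are known. The timedeltas encode only the statutory outer limits; both articles also require reporting without undue delay, or immediately, once the relevant facts are established.

```python
from datetime import datetime, timedelta

def notification_deadlines(aware_at: datetime, *,
                           personal_data_breach: bool,
                           death: bool = False,
                           critical_infrastructure_disruption: bool = False) -> dict[str, datetime]:
    """Outer statutory deadlines for a single incident under both frameworks."""
    deadlines: dict[str, datetime] = {}
    if personal_data_breach:
        # GDPR Art. 33: notify the DPA without undue delay, at latest within 72 hours
        deadlines["GDPR Art. 33 (DPA)"] = aware_at + timedelta(hours=72)
    ai_act_days = 15                        # AI Act Art. 73(2): default outer limit
    if death:
        ai_act_days = min(ai_act_days, 10)  # Art. 73(4): death of a person
    if critical_infrastructure_disruption:
        ai_act_days = min(ai_act_days, 2)   # Art. 73(3): widespread infringement / critical infrastructure
    deadlines["AI Act Art. 73 (market surveillance authority)"] = aware_at + timedelta(days=ai_act_days)
    return deadlines

# Example: one incident, two clocks running in parallel
print(notification_deadlines(datetime(2026, 8, 1, 9, 0), personal_data_breach=True, death=True))
```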
How Should Organizations Build a Unified AI Act and GDPR Program?
68% of privacy professionals now handle AI governance responsibilities, a dramatic expansion from traditional compliance roles [IAPP, AI Governance Profession Report 2025]. Only 43% of organizations have an AI governance policy, and only 18% have fully implemented AI governance frameworks despite 90% using AI in daily operations [AI Data Analytics Network, 2025; Knostic, 2025]. The gap between AI adoption and AI governance is where regulatory risk concentrates.
Organizational Structure
The DPO and AI governance officer should not be the same person in most organizations. The DPO role is mandatory under GDPR with statutory independence requirements. The AI Officer addresses broader risks: bias, safety, environmental impact, and fundamental rights beyond data protection. Combining the roles creates conflict-of-interest risks when data protection priorities compete with AI deployment timelines. The 98% overlap rate between AI and privacy responsibilities reported by IAPP reflects the subject-matter connection; it is not an argument for role consolidation [IAPP, AI Governance Profession Report 2025].
Build a RACI matrix that defines clear boundaries. The DPO owns GDPR compliance decisions. The AI Officer owns AI Act compliance decisions. Both participate in integrated assessments (DPIA/FRIA), transparency documentation, and incident response. Neither has authority to override the other’s domain requirements.
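A starting point for that matrix, sketched as a plain config structure. The activity rows and assignments below are hypothetical defaults to adapt, not a prescribed allocation.

```python
# Hypothetical starter RACI matrix (R = Responsible, A = Accountable,
# C = Consulted, I = Informed). Adapt rows and assignments to your organization.
RACI = {
    "GDPR lawful basis decisions":        {"DPO": "A/R", "AI Officer": "C"},
    "AI Act risk classification":         {"DPO": "C",   "AI Officer": "A/R"},
    "Integrated DPIA/FRIA":               {"DPO": "R",   "AI Officer": "R"},  # shared participation
    "Transparency documentation":         {"DPO": "R",   "AI Officer": "R"},
    "Incident response (GDPR Art. 33)":   {"DPO": "A/R", "AI Officer": "I"},
    "Incident response (AI Act Art. 73)": {"DPO": "I",   "AI Officer": "A/R"},
}
```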
The Digital Omnibus Factor
The European Commission proposed the Digital Omnibus package in November 2025 to consolidate overlapping digital regulations. The proposal may postpone high-risk AI obligations for Annex III systems from August 2026 to December 2027. The EDPB and EDPS issued Joint Opinion 1/2026 expressing concerns about the proposed changes [EDPB-EDPS, “Joint Opinion 1/2026,” Jan 2026]. Organizations should build their compliance programs against the current timeline. If the Omnibus grants additional time, that time becomes a competitive advantage for organizations that use it to refine their programs rather than delay them.
- Complete an AI system inventory covering all systems that process personal data, including third-party embedded models [EU AI Act, Article 26]
- Classify each system under the AI Act risk framework (prohibited, high-risk, limited-risk, minimal-risk) [EU AI Act, Article 6]
- Map GDPR processing activities to AI Act system classifications for each system
- Build an integrated DPIA/FRIA template that satisfies both GDPR Article 35 and AI Act Article 27
- Create an AI transparency matrix documenting which transparency articles apply to each system
- Establish separate DPO and AI Officer roles with a documented RACI matrix
- Build a dual-framework incident response plan with parallel notification timelines (72 hours GDPR, 15 days AI Act)
- Identify the governing authorities (DPA and market surveillance) in each jurisdiction of operation
- Document lawful basis for any special category data processing used in bias detection [GDPR Article 9(2); EU AI Act Article 10(5)]
Audit Fix
Start with the AI system inventory. For each system that processes personal data, complete one row in a unified compliance matrix: AI Act risk classification, GDPR processing basis, applicable transparency articles (GDPR 12-15/22 and AI Act 50/86), assessment requirement (DPIA only, FRIA only, or integrated DPIA/FRIA), and governing authorities in each jurisdiction. Assign the DPO as owner of GDPR columns and the AI Officer as owner of AI Act columns. Review the matrix quarterly and update when systems change or new guidance is issued.
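A sketch of the row schema for that unified matrix, with hypothetical field names; the comments mark which role owns which columns, per the RACI boundaries above.

```python
from dataclasses import dataclass

@dataclass
class ComplianceRow:
    """One row of the hypothetical unified AI Act / GDPR compliance matrix."""
    system: str
    # AI Officer-owned columns
    ai_act_risk: str                # "prohibited" | "high" | "limited" | "minimal"
    ai_act_transparency: list[str]  # e.g. ["50"], ["86"]
    # DPO-owned columns
    gdpr_basis: str                 # e.g. "6(1)(f) legitimate interest"
    gdpr_transparency: list[str]    # e.g. ["12-15", "22"]
    # Shared columns
    assessment: str                 # "DPIA" | "FRIA" | "integrated DPIA/FRIA" | "none"
    authorities: dict[str, str]     # jurisdiction -> enforcing authority or authorities
```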
Organizations that build separate AI Act and GDPR programs will pay twice for the same compliance infrastructure and miss the seven intersection points where the frameworks reinforce each other. The enforcement convergence is already visible: the EDPB’s 2026 coordinated enforcement focus on transparency directly overlaps with AI Act Article 50 and 86 obligations. Build one unified program now. The organizations that integrate early will absorb the cost over 18 months. The ones that wait will absorb it in 90 days under enforcement pressure, exactly as they did with GDPR.
Frequently Asked Questions
How do the EU AI Act and GDPR intersect on compliance requirements?
The AI Act references GDPR more than 30 times and creates seven specific intersection points covering data governance, human oversight, risk assessments, transparency, and incident reporting [IAPP, Nov 2024]. Organizations deploying AI systems that process personal data must satisfy obligations under both frameworks simultaneously, with unified assessment processes available for DPIA/FRIA requirements.
Do I need both a DPIA and FRIA for high-risk AI systems?
Not as separate documents. AI Act Article 27(4) states the FRIA “shall complement” an existing GDPR DPIA [EU AI Act, Article 27(4)]. Organizations can conduct both within a single integrated assessment, but must address the FRIA’s broader scope beyond data protection: non-discrimination, freedom of expression, and human dignity.
Can I use sensitive personal data to train AI for bias detection?
AI Act Article 10(5) permits processing special category data “strictly necessary” for bias detection in high-risk systems, but only when synthetic or anonymized data cannot achieve the same result [EU AI Act, Article 10(5)]. A valid GDPR Article 9(2) legal basis is still required. Pseudonymization and state-of-the-art security measures are mandatory.
Should the DPO also serve as the AI governance officer?
Not in most organizations. The DPO has statutory independence requirements under GDPR. The AI Officer addresses risks beyond data protection: bias, safety, and environmental impact. Combining roles creates conflict-of-interest risks. Build a RACI matrix with clear domain boundaries and shared participation in integrated assessments.
How do AI Act and GDPR incident reporting timelines differ?
GDPR Article 33 requires data breach notification within 72 hours. AI Act Article 73 requires serious incident reporting within 15 days, shortened to 10 days where a death has occurred and to 2 days for widespread infringement or serious disruption of critical infrastructure [GDPR, Article 33; EU AI Act, Article 73]. A single AI incident involving personal data may trigger both timelines simultaneously.
What is the AI Act right to explanation and how does it differ from GDPR?
AI Act Article 86 provides a right to “clear and meaningful explanations” of the AI system’s role in decisions affecting individuals [EU AI Act, Article 86]. GDPR Article 22 covers solely automated decisions. Article 86 fills the gap for AI-assisted decisions where a human participates but the AI system plays a material role in the outcome.
Which authority enforces AI Act versus GDPR violations?
GDPR is enforced by national Data Protection Authorities. The AI Act is enforced by national market surveillance authorities and the EU AI Office. In some member states, the DPA holds both mandates. In others, separate bodies govern each framework. Dual enforcement of a single AI system by different authorities in the same jurisdiction is a documented risk.
Will the Digital Omnibus change the AI Act and GDPR relationship?
The European Commission’s Digital Omnibus package (November 2025) may postpone high-risk AI obligations for Annex III systems to December 2027. The EDPB and EDPS expressed concerns in Joint Opinion 1/2026 [EDPB-EDPS, Jan 2026]. Build compliance programs against the current timeline. If the Omnibus grants additional time, use it to refine, not delay.