Your general counsel forwards a regulatory alert from the EU AI Office. The subject line reads: "Eight months until high-risk AI system rules take effect." Your HR team uses an AI-powered screening tool to filter 3,000 applicants per quarter.
Your finance group runs a credit-risk model for commercial lending decisions. Neither team has completed an AI system inventory.
The EU AI Act imposes fines reaching EUR 35 million or 7% of global annual turnover for the most serious violations [EU AI Act Art. 99]. Over half of organizations lack systematic inventories of AI systems currently in production [Compliance and Risks 2026]. The gap between AI adoption and AI governance is widening, not closing.
This guide maps every EU AI Act deadline from 2024 through 2027, identifies the obligations triggered at each enforcement phase, and delivers the governance actions your team needs to execute before each date arrives.
The EU AI Act compliance timeline spans four enforcement phases: prohibited AI practices (February 2025), GPAI transparency rules (August 2025), high-risk AI system requirements (August 2026), and legacy system obligations (August 2027). Organizations deploying AI in the EU must classify every AI system by risk tier and implement governance controls before each deadline [EU AI Act Implementation Timeline].
Four Enforcement Phases of the EU AI Act Compliance Timeline
The EU AI Act entered into force on August 1, 2024, following publication in the Official Journal on July 12, 2024. Enforcement rolls out across four phases over three years. Each phase activates new obligations for providers, deployers, and importers of AI systems operating within the European Union.
Phase 1: Prohibited AI Practices (February 2, 2025)
The first enforcement wave banned the highest-risk AI applications outright. Article 5 prohibits social scoring systems, AI designed to exploit vulnerabilities of specific groups (age, disability, socioeconomic status), untargeted scraping of facial images for biometric databases, emotion recognition in workplaces and educational institutions, and real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions) [EU AI Act Art. 5].
Organizations deploying AI in any of these categories faced immediate compliance obligations starting February 2, 2025. Violations under this tier carry the highest penalties: EUR 35 million or 7% of global annual turnover, whichever is higher [EU AI Act Art. 99(3)].
Phase 2: GPAI Models and AI Literacy (August 2, 2025)
The second phase activated transparency and governance obligations for general-purpose AI (GPAI) model providers. Article 53 requires GPAI providers to maintain technical documentation, comply with EU copyright law, and publish sufficiently detailed summaries of training content [EU AI Act Art. 53]. The GPAI Code of Practice, published July 10, 2025, provides a voluntary compliance pathway across three chapters: Transparency, Copyright, and Safety and Security.
GPAI models exceeding the 10²⁵ FLOPs computational threshold carry a presumption of systemic risk [EU AI Act Art. 51]. Providers of these models must notify the EU AI Office within two weeks of crossing the threshold and implement adversarial testing, incident reporting, and cybersecurity protections [EU AI Act Art. 55].
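As a rough illustration of the Article 51 presumption, a provider tracking cumulative training compute could compare it against the threshold. The threshold value comes from the Act; the compute accounting below is a simplified, hypothetical sketch, not a prescribed methodology.

```python
# Sketch: checking cumulative training compute against the Article 51
# systemic-risk presumption threshold. Summing per-run FLOPs is a
# simplification for illustration only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 10**25  # EU AI Act Art. 51 presumption

def crosses_systemic_risk_threshold(training_runs_flops: list[float]) -> bool:
    """Return True if cumulative training compute meets the presumption threshold."""
    return sum(training_runs_flops) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Three training runs totalling 1.2e25 FLOPs cross the threshold
print(crosses_systemic_risk_threshold([4e24, 5e24, 3e24]))  # True
```

Crossing the threshold starts the two-week notification clock to the EU AI Office under Article 52, so compute tracking belongs in the training pipeline, not a quarterly review.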
This phase also established the enforcement infrastructure. The EU AI Office became operational on August 2, 2025, alongside the AI Board, a coordination body of Member State representatives. Every Member State designated at least one national market surveillance authority and one notifying authority by this date [EU AI Act Art. 70].
Phase 3: High-Risk AI Systems (August 2, 2026)
August 2, 2026 is the critical deadline for most organizations. The full compliance framework for high-risk AI systems under Annex III becomes enforceable. Eight categories of AI applications trigger mandatory risk management, technical documentation, data governance, human oversight, transparency, and cybersecurity requirements [EU AI Act Art. 6(2), Annex III].
Organizations running AI for hiring decisions, credit scoring, medical triage, student assessment, or law enforcement analytics fall under this classification.
Provider obligations under Article 16 require a conformity assessment before placing any high-risk AI system on the EU market. The assessment includes a complete technical documentation dossier (Annex IV), a documented risk management system, and evidence of data governance practices [EU AI Act Art. 16, Art. 43]. Fines for GPAI providers under Article 101 also become applicable on this date.
Phase 4: Legacy Systems in Regulated Products (August 2, 2027)
The final enforcement phase addresses two categories. GPAI models placed on the market before August 2, 2025 receive a two-year transition period to achieve full compliance [EU AI Act Art. 111(3)]. High-risk AI systems already integrated into products covered by Annex I sector-specific EU legislation (medical devices, machinery, toys, aviation, automotive) must meet all AI Act requirements by this date [EU AI Act Art. 6(1)].
Organizations relying on legacy AI models or AI-embedded regulated products need to treat August 2027 as their compliance deadline, not August 2026.
Build an internal enforcement calendar with all four EU AI Act dates: February 2025, August 2025, August 2026, and August 2027. Assign an executive owner for each phase. Map every AI system in your organization to the specific deadline governing its compliance obligations.
Present this calendar to your board or leadership team within 30 days.
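The enforcement calendar described above can be sketched as a simple data structure. The four deadlines come from the Act; the phase keys and executive owners are hypothetical placeholders your organization would substitute.

```python
from datetime import date

# Hypothetical internal enforcement calendar: the four EU AI Act phases,
# each with an illustrative executive owner.
ENFORCEMENT_PHASES = {
    "prohibited_practices": {"deadline": date(2025, 2, 2), "owner": "General Counsel"},
    "gpai_transparency":    {"deadline": date(2025, 8, 2), "owner": "CTO"},
    "high_risk_systems":    {"deadline": date(2026, 8, 2), "owner": "Chief Risk Officer"},
    "legacy_systems":       {"deadline": date(2027, 8, 2), "owner": "Chief Compliance Officer"},
}

def days_remaining(phase: str, today: date) -> int:
    """Days until the phase deadline (negative if it has already passed)."""
    return (ENFORCEMENT_PHASES[phase]["deadline"] - today).days

print(days_remaining("high_risk_systems", date(2026, 1, 1)))  # 213
```

A recurring report built on a structure like this keeps the countdown visible to the executive owner of each phase.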
High-Risk AI Classification Under Annex III
The EU AI Act compliance timeline converges on a single question for most organizations: do your AI systems qualify as high-risk? Annex III defines eight domains where AI applications automatically trigger the full compliance framework, unless the provider demonstrates the system does not pose a significant risk to health, safety, or fundamental rights [EU AI Act Art. 6(3)].
The Eight High-Risk Domains
| Domain | AI System Examples | Affected Industries |
|---|---|---|
| Biometrics | Remote biometric ID, emotion recognition, biometric categorization | Security, retail, law enforcement |
| Critical Infrastructure | Safety components for digital infrastructure, road traffic, energy supply | Energy, transport, utilities |
| Education | Student assessment, admission decisions, proctoring | Higher education, K-12, vocational training |
| Employment | Recruitment screening, performance evaluation, task allocation | All industries using AI hiring tools |
| Essential Services | Credit scoring, insurance risk assessment, emergency dispatch | Financial services, insurance, public services |
| Law Enforcement | Evidence reliability analysis, crime analytics, risk profiling | Law enforcement agencies |
| Migration and Borders | Security risk assessment, asylum application processing | Government, border agencies |
| Justice and Democracy | Judicial decision support, democratic process tools | Courts, election administration |
Source: EU AI Act Annex III
Determining Your Risk Classification
Article 6 creates two classification pathways. Pathway one: the AI system functions as a safety component of a product (or is itself a product) covered by Annex I EU harmonization legislation, such as medical devices, machinery, or automotive regulations [EU AI Act Art. 6(1)]. These systems face the August 2027 deadline.
Pathway two: the AI system falls within one of the eight Annex III domains listed above [EU AI Act Art. 6(2)]. These systems face the August 2026 deadline. The regulation includes a narrow exception: a provider that determines its Annex III system does not pose a significant risk to health, safety, or fundamental rights may claim exemption from high-risk classification, but must document this assessment and register the system in the EU database before deployment [EU AI Act Art. 6(3)].
Technology risk in 2026 extends beyond the EU AI Act alone. Shadow AI deployments, where teams adopt AI tools without governance oversight, create classification blind spots. An AI hiring tool adopted by one department without IT or legal review still triggers Annex III obligations the moment it is used to screen candidates in the EU.
Conduct a complete AI system inventory across every department. Include vendor-provided AI features embedded in SaaS platforms, not only custom-built models. Map each system to one of four EU AI Act risk tiers: prohibited, high-risk, limited-risk, or minimal-risk.
Document the classification rationale for every high-risk determination. Complete this inventory before Q2 2026.
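A minimal sketch of such an inventory, assuming hypothetical system names and a simplified tier-to-deadline mapping (the deadline dates themselves are the Act's phase dates; the classification of any real system requires legal review):

```python
# Illustrative tier-to-deadline mapping. "limited_risk" carries Article 50
# transparency duties from August 2026; "minimal_risk" has no specific
# AI Act deadline.
TIER_DEADLINES = {
    "prohibited":          "2025-02-02",
    "high_risk_annex_iii": "2026-08-02",
    "high_risk_annex_i":   "2027-08-02",
    "limited_risk":        "2026-08-02",
    "minimal_risk":        None,
}

# Hypothetical inventory entries with documented classification rationale.
inventory = [
    {"system": "resume-screener", "tier": "high_risk_annex_iii",
     "rationale": "Annex III employment domain: recruitment screening"},
    {"system": "faq-chatbot", "tier": "limited_risk",
     "rationale": "Transparency obligations only (Art. 50)"},
]

# Attach the governing deadline to each inventoried system.
for entry in inventory:
    entry["deadline"] = TIER_DEADLINES[entry["tier"]]
    print(entry["system"], "->", entry["deadline"])
```

The point of the structure is the `rationale` field: every high-risk (and every claimed-exempt) determination needs a written justification an auditor can inspect.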
Provider Obligations for High-Risk AI Systems
The EU AI Act compliance timeline for high-risk systems centers on Articles 8 through 15, which define the requirements every high-risk AI system must meet. These requirements apply before a high-risk AI system enters the EU market and continue throughout the system's lifecycle. Organizations acting as providers, deployers, importers, or distributors of high-risk AI face distinct responsibilities under each article.
Risk Management System (Article 9)
Article 9 mandates a continuous, iterative risk management process running throughout the entire lifecycle of a high-risk AI system [EU AI Act Art. 9]. The system must identify and analyze known and reasonably foreseeable risks to health, safety, and fundamental rights. It must estimate and evaluate risks arising from both intended use and conditions of reasonably foreseeable misuse.
Testing protocols must validate that the system performs consistently for its intended purpose.
The risk management obligation mirrors frameworks familiar to governance professionals. The NIST AI Risk Management Framework provides a structured methodology covering similar ground through its four functions: Govern, Map, Measure, and Manage. Organizations already using NIST AI RMF have a head start on Article 9 compliance.
Those without a documented AI risk management system need to build one from scratch.
Data Governance and Technical Documentation (Articles 10-11)
Article 10 requires training, validation, and testing datasets to meet specific governance standards. Providers must document data collection processes, data origins, annotation and labeling procedures, cleaning and enrichment methods, and bias assessments [EU AI Act Art. 10]. The regulation explicitly requires examination of datasets for biases affecting the health and safety of persons, particularly for protected groups.
Article 11 and Annex IV define the technical documentation dossier. This dossier includes a complete system description, design specifications, development methodology, risk assessment results, testing evidence, user instructions, and an EU Declaration of Conformity [EU AI Act Art. 11, Annex IV]. Think of it as the audit evidence package: everything a notified body or market surveillance authority needs to evaluate compliance in a single document set.
Human Oversight, Transparency, and Cybersecurity (Articles 13-15)
Article 13 requires providers to design high-risk AI systems for transparency. Deployers must receive sufficient information to interpret the system’s output and use it appropriately [EU AI Act Art. 13]. Article 14 mandates human oversight measures, including the ability for human operators to override or interrupt the AI system’s operation [EU AI Act Art. 14].
Article 15 sets accuracy, robustness, and cybersecurity standards. High-risk AI systems must maintain declared performance levels, resist errors and inconsistencies, and withstand unauthorized third-party manipulation [EU AI Act Art. 15]. Organizations deploying AI in healthcare settings face overlapping requirements between the AI Act’s cybersecurity provisions and sector-specific regulations like HIPAA or the EU Medical Device Regulation.
Start with Article 9. Select your highest-priority high-risk AI system and build a documented risk management process covering risk identification, estimation, evaluation, and mitigation. Use NIST AI RMF (AI 100-1) as the methodological backbone.
Validate the approach with legal counsel before expanding to remaining systems. Target completion of the first risk management system within 60 days.
What Penalties Does the EU AI Act Impose for Non-Compliance?
Article 99 establishes a three-tier penalty structure scaled by violation severity. The fines rival GDPR penalties in magnitude and exceed them for the most serious violations. Understanding the financial exposure drives the business case for early compliance investment.
Fine Tiers Under Article 99
| Violation Type | Maximum Fine | Turnover Percentage |
|---|---|---|
| Prohibited practices (Article 5) | EUR 35 million | 7% global annual turnover |
| High-risk and other obligations | EUR 15 million | 3% global annual turnover |
| False or misleading information | EUR 7.5 million | 1% global annual turnover |
Source: EU AI Act Article 99
For large enterprises, the turnover-based calculation produces higher penalties. A company with EUR 5 billion in global revenue faces a maximum fine of EUR 350 million (7%) for prohibited practice violations. The penalty regime became enforceable on August 2, 2025, with GPAI-specific fines under Article 101 activating on August 2, 2026 [EU AI Act Art. 99, Art. 101].
SME Protections and Enforcement Mechanics
The regulation provides proportionality protections for SMEs and startups. Article 99(6) applies a “lower-of” test: SMEs receive whichever is lower between the fixed EUR amount and the percentage of turnover [EU AI Act Art. 99(6)]. A startup with EUR 2 million in annual revenue faces a maximum prohibited-practice fine of EUR 140,000 (7% of turnover), not EUR 35 million.
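The higher-of and lower-of tests can be made concrete in a few lines. The caps and percentages below come from Article 99; the function itself is an illustrative sketch for exposure modeling, not legal advice.

```python
# Article 99 fine caps: (fixed cap in EUR, turnover percentage).
FINE_TIERS = {
    "prohibited_practices":   (35_000_000, 7),  # Art. 5 violations
    "high_risk_obligations":  (15_000_000, 3),
    "misleading_information": (7_500_000, 1),
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine: higher of the two caps for standard undertakings,
    lower of the two for SMEs and startups (Art. 99(6))."""
    fixed, pct = FINE_TIERS[tier]
    turnover_based = annual_turnover_eur * pct / 100
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

print(max_fine("prohibited_practices", 5_000_000_000))           # 350000000.0
print(max_fine("prohibited_practices", 2_000_000, is_sme=True))  # 140000.0
```

Running this over your own turnover figure for each tier produces the exposure table the next section recommends presenting to the board.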
National competent authorities, specifically market surveillance authorities designated by each Member State, determine and impose penalties. Each Member State establishes its own procedural rules for enforcement, creating potential variation in regulatory approach across the EU. The AI Office coordinates enforcement for GPAI models at the EU level [EU AI Act Art. 88-94].
Calculate your organization’s maximum financial exposure under each penalty tier using your global annual turnover figure. Map existing AI systems to the corresponding violation category. Present this risk calculation to your board, audit committee, or executive leadership as the quantified business case for AI governance investment.
Include the calculation in your next enterprise risk assessment.
How Do You Build an EU AI Act Governance Program?
The EU AI Act compliance timeline leaves a narrowing window for governance program development. Organizations with no AI governance infrastructure face 12 to 18 months of implementation work for high-risk system compliance. Those with existing frameworks need to map their current controls to AI Act requirements and close gaps.
The Six-Step Compliance Roadmap
Step 1: AI system inventory. Catalog every AI system in production and development, including vendor-embedded AI features in SaaS platforms. Record the system purpose, data inputs, decision outputs, affected populations, and deploying department. Over 50% of organizations lack this baseline inventory [Compliance and Risks 2026].
Step 2: Risk classification. Apply the Article 6 classification framework to every inventoried system. Determine whether each system falls under prohibited, high-risk (Annex I or Annex III), limited-risk, or minimal-risk categories. Document the classification rationale.
Step 3: Gap analysis. Compare each high-risk system’s current governance controls against Articles 8-15 requirements. Identify missing risk management documentation, data governance practices, technical documentation, human oversight mechanisms, and cybersecurity controls.
Step 4: Risk management system. Build the Article 9 risk management process for each high-risk AI system. Implement continuous risk identification, estimation, evaluation, and mitigation procedures covering the full system lifecycle.
Step 5: Technical documentation. Compile the Annex IV technical documentation dossier for each high-risk system. Include system architecture, training data descriptions, testing results, performance metrics, and user instructions.
Step 6: Conformity assessment. Prepare for the conformity assessment procedure applicable to your system category. Most Annex III high-risk systems follow an internal conformity assessment (self-assessment). Systems involving biometrics for law enforcement require third-party assessment by a notified body [EU AI Act Art. 43].
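Step 3's gap analysis reduces, at its simplest, to a set difference between required and implemented controls. The control names below are illustrative shorthand for Articles 9 through 15, not the Act's own terminology.

```python
# Shorthand control set for the high-risk requirements in Articles 9-15.
REQUIRED_CONTROLS = {
    "risk_management_system",     # Art. 9
    "data_governance",            # Art. 10
    "technical_documentation",    # Art. 11
    "record_keeping",             # Art. 12
    "transparency_instructions",  # Art. 13
    "human_oversight",            # Art. 14
    "accuracy_robustness_cyber",  # Art. 15
}

def gap_analysis(implemented: set[str]) -> set[str]:
    """Return the required controls not yet implemented for a system."""
    return REQUIRED_CONTROLS - implemented

# Hypothetical system with only two controls in place today.
gaps = gap_analysis({"data_governance", "technical_documentation"})
print(sorted(gaps))
```

Running this per high-risk system turns the gap analysis into a remediation backlog that Steps 4 and 5 can work through.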
Connecting to Existing Governance Frameworks
Organizations already operating under established governance frameworks hold a structural advantage. ISO 42001 (AI management systems) maps directly to the AI Act’s risk management and documentation requirements. The NIST AI RMF covers risk mapping, measurement, and management functions aligned with Article 9.
GDPR compliance programs provide transferable capabilities in data governance, data protection impact assessments, and documentation practices overlapping with Articles 10-11 [GDPR Art. 35].
The convergence point: organizations treating AI governance as a standalone project face redundant effort. Those integrating AI Act compliance into existing GRC infrastructure, alongside GDPR, ISO 27001, and sector-specific regulations, reduce implementation time and build a unified control framework. AI governance spending is projected to reach $492 million in 2026, reflecting the scale of organizational investment underway [Gartner 2025].
Healthcare organizations face a specific intersection. AI tools processing patient data must satisfy both the EU AI Act’s high-risk requirements and data protection obligations under GDPR. For organizations also operating in the U.S., identifying protected health information in AI tools creates additional compliance layers under HIPAA.
Assemble a cross-functional AI governance team with representatives from legal, engineering, data science, risk management, and compliance. Assign executive sponsorship at the C-suite or board level. Set a 90-day sprint to complete Steps 1 through 3 (inventory, classification, gap analysis) before initiating the technical documentation build.
Use ISO 42001 or NIST AI RMF as the foundational methodology to avoid building a framework from scratch.
The EU AI Act creates the first binding, horizontal regulatory framework for AI systems operating in a major global market. Organizations treating August 2, 2026 as a distant deadline face a compressed implementation window for risk management systems, technical documentation, and conformity assessments. Start with the AI system inventory: everything else builds from it.
Frequently Asked Questions
What is the EU AI Act compliance timeline?
The EU AI Act compliance timeline consists of four enforcement phases following the regulation’s entry into force on August 1, 2024. Prohibited AI practices took effect February 2, 2025. GPAI transparency obligations activated August 2, 2025.
High-risk AI system requirements become enforceable August 2, 2026. Legacy system obligations close August 2, 2027 [EU AI Act Art. 113].
When do high-risk AI system rules take effect?
High-risk AI system requirements under Annex III become enforceable on August 2, 2026. Provider obligations covering risk management (Article 9), data governance (Article 10), technical documentation (Article 11), and human oversight (Article 14) all activate at this date. High-risk systems embedded in Annex I regulated products receive an extended deadline of August 2, 2027 [EU AI Act Art. 6, Art. 113].
What fines does the EU AI Act impose for non-compliance?
Article 99 of the EU AI Act establishes three penalty tiers. Prohibited practice violations carry fines up to EUR 35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk system obligations triggers fines up to EUR 15 million or 3% of turnover, and supplying false or misleading information to authorities up to EUR 7.5 million or 1% [EU AI Act Art. 99].
Which AI systems qualify as high-risk under Annex III?
EU AI Act Annex III identifies eight domains where AI applications automatically trigger the full high-risk compliance framework: biometrics, critical infrastructure, education, employment, essential private and public services, law enforcement, migration and border control, and administration of justice. Hiring algorithms, credit scoring models, and emergency triage systems are typical examples [EU AI Act Annex III].
How does the EU AI Act affect U.S. companies?
U.S. companies deploying AI systems affecting individuals in the EU fall within the regulation’s scope, regardless of where the company is headquartered [EU AI Act Art. 2]. A U.S. company using an AI hiring tool to screen candidates located in the EU must comply with Annex III high-risk requirements. This mirrors GDPR’s extraterritorial application.
Companies operating tools like Microsoft Copilot across jurisdictions should evaluate compliance under both EU and domestic frameworks.
What is a GPAI model under the EU AI Act?
A general-purpose AI (GPAI) model is an AI model trained with a large amount of data using self-supervision at scale, displaying significant generality, and capable of performing a wide range of distinct tasks [EU AI Act Art. 3(63)]. Models exceeding 10²⁵ FLOPs of training computation carry a presumption of systemic risk, triggering additional safety and security obligations [EU AI Act Art. 51].
How does the EU AI Act relate to GDPR?
The EU AI Act complements GDPR without replacing it: AI systems processing personal data must comply with both regulations simultaneously. Overlapping requirements include data governance (AI Act Article 10 and GDPR Article 5), impact assessments (AI Act risk management and GDPR DPIAs), and transparency (AI Act Article 13 and GDPR Articles 13-14) [EU AI Act Recital 10, GDPR Art. 35].