
EU AI Act Penalties: €35M Fines for Prohibited Practices


Bottom Line Up Front

The EU AI Act imposes three penalty tiers: EUR 35 million or 7% of global turnover for prohibited AI practices, EUR 15 million or 3% for high-risk AI non-compliance, and EUR 7.5 million or 1% for misleading information. Prohibited practices enforcement began February 2, 2025. Full enforcement activates August 2, 2026.

Your AI vendor sends a routine product update. Buried in the changelog: a new feature scoring job applicants on behavioral patterns inferred from social media activity, active across three EU subsidiaries for six weeks. The feature matches the definition of social scoring under Article 5(c) of the EU AI Act [Regulation 2024/1689 Article 5(c)].

The penalty for prohibited AI practices reaches EUR 35 million or 7% of global annual turnover, whichever is higher [EU AI Act Article 99(3)]. For a company generating EUR 2 billion in annual revenue, the turnover-based calculation produces a EUR 140 million exposure. Prohibited practices enforcement became active across all 27 EU member states on February 2, 2025.

This analysis breaks down the EU AI Act penalty structure, identifies the violations triggering each level, examines enforcement patterns emerging from GDPR precedent, and delivers a strategic prevention framework for organizations preparing before the August 2, 2026 full enforcement deadline.


The Three-Tier EU AI Act Penalty Structure Under Article 99

EU AI Act penalties follow a three-tier structure calibrated to violation severity [EU AI Act Article 99]. The highest penalties target prohibited practices. The middle tier addresses high-risk AI system non-compliance. The lowest tier covers information provision failures.

Tier 1: EUR 35 Million or 7% of Global Turnover

Non-compliance with Article 5 prohibited practices triggers the maximum penalty: up to EUR 35 million or 7% of the preceding financial year’s worldwide annual turnover, whichever is higher [EU AI Act Article 99(3)]. For the largest technology companies, the turnover calculation dwarfs the fixed amount. A company generating EUR 50 billion in revenue faces a theoretical maximum of EUR 3.5 billion.

This tier exclusively covers the eight categories of prohibited AI practices defined in Article 5. The “whichever is higher” provision creates a EUR 35 million floor preventing inadequate deterrence for smaller operators.

Tier 2: EUR 15 Million or 3% of Global Turnover

Non-compliance with high-risk AI system obligations triggers the second tier: up to EUR 15 million or 3% of global turnover [EU AI Act Article 99(4)]. This tier covers requirements for risk management, data governance, technical documentation, transparency, human oversight, and accuracy standards under Articles 8 through 15.

Organizations deploying AI for hiring decisions, credit scoring, medical diagnostics, or critical infrastructure management fall under high-risk classification [EU AI Act Annex III]. The penalty exposure reflects the regulatory expectation of proportionate investment in AI governance infrastructure.

Tier 3: EUR 7.5 Million or 1% of Global Turnover

Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities triggers the third tier: up to EUR 7.5 million or 1% of global turnover [EU AI Act Article 99(5)]. This tier addresses transparency failures rather than operational non-compliance.

The information provision tier carries strategic significance beyond its lower ceiling. Regulators investigating potential Tier 1 or Tier 2 violations rely on organizational self-reporting and documentation requests. Providing misleading responses during an investigation compounds the original violation with an independent penalty.

Penalty Tier | Maximum Fine                      | Violation Category
Tier 1       | EUR 35M or 7% of global turnover  | Article 5 prohibited AI practices
Tier 2       | EUR 15M or 3% of global turnover  | High-risk AI system non-compliance
Tier 3       | EUR 7.5M or 1% of global turnover | Misleading information to authorities

Create a register listing each AI system, its risk classification (prohibited, high-risk, limited, or minimal), the applicable Article 99 tier, and the calculated maximum penalty based on your organization’s global turnover. Present this register to the board. Penalty exposure quantified in currency converts regulatory abstraction into a budgetary line item.
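The register's penalty column follows directly from Article 99: take the higher of the fixed ceiling and the turnover percentage for each tier. A minimal Python sketch of that calculation (system names are hypothetical, used only for illustration):

```python
# Tier ceilings per Article 99(3)-(5): (fixed amount in EUR, share of turnover).
# The "whichever is higher" rule applies to non-SME operators.
TIERS = {
    "prohibited": (35_000_000, 0.07),       # Article 99(3)
    "high_risk": (15_000_000, 0.03),        # Article 99(4)
    "misleading_info": (7_500_000, 0.01),   # Article 99(5)
}

def max_penalty(tier: str, global_turnover_eur: float) -> float:
    """Maximum exposure: the higher of the fixed amount or the turnover percentage."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# Hypothetical register rows: (system name, applicable tier).
register = [
    ("candidate-screening-tool", "prohibited"),
    ("credit-scoring-model", "high_risk"),
]
turnover = 2_000_000_000  # EUR 2B, as in the opening scenario
for system, tier in register:
    print(f"{system}: EUR {max_penalty(tier, turnover):,.0f}")
```

At EUR 2 billion in turnover, the prohibited-practices row produces the EUR 140 million figure cited above; for an operator with EUR 100 million in turnover, the EUR 35 million floor dominates instead.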

Article 5 Prohibited Practices: The EUR 35 Million Red Lines

Article 5 defines eight categories of AI practices banned outright across the EU [Regulation 2024/1689 Article 5]. These are not high-risk systems requiring compliance frameworks. These are systems regulators consider fundamentally incompatible with EU values. Deployment triggers the maximum penalty.

The Eight Prohibited Categories

  • Subliminal manipulation and deceptive techniques distorting behavior and impairing informed decision-making [EU AI Act Article 5(1)(a)].
  • Exploitation of vulnerable groups based on age, disability, or social/economic situation to distort behavior causing harm [EU AI Act Article 5(1)(b)].
  • Social scoring evaluating individuals based on social behavior where the score leads to detrimental treatment in unrelated contexts [EU AI Act Article 5(1)(c)].
  • Criminal risk profiling assessing likelihood of criminal offenses based solely on profiling or personality traits [EU AI Act Article 5(1)(d)].
  • Untargeted facial recognition scraping from the internet or CCTV to build or expand facial recognition databases [EU AI Act Article 5(1)(e)].
  • Workplace and education emotion recognition inferring emotions of individuals, with exceptions for medical and safety systems [EU AI Act Article 5(1)(f)].
  • Biometric categorization for sensitive attributes using biometric data to infer race, political opinions, religion, sex life, or sexual orientation [EU AI Act Article 5(1)(g)].
  • Real-time remote biometric identification in public spaces for law enforcement, with narrow exceptions requiring prior judicial authorization [EU AI Act Article 5(1)(h)].

Where Organizations Misread the Boundaries

Two prohibited categories create the most organizational exposure. Workplace emotion recognition (f) catches interview analysis tools scoring candidates on facial expressions, voice tone, or behavioral signals. Companies marketing these tools as “AI-powered candidate assessment” deploy prohibited technology. The medical and safety exemption does not extend to HR applications.

Subliminal manipulation (a) catches AI systems deployed in marketing, product design, or user interface optimization when the system materially distorts behavior and causes significant harm. The test is outcome-based: if the AI system impairs informed decision-making, the prohibition applies regardless of the developer’s stated intent.

Audit every AI system touching EU data subjects against all eight Article 5 categories. Pay specific attention to HR technology: interview analysis tools, employee monitoring systems, and workforce analytics platforms frequently overlap with prohibited categories (f) and (a). Document the audit findings in a formal prohibited practices assessment. Retain the assessment as evidence of due diligence. Regulators evaluating penalty severity consider documented compliance efforts as a mitigating factor [EU AI Act Article 99(7)].

High-Risk AI Compliance Obligations and the EUR 15 Million Tier

High-risk AI systems listed under Annex III trigger the second penalty tier: EUR 15 million or 3% of global turnover for non-compliance with operational requirements [EU AI Act Article 99(4)]. The obligations are specific, documented, and auditable.

What Qualifies as High-Risk

Annex III identifies eight domains triggering high-risk classification: biometric identification, critical infrastructure, education access, employment and workforce management, essential services (credit scoring, insurance), law enforcement, migration, and democratic processes [EU AI Act Annex III]. The classification follows the risk-based approach underpinning the regulatory framework.

Seven Compliance Obligations Triggering Penalties

High-risk AI system providers face seven mandatory compliance categories. Failure in any category creates Tier 2 penalty exposure:

  • Risk management system running throughout the AI system lifecycle [EU AI Act Article 9].
  • Data governance meeting quality criteria for training, validation, and testing datasets [EU AI Act Article 10].
  • Technical documentation maintained and updated throughout the system lifecycle [EU AI Act Article 11].
  • Record-keeping with automatic logging enabling traceability [EU AI Act Article 12].
  • Transparency through instructions enabling deployers to interpret system output [EU AI Act Article 13].
  • Human oversight enabling effective supervision by natural persons [EU AI Act Article 14].
  • Accuracy, robustness, and cybersecurity achieving appropriate performance levels [EU AI Act Article 15].

The EUR 15 million tier penalizes structural governance failures, not isolated technical defects. Organizations investing in AI risk management systems, documentation practices, and human oversight frameworks before August 2026 convert potential penalty exposure into operational maturity.

Build a compliance matrix mapping each Annex III high-risk system against all seven obligation categories (Articles 9 through 15). Document the current state: fully compliant, partially implemented, or not started. Prioritize risk management (Article 9) and technical documentation (Article 11) first. GDPR enforcement patterns show documentation gaps triggered the earliest penalties.
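The compliance matrix described above can be kept as simple structured data and queried for gaps. A sketch, assuming a three-state status model and a hypothetical system name; any obligation not explicitly marked compliant counts as a gap:

```python
# The seven obligation categories for high-risk systems (Articles 9-15).
OBLIGATIONS = [
    "art9_risk_management", "art10_data_governance", "art11_documentation",
    "art12_record_keeping", "art13_transparency", "art14_human_oversight",
    "art15_accuracy_robustness",
]
# Status values: "compliant", "partial", "not_started".

def gaps(matrix: dict) -> dict:
    """Per system, list every obligation not yet fully compliant.

    Obligations missing from a system's entry default to not started.
    """
    return {
        system: [ob for ob in OBLIGATIONS
                 if statuses.get(ob, "not_started") != "compliant"]
        for system, statuses in matrix.items()
    }

# Hypothetical current state for one Annex III system.
matrix = {
    "credit-scoring-model": {
        "art9_risk_management": "partial",
        "art11_documentation": "compliant",
    },
}
```

Running `gaps(matrix)` surfaces six open obligations for this system, with the Article 9 and Article 11 priorities called out in the paragraph above already tracked explicitly.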

EU AI Act Enforcement Architecture Across Member States

Enforcement of the EU AI Act operates through national market surveillance authorities designated by each member state, coordinated at the EU level by the European AI Office within the European Commission [EU AI Act Article 70]. Each member state was required to designate national competent authorities by August 2, 2025. The variation in how member states structure enforcement creates a non-uniform regulatory environment.

Finland: The First Mover

Finland became the first EU member state with fully operational AI Act enforcement powers when the President of the Republic approved national supervision legislation on December 22, 2025, with laws taking effect January 1, 2026 [Finnish Ministry of Economic Affairs and Employment, December 2025]. Finland designated a decentralized enforcement model with 10 existing market surveillance authorities, including the Finnish Transport and Communications Agency (Traficom), the Energy Authority, and the Medicines Agency.

The decentralized model carries strategic implications. Sector-specific regulators bring domain expertise to AI enforcement. A medicines regulator evaluating AI diagnostic tools understands clinical risk in ways a generalist technology regulator does not.

Enforcement Variation Patterns

Member states are choosing between centralized models (a single national authority) and decentralized models (sector-specific regulators). GDPR enforcement history reveals how structural choices translate into penalty patterns. Ireland has issued EUR 3.5 billion in GDPR fines since 2018, four times the second-ranked jurisdiction [DLA Piper GDPR Survey January 2025]. Spain has issued 932 individual penalties. AI Act enforcement will similarly concentrate where regulated organizations are headquartered.

The European AI Office

The European AI Office within the European Commission provides coordination, guidance, and direct enforcement authority over general-purpose AI models [EU AI Act Article 64]. The office published guidelines on prohibited practices in February 2025 and manages the EU AI database coordinating cross-border enforcement.

Identify every EU member state where your organization deploys AI systems or processes data from EU subjects. For each jurisdiction, determine the designated market surveillance authority and review published enforcement guidance. The independently maintained tracker at artificialintelligenceact.eu publishes national implementation plans. Allocate regulatory liaison resources based on your primary enforcement jurisdiction.

Strategic Prevention Framework for EU AI Act Penalties

Preventing EU AI Act penalties requires five organizational capabilities deployed before the August 2, 2026 full enforcement deadline. Each capability addresses a specific penalty vector identified in Articles 5, 8-15, and 99 [Regulation 2024/1689].

1. AI System Inventory and Classification

Build a complete inventory of every AI system deployed, developed, or procured by your organization. Classify each system against the EU AI Act risk taxonomy: prohibited (Article 5), high-risk (Annex III), limited risk (Article 50 transparency obligations), or minimal risk. The inventory serves as the foundation for every subsequent compliance activity.

Include AI systems embedded in third-party vendor products. The organization deploying the system carries deployer obligations regardless of whether it developed the technology. Vendor-embedded AI scoring candidates, approving credit applications, or monitoring employee productivity triggers the same classification requirements as internally developed systems.

2. Prohibited Practices Firewall

Establish a pre-deployment review process screening every AI system against all eight Article 5 categories before activation. The review involves legal, compliance, and technical stakeholders. No AI system touches EU data subjects without clearing this screen.

The firewall requires continuous operation, not one-time assessment. Vendor product updates and algorithm modifications alter system behavior. The social scoring feature buried in a vendor changelog, described in the opening scenario, represents the most common exposure vector: incremental feature drift crossing regulatory boundaries without triggering formal review.

3. Risk Management System Architecture

Article 9 requires a risk management system running continuously throughout the AI system lifecycle [EU AI Act Article 9]. The system must identify known and foreseeable risks, evaluate risks from intended use and foreseeable misuse, and adopt mitigation measures. Build this architecture using the NIST AI Risk Management Framework as the structural model, which maps directly to EU AI Act requirements and provides implementation methodology the regulation itself does not specify.

4. Documentation and Evidence Pipeline

Article 11 documentation requirements demand continuous maintenance, not point-in-time snapshots. Build an automated pipeline capturing system design decisions, training data provenance, performance metrics, and modification histories. GDPR enforcement patterns confirm documentation gaps trigger early penalties. Regulators start with paperwork. AI Act enforcement will follow the same sequence.

5. Board-Level Governance Reporting

Present AI risk exposure in financial terms the board understands. Map penalty tiers to organizational turnover. Quantify the number of high-risk systems, the compliance investment required, and the penalty exposure gap between current and target states. Boards approve governance budgets when risk is expressed in currency, not in regulatory abstraction.

Deploy all five capabilities in sequence over 120 days. Days 1-30: complete the AI system inventory. Days 31-60: establish the prohibited practices firewall and screen all inventoried systems. Days 61-90: architect the risk management system for high-risk classifications. Days 91-120: activate the documentation pipeline and deliver the first board-level governance report.

GDPR Penalty Patterns: What Transfers to AI Act Enforcement

Six years of GDPR enforcement data provide the strongest available predictor for how AI Act penalties will materialize. Both regulatory structures share common architecture: EU-wide regulation, member state enforcement discretion, extraterritorial application, and tiered penalties calibrated to global turnover.

Enforcement Scale After Six Years

GDPR regulators issued 2,245 fines totaling approximately EUR 5.88 billion between May 2018 and January 2025 [DLA Piper GDPR Survey January 2025]. The average fine works out to roughly EUR 2.6 million. The largest single penalty: EUR 1.2 billion against Meta in 2023 for unauthorized data transfers.

The trajectory matters more than the total. Year-over-year enforcement expanded as regulators built investigative capacity. Early GDPR enforcement focused on documentation failures before escalating to substantive processing violations. AI Act enforcement will follow a parallel escalation pattern.

Three Patterns Transferring to AI Act Enforcement

Concentration by jurisdiction. Ireland has issued EUR 3.5 billion in GDPR fines, four times the second-ranked authority [DLA Piper GDPR Survey January 2025]. Identify your primary AI Act enforcement jurisdiction early.

Documentation triggers first. Early GDPR enforcement targeted privacy policies and data processing records before addressing algorithmic processing. AI Act regulators will request technical documentation (Article 11) and risk management records (Article 9) before investigating system behavior.

Maximum penalties target the largest operators. Of the 20 largest GDPR fines, 18 targeted companies exceeding EUR 10 billion in annual revenue. Mid-market organizations face lower individual penalty risk but higher aggregate regulatory burden across multiple member states.

The Digital Omnibus Act proposed by the European Commission in November 2025 would extend the deadline for high-risk AI system obligations by up to 16 months (to approximately December 2027), contingent on harmonized standards becoming available [European Commission Digital Omnibus Proposal, November 2025]. The proposal remains under ordinary legislative procedure; organizations should continue preparing for August 2, 2026 while monitoring the process. Prohibited practices enforcement (Tier 1 penalties) remains unaffected by the proposed delay.

Study the GDPR enforcement trajectory for your primary operating jurisdictions. If your EU headquarters sits in Ireland, Luxembourg, France, or Italy, expect aggressive early enforcement. Build the evidence archive now. The investigative request arrives without advance notice, and the production timeline regulators impose does not accommodate retroactive documentation assembly.

Extraterritorial Reach and SME Penalty Provisions

The EU AI Act applies to organizations outside the EU when AI system outputs affect individuals within the EU [EU AI Act Article 2]. A US company deploying an AI hiring tool screening candidates for an EU-based position falls within scope. A GRC engineering team managing compliance across jurisdictions must account for this extraterritorial application.

Extraterritorial Application

Three categories of non-EU organizations trigger AI Act obligations: providers placing AI systems on the EU market, deployers located outside the EU where the AI system output is used within the EU, and importers making AI systems available on the EU market [EU AI Act Article 2(1)].

The “output used within the EU” provision creates the broadest jurisdictional hook. A US-based AI vendor processing data entirely on US infrastructure falls within scope if the output informs a decision affecting an EU individual. This mirrors GDPR’s extraterritorial reach under Article 3(2).

SME and Startup Penalty Provisions

Article 99 includes a proportionality provision for SMEs. For small and medium-sized enterprises, including startups, each penalty is capped at the lower of the fixed amount or the turnover percentage [EU AI Act Article 99(6)]. A startup generating EUR 5 million in revenue faces a maximum prohibited practices penalty of EUR 350,000 (7% of turnover) rather than EUR 35 million.

The proportionality provision reduces the nominal penalty but not the compliance obligation. SMEs deploying high-risk AI systems face identical documentation, risk management, and transparency requirements as enterprise organizations.

Non-EU organizations: determine your AI Act exposure by mapping every AI system whose output touches an EU data subject. Appoint an EU-based authorized representative for any high-risk AI systems placed on the EU market [EU AI Act Article 22]. SMEs: calculate your actual maximum penalty exposure using the lower-of provision. Document the calculation. Use the result to calibrate compliance investment proportionate to both regulatory risk and organizational capacity.

The EU AI Act penalty structure signals regulatory intent: prohibited practices face fines exceeding GDPR maximums, enforcement infrastructure is operational across member states, and six years of GDPR precedent provide the enforcement playbook. Organizations treating August 2, 2026 as a distant deadline face the same strategic miscalculation organizations made with GDPR in 2018. Build the governance architecture now. The cost of compliance is a fraction of the cost of enforcement.

Frequently Asked Questions

What are the EU AI Act penalties for prohibited practices?

Prohibited AI practices under Article 5 trigger penalties of up to EUR 35 million or 7% of global annual turnover, whichever is higher [EU AI Act Article 99(3)]. The eight prohibited categories include social scoring, subliminal manipulation, exploitation of vulnerable groups, untargeted facial recognition scraping, workplace emotion recognition, biometric categorization, criminal risk profiling, and real-time biometric identification in public spaces. Enforcement became active February 2, 2025.

How do EU AI Act fines compare to GDPR penalties?

EU AI Act maximum penalties exceed GDPR maximums. GDPR caps fines at EUR 20 million or 4% of global turnover [GDPR Article 83]. The AI Act’s highest tier reaches EUR 35 million or 7% of global turnover. After six years, GDPR regulators issued approximately EUR 5.88 billion in cumulative fines across 2,245 enforcement actions [DLA Piper GDPR Survey January 2025]. AI Act enforcement is expected to follow a similar escalation trajectory.

When do EU AI Act penalties take effect?

Penalties for prohibited practices (Tier 1) became enforceable February 2, 2025. Penalties for high-risk AI system non-compliance (Tier 2) and information provision failures (Tier 3) activate August 2, 2026 [EU AI Act Article 113]. The European Commission’s Digital Omnibus proposal, if adopted, would extend the high-risk deadline by up to 16 months. Prohibited practices enforcement remains unaffected by the proposed delay.

Do US companies face EU AI Act penalties?

US companies fall within EU AI Act scope when their AI system outputs affect individuals in the EU [EU AI Act Article 2]. A US-based company using AI to screen candidates for EU positions, score EU consumers’ creditworthiness, or process EU medical data triggers compliance obligations. Providers of high-risk AI systems must appoint an authorized representative in the EU before placing systems on the market [EU AI Act Article 22].

How do SME penalty provisions differ from enterprise enforcement?

SMEs and startups receive a proportionality cap: each fine is limited to the lower of the fixed amount or the turnover percentage [EU AI Act Article 99(6)]. A startup with EUR 5 million in revenue faces a maximum prohibited practices penalty of EUR 350,000 (7% of turnover) rather than EUR 35 million. The compliance obligations remain identical regardless of organization size.

Which EU member state enforces the AI Act most aggressively?

Finland became the first member state with fully operational enforcement powers on January 1, 2026, designating 10 sector-specific market surveillance authorities [Finnish Ministry of Economic Affairs and Employment, December 2025]. GDPR patterns suggest enforcement intensity will concentrate in jurisdictions hosting major technology company headquarters.

What factors determine the actual penalty amount?

Article 99(7) lists specific factors: nature, gravity, and duration of the infringement; number of affected persons; damage suffered; intentional or negligent character; cooperation with authorities; actions to mitigate harm; financial benefits gained; and previous violations [EU AI Act Article 99(7)]. Documented compliance efforts serve as a mitigating factor.

What is the Digital Omnibus Act’s impact on AI Act enforcement?

The European Commission proposed the Digital Omnibus in November 2025, deferring high-risk AI obligations by up to 16 months (to December 2027), contingent on harmonized standards becoming available [European Commission Digital Omnibus Proposal, November 2025]. The proposal is undergoing legislative procedure. Prohibited practices enforcement (Tier 1) and AI literacy obligations remain unaffected. Continue compliance preparation while monitoring the process.


Josef Kamara, CPA, CISSP, CISA, Security+

Former KPMG and BDO. Senior manager over third-party risk attestations and IT audits at a top-five global firm, and former technology risk leader directing the IT audit function at a Fortune 500 medical technology company. Advises growth-stage SaaS companies on SOC 2, HIPAA, and AI governance certifications.