Most organizations treating the EU AI Act as a 2026 problem have already made a costly mistake. The high-risk AI requirements, the transparency obligations, the conformity assessments: those timelines run into 2026 and beyond. But Article 5 is not on that schedule. The eight prohibited AI practices became enforceable on February 2, 2025. No grace period. No phased implementation. A hard cutoff.
The enforcement gap is real and it runs in one direction. Regulators across EU member states have authority to investigate prohibited practices right now. The maximum penalty for a prohibited practice violation is 35 million EUR or 7% of worldwide annual turnover, whichever is higher [EU AI Act Art. 99]. That ceiling sits above every other penalty tier in the regulation. Organizations waiting for the 2026 enforcement wave to begin their AI Act compliance work have skipped past the tier with the highest exposure.
Eight practices are banned outright. No exception for research purposes. No carve-out for small deployments. No safe harbor for tools already in production. If your organization uses AI for employee monitoring, hiring, customer profiling, or security screening, at least one of these prohibitions warrants immediate review. The bans are categorical, the penalties are severe, and the window to self-assess before regulators do has been open since February 2025.
EU AI Act Prohibited AI Practices at a Glance
Article 5 of the EU AI Act bans eight categories of AI outright, effective February 2, 2025. Prohibited practices include subliminal manipulation, exploitation of vulnerabilities, social scoring by public or private entities, real-time remote biometric identification in public spaces, predictive policing through individual profiling, untargeted facial recognition scraping, emotion recognition in workplaces and schools, and biometric categorization by protected attributes. Violations carry penalties up to 35 million EUR or 7% of global revenue [EU AI Act Art. 99].
The Eight Absolute Bans Under Article 5
Article 5 does not create a risk management system for these practices. It bans them. The distinction matters for compliance strategy: high-risk AI systems under Article 6 require documentation, human oversight, and conformity assessments. Prohibited systems under Article 5 require removal. No amount of documentation or oversight transforms a prohibited system into a compliant one [EU AI Act Art. 5].
Each prohibition has defined scope. Read the boundaries carefully before concluding your systems fall outside them.
1. Subliminal Manipulation
AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort their behavior in a way that causes or is likely to cause harm are banned [EU AI Act Art. 5(1)(a)]. The key phrase is “beyond a person’s consciousness.” A recommendation algorithm that makes an obvious suggestion is not subliminal. A system architected to influence decisions through stimuli the user cannot consciously detect crosses the line.
The harm requirement adds a limiting principle: the distortion must cause or risk causing harm to that person or another. But regulators read “harm” broadly in consumer protection contexts. Psychological manipulation that leads a person to make decisions against their financial interests, health interests, or personal safety qualifies. A customer-facing AI that uses micro-targeted emotional cues to drive purchases at the margin of detectability is worth scrutinizing.
2. Exploitation of Vulnerabilities
AI systems that exploit the vulnerabilities of specific groups due to age, disability, or social or economic situation to materially distort their behavior are prohibited [EU AI Act Art. 5(1)(b)]. This prohibition targets systems designed to take advantage of conditions that reduce a person’s capacity for autonomous decision-making.
The scope covers AI deployed in elder care facilities that applies pressure tactics, systems targeting children with manipulative engagement patterns, and tools used in social service contexts that exploit economic desperation. Importantly, the system need not be purpose-built for exploitation. A general AI system that an organization deploys in a context where it foreseeably exploits a vulnerable population’s specific conditions falls within the ban.
3. Social Scoring by Public and Private Entities
AI systems that evaluate or classify natural persons based on their social behavior or personal characteristics in a way that leads to detrimental or unfavorable treatment in unrelated social contexts, or treatment that is unjustified or disproportionate to the behavior, are prohibited [EU AI Act Art. 5(1)(c)]. This is the provision most people associate with Chinese-style social credit systems, but the final text does not limit the ban to governments.
The ban reaches public authorities and private entities alike. A private contractor operating a government benefits eligibility system that aggregates behavioral data to produce social rankings falls within scope, and so does a purely commercial scoring scheme that produces the same cross-context detrimental treatment. The analysis is functional, not formal: what the system does, not whether the operator holds a government ID.
4. Predictive Policing Through Individual Profiling
AI systems that assess the risk of a natural person committing a criminal offense based solely on profiling or on assessing personality traits and characteristics are prohibited [EU AI Act Art. 5(1)(d)]. The ban targets purely profile-based prediction. Systems that assess risk based on documented, objective behavior linked to that individual are not automatically prohibited; the focus is on systems that generate criminal risk scores from demographic profiles, personality assessments, or behavioral patterns with no connection to the specific individual’s prior conduct.
Law enforcement agencies and security vendors operating in the EU should review predictive policing tools immediately. Several risk scoring products marketed to police forces that generate individual threat scores from neighborhood demographics, social network mapping, or behavioral modeling without prior individual criminal history fall directly within this prohibition.
5. Untargeted Facial Recognition Scraping
Creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage is banned [EU AI Act Art. 5(1)(e)]. The word “untargeted” does the analytical work here. A system that scrapes all faces visible in public footage to build or augment a biometric database is prohibited. The prohibition applies regardless of whether the resulting database is used for law enforcement, commercial, or research purposes.
This ban has immediate implications for any organization building biometric identity systems, fraud detection tools that incorporate facial matching, or security platforms that train on publicly sourced facial images. The training data provenance question is live: if the model was trained on mass-scraped facial data, examine whether that training process itself constituted a prohibited practice under Article 5(1)(e).
6. Emotion Recognition in Workplaces and Educational Institutions
AI systems that infer emotions of natural persons in workplaces and educational institutions are prohibited, with limited exceptions for medical or safety reasons [EU AI Act Art. 5(1)(f)]. The scope of this ban surprises many HR technology vendors. Systems that analyze facial expressions, vocal patterns, or physiological signals to assess employee mood, engagement, stress levels, or emotional state during work hours fall within the prohibition.
The exceptions are narrow. Medical use cases supervised by health professionals and safety-critical monitoring in industrial settings where emotional impairment creates physical danger qualify. General employee productivity monitoring, interview scoring tools that analyze candidate emotional reactions, and student engagement platforms that use facial analysis to assess learning interest do not. Several widely deployed HR analytics tools are operating in the prohibited zone under this provision.
7. Biometric Categorization by Sensitive Attributes
AI systems that categorize natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation are prohibited [EU AI Act Art. 5(1)(g)]. The emphasis is on inference: a system that processes facial geometry to deduce political affiliation or sexual orientation from physical characteristics is banned.
This prohibition intersects with GDPR’s special category data protections under Article 9. Processing biometric data to infer protected characteristics triggers both frameworks. The EU AI Act adds the categorical ban on top of GDPR’s existing restrictions, meaning the analysis is not just whether processing is lawful under GDPR, but whether the AI system itself falls within a prohibited category under Article 5.
8. Real-Time Remote Biometric Identification in Public Spaces
Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes are prohibited, with defined exceptions [EU AI Act Art. 5(1)(h)]. This is the most qualified prohibition in Article 5. The ban applies to real-time law enforcement use. The exceptions, written into Article 5(1)(h) itself and subject to the conditions of Article 5(2), permit real-time identification for targeted searches for missing persons and victims of specific serious crimes, prevention of specific and imminent threats to life or of terrorist attack, and localization of suspects in serious crimes.
The exceptions require prior authorization from a judicial authority or an independent administrative authority; in duly justified cases of urgency, use may begin without it, but authorization must then be requested within 24 hours [EU AI Act Art. 5(3)]. Commercial operators running real-time facial identification in retail spaces, transit systems, or event venues for non-law-enforcement purposes occupy a different regulatory position: they fall under the high-risk AI classification requirements of Annex III, not the outright ban under Article 5. The ban targets law enforcement real-time identification specifically. Non-law-enforcement real-time biometric identification in public spaces is not categorically prohibited, but it is heavily regulated.
The Compliance Exposure Most Teams Miss
The Article 5 prohibitions apply to AI providers, deployers, and importers within the EU supply chain. A US company that builds an emotion recognition tool and sells it to an EU employer is an AI provider subject to the prohibition. A US company that deploys any of the eight prohibited practices against EU residents, even from US infrastructure, falls within the territorial scope of the regulation [EU AI Act Art. 2]. The geographic boundary is where the AI system’s output affects people, not where the servers sit.
Who the Prohibitions Apply To: Providers, Deployers, and Importers
Article 5 applies across the entire AI value chain. The prohibition is not limited to the organization that built the AI system. A company that deploys a third-party AI tool that falls within a prohibited category is itself in violation, even if the vendor built the underlying system [EU AI Act Art. 5, Art. 3].
Three categories carry direct Article 5 obligations:
- Providers: Organizations that develop AI systems and place them on the EU market or put them into service in the EU, including providers established outside the EU whose systems affect EU residents.
- Deployers: Organizations that use AI systems under their authority. An employer deploying an emotion recognition tool purchased from a US vendor is a deployer subject to the prohibition.
- Importers and distributors: Organizations that bring AI systems from non-EU providers into the EU market carry due diligence obligations that include confirming the system is not prohibited under Article 5.
The practical consequence: third-party AI risk assessments must include an Article 5 screening. Before deploying any AI system in an EU context, confirm the system’s capabilities against each of the eight prohibited categories. Vendor assurances are not sufficient. The deploying organization bears independent responsibility [EU AI Act Art. 26].
For guidance on building a structured review process, the EU AI Act risk management system requirements provide the framework for ongoing AI governance beyond the initial Article 5 screening.
Enforcement Mechanics: How Article 5 Violations Surface
EU member states are responsible for designating national competent authorities to supervise and enforce the AI Act [EU AI Act Art. 70]. The designation deadline was August 2, 2025, six months after the prohibitions took effect, but the prohibitions bind from February 2, 2025 regardless: conduct from that date is within enforcement reach. Enforcement power includes the authority to request documentation, conduct on-site audits, order system suspensions, and impose penalties.
Three enforcement pathways create exposure:
Regulatory investigation. National authorities may open investigations proactively or in response to complaints from affected individuals, civil society organizations, or competitors. The investigation power is broad: authorities can demand access to training data, system documentation, and deployment records.
Individual complaints. Affected persons have rights under the AI Act to file complaints with national supervisory authorities. An employee who believes their workplace deployed prohibited emotion recognition has a direct complaint pathway. An EU resident who suspects real-time biometric identification was used in a retail context can trigger a formal investigation.
Cross-border coordination. The AI Act includes coordination mechanisms among member state authorities, supported by the European AI Board and the AI Office, for violations that span borders. A prohibited practice deployed at scale across multiple EU markets elevates to cross-border enforcement.
The penalty ceiling for prohibited practices is the highest in the regulation: up to 35 million EUR or 7% of total worldwide annual turnover, whichever is higher [EU AI Act Art. 99(3)]. For context, the penalty for high-risk system violations tops out at 15 million EUR or 3% of turnover [EU AI Act Art. 99(4)]. The 7% ceiling exceeds even the 4% maximum for the most serious GDPR violations.
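The “whichever is higher” rule means the percentage prong dominates once worldwide turnover passes roughly 500 million EUR (35M / 0.07). A minimal sketch of the Article 99(3) arithmetic, with illustrative figures only:

```python
# Minimal sketch of the Art. 99(3) penalty ceiling: the higher of
# EUR 35 million or 7% of total worldwide annual turnover.
def prohibited_practice_ceiling(worldwide_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * worldwide_turnover_eur)

# Illustrative only: EUR 2B turnover yields a EUR 140M ceiling,
# four times the fixed amount.
print(prohibited_practice_ceiling(2_000_000_000))  # 140000000.0
```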
Understanding where your systems sit in the overall compliance picture starts with the EU AI Act compliance timeline. Article 5 was the first milestone. The remaining obligations layer in through August 2026 and, for high-risk systems embedded in regulated products, August 2027.
Article 5 Self-Assessment: A Systematic Review Process
Article 5 compliance requires a definitive answer for each prohibition, not a risk score. These are binary determinations: either the system falls within a prohibited category or it does not. The assessment process follows three steps.
Step 1: Build a complete AI system inventory. Before assessing against Article 5, identify every AI system your organization uses, builds, or deploys. A system inventory that covers only internally developed tools misses the majority of exposure. Include SaaS tools with AI features, embedded AI in HR platforms, customer service chatbots, fraud detection engines, and any AI-enabled monitoring capability. The AI system inventory methodology provides the operational framework for this step.
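As a minimal sketch of what one inventory record might capture, using hypothetical field names rather than anything defined in the Act, each system gets a single entry recording vendor, EU exposure, and capabilities:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (hypothetical schema)."""
    name: str
    vendor: str                 # "internal" for in-house builds
    description: str
    output_used_in_eu: bool     # drives territorial scope under Art. 2
    capabilities: list[str] = field(default_factory=list)

# Example entry: an embedded SaaS feature a build-only inventory would miss.
inventory = [
    AISystemRecord(
        name="candidate-video-screening",
        vendor="ExampleVendor",  # hypothetical third-party tool
        description="Scores recorded interviews, including facial analysis",
        output_used_in_eu=True,
        capabilities=["emotion_recognition", "facial_analysis"],
    ),
]
```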
Step 2: Screen each system against the eight prohibitions. For each system, answer eight questions:
Article 5 Prohibition Screening Checklist
- Does this system use techniques designed to influence behavior below the threshold of conscious awareness? If yes, does the design produce harmful outcomes?
- Does this system target individuals based on age, disability, or social or economic vulnerability to distort their decision-making?
- Does this system aggregate behavioral or personal characteristic data to produce social rankings or scores used by a public authority or entity performing public functions?
- Does this system generate individual criminal risk scores based on profiling or personality assessment, without reference to that individual’s prior documented behavior?
- Does this system scrape facial images from public sources or CCTV footage without targeting specific individuals, for the purpose of building or expanding a biometric database?
- Does this system infer emotional states of workers or students through facial, vocal, or physiological analysis in a workplace or educational setting?
- Does this system use biometric data to categorize individuals by race, political opinion, union membership, religion, sex life, or sexual orientation?
- Does this system perform real-time identification of individuals in public spaces for law enforcement purposes outside the exceptions defined in Article 5(1)(h)?
A “yes” answer to any question requires immediate escalation to legal counsel and consideration of system suspension. A “possibly” answer requires the same escalation. Document all determinations with a rationale and a date.
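A minimal sketch of how this screening pass can be made mechanical, reusing the hypothetical AISystemRecord from Step 1; the answers still come from human review, and “possibly” escalates exactly like “yes”:

```python
# Abbreviated, hypothetical mapping of the checklist above to Art. 5(1) points.
PROHIBITIONS = {
    "5(1)(a)": "subliminal manipulation causing harm",
    "5(1)(b)": "exploitation of vulnerabilities",
    "5(1)(c)": "social scoring",
    "5(1)(d)": "profiling-only criminal risk scoring",
    "5(1)(e)": "untargeted facial image scraping",
    "5(1)(f)": "emotion inference at work or school",
    "5(1)(g)": "biometric categorization by protected attributes",
    "5(1)(h)": "real-time remote biometric ID for law enforcement",
}

def screen(system: AISystemRecord, answers: dict[str, str]) -> list[str]:
    """Return the prohibitions requiring escalation ("yes" or "possibly")."""
    flagged = [article for article, answer in answers.items()
               if answer.strip().lower() in ("yes", "possibly")]
    if flagged:
        # Escalation, not automatic shutdown: counsel decides on suspension.
        print(f"ESCALATE {system.name}: "
              + ", ".join(PROHIBITIONS[a] for a in flagged))
    return flagged
```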
Step 3: Document the analysis and remediation decisions. Regulators assessing Article 5 compliance will look for evidence of a systematic review. An undocumented oral determination that a system does not violate Article 5 provides no protection. The documentation package should include the inventory record for the system, the prohibition screening worksheet, the legal or compliance analysis supporting the determination, and any remediation actions taken.
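A sketch of the determination record implied by that package, again with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article5Determination:
    """Documented outcome of one system's screening (hypothetical schema)."""
    system_name: str
    screening_answers: dict[str, str]  # article -> "yes" / "no" / "possibly"
    rationale: str                     # summary of the legal/compliance analysis
    reviewed_by: str
    review_date: date
    remediation: str | None = None     # e.g. "suspended pending counsel review"
```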
For organizations subject to broader EU AI Act requirements, the human oversight requirements apply to high-risk AI systems and provide a parallel compliance framework for systems that clear the Article 5 threshold but fall into Annex III categories.
US Organizations Operating in the EU: Where the Extraterritorial Reach Applies
Article 2 of the EU AI Act establishes territorial scope through output, not origin. The regulation applies when the output produced by an AI system is used in the EU, regardless of where the provider or deployer is established [EU AI Act Art. 2(1)(c)]. A US company running an employee monitoring platform for an EU workforce is a deployer subject to Article 5. A US vendor selling a candidate scoring tool to EU employers is a provider subject to Article 5.
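As a deliberately simplified sketch of that scope test (real scope analysis has more prongs, including placing a system on the EU market or putting it into service):

```python
# Simplified Art. 2 scope check as described above: location of effect,
# not location of infrastructure, is what matters.
def in_eu_ai_act_scope(output_used_in_eu: bool,
                       provider_established_in_eu: bool,
                       deployer_established_in_eu: bool) -> bool:
    return (output_used_in_eu
            or provider_established_in_eu
            or deployer_established_in_eu)

# US provider, US servers, EU workforce affected: in scope.
print(in_eu_ai_act_scope(True, False, False))  # True
```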
Four operational scenarios create direct US company exposure:
HR technology deployed against EU employees. Emotion recognition tools, productivity monitoring platforms that analyze behavioral signals, and candidate screening tools that apply personality profiling all require Article 5 screening before EU deployment. The employment relationship does not create an exception to the prohibition on emotion recognition or profiling-based prediction.
Customer-facing AI in EU markets. Recommendation engines designed to detect and exploit emotional states, personalization systems that use behavioral profiling to target vulnerable users, and any system that applies subliminal persuasion techniques in a consumer context require assessment against Article 5(1)(a) and (b).
Security and access control systems. Facial recognition deployments in EU facilities, buildings, or campuses operated by US companies fall within the biometric identification provisions. The real-time ban applies to law enforcement contexts; commercial facility access control sits under the high-risk classification regime. But biometric categorization by sensitive attributes under Article 5(1)(g) applies regardless of context.
AI training data sourced from EU residents. The prohibition on untargeted facial recognition scraping applies to training data acquisition. If a US company scraped facial images from EU social media platforms or EU CCTV feeds to train a biometric model, that training process itself may constitute a prohibited practice under Article 5(1)(e). The violation is in the data collection, not just in the deployment.
Understanding what qualifies as high-risk AI for purposes of EU market access starts with AI governance fundamentals. Article 5 is the floor. Everything above it requires structured governance rather than outright prohibition.
Josef’s Assessment
Article 5 is the one part of the EU AI Act where compliance is binary. Every other provision involves documentation, risk management, and proportionate response. These eight prohibitions involve a single question: may the system operate at all? Organizations that have not completed a systematic Article 5 screening by now are operating with unknown exposure at the highest penalty tier in the regulation. Run the checklist. Document the analysis. Where the answer is uncertain, treat it as a yes until legal counsel confirms otherwise. A suspended system costs time. A 35 million EUR penalty costs more.
Frequently Asked Questions
When did EU AI Act Article 5 prohibitions take effect?
February 2, 2025. Article 5 was the first substantive enforcement milestone under the EU AI Act, which entered into force on August 1, 2024. The six-month implementation period for prohibited practices ended February 2, 2025 [EU AI Act Art. 113]. Member states had until August 2, 2025 to designate national supervisory authorities, but the prohibitions bind from the February date.
Does Article 5 apply to US companies with no EU presence?
Yes, when the AI system’s output affects EU residents. Article 2 applies the regulation to providers and deployers established outside the EU whose AI systems are placed on the EU market or whose outputs affect persons located in the EU [EU AI Act Art. 2(1)(c)]. A US company that deploys a prohibited AI system against EU employees, EU customers, or EU residents falls within scope regardless of where its servers are located.
Is all facial recognition banned under Article 5?
No. Article 5 bans two specific facial recognition practices: untargeted scraping to build facial recognition databases, and real-time remote biometric identification in public spaces for law enforcement purposes outside the defined exceptions. Commercial facial recognition for access control, identity verification, or fraud detection in non-public-space contexts is not categorically banned. Those applications fall under the high-risk AI classification framework and require conformity assessments, not prohibition compliance.
Does the emotion recognition ban apply to video interview platforms?
Video interview platforms that analyze candidate facial expressions, vocal tone, or emotional reactions to assess candidate suitability operate in the prohibited zone under Article 5(1)(f). The ban covers emotion recognition in workplaces, and a hiring interview is a workplace context. The exceptions for medical use or genuine safety monitoring in high-risk environments do not apply to candidate assessment. Several commercial video interview platforms are currently under review by EU data protection and AI authorities for this reason.
What is the difference between prohibited AI and high-risk AI under the EU AI Act?
Prohibited AI under Article 5 cannot be operated in any form. High-risk AI under Article 6 and Annex III can be operated if the provider and deployer meet documentation, risk management, human oversight, and conformity assessment requirements. The practical test: an Article 5 determination results in a shutdown decision. An Annex III determination results in a compliance program. The two categories are mutually exclusive for a given system.
Can an organization get a waiver or exemption from the Article 5 prohibitions?
No. Article 5 does not include a waiver mechanism, a research exemption, or an authorized testing exception for prohibited practices. The real-time biometric identification provision includes defined exceptions for specific law enforcement purposes [EU AI Act Art. 5(1)(h), 5(2)], but those are structural carve-outs written into the prohibition itself, not post-hoc waivers. For all other Article 5 prohibitions, the ban is absolute.
How does Article 5 interact with GDPR biometric data protections?
GDPR Article 9 restricts processing of biometric data as a special category, requiring explicit consent or a specific legal basis exception. EU AI Act Article 5 adds categorical prohibitions on top of GDPR restrictions for specific AI applications involving biometric data. A system that processes biometric data in compliance with GDPR’s special category requirements might still violate Article 5 if it falls within a prohibited practice category. Both frameworks apply independently. Compliance with one does not confirm compliance with the other.
Get The Authority Brief
Weekly compliance intelligence for security leaders. Frameworks decoded. Audit strategies explained. Regulatory updates analyzed.