A compliance officer at a mid-size SaaS company opens the EU AI Office’s notification portal in September 2025. The company integrated GPT-4 into its customer support platform six months ago. The portal asks a question she cannot answer: is her organization a GPAI provider, a deployer, or both? The answer determines whether she faces four new regulatory obligations, fifteen million euros in potential fines, or a quiet exemption she did not know existed.
That question is not academic. Twenty-six major AI providers had signed the GPAI Code of Practice by August 2025 [EU AI Office, August 2025]. Meta refused outright. xAI signed only a single chapter. Compliance obligations for general-purpose AI models are fragmenting in real time, and most organizations have not yet determined where they fall.
The EU AI Act created a distinct regulatory category for general-purpose AI models. The obligations differ from high-risk AI system requirements, the timelines differ from the broader AI Act rollout, and the enforcement mechanisms are already active. Four obligations apply to every GPAI provider. Five additional obligations apply to models with systemic risk. The open-source exemption covers some requirements but not others.
EU AI Act GPAI provider obligations require all general-purpose AI model providers to maintain technical documentation, supply downstream information, establish a copyright compliance policy, and publish a training data summary. Models exceeding 10^25 FLOPs face five additional systemic risk obligations. Full enforcement begins August 2, 2026 [EU AI Act, Articles 53, 55, 101].
What Makes a Model “General-Purpose” Under the EU AI Act?
Classification drives every downstream compliance decision. A GPAI model is any AI model trained on broad data at scale that can serve a wide range of tasks, no matter how it reaches the market [EU AI Act, Article 3(63)]. Foundation models, large language models, multimodal systems: all qualify. Deployment context, including high-risk classification, is irrelevant. The model itself triggers the obligations, not its downstream application.
Organizations that integrate third-party models into their products may become GPAI providers themselves. Downstream entities cross that line only if modification compute exceeds one-third of the original model’s training compute [EU AI Act, Article 53]. Below that threshold, the original provider retains its obligations.
Consider what that means in practice. A company fine-tuning GPT-4 for its industry vertical does not become a GPAI provider unless that fine-tuning consumed more than one-third of GPT-4’s original training compute. For models trained on billions of dollars of compute, that threshold is effectively unreachable for most enterprises. Your provider’s problem. Not yours.
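The one-third test above can be sketched in a few lines. This is an illustration, not legal advice, and the compute figures are hypothetical assumptions rather than published training numbers.

```python
# Hedged sketch of the one-third modification-compute test described above.
# All FLOP figures are illustrative assumptions, not published numbers.

def becomes_gpai_provider(original_training_flops: float,
                          modification_flops: float) -> bool:
    """A downstream modifier becomes a GPAI provider only if its
    modification compute exceeds one-third of the original model's
    training compute."""
    return modification_flops > original_training_flops / 3

# Illustrative: a frontier model trained with ~2e25 FLOPs, fine-tuned
# with 5e21 FLOPs of enterprise compute -- nowhere near the threshold.
print(becomes_gpai_provider(2e25, 5e21))  # False
```

For a model trained with 2e25 FLOPs, the threshold sits at roughly 6.7e24 FLOPs of modification compute, several orders of magnitude beyond a typical enterprise fine-tuning run.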
Audit Fix
Map every AI model in your technology stack to its provider. For each model, determine: (1) who is the GPAI provider under the Act, (2) whether your organization’s modifications exceed the one-third fine-tuning threshold, and (3) whether the model qualifies for the open-source exemption. Document this classification in your AI system inventory. The classification drives every subsequent compliance decision.
What Are the Four Core GPAI Provider Obligations?
Article 53 imposes four obligations on every GPAI provider. Model size, capability, and deployment context do not affect these requirements [EU AI Act, Article 53]. No exemption exists except the partial open-source carve-out discussed below.
Obligation 1: Technical documentation. Ten years of retention. That is how long providers must maintain documentation describing the model’s training process, testing methodology, and evaluation results. The European Commission published formal guidelines on July 18, 2025, specifying what “sufficient” documentation includes: training data sources, preprocessing decisions, hyperparameter choices, evaluation benchmarks, known limitations, and mitigation measures [European Commission, Guidelines for Providers of GPAI Models, July 2025].
Obligation 2: Downstream provider information. When a deployer integrates your model into an AI system, they inherit their own AI Act obligations. They cannot meet those obligations without your help. Under the Code of Practice, providers must respond to downstream information requests within 14 days [EU AI Office, GPAI Code of Practice, July 2025]. The information must cover model capabilities, limitations, intended and foreseeable use cases, and integration guidance sufficient for deployers to conduct their own risk assessments.
Obligation 3: Copyright compliance policy. This is the obligation that split the industry. Providers must honor text and data mining opt-outs under Article 4(3) of the Copyright Directive (EU 2019/790), meaning they cannot train on content where the rights holder has posted machine-readable opt-out instructions (such as robots.txt for web scraping). xAI signed only the Safety and Security chapter of the Code of Practice, refusing the copyright section entirely, calling it “profoundly detrimental to innovation” [TTMS EU AI Act Update, 2025].
Obligation 4: Training data summary. The EU AI Office published a mandatory template on July 24, 2025, and every provider must use it [European Commission, GPAI Training Data Summary Template, July 2025]. Summaries must be updated every six months if additional training occurs post-market, or sooner for material changes. Data sources, data types, collection methodology, and filtering or curation decisions all require disclosure. “Sufficiently detailed” is the standard, and the template defines what that means.
Audit Fix
Create a GPAI compliance checklist with four deliverables: (1) technical documentation package per Commission guidelines, (2) downstream provider information response process with 14-day SLA, (3) copyright compliance policy with text and data mining opt-out verification process, (4) training data summary using the mandatory template. Assign ownership for each deliverable. The template was published July 24, 2025, and is available on the EU AI Office website.
How Does the Open-Source Exemption Work for GPAI Models?
Article 53(2) provides a partial exemption for open-source GPAI models, but the boundaries are narrower than most organizations assume [EU AI Act, Article 53(2)]. The exemption applies only to models released under free and open-source licenses with publicly available parameters, weights, model architecture, and usage information.
Models meeting those criteria are exempt from two of the four obligations: technical documentation (Obligation 1) and downstream provider information (Obligation 2). They must still comply with copyright policy (Obligation 3) and publish a training data summary (Obligation 4).
The exemption disappears entirely for models classified as systemic risk under Article 51. A model like Meta’s Llama, even if released under an open-source license, loses the exemption once it exceeds the 10^25 FLOP threshold or is designated as systemically risky by the Commission based on its capabilities. Meta’s refusal to sign the Code of Practice compounds this exposure [EU AI Office ecosystem investigation, January 2026].
For enterprise compliance teams evaluating open-source model adoption, the decision tree is specific. First: does the model meet the open-source definition (publicly available weights, architecture, and parameters)? Second: does the model exceed 10^25 FLOPs or present systemic risk? If yes to the first and no to the second, the partial exemption applies. If the model crosses the systemic risk threshold, treat it identically to any proprietary GPAI model.
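The two-question decision tree above can be encoded directly. A minimal sketch, assuming the criteria as described in the text; the dataclass fields and the 1e25 constant mirror the article, and the result is an aid for inventory triage, not a legal determination.

```python
# Sketch of the Article 53(2) exemption decision tree described above.
# Field names and the threshold constant follow the article's summary;
# this is an illustration, not legal advice.

from dataclasses import dataclass

SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 presumption threshold

@dataclass
class Model:
    open_weights: bool           # weights, architecture, parameters public
    free_open_license: bool      # license meets the Act's open-source definition
    training_flops: float
    designated_systemic: bool = False  # Commission designation below threshold

def partial_exemption_applies(m: Model) -> bool:
    """True if the partial open-source exemption (Obligations 1 and 2) applies."""
    is_open = m.open_weights and m.free_open_license
    systemic = m.training_flops >= SYSTEMIC_RISK_FLOPS or m.designated_systemic
    return is_open and not systemic
```

Note that even when this returns True, Obligations 3 and 4 (copyright policy and training data summary) still apply; the exemption is partial by design.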
Audit Fix
For each open-source model in your AI inventory, verify three conditions: (1) the license meets the AI Act’s open-source definition, (2) parameters, weights, and architecture are publicly available, (3) the model does not exceed 10^25 FLOPs or carry a systemic risk designation. Document this analysis. The exemption is partial (copyright and training summary obligations remain) and conditional (it revokes for systemic risk models).
What Additional Obligations Apply to Systemic Risk GPAI Models?
Models trained using 10^25 or more floating point operations trigger a rebuttable presumption of systemic risk under Article 51 [EU AI Act, Article 51]. GPT-4, Gemini 1.5 Pro, Grok 3, Claude 3.7 Sonnet, Claude 4 Opus, OpenAI o3, and Gemini 2.5 Pro all exceed this threshold [CSET Georgetown, 2025]. The Commission can also designate models below the threshold based on user numbers, scalability, tool access, or equivalent market impact.
Article 55 adds five obligations beyond the four core requirements [EU AI Act, Article 55]:
Model evaluations. Providers must conduct evaluations using standardized protocols, including red teaming, benchmark testing, and human uplift studies. The evaluations must assess the model’s potential for misuse, generation of harmful content, and capability for autonomous action beyond intended parameters.
Systemic risk assessment and mitigation. Providers must identify, assess, and mitigate systemic risks throughout the model’s lifecycle, following the AI Act’s risk management framework. This goes beyond initial deployment to cover model updates, capability extensions, and downstream integration patterns that could amplify risk.
Serious incident reporting. Providers must track and report serious incidents to the AI Office on compressed timelines: within 2 days for widespread infringements, 10 days if a death may have been caused, 15 days in all other cases [EU AI Act, Article 55].
Cybersecurity protections. Adequate safeguards must cover the model and its infrastructure. This includes model weight security, API access controls, inference infrastructure hardening, and supply chain security for model artifacts.
Safety and Security Framework. Providers must establish this framework within 4 weeks of receiving systemic risk notification from the Commission. The framework must document governance structures, risk management processes, incident response procedures, and ongoing monitoring commitments.
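The tiered reporting deadlines in the incident-reporting obligation above lend themselves to a simple lookup. A hedged sketch: the category labels are illustrative shorthand for the three cases named in the text, not terms from the Act.

```python
# Sketch mapping the serious-incident categories described above to their
# reporting deadlines. Category labels are illustrative shorthand.

from datetime import date, timedelta

REPORTING_DEADLINE_DAYS = {
    "widespread_infringement": 2,   # within 2 days
    "possible_death": 10,           # within 10 days
    "other_serious_incident": 15,   # within 15 days
}

def reporting_deadline(incident_date: date, category: str) -> date:
    """Latest date for notifying the AI Office under the timelines above."""
    return incident_date + timedelta(days=REPORTING_DEADLINE_DAYS[category])
```

A compliance workflow could use this to stamp a hard deadline on each incident ticket the moment it is triaged.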
Audit Fix
If your organization develops or fine-tunes models approaching the 10^25 FLOP threshold, establish a monitoring process for compute accumulation. The two-week notification window after reaching the threshold leaves minimal time for compliance preparation. Build the Safety and Security Framework template before you need it. Providers must notify the Commission within 2 weeks of meeting or reasonably foreseeing the threshold [EU AI Act, Article 51].
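The monitoring process recommended above can be sketched as a simple trigger check: notify once cumulative compute reaches the threshold, or once a planned run makes reaching it reasonably foreseeable. The function and return labels are hypothetical; only the 1e25 threshold and the notification duty come from the text.

```python
# Sketch of a compute-accumulation monitor for the Article 51 notification
# duty described above. Function name and return labels are illustrative.

THRESHOLD_FLOPS = 1e25  # Article 51 systemic-risk presumption

def check_notification_trigger(cumulative_flops: float,
                               planned_remaining_flops: float) -> str:
    """Flag when the threshold is met or reasonably foreseeable."""
    if cumulative_flops >= THRESHOLD_FLOPS:
        return "notify_now"          # threshold met: 2-week clock is running
    if cumulative_flops + planned_remaining_flops >= THRESHOLD_FLOPS:
        return "notify_now"          # projected to cross: "reasonably foreseeing"
    return "continue_monitoring"
```

Running this check at every training checkpoint, rather than at project milestones, is what keeps the two-week window from arriving as a surprise.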
How Does the GPAI Code of Practice Affect Compliance?
The Code of Practice, published July 10, 2025, provides a voluntary compliance pathway that 26 major providers adopted by August 2025 [EU AI Office, August 2025]. Signing is not legally required, but the Commission formally endorsed the Code as “an adequate tool for demonstrating compliance” with GPAI obligations. Non-signatories face a different regulatory posture.
The practical difference: Code of Practice signatories receive a supervised compliance ramp-up during the first year (August 2025 to August 2026). The AI Office informally acknowledged that signatories will not face penalties for incomplete implementation during this period. Non-signatories face “a larger number of requests for information and requests for access,” which translates to more frequent regulatory engagement and faster escalation paths.
Meta’s decision to refuse the Code illustrates the stakes. In January 2026, the EU AI Office launched a formal investigation into Meta’s WhatsApp Business APIs [EU AI Office ecosystem investigation, January 2026]. While the investigation touches broader AI Act obligations, Meta’s Code of Practice refusal removed its primary shield against aggressive enforcement during the grace period.
Only 35.7% of managers feel adequately prepared for AI Act compliance, and only 26.2% have started concrete compliance activities [Deloitte AI Act Readiness Survey, 2025]. The Code of Practice provides a structured compliance framework for organizations that want to demonstrate good faith before full enforcement begins in August 2026.
Audit Fix
Review the Code of Practice text at code-of-practice.ai and assess alignment with your current AI governance program. For downstream deployers (not model providers), the Code’s transparency and information-sharing provisions define what you should demand from your GPAI providers. Build these requirements into vendor contracts and procurement checklists before full enforcement begins August 2, 2026.
The twelve months between August 2025 and August 2026 will separate organizations that built GPAI governance proactively from those that built it under regulatory pressure. Enforcement actions against X and Meta in January 2026 confirm that the EU AI Office treats the grace period as a supervision window, not a free pass [European Commission enforcement order, January 2026]. Classify your GPAI exposure now. Map your four obligations. Determine whether you are a provider, a deployer, or both. The regulatory environment will not wait for your compliance program to catch up.
Frequently Asked Questions
What are GPAI provider obligations under the EU AI Act?
Article 53 requires all GPAI providers to maintain technical documentation, supply information to downstream providers within 14 days, establish a copyright compliance policy respecting text and data mining opt-outs, and publish a training data summary using the EU AI Office’s mandatory template [EU AI Act, Article 53]. These four obligations apply regardless of model size. Models exceeding 10^25 FLOPs face five additional obligations under Article 55, including red teaming, incident reporting, and establishing a Safety and Security Framework.
When did GPAI obligations take effect?
GPAI obligations became enforceable on August 2, 2025, for models placed on the market after that date. Models already on the market before August 2, 2025, have until August 2, 2027, to comply with the full set of requirements. Full enforcement with financial penalties begins August 2, 2026, when the EU AI Office gains authority to impose fines up to 15 million EUR or 3% of global annual turnover [EU AI Act, Article 101].
How is a GPAI model classified as having systemic risk?
Under Article 51, any model trained using 10^25 or more floating point operations is presumed to have high-impact capabilities that create systemic risk [EU AI Act, Article 51]. The Commission can also designate models below the FLOP threshold based on user numbers, scalability, tool access, or equivalent market impact. Providers may rebut the presumption by submitting evidence that their model does not present systemic risk despite exceeding the compute threshold.
Are open-source GPAI models exempt from EU AI Act obligations?
Open-source models with publicly available weights, architecture, and parameters are exempt from two of four obligations: technical documentation and downstream provider information [EU AI Act, Article 53(2)]. They must still comply with copyright policy and training data summary requirements. The exemption does not apply to models classified as systemic risk, regardless of their license type.
What is the GPAI Code of Practice and is it mandatory?
The Code of Practice, published July 10, 2025, is a voluntary compliance framework covering transparency, copyright, and safety obligations [EU AI Office, GPAI Code of Practice, July 2025]. Twenty-six providers signed by August 2025. While not legally binding, the Commission endorsed it as an adequate tool for demonstrating compliance, and signatories receive a supervised grace period through August 2026.
What do downstream deployers need from GPAI providers?
Deployers integrating GPAI models into AI systems need technical documentation on model capabilities and limitations, integration guidance, and sufficient information to meet their own AI Act deployer obligations [EU AI Act, Article 53]. The Code of Practice establishes a 14-day response window for provider information requests. Procurement teams should build these requirements into vendor contracts before full enforcement begins.
What are the fines for GPAI non-compliance?
GPAI-specific fines under Article 101 reach 15 million EUR or 3% of global annual turnover, whichever is higher [EU AI Act, Article 101]. Broader AI Act violations under Article 99 allow fines up to 35 million EUR or 7% of global turnover. Signing the Code of Practice is a mitigating factor during enforcement but does not provide immunity from penalties.
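The "whichever is higher" structure of both fine ceilings is easy to misread as a cap at the fixed amount. A short sketch under the figures cited above; the turnover inputs are hypothetical.

```python
# Illustration of the "whichever is higher" fine ceilings cited above.
# Turnover figures in the examples are hypothetical.

def gpai_fine_ceiling(global_turnover_eur: float) -> float:
    """Article 101 ceiling: 15 million EUR or 3% of global turnover,
    whichever is higher."""
    return max(15_000_000, 0.03 * global_turnover_eur)

def broader_fine_ceiling(global_turnover_eur: float) -> float:
    """Article 99 ceiling: 35 million EUR or 7% of global turnover,
    whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# For a firm with 2 bn EUR turnover, 3% (60 M) exceeds the 15 M floor,
# so the percentage governs; for a 100 M EUR firm, the 15 M floor governs.
```

In other words, for any firm with global turnover above 500 million EUR, the percentage term, not the fixed amount, sets the GPAI exposure ceiling.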
Get The Authority Brief
Weekly compliance intelligence for security leaders. Frameworks decoded. Audit strategies explained. Regulatory updates analyzed.