Overview
As of August 2, 2025, the obligations for Providers of General-Purpose AI (GPAI) models under the EU AI Act (Regulation (EU) 2024/1689) have entered into application. Below, we detail which organizations are concerned, the obligations they must comply with, the documents published by the AI Office to guide compliance, the risks associated with noncompliance, and the next steps.
1. Who Is Concerned?
Chapter V of the EU AI Act is addressed to Providers of GPAI models.
The EU AI Act defines a GPAI model as one that demonstrates significant generality, is capable of performing a wide range of distinct tasks, and can be integrated into various downstream systems. Some GPAI models meeting these criteria are nevertheless excluded from the scope of the AI Act. For further information, please consult our related EU AI Act Decoded issue here.
Pursuant to the EU AI Act, a Provider of a GPAI model is the entity that develops the GPAI model, or that has it developed, and places it on the market or puts it into service in the EU under its own name or trademark, irrespective of whether that Provider is established or located within the Union or in a third country. For further information, please consult our related EU AI Act Decoded issue here.
2. Which Obligations Do Concerned Organizations Need to Comply With?
Pursuant to Chapter V of the EU AI Act, the obligations applicable to Providers of GPAI models are primarily centered on transparency and risk mitigation, and notably include:
- Transparency requirements: Providers must document and provide information about the model, including technical documentation and information on its training data.
- Public summary of training content: A "sufficiently detailed public summary of the content used for the training of the model" must be created and made public using the template published by the AI Office (see section 3 below).
- Copyright compliance: Providers must have a policy in place to respect EU copyright laws, particularly regarding the data used for training.
Some GPAI models presenting high-impact capabilities qualify as "GPAI models with systemic risk" and are subject to additional obligations. These additional obligations include notifying the AI Office when a GPAI model meets the criteria for this classification, conducting model evaluations, assessing and mitigating systemic risks, and reporting serious incidents to the AI Office.
For a more detailed overview of these obligations and the steps that need to be undertaken to comply with them, please consult our dedicated EU AI Act Decoded issue available here.
3. Has the AI Office Published Any Guidelines, Templates, and/or Tools to Facilitate Demonstration of Compliance?
A few days before the deadline, the AI Office published a number of documents to guide organizations in their compliance efforts.
- Guidelines on the scope of obligations for Providers of GPAI models under the EU AI Act
These guidelines cover four key topics:
- Notion of "GPAI model": Given the wide variety of capabilities and use cases for GPAI models, the AI Office considers that it is not feasible to provide a precise list of capabilities that a model must display and tasks that it must be able to perform in order to qualify as a GPAI model within the meaning of the EU AI Act. Instead, the AI Office suggests using the amount of computational resources used to train the model, measured in floating-point operations (FLOP), together with the model's modalities. Specifically, the AI Office considers that a model with training compute exceeding 10²³ FLOP that is capable of generating language (whether in the form of text or audio), text-to-image, or text-to-video should be considered a GPAI model (these numerical criteria are illustrated in the sketch at the end of this guidelines overview).
- Notion of "GPAI model with Systemic Risk" and related notification obligation: A GPAI model is classified as having systemic risk if it meets one of two conditions:
  - It has high-impact capabilities, meaning capabilities that match or exceed those of the most advanced models. The EU AI Act presumes this to be the case if the model was trained using more than 10²⁵ FLOP of cumulative computation; or
  - The European Commission designates it as such, either on its own initiative or following a qualified alert from the scientific panel, based on the criteria in Annex XIII of the EU AI Act.
Providers of GPAI models are required to notify the Commission, without delay and in any event within two weeks, if their model meets, or is expected to meet, the threshold for high-impact capabilities. The AI Office stresses that this notification may be required even before the model's training is complete, as Providers are expected to plan and allocate compute resources in advance. Providers should therefore estimate the cumulative training compute before starting a large training run and notify the Commission if the estimated value meets the threshold.
The Provider can, however, contest this classification by presenting substantiated arguments demonstrating that its model does not pose systemic risks even though it meets the computational threshold. The Commission will then assess these arguments and take a final decision. If the Commission rejects the arguments, the Provider remains subject to the obligations applicable to GPAI models with systemic risk.
- Notion of "Provider": The guidelines clarify when an entity is considered a Provider placing a GPAI model on the EU market. According to the AI Office, a model is "placed on the market" when it is first made available in the EU, for example via a software library, an API, or direct download. The guidelines also address downstream actors who modify existing GPAI models. The AI Office clarifies that a downstream modifier becomes a new Provider of the modified GPAI model if the modification results in a significant change to the model's generality, capabilities, or systemic risk. An indicative criterion is whether the training compute used for the modification exceeds one third of the original model's training compute. If the original model had systemic risk and the modification meets this threshold, the modified model is also presumed to have systemic risk, and the new Provider must comply with the corresponding obligations, including notifying the Commission.
- Exemptions from certain obligations for certain models released as open-source: The AI Office clarifies that, to qualify for the exemptions, the open-source model must meet the following conditions:
  - Open-Source License: The license must explicitly permit the free access, use, modification, and distribution of the model without payment or other restrictive conditions. This means that anyone should be able to obtain the model, freely alter it, and distribute it or modified versions of it. Further, the Provider cannot use intellectual property rights to restrict or charge for the model's use;
  - No Monetization: No monetary compensation should be required in exchange for access, use, modification, and distribution of the AI model. Monetization should be understood as encompassing not only the provision of the model against a price but also other types of monetization strategies (e.g., a dual-licensing model);
  - Public Availability: The model's parameters (including weights), architecture information, and usage information must be made publicly available in a format that enables access, use, and modification. The usage information must, at a minimum, detail the model's capabilities, limitations, and the technical means required for its integration.
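To make these numerical criteria easier to apply, here is a minimal screening sketch in Python. The 10²³ and 10²⁵ FLOP thresholds and the one-third criterion reflect the guidelines summarized above; the 6 × parameters × training tokens compute estimate is a common rule of thumb for dense transformer training that we add purely for illustration (it is not prescribed by the AI Act), and all function names are our own.

```python
# Illustrative screening against the numerical criteria in the AI Office
# guidelines. Thresholds and the one-third criterion come from the guidelines;
# the 6 * parameters * tokens estimate is a common rule of thumb, not a
# method prescribed by the AI Act. Function names are our own.

GPAI_THRESHOLD_FLOP = 1e23            # indicative GPAI threshold
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25   # presumption of high-impact capabilities

def estimate_training_compute(n_parameters: float, n_training_tokens: float) -> float:
    """Rough pre-training compute estimate: ~6 FLOP per parameter per token."""
    return 6 * n_parameters * n_training_tokens

def is_presumed_gpai(training_flop: float, generates_language_image_or_video: bool) -> bool:
    """Indicative test: compute above 10^23 FLOP and the model can generate
    language (text or audio), text-to-image, or text-to-video."""
    return training_flop > GPAI_THRESHOLD_FLOP and generates_language_image_or_video

def is_presumed_systemic_risk(cumulative_training_flop: float) -> bool:
    """Systemic risk is presumed above 10^25 cumulative FLOP; the Commission
    may also designate models below this threshold."""
    return cumulative_training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP

def modifier_becomes_provider(modification_flop: float, original_training_flop: float) -> bool:
    """Indicative criterion for downstream modifiers: modification compute
    exceeding one third of the original model's training compute."""
    return modification_flop > original_training_flop / 3

# Example: a hypothetical 100B-parameter model trained on 10T tokens.
compute = estimate_training_compute(1e11, 1e13)    # 6e24 FLOP
print(is_presumed_gpai(compute, True))             # True  (above 10^23)
print(is_presumed_systemic_risk(compute))          # False (below 10^25)
print(modifier_becomes_provider(2.5e24, compute))  # True  (> one third of 6e24)
```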
- Template for the Public Summary of Training Content
Article 53(1)(d) of the AI Act mandates that all GPAI model Providers create and publicly release a "sufficiently detailed public summary of the content used for the training of the model." The AI Office has just published the template that must be used to comply with this obligation.
The primary objective of the summary is to increase transparency on the data used for training GPAI models, especially text and data protected by copyright law, notably to help parties with legitimate interests (such as copyright holders) to exercise and enforce their rights under EU laws. The summary should be "generally comprehensive" but not "technically detailed," and it must cover all stages of model training, including pre-training and fine-tuning.
The template for this summary issued by the AI Office is divided into three main sections:
  - General information: This section requires information to identify the Provider and the model, as well as the modalities, size, and general characteristics of the training data.
  - List of data sources: This section requires Providers to disclose the main datasets used, a narrative description of data scraped from the internet (including top domain names), and a description of other data sources.
  - Relevant data processing aspects: This section requires the disclosure of data processing details relevant for parties with legitimate interests, particularly for compliance with copyright law and the mitigation of risks from illegal content.
The summary should be published on the Provider's website in a clearly visible and accessible manner.
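Purely as an illustration, a Provider could stage the required information internally along the template's three sections before transposing it into the official form. The field names below are our own assumptions, not the template's actual fields; only the template published by the AI Office is authoritative.

```python
# Illustrative internal checklist mirroring the template's three sections.
# Field names are assumptions for illustration only; the official AI Office
# template governs the actual required content and format.
training_content_summary = {
    "general_information": {
        "provider_identity": "...",
        "model_identification": "...",
        "training_data_modalities": ["text", "images"],
        "training_data_size_and_characteristics": "...",
    },
    "list_of_data_sources": {
        "main_datasets": ["..."],
        "web_scraped_data_narrative": "...",  # including top domain names
        "other_data_sources": "...",
    },
    "relevant_data_processing_aspects": {
        "copyright_compliance_measures": "...",
        "illegal_content_risk_mitigation": "...",
    },
}
```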
- GPAI Code of Practice
The Code of Practice is designed to help Providers demonstrate compliance with their obligations under Articles 53 and 55 of the AI Act. The Code is divided into three chapters:
  - Transparency: This chapter outlines measures for complying with transparency obligations. Key commitments include:
    - Drawing up and maintaining model documentation: Providers must document at least the information specified in the accompanying Model Documentation Form and keep this documentation updated;
    - Providing relevant information: Providers must publicly disclose contact information through which the AI Office and downstream Providers can request access to the documentation. They must provide the requested information to the AI Office or national competent authorities within the specified timeframe. They must also provide downstream Providers with the information they need to understand the model's capabilities and limitations and to comply with their own obligations under the AI Act;
    - Ensuring quality, integrity, and security of information: Providers must ensure that the documented information is controlled for quality and integrity, retained as evidence of compliance, and protected from alterations.
  - Safety and Security: Relevant for GPAI models with systemic risk, this chapter focuses on principles of lifecycle management, risk assessment, and incident reporting. Key commitments include:
    - Adopting a state-of-the-art Safety and Security Framework: The purpose of the Framework is to outline the systemic risk management processes and measures implemented to ensure that the systemic risks stemming from the GPAI model are acceptable. This Framework must be notified to the AI Office;
    - Identifying and analyzing systemic risk: Providers must conduct a contextual risk assessment that is proportionate to the risks posed by the model and its intended use, considering the model's architecture and the systems it integrates with. They must also define clear responsibilities for managing the systemic risks stemming from their models across all levels of their organization and allocate appropriate resources;
    - Implementing safety and security mitigation measures: Providers must implement safety mitigations throughout the model's lifecycle to ensure systemic risks are acceptable. They must also implement adequate cybersecurity protections for their models and physical infrastructure to mitigate risks from unauthorized access or theft. Providers must create and maintain a Safety and Security Model Report before placing a model on the market. This report must be submitted to the AI Office and kept up to date;
    - Reporting serious incidents: Providers must implement appropriate processes and measures for tracking, documenting, and reporting, without undue delay, relevant information about serious incidents along the entire model lifecycle, together with possible corrective measures, to the AI Office and, as applicable, to national competent authorities.
  - Copyright: This chapter helps Providers understand their obligations related to copyright. Key commitments include:
    - Implementing a copyright policy: Providers must have and maintain a policy to comply with Union copyright law. This includes identifying and respecting the rightsholders' reservations of rights, as outlined in the EU Copyright Directive;
    - Implementing measures to protect copyright when crawling the World Wide Web: This includes ensuring that they only reproduce and extract lawfully accessible works and other protected subject matter, as well as identifying and complying with machine-readable reservations of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790 (one common machine-readable mechanism is illustrated in the sketch after this list);
    - Designating a point of contact for rightsholders and enabling them to lodge complaints.
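By way of illustration, one widely used machine-readable opt-out mechanism on the web is the robots.txt protocol. The sketch below shows a minimal pre-crawl check using Python's standard library; the user agent and URLs are placeholders, and whether honoring robots.txt suffices to comply with a given reservation of rights under Article 4(3) remains a legal question this sketch does not answer.

```python
from urllib.robotparser import RobotFileParser

def may_crawl(page_url: str, robots_url: str, user_agent: str) -> bool:
    """Return True if robots.txt permits this user agent to fetch the page.

    robots.txt is one common machine-readable opt-out; rightsholders may
    express reservations of rights through other means as well.
    """
    rp = RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # fetch and parse the robots.txt file
    return rp.can_fetch(user_agent, page_url)

# Placeholder example: only fetch the page if the site's robots.txt allows it.
if may_crawl("https://example.com/article",
             "https://example.com/robots.txt",
             "ExampleTDMBot"):
    pass  # proceed to fetch, subject to the Provider's copyright policy
```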
Adherence to the Code is optional and does not serve as definitive proof of compliance. However, following the Code can facilitate demonstrating compliance and simplify any enforcement actions. Providers who choose to adhere to this Code will benefit from increased trust from the Commission and other stakeholders. Please note that opting out of specific chapters within the Code of Practice will result in forfeiting the associated benefits of adherence.
The Code of Practice was formally endorsed by the European Commission and the AI Board on August 1, 2025, and is now open for signature.
4. What Are the Risks in Case of Noncompliance?
The EU AI Act introduces a tiered system of administrative fines for noncompliance. These fines are designed to be effective, proportionate, and dissuasive.
For Providers of GPAI models, noncompliance with their obligations can result in administrative fines of up to €15 million or 3% of the total worldwide annual turnover for the preceding financial year, whichever is higher. Providing incorrect, incomplete, or misleading information to the AI Office or national authorities can lead to fines of up to €7.5 million or 1% of total worldwide annual turnover. In all cases, the fines for small and medium-sized enterprises (SMEs) will be based on the lower of the two amounts.
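To illustrate how the cap operates, here is a minimal sketch (the actual fine within the cap is determined case by case; the function name and example turnovers are our own):

```python
def gpai_fine_cap_eur(worldwide_annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Cap on administrative fines for noncompliance with GPAI Provider
    obligations: EUR 15 million or 3% of total worldwide annual turnover,
    whichever is higher (for SMEs, whichever is lower)."""
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * worldwide_annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Examples with hypothetical turnovers
print(gpai_fine_cap_eur(2_000_000_000))             # 60,000,000 (3% exceeds EUR 15M)
print(gpai_fine_cap_eur(100_000_000))               # 15,000,000 (fixed cap is higher)
print(gpai_fine_cap_eur(100_000_000, is_sme=True))  # 3,000,000  (lower of the two)
```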
The AI Office has exclusive competence to monitor compliance and enforce obligations applicable to GPAI model Providers.
5. What Are the Next Steps?
It is worth noting that while the obligations for GPAI model Providers are now applicable, the AI Office's enforcement powers with respect to these specific obligations will only enter into application on August 2, 2026. This means the AI Office will not be able to open any enforcement actions before that date.
While this may slightly ease the pressure on the concerned organizations, it does not mean they should wait until August 2, 2026, to comply. The AI Office is expected to consider whether organizations have demonstrated their best efforts toward compliance by the deadline when conducting future enforcement actions.
Accordingly, we recommend that concerned organizations immediately take steps to:
- Assess whether their GPAI model qualifies as a GPAI model with systemic risk and, if so, notify the AI Office without undue delay.
- Align their practices with the AI Office's guidelines.
- Prepare and publish the public summary of training content using the AI Office's template.
- Consider adhering to and signing the Code of Practice.