The EU's General Purpose AI Code of Practice: What You Need to Know

The General Purpose AI (GPAI) Code of Practice is shaping up to be one of the most significant instruments in the EU’s evolving AI regulatory landscape. While still voluntary, it is becoming the go-to framework for responsible GenAI development and deployment across the European Union.

At its core, a general-purpose AI model is defined in Article 3(63) of the AI Act as an AI model, typically trained on large amounts of data, that displays significant generality and can competently perform a wide range of distinct tasks. Such models are commonly used as the foundation for popular technologies such as chatbots, search engines, and image generators. While the legal definition is technical, the concept is simple: these models are the building blocks behind the AI tools most of us now use daily.

The rationale for the Code of Practice stems from Article 56 of the AI Act, which empowers the EU AI Office to develop a voluntary rulebook to guide how GPAI model providers can meet their legal obligations - specifically those set out in Articles 53 and 55. The Code was initially expected by 2 May 2025. That deadline slipped, but the Code is now final. The AI Act's applicability date of 2 August 2025 for the GPAI rules also remains unchanged, despite various initiatives to postpone it. Still, under Article 111(3), providers of GPAI models placed on the market before 2 August 2025 have until 2 August 2027 to comply.

Following three previous iterations from November 2024, December 2024 and March 2025, the final (fourth) iteration was released on 10 July 2025. The AI Office and the AI Board assess the Code and may approve it via an adequacy decision. The European Commission may also approve the Code under the AI Act, by way of an implementing act, giving it general validity within the EU.

How Does the Code Work?

Structurally, the Code asks signatory providers - such as OpenAI, Google, or Meta - to commit to measures across three chapters: Transparency, Copyright, and Safety & Security. Note that the Safety & Security chapter applies only to GPAI models with systemic risk.

Transparency involves clearly documenting a model's capabilities, limitations, and points of contact. Providers are also expected to share key documentation with downstream providers. On copyright, the Code requires providers to put in place a policy that complies with EU copyright law.

For providers offering models considered to present systemic risk - those that push the boundaries of compute or societal impact - the Code imposes a set of 10 commitments (consolidated from 16 in the previous third draft). These include adopting a state-of-the-art Safety and Security Framework and carrying out continuous systemic-risk assessment and mitigation, including external evaluations, red teaming, stress-testing, incident reporting, and several additional safety and transparency measures. Serious incident reporting requires an initial report within 2, 5, 10 or 15 days depending on severity, with a final report within 60 days after resolution; all records must be kept for five years. Technical documentation must be retained for at least 10 years after the model is placed on the market.
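Because these timelines are easy to miss in day-to-day operations, they can be encoded directly in incident tooling. Below is a minimal Python sketch; the severity tier names are hypothetical (the Code expresses severity in its own terms), and only the 2/5/10/15-day and 60-day figures come from the text above:

```python
from datetime import date, timedelta

# Hypothetical severity tiers mapped to the Code's 2/5/10/15-day
# initial-report windows; the tier names here are illustrative only.
INITIAL_REPORT_DAYS = {"critical": 2, "high": 5, "medium": 10, "low": 15}
FINAL_REPORT_DAYS = 60        # counted from resolution of the incident
RECORD_RETENTION_YEARS = 5    # all incident records must be kept

def reporting_deadlines(severity: str, aware_on: date, resolved_on: date) -> dict:
    """Compute initial- and final-report deadlines for a serious incident."""
    return {
        "initial_report_due": aware_on + timedelta(days=INITIAL_REPORT_DAYS[severity]),
        "final_report_due": resolved_on + timedelta(days=FINAL_REPORT_DAYS),
    }

print(reporting_deadlines("critical", date(2025, 9, 1), date(2025, 9, 10)))
# {'initial_report_due': datetime.date(2025, 9, 3),
#  'final_report_due': datetime.date(2025, 11, 9)}
```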

A GPAI model qualifies as a systemic-risk model if it has high-impact capabilities, which are presumed when the cumulative compute used for its training exceeds 10²⁵ FLOPs. The Commission may also designate a model as systemic-risk if it has equivalent impact or capabilities based on the criteria in Annex XIII, even below that compute figure.
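To make the threshold concrete, here is a back-of-the-envelope check in Python. The 10²⁵ FLOP figure comes from the AI Act; the "6 FLOPs per parameter per training token" estimate is a common community heuristic, not part of the Act, and a real assessment would also weigh the Annex XIII criteria:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # training-compute threshold in the AI Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if compute alone triggers the presumption (designation aside)."""
    return training_flops > SYSTEMIC_RISK_FLOPS

flops = estimated_training_flops(n_params=1e12, n_tokens=15e12)
print(f"{flops:.1e}", presumed_systemic_risk(flops))  # 9.0e+25 True
```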

The drafting process behind the Code was complex, involving four thematic working groups: Transparency & Copyright, Risk assessment for systemic risk, Technical risk mitigation for systemic risk, and Governance risk mitigation for systemic risk. Each was led by independent experts and coordinated by the EU AI Office, with input from nearly 1,000 stakeholders, including AI developers, academics, civil society organizations, national authorities, and international observers.

Why It Matters to You

For companies that deploy AI systems or rely on downstream services, the Code can offer practical benefits. If your supplier signs the Code, you can expect more standardized documentation, faster updates, and clear safety guidance. But even if your provider does not sign on, the legal obligations under Articles 53 and 55 still apply. In that case, you will need to negotiate contractual access to the same level of documentation and conduct a more thorough legal audit. 

Compliance Toolkit for Deployers 

Under Article 53 and the transparency requirements outlined in the Code, downstream providers are entitled to the provider's so-called "downstream package." This includes a completed Model Documentation Form and key fields from Annex XII of the AI Act - covering model description, intended tasks, performance, architecture, licensing terms, and integration specs. Under the Transparency chapter, the GPAI provider must furnish any additional information requested by downstream providers within 14 calendar days of receiving the request, but only where that information is necessary for them to understand the model's capabilities and limitations and to comply with their own obligations under the AI Act.
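Deployers can track such requests in their own compliance tooling. A minimal sketch, assuming a hypothetical internal tracker (the field names only loosely paraphrase Annex XII, and "ExampleAI" is a placeholder):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative subset of the downstream package; the actual Model
# Documentation Form and Annex XII contain more fields than these.
ANNEX_XII_FIELDS = [
    "model_description", "intended_tasks", "performance",
    "architecture", "licensing_terms", "integration_specs",
]

@dataclass
class DocumentationRequest:
    provider: str
    requested_on: date
    received: set = field(default_factory=set)

    @property
    def response_due(self) -> date:
        # The Transparency chapter allows 14 calendar days to respond.
        return self.requested_on + timedelta(days=14)

    def missing(self) -> list:
        return [f for f in ANNEX_XII_FIELDS if f not in self.received]

req = DocumentationRequest("ExampleAI", date(2025, 9, 1), {"model_description"})
print(req.response_due)  # 2025-09-15
print(req.missing())     # the five remaining fields
```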

Beyond the Code, providers must also publish a summary of the training data under Article 53(1)(d), a legally binding obligation under the AI Act. While this summary template will not be part of the Code itself, it will complement it. Discussions on its final structure are ongoing, and no consensus has yet been reached. Providers that sign the Code will still be able to invoke trade secrets, confidentiality, and IP rights, unless the model is released under a free and open-source license that meets the conditions of Article 53(2): essentially, the license must allow access, use, modification, and redistribution without monetization (see the recent guidelines). It is therefore essential for deployers to secure clear contractual terms with model providers that define the limits of non-provision clauses when requesting information for compliance purposes, and to ensure that internal teams understand the information provided. These requirements should ideally be translated into internal standard operating procedures (SOPs) or checklists so that nothing is missed.

Copyright remains a critical consideration. Deploying a generative model is not just a technical act; it also implicates EU copyright law. Under Article 53(1)(c), all GPAI providers must maintain a copyright compliance policy, and the final version of the Code sets out what that policy must contain. For downstream providers, this means reviewing and complying with model licensing terms, verifying that training and input data do not include content from rightsholders who have opted out of text and data mining (e.g. under the DSM Directive), and adopting internal rules to label AI-generated content, check for originality, and manage takedown procedures in case of copyright complaints.
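One practical check a deployer can run is whether source sites signal a machine-readable text-and-data-mining opt-out. Below is a sketch using Python's standard library; the crawler names are common examples, not a list mandated by the Code or the DSM Directive, and robots.txt is only one of several opt-out channels:

```python
from urllib.robotparser import RobotFileParser

# Example AI-crawler user agents; which agents must be honoured
# depends on the provider's copyright policy, not on this list.
AI_CRAWLERS = ["GPTBot", "Google-Extended", "CCBot"]

def tdm_opt_out_signals(site: str, path: str = "/") -> dict:
    """Check a site's robots.txt for machine-readable TDM opt-outs.

    Rightsholders may also reserve rights via metadata or
    contractual terms, which this check does not capture.
    """
    parser = RobotFileParser()
    parser.set_url(f"https://{site}/robots.txt")
    parser.read()
    url = f"https://{site}{path}"
    return {agent: not parser.can_fetch(agent, url) for agent in AI_CRAWLERS}

# A True value means the site has opted out for that crawler.
print(tdm_opt_out_signals("example.com"))
```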

If your supplier categorizes its model as a systemic-risk GPAI, then Article 55 of the AI Act and the final version of the Code require the publication of summarized versions of its Safety and Security Framework and Safety and Security Model Report (if and insofar as necessary to assess or mitigate systemic risks). As a deployer, you should extract the relevant mitigations from these documents and incorporate them into your machine learning operations (MLOps) pipeline and internal SOPs. These mitigation measures help translate risk into tangible safeguards across your systems.
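How that looks in practice will vary, but a simple pattern is to turn each published mitigation into a gate in the release pipeline. A sketch with entirely hypothetical mitigation names and configuration keys:

```python
# Hypothetical mitigations lifted from a provider's Safety and Security
# Model Report, expressed as checks a deployment pipeline can enforce.
REQUIRED_MITIGATIONS = {
    "content_filter_enabled": lambda cfg: cfg.get("content_filter", False),
    "rate_limit_configured": lambda cfg: cfg.get("requests_per_minute", 0) > 0,
    "incident_contact_set": lambda cfg: bool(cfg.get("incident_contact")),
}

def deployment_gate(cfg: dict) -> list:
    """Return unmet mitigations; an empty list means the release may proceed."""
    return [name for name, check in REQUIRED_MITIGATIONS.items() if not check(cfg)]

cfg = {"content_filter": True, "requests_per_minute": 600}
print(deployment_gate(cfg))  # ['incident_contact_set']
```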

Continuous Monitoring: A Must

It is important to understand that AI compliance is not a one-time effort; it is an ongoing process. The Code of Practice will likely be updated over time by the EU AI Office, alongside other tools such as the training data summary template. Providers will keep revising model versions, licensing terms, and acceptable use policies. As a result, companies should actively monitor releases from the Commission, the AI Office, and providers, update contracts and internal procedures regularly, and consider preparing audit-ready documentation that can be shared not only with regulators but also with enterprise clients.

In summary, the GPAI Code of Practice is set to play a pivotal role in shaping the AI ecosystem in Europe. While much remains in flux, companies integrating or deploying generative AI need to start aligning with the emerging standards. Whether or not your provider signs the Code, preparing now will ensure smoother compliance, stronger governance, and greater resilience in the face of fast-changing regulations. 

Author

Pavol Szabo, Senior Managing Associate / Attorney,
Deloitte Legal Slovakia

Email: pszabo@deloittece.com
