
The EU AI Act to enter into force on 1 August 2024

The EU AI Act was formally signed on 13 June 2024 and has now been published in the Official Journal of the European Union, paving the way for its entry into force on 1 August 2024. With this date fast approaching, it is more important than ever that organisations understand their obligations.

The EU AI Act is a landmark piece of regulation governing the development and deployment of AI systems, applicable to all providers, deployers, importers and distributors of AI systems that impact EU users. While draft versions of the Act were published as early as 2021, several key changes were made to the final text following protracted negotiations late last year. But what were these updates, and what do they mean for organisations?
 

Key developments:
 

1. Clarification around General-Purpose AI Models

Applicability

The growing adoption of generative AI, and the resulting public scrutiny of its development and use, has not been overlooked by the European Commission. The final text dispels any ambiguity on the classification and scope of general-purpose AI (GP-AI) models, which are now covered by a new Chapter V (“General Purpose AI Models”).

The final text establishes that the key difference between a standard AI system and a GP-AI model is that the latter displays “significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market”. All GP-AI models will be in scope of the regulation, and providers of GP-AI models will be required to:

  • Create and keep up-to-date technical documentation for their model, and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems;
  • Put in place a policy to comply with Union law on copyright and related rights;
  • Draw up and make publicly available a detailed summary about the content used for training of the GP-AI; and,
  • Ensure that the outputs of GP-AI systems are marked in a machine-readable format and detectable as artificially generated or manipulated.
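The last of these obligations lends itself to a simple illustration. Below is a minimal sketch of machine-readable marking using a hypothetical JSON provenance wrapper; the Act does not prescribe a specific format, and the field names here are our own, purely for illustration:

```python
import json

def mark_as_ai_generated(content: str, generator: str) -> str:
    """Wrap model output with a machine-readable marker flagging it as
    artificially generated (illustrative field names only)."""
    return json.dumps({
        "content": content,
        "provenance": {
            "artificially_generated": True,  # detectable, machine-readable flag
            "generator": generator,
        },
    })

marked = mark_as_ai_generated("Example model output", "example-gpai-model")
print(json.loads(marked)["provenance"]["artificially_generated"])  # True
```

In practice, techniques such as watermarking or embedded metadata may be used instead; the essential point is that the marker must be detectable by software, not just by a human reader.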

Classification

Furthermore, GP-AI models classified as having ‘systemic risk’ will face additional, more stringent requirements. A model can be classified as having systemic risk if it has high-impact capabilities, based either on an evaluation using appropriate technical tools and methodologies or on a designation by the Commission. In general, a model will be presumed to have high-impact capabilities when the cumulative amount of compute used for its training, measured in floating-point operations (FLOPs), is greater than 10^25. The Commission is expected to keep this threshold under review and to supplement it with other criteria as necessary.
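To put the 10^25 FLOPs threshold in perspective, a rough back-of-envelope check can be sketched as follows. This assumes the common “6 × parameters × training tokens” heuristic for dense model training, which comes from the ML literature and is not something the Act itself prescribes:

```python
# Presumption threshold for systemic risk under the Act: 10^25 training FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate using the common 6 * N * D heuristic (an assumption,
    not a figure taken from the Act)."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute meets the Act's presumption threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# e.g. a hypothetical 70e9-parameter model trained on 15e12 tokens:
# 6 * 70e9 * 15e12 = 6.3e24 FLOPs, below the 1e25 threshold.
print(presumed_systemic_risk(70e9, 15e12))  # False
```

Providers near the threshold should of course measure actual training compute rather than rely on a heuristic, and note that the Commission may designate models as having systemic risk on other grounds.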

Periodically, the Commission will ensure that a list of GP-AI models with systemic risks is published and kept up to date (respecting all IP rights and confidentiality). 

Obligations for providers of general-purpose AI models with systemic risk

The final text establishes additional obligations for providers of GP-AI models with systemic risk. Providers of in-scope models are required to perform model evaluation in accordance with standardised protocols and tools, including testing to identify and mitigate systemic risk. Accordingly, Annex IX of the Act requires testing, training and validation processes, as well as data gathering and selection, to be documented within the technical documentation and held on record.

Ensuring the safety of the model itself is a key consideration for providers, who are responsible for ensuring an adequate level of cybersecurity protection for the model, taking into account the associated systemic risk and the physical infrastructure in place.

Additionally, to demonstrate compliance, providers will be able to rely on codes of practice created by the AI Office1 (until a harmonised standard is adopted by the Commission). Providers who do not adhere to an approved code of practice need to demonstrate an alternative adequate means of compliance, with Commission approval. 

2. Clarification around Deep Fakes

Similar to other landmark online safety regulations, such as the Digital Services Act, the final text provides further clarity on the obligations and expectations of providers when handling and producing “deep fakes”. While the definition of “deep fakes” remains unchanged from the original text, deployers of AI systems are now required to disclose that the associated content (image, audio or video) is artificially generated or manipulated, unless the use is authorised by law, by labelling the AI output accordingly and disclosing its artificial origin.

3. Introduction of Open-Source Licenses

Another addition to the final text concerns models released under open-source licences. Where the parameters of a GP-AI model released under a free and open-source licence (including information on the model architecture and model usage) are made publicly available, the model is exempted from certain transparency-related requirements.

Additionally, the obligations in the final version of the Act will not apply to AI models or systems released under an open-source licence unless:

  • they are placed on the market or put into service as a high-risk AI system; or
  • they fall under Title II (Prohibited Artificial Intelligence Practices) and Title IV (Transparency Obligations for Providers and Deployers of Certain AI Systems and GPAI Models).

4. Banned Applications

The Act now clarifies that AI applications that pose a threat to citizens’ rights are banned. These include, but are not limited to:

  • biometric categorisation systems based on sensitive characteristics;
  • untargeted scraping of facial images to create facial recognition databases;
  • emotion recognition in the workplace and schools;
  • social scoring;
  • predictive policing (based solely on profiling a person or assessing their characteristics);
  • AI that manipulates human behaviour/exploits human vulnerabilities.

5. Changes in the Penalty Regime

Among the updates made to the Act, penalties and fines have been adjusted for those who fail to comply with their obligations, and in some cases the maximum penalties have decreased:

  • Non-compliance with the prohibited AI practices listed in Article 5 will be subject to administrative fines of up to EUR 35,000,000 (previously EUR 30,000,000) or, if the offender is a company, up to 7% (previously 6%) of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Non-compliance of an AI system with any of the provisions related to operators or notified bodies (other than those listed in Article 5 “Prohibited AI Practices”) will be subject to administrative fines of up to EUR 15,000,000 (previously EUR 20,000,000) or, if the offender is a company, up to 3% (previously 4%) of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request will be subject to administrative fines of up to EUR 7,500,000 (previously EUR 10,000,000) or, if the offender is a company, up to 1% (previously 2%) of its total worldwide annual turnover for the preceding financial year, whichever is higher.
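In each case the mechanics are the same: the applicable maximum is the higher of the fixed cap and the turnover-based percentage. A minimal sketch (the caps and percentage are the Article 5 figures from the Act; the turnover figures are hypothetical):

```python
def max_fine(fixed_cap_eur: float, pct_of_turnover: float, annual_turnover_eur: float) -> float:
    """The Act applies whichever figure is higher: the fixed cap or the
    percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, pct_of_turnover * annual_turnover_eur)

# Article 5 breach by a company with EUR 1bn worldwide turnover:
# max(35,000,000, 7% of 1,000,000,000) = EUR 70,000,000
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0

# For a smaller company (EUR 100m turnover), the fixed cap dominates:
# max(35,000,000, 7,000,000) = EUR 35,000,000
print(max_fine(35_000_000, 0.07, 100_000_000))  # 35000000
```

Note the practical consequence: for large groups, the turnover-based percentage, not the headline cap, sets the real exposure.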
     

The Phased Approach:
 

While the Act will enter into force on 1 August 2024, not all elements of the regulation will become applicable at the same time. The Commission has made it clear that a phased approach will be implemented, and the final text now provides further clarity on when the different elements will apply:

6 months after entry into force:

  • Prohibitions on the use of the AI systems specified in Article 5 will begin to apply.

12 months after entry into force:

  • Obligations for providers of GP-AI models will begin to apply.
  • Provisions on penalties, including administrative fines, will begin to apply.
  • Provisions on notified bodies and governance structures will begin to apply.
  • Member States will make publicly available information on how competent authorities and single points of contact can be reached through electronic communications.
  • The Commission will release dedicated guidance on notifications of serious incidents (as referred to in Article 3(44c)).

18 months after entry into force:

  • The Commission will provide guidelines specifying the practical implementation of Article 6, including a list of practical examples of high-risk and non-high-risk use cases.

24 months after entry into force:

  • The Regulation will become generally applicable, including to high-risk AI systems placed on the market or put into service from this date.
  • Member States must ensure that their competent authorities have established at least one operational AI regulatory sandbox at national level.

36 months after entry into force:

  • Obligations for AI systems classified as high-risk under Article 6(1) will begin to apply.
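The milestones above can all be derived mechanically from the entry-into-force date. A small sketch using only the standard library (the month-adding helper preserves the day of the month, which is sufficient here since the anchor date is the 1st):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date, preserving the day of the month."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

# The Act's phased milestones: 6, 12, 18, 24 and 36 months after entry into force.
milestones = {m: add_months(ENTRY_INTO_FORCE, m) for m in (6, 12, 18, 24, 36)}
for months, when in milestones.items():
    print(f"{months:>2} months: {when.isoformat()}")
# e.g. 6 -> 2025-02-01 (prohibitions apply), 12 -> 2025-08-01 (GP-AI obligations apply)
```

This makes the practical deadlines concrete: prohibitions bite from 2 February 2025, GP-AI and penalty provisions from August 2025, and the remaining high-risk obligations follow in 2026 and 2027.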
     

Next Steps
 

With the Act about to enter into force, it is time to ensure your organisation understands its obligations and can achieve compliance. The road to compliance will not be smooth. To start, organisations will need a full inventory of their AI systems, with every relevant system categorised and scoped in. This should be supported by appropriate governance structures and full oversight of the compliance strategy.
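As a starting point for that inventory exercise, a minimal sketch of an AI-system register is shown below. The risk tiers loosely mirror the Act's broad categories, but the record structure and field names are our own illustration, not a format the Act prescribes:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"          # Article 5 practices
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"      # transparency obligations
    MINIMAL_RISK = "minimal-risk"

@dataclass
class AISystemRecord:
    name: str
    role: str                          # provider / deployer / importer / distributor
    risk_tier: RiskTier
    impacts_eu_users: bool

    def in_scope(self) -> bool:
        """Crude first-pass scoping: does the system impact EU users?"""
        return self.impacts_eu_users

inventory = [
    AISystemRecord("cv-screening-tool", "deployer", RiskTier.HIGH_RISK, True),
    AISystemRecord("internal-doc-search", "deployer", RiskTier.MINIMAL_RISK, False),
]
print([r.name for r in inventory if r.in_scope()])  # ['cv-screening-tool']
```

A real register would of course capture far more (intended purpose, training data provenance, conformity-assessment status), but even a simple structured inventory like this makes the categorisation and scoping exercise auditable.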

Our Algorithm and AI Assurance team are leading experts in navigating the regulatory landscape – to understand how the upcoming EU AI Act will impact your firm, please get in touch.

For further insights into the EU AI Act and its linkages with other key regulations, please also see Deloitte’s analysis. Look out for our next blog focussed on how to establish an auditable approach to compliance.

________________________________________________________________________
Footnotes:

1 The European AI Office (“AI Office”) has been established within the European Commission to serve as the central point of AI expertise across Europe. Its roles and responsibilities include supporting implementation of the AI Act and enforcing the general-purpose AI rules, strengthening the development and use of trustworthy AI, and fostering co-operation.