The EU AI Act was formally signed on 13 June 2024 and has now been published in the Official Journal of the European Union, paving the way for its entry into force on 1 August 2024. With this date fast approaching, it is more important than ever that organisations understand their obligations.
The EU AI Act is a landmark piece of regulation governing the development and deployment of AI systems, and it will apply to all providers, deployers, importers and distributors of AI systems that impact EU users. While draft versions of the Act were published as early as 2021, several key changes were made to the final text following protracted negotiations late last year. But what are these updates, and what do they mean for organisations?
1. Clarification around General-Purpose AI Models
Applicability
The growing adoption of generative AI, and the resulting public scrutiny of its development and use, has not been overlooked by the European Commission. The final text dispels any ambiguity over the classification and scope of general-purpose AI (GP-AI) models, which are now covered by a new Chapter V (“General-Purpose AI Models”).
The final text establishes that the key difference between a standard AI system and a GP-AI model is that the latter “displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market”. All GP-AI models will be in scope of the regulation, and providers of GP-AI models will be required to:
- draw up and maintain technical documentation for the model, including details of its training and testing process and the results of its evaluation;
- make information and documentation available to downstream providers who intend to integrate the model into their own AI systems;
- put in place a policy to comply with EU copyright law; and
- publish a sufficiently detailed summary of the content used to train the model.
Classification
Furthermore, GP-AI models classified as having ‘systemic risk’ will face additional, more stringent requirements. A model can be classified as having systemic risk if it has ‘high impact capabilities’, established either through an evaluation using appropriate technical tools and methodologies or through designation as such by the Commission. In general, a model will be presumed to have high impact capabilities when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25. The Commission is expected to keep this threshold under review and to supplement it with other criteria as deemed necessary.
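To make the threshold concrete, below is a minimal Python sketch that checks an estimated training-compute figure against the 10^25 FLOP presumption. The 6 × parameters × tokens approximation is a widely used industry heuristic for dense transformer training compute, not something prescribed by the Act, and the model figures are hypothetical. Note too that crossing the threshold only creates a presumption; the Commission can also designate models directly.

```python
# Hedged sketch: does an estimated training run cross the Act's 10^25 FLOP
# presumption of "high impact capabilities"? The 6*N*D formula is a common
# industry heuristic for dense transformers, NOT part of the Act's text.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # "greater than 10^25" FLOPs


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate via the 6 * N * D heuristic."""
    return 6.0 * parameters * training_tokens


def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    """True when estimated compute exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_FLOP_THRESHOLD


# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)    # ~6.3e24 FLOPs
print(flops, presumed_high_impact(70e9, 15e12))  # below the threshold -> False
```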
The Commission will also ensure that a list of GP-AI models with systemic risk is published and kept up to date, while respecting intellectual property rights and confidentiality.
Obligations for providers of general-purpose AI models with systemic risk
The final text establishes additional obligations for providers of GP-AI models with systemic risk. Providers of in-scope models are required to perform model evaluation in accordance with standardised protocols and tools, including adversarial testing to identify and mitigate systemic risks. Accordingly, Annex XI of the Act requires that testing, training and validation processes, as well as data gathering and selection, be documented within the technical documentation and held on record.
Ensuring the safety of the model itself is now a key consideration: providers are responsible for ensuring an adequate level of cybersecurity protection for the model and its physical infrastructure, commensurate with the associated systemic risk.
Additionally, to demonstrate compliance, providers will be able to rely on codes of practice drawn up under the AI Office [1] (until a harmonised standard is adopted by the Commission). Providers who do not adhere to an approved code of practice will need to demonstrate alternative adequate means of compliance, subject to Commission approval.
2. Clarification around Deep Fakes
Similar to other landmark online safety regulations, such as the Digital Services Act, the final text provides further clarity on the obligations and expectations of providers when handling and producing “deep fakes”. While the definition of “deep fakes” remains unchanged from the original text, deployers of AI systems are now required to disclose whether the associated content (image, audio or video) is artificially generated or manipulated, unless the use is authorised by law, by labelling the AI output accordingly and disclosing its artificial origin.
3. Introduction of Open-Source Licenses
Another addition to the final text concerns open-source licences. GP-AI models released under a free and open-source licence, whose parameters (including information on the model architecture and model usage) are made publicly available, are exempted from certain transparency-related requirements.
Additionally, AI models or systems released under an open-source licence will remain subject to the obligations in the final version of the Act where:
- the system is placed on the market or put into service as a high-risk AI system;
- the system falls under the Act’s prohibited practices or its transparency obligations (for example, deep fakes); or
- the model is a GP-AI model presenting systemic risk.
4. Banned Applications
The Act clarifies that AI applications that pose a threat to citizens’ rights are banned. These include, but are not limited to:
- biometric categorisation systems that use sensitive characteristics (e.g. political, religious or philosophical beliefs, sexual orientation, or race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour or exploit people’s vulnerabilities; and
- certain applications of predictive policing and of real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions).
5. Changes in the Penalty Regime
Within the updates made to the Act, penalties and fines have been adjusted for providers who do not comply with their obligations, and in some cases the maximum penalties have decreased:
- non-compliance with the prohibited AI practices: fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher;
- non-compliance with most other obligations under the Act: up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher; and
- supplying incorrect, incomplete or misleading information to notified bodies or national competent authorities: up to EUR 7.5 million or 1% of total worldwide annual turnover, whichever is higher.
For SMEs and start-ups, each cap is the lower of the two amounts. A short worked example of how these caps combine follows the list below.
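The following minimal sketch illustrates the “whichever is higher” mechanic that drives each cap (and the “whichever is lower” variant for SMEs). The turnover figure is hypothetical and this is illustrative arithmetic only, not legal advice.

```python
# Illustrative sketch of the Act's fine caps: the applicable maximum is the
# HIGHER of a fixed amount and a percentage of worldwide annual turnover
# (for SMEs and start-ups, the LOWER of the two applies instead).

def fine_cap(fixed_eur: float, turnover_pct: float, turnover_eur: float,
             sme: bool = False) -> float:
    """Upper limit of the fine for a given tier and worldwide turnover."""
    pick = min if sme else max
    return pick(fixed_eur, turnover_pct * turnover_eur)


# Hypothetical firm with EUR 2bn worldwide turnover breaching a prohibited practice:
print(fine_cap(35_000_000, 0.07, 2_000_000_000))            # 140,000,000.0
print(fine_cap(35_000_000, 0.07, 2_000_000_000, sme=True))  # 35,000,000.0
```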
While the Act will enter into force on 1 August 2024, not all elements of the regulation will apply at the same time. The Commission has made it clear that a phased approach will be taken, and the final text now provides further clarity on when the different elements will apply (a short sketch reproducing these dates follows the list):
6 months after entry into force: the Act’s general provisions, the AI literacy obligations and the prohibitions on ‘unacceptable risk’ AI practices will apply.
12 months after entry into force: the obligations on providers of GP-AI models will apply, together with the governance provisions (including the AI Office and the European AI Board) and the penalty regime.
18 months after entry into force: the Commission is due to issue guidance on the practical classification of AI systems as high-risk, along with a template for providers’ post-market monitoring plans.
24 months after entry into force: the majority of the Act’s remaining provisions will apply, including the obligations for the high-risk AI systems listed in Annex III and the transparency obligations.
36 months after entry into force: the obligations for high-risk AI systems that are safety components of products covered by existing EU harmonisation legislation (Annex I) will apply.
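As a quick cross-check, the sketch below reproduces these phase dates from the 1 August 2024 entry into force, assuming the convention (reflected in the final text, which pins the dates at 2 February 2025, 2 August 2025, 2 August 2026 and 2 August 2027) that each phase applies from the day after the offset period elapses.

```python
# Sketch: derive the phased application dates from the entry-into-force date.
# The final text fixes these dates explicitly; the day-after convention
# below reproduces them using only the standard library.
from datetime import date, timedelta

ENTRY_INTO_FORCE = date(2024, 8, 1)


def application_date(months: int) -> date:
    """Date a phase starts to apply: the day after `months` have elapsed."""
    total = ENTRY_INTO_FORCE.month - 1 + months
    shifted = date(ENTRY_INTO_FORCE.year + total // 12, total % 12 + 1,
                   ENTRY_INTO_FORCE.day)
    return shifted + timedelta(days=1)


for offset in (6, 12, 18, 24, 36):
    print(f"{offset:>2} months -> {application_date(offset)}")
# 6 -> 2025-02-02, 12 -> 2025-08-02, 24 -> 2026-08-02, 36 -> 2027-08-02
```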
With the Act about to enter into force, now is the time to ensure your organisation understands its obligations and will be compliant. The road to compliance will not be smooth: to start with, organisations will need a full inventory of their AI systems, with all relevant systems categorised and scoped in. This should be supported by appropriate governance structures and full oversight of the compliance strategy. A minimal illustration of what such an inventory record might look like is sketched below.
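As a starting point for that inventory exercise, here is a minimal, hypothetical sketch of a per-system record with a risk-tier field. The field names and the classification structure are illustrative assumptions, not a format prescribed by the Act; the risk tiers mirror the Act’s broad categories.

```python
# Hypothetical sketch of an AI system inventory record with a risk-tier
# field. Field names and tiers are illustrative, not prescribed by the Act.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk (transparency obligations)"
    MINIMAL_RISK = "minimal-risk"


@dataclass
class AISystemRecord:
    name: str
    owner: str              # accountable business owner
    role: str               # provider, deployer, importer or distributor
    affects_eu_users: bool  # brings the system into the Act's scope
    risk_tier: RiskTier
    gp_ai_model: bool       # flags Chapter V general-purpose model obligations


inventory = [
    AISystemRecord("cv-screening", "HR", "deployer", True, RiskTier.HIGH_RISK, False),
    AISystemRecord("support-chatbot", "Ops", "deployer", True, RiskTier.LIMITED_RISK, True),
]

# Scope in everything that touches EU users and is not minimal risk.
in_scope = [r for r in inventory
            if r.affects_eu_users and r.risk_tier != RiskTier.MINIMAL_RISK]
print([r.name for r in in_scope])
```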
Our Algorithm and AI Assurance team are leading experts in navigating the regulatory landscape – to understand how the upcoming EU AI Act will impact your firm, please get in touch.
For further insights into the EU AI Act and its linkages with other key regulations, please also see Deloitte’s analysis. Look out for our next blog focussed on how to establish an auditable approach to compliance.
________________________________________________________________________
Footnotes:
[1] The European AI Office (“AI Office”) has been established within the European Commission to serve as the centre point of AI expertise across Europe. The roles and responsibilities of the AI Office include supporting the AI Act and enforcing the general-purpose AI rules, strengthening the development and use of trustworthy AI, and fostering co-operation.