Regulatory News Alert
Following its adoption by the European Parliament on 13 March 2024 and by the Council on 21 May 2024, the EU Artificial Intelligence Act (AI Act) has been published today in the EU Official Journal.
For such a rapidly evolving technology, it seemed Europe was taking its time finalizing the legislative framework regulating AI systems. But for good reason: the challenges of this topic start with the very definition of an AI system. Now that the details have been ironed out, this article takes a closer look at the requirements in the finalized text of the AI Act.
How does the AI Act define an AI system?
We have come a long way since our introductory article on the AI Act, when the Parliament and the Council were not yet aligned on the definition of what makes this technology “intelligent”. In the end, the definition of an artificial intelligence system from the Organisation for Economic Co-operation and Development (OECD) was used as the basis for the final text:
“AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The choice to adopt the OECD definition ensures further harmonization at the international level and, hence, global acceptance of the European text.
Two key criteria are paramount to determining whether a software system is an AI system:
- Autonomy: the system is designed to operate with varying levels of autonomy, i.e., with some degree of independence from human involvement.
- Inference: the system infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions.
The EU AI Act adopts a risk-based classification of AI systems and introduces obligations regarding general-purpose AI. The different risk categories are as follows:
- Unacceptable risk: AI practices that are prohibited outright, such as social scoring by public authorities or manipulative techniques that exploit vulnerabilities.
- High risk: AI systems subject to stringent requirements (e.g., risk management, data governance, human oversight) before they can be placed on the market.
- Limited risk: AI systems subject to specific transparency obligations, such as chatbots and systems generating synthetic content.
- Minimal risk: all other AI systems, which face no additional obligations under the AI Act.
Unsurprisingly, the question of how to regulate general-purpose AI (GP AI) systems, i.e., those that are trained with a large amount of data and are capable of competently performing a wide range of distinct tasks, was one of the most contentious aspects of the AI Act.
The risk-based approach under the AI Act means that the more risk an AI system poses, the more requirements are imposed on its operators. With GP AI systems, however, it is hard to determine the intended purpose precisely because of their “generality” and the fact that they can be integrated into a wide variety of downstream systems and applications. The risk categorization of a GP AI system therefore presents a challenge to operators and legislators alike.
For these reasons, the final text of the AI Act introduces a dedicated regime (Chapter V) for the providers of GP AI models (and not GP AI systems). An AI system is typically built on one or more models (e.g., with reasoning and decision-making algorithms) trained on machine and/or human inputs and data. An AI model is therefore a core component of an AI system, used to make inferences from inputs in order to produce outputs. While the parameters of an AI model change during the build phase, they usually remain largely fixed after deployment. Risks posed by GP AI models are therefore easier to estimate and, consequently, to regulate.
Providers and deployers of AI systems (whether general-purpose or not) that use GP AI model(s) will not be subject to Chapter V of the AI Act, but will instead follow the general rules of the AI Act. As models and systems are treated separately, a GP AI model can never constitute a high-risk AI “system”, as it is not an AI system.
As a general rule, all AI systems outlined in Annex III of the AI Act are considered high-risk systems, and as such are subject to stringent requirements under the AI Act. Some key examples of high-risk systems include:
- remote biometric identification systems;
- AI systems used as safety components in the management of critical infrastructure (e.g., energy, water, transport);
- AI systems used in education and vocational training (e.g., to determine admission or evaluate learning outcomes);
- AI systems used in employment and worker management (e.g., to screen candidates or make promotion decisions);
- AI systems used to evaluate creditworthiness or to assess eligibility for essential public and private services;
- AI systems used in law enforcement, migration and border control, and the administration of justice.
The list in Annex III is not fixed: the European Commission may adopt delegated acts that add certain systems to, or remove them from, the list.
Based on the previous draft language alone, many of the AI systems currently on the market would have been considered high-risk AI systems. For this reason, following a timely intervention by the EU Parliament, the final text of the AI Act provides exemptions for systems listed in Annex III that are intended to:
- perform a narrow procedural task;
- improve the result of a previously completed human activity;
- detect decision-making patterns or deviations from prior decision-making patterns, without replacing or influencing a previously completed human assessment without proper human review; or
- perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
Importantly, however, Annex III systems will always be considered high-risk where they perform profiling1 of natural persons.
Certain AI systems may pose specific risks of impersonation or deception, which is why they are subject to additional transparency requirements under the AI Act:
- AI systems intended to interact directly with natural persons (e.g., chatbots) must be designed so that the persons concerned are informed that they are interacting with an AI system.2
- AI systems generating synthetic audio, image, video or text content must mark their outputs as artificially generated or manipulated.
- Deployers of emotion recognition or biometric categorization systems must inform the persons exposed to them.
- Deployers of AI systems that generate or manipulate deep fakes3 must disclose that the content has been artificially generated or manipulated.
Most of the provisions of the AI Act will be fully applicable from 2 August 2026. However, those AI systems that pose unacceptable risk (Chapter II) will be banned from 2 February 2025. Finally, rules on GP AI models (Chapter V) and governance at the EU level (Chapter VII) will apply from 2 August 2025.
Operators of high-risk AI systems that have been placed on the market or put into service before the general date of application only need to comply with the AI Act if, as from that date, those systems undergo significant changes in their design.4
By understanding where your technology aligns with this regulation, your organization can ensure compliance before the general application date of 2 August 2026 and avoid the substantial fines for non-compliance defined in the regulation.
Our AI/Data regulatory team offers a wide range of services, from AI regulatory strategy to assessment and remediation, placing human-centric and trustworthy AI at the core of what we do.
Deloitte’s specialists and dedicated services can help you clarify the impact of the AI Act, identify any gaps, design potential solutions, and take the necessary steps to put these solutions in place. We can support you in critical areas such as AI strategy, business and operating models, regulatory compliance, technology, and risk management.
Georges Wantz | Marijana Vuksic | Haley Cover | Vusal Mammadzada
1 The definition of “profiling” is taken from the GDPR: any form of automated processing of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.
2 Unless this is obvious, taking into account the circumstances and the context of use.
3 "Deep fake" is an AI generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.
4 In the case of high-risk AI systems intended to be used by public authorities, the providers and deployers of such systems shall take the necessary steps to comply with the AI Act requirements by 2 August 2030.