
The EU AI Act is finally here

An overview of the requirements in the published text and how they may impact your organisation

12 July 2024

Regulatory News Alert

At a glance

Following its adoption by the European Parliament on 13 March 2024 and by the Council on 21 May 2024, the EU Artificial Intelligence Act (AI Act) has been published today in the EU Official Journal.

For such a rapidly evolving technology, Europe seemed to be taking its time finalizing the legislative framework regulating AI systems, and for good reason: the challenges of this topic start with the very definition of an AI system. Now that the details have been ironed out, this article takes a closer look at the newest requirements in the finalized text of the AI Act.

 

A closer look

How does the AI Act define an AI system? We have come a long way since our introductory article on the AI Act, when the Parliament and the Council were not yet aligned on the definition of what makes this technology “intelligent”. In the end, the definition of an Artificial Intelligence system from the Organization for Economic Co-operation and Development (OECD) was used as the basis for the final text:

“AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” The choice to adopt the OECD definition ensures further harmonization at the international level and, hence, global acceptance of the European text.

Two key criteria are paramount to determining whether a software system is an AI system:

  • Autonomy: Is the system able to function to some degree without human involvement, following the delegation of autonomy and process automation by humans? For example, a system that generates outputs without those outputs being explicitly described in the AI system’s objective and without specific instructions from a human.
  • Adaptiveness: The question here is whether the system can modify its behavior through direct interaction with input data. AI systems can be trained once, periodically, or continually, and operate by inferring patterns and relationships in data. Through such training, some AI systems may develop the ability to perform new forms of inference not initially envisioned by their programmers.
     

How does the AI Act classify the risk of AI systems?

The EU AI Act adopts a risk-based classification of AI systems and introduces obligations regarding general-purpose AI. The different risk categories are as follows:

  • Unacceptable risk: AI systems that pose a clear threat to the safety, livelihoods, and rights of individuals will be banned outright. This includes, but is not limited to, AI that manipulates human behavior, exploits vulnerable groups, or uses “social scoring” by governments.
  • High-risk systems: A broad range of AI systems is classified as high-risk, such as those operating in public spaces or those that may affect an individual's legal status or rights.
  • Limited risk systems: AI systems such as chatbots fall under this category. For these systems, transparency obligations apply; for example, users should be aware they are interacting with a machine.
  • Minimal risk: The majority of AI systems, such as AI-enabled video games or spam filters, are considered to be of minimal risk and can operate freely.
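Purely as an illustration of the tiered structure above, the sketch below shows how an organization might record a first-pass triage of its AI inventory. The RiskTier enum, the example systems and the tiers assigned to them are our own hypothetical simplification; an actual classification depends on the detailed legal tests in the Act, not on labels of this kind.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified mirror of the AI Act's four risk categories (illustrative only)."""
    UNACCEPTABLE = "banned outright (e.g., social scoring by governments)"
    HIGH = "stringent requirements (e.g., Annex III use cases)"
    LIMITED = "transparency obligations (e.g., chatbots)"
    MINIMAL = "no specific obligations (e.g., spam filters, video games)"

# Hypothetical first-pass triage of an internal AI inventory.
# The tiers assigned here are assumptions for illustration, not legal conclusions.
inventory = {
    "customer-support chatbot": RiskTier.LIMITED,
    "CV screening tool": RiskTier.HIGH,              # Annex III: recruitment
    "email spam filter": RiskTier.MINIMAL,
    "behavioural manipulation engine": RiskTier.UNACCEPTABLE,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```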
     

How does the AI Act approach the risk of General Purpose AI?

Unsurprisingly, the question of how to regulate general-purpose AI (GP AI) systems, i.e., those that are trained on a large amount of data and are capable of competently performing a wide range of distinct tasks, was one of the most contentious aspects of the AI Act.

The risk-based approach under the AI Act means that the more risk an AI system poses, the more requirements are imposed on its operators. With GP AI systems, however, it is hard to determine the intended purpose precisely because of their “generality” and the fact that these systems can be integrated into a wide variety of downstream systems and applications. The risk categorization of a GP AI system therefore presents a challenge to operators as well as legislators.

For these reasons, the final text of the AI Act introduces a dedicated regime (Chapter V) for the providers of GP AI models (and not GP AI systems). Before deployment, an AI system is typically built on one or more models (e.g., with reasoning and decision-making algorithms) based on machine and/or human inputs and data. An AI model is therefore a core component of an AI system, used to make inferences from inputs to produce outputs. While the parameters of an AI model change during the build phase, they usually remain largely fixed once the build phase has concluded and the model is deployed. Risks posed by GP AI models are therefore easier to estimate and, consequently, to regulate.

Providers and deployers of AI systems (regardless of whether these are general-purpose or not) that use GP AI model(s) will not be subject to Chapter V of the AI Act but will follow the general rules provided in the AI Act. As models and systems are treated separately, a GP AI model will never constitute a high-risk AI “system”, as it is not an AI system.
 

Exemptions for high-risk AI systems

As a general rule, all AI systems outlined in Annex III of the AI Act are considered high-risk systems and, as such, are subject to stringent requirements under the AI Act. Some key examples of high-risk systems include:

  • Emotion recognition;
  • The recruitment or selection of natural persons, in particular to analyze and filter job applications;
  • Decisions affecting the terms of work-related relationships, or the promotion or termination of contractual relationships.

The list in Annex III is not fixed, and the European Commission may adopt delegated acts to add or remove certain systems from it.

Based on previous draft language alone, many of the AI systems currently on the market would have been considered high-risk AI systems. For this reason, following a timely intervention by the EU Parliament, the final text of the AI Act provides exemptions for systems listed in Annex III that are intended to:

  • Perform a narrow procedural or preparatory task;
  • Improve the result of a previously completed human activity;
  • Detect decision-making patterns or deviations from such patterns, without replacing or influencing a previously completed human assessment.

Importantly, however, Annex III systems will always be considered high-risk where they perform profiling1 of natural persons.
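As a rough sketch of how these exemptions interact with the profiling override, the hypothetical helper below encodes the rule of thumb described above: an Annex III system is treated as high-risk unless one of the three narrow intended uses applies, and it is always high-risk if it profiles natural persons. The function and flag names are our own illustrative shorthand, not terms from the Act, and a real assessment would need the full legal analysis.

```python
def is_high_risk_annex_iii(
    performs_profiling: bool,
    narrow_procedural_task: bool = False,
    improves_prior_human_activity: bool = False,
    detects_patterns_without_replacing_human: bool = False,
) -> bool:
    """Rule-of-thumb check for an Annex III system (illustrative only).

    Profiling of natural persons always keeps the system high-risk,
    regardless of any exemption that might otherwise apply.
    """
    if performs_profiling:
        return True
    exempt = (
        narrow_procedural_task
        or improves_prior_human_activity
        or detects_patterns_without_replacing_human
    )
    return not exempt

# Example: a tool that only flags deviations from an earlier human assessment,
# and does not profile anyone, would fall outside the high-risk bucket.
print(is_high_risk_annex_iii(
    performs_profiling=False,
    detects_patterns_without_replacing_human=True,
))  # False
```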


Additional transparency requirements for certain AI systems

Certain AI systems may pose specific risks of impersonation or deception, which is why they are subject to additional transparency requirements under the AI Act:

  • Providers of systems that directly interact with natural persons have to ensure that they are designed in such a way that natural persons are informed that they are interacting with an AI system2.
  • Providers of systems that can generate large quantities of synthetic content are required to disclose that the output has been generated or manipulated by an AI system and not a human.
  • Deployers of emotion recognition systems, biometric categorization systems and “deep fakes”3 must inform natural persons about the system’s operation and data processing, and disclose that the content has been artificially generated or manipulated.
     

Timeline for compliance with the new rules

[Timeline: key dates for the application of the EU AI Act]

Most of the provisions of the AI Act will be fully applicable from 2 August 2026. However, those AI systems that pose unacceptable risk (Chapter II) will be banned from 2 February 2025. Finally, rules on GP AI models (Chapter V) and governance at the EU level (Chapter VII) will apply from 2 August 2025.
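The dates above can be summarized in a small lookup, sketched below purely for illustration; the chapter labels follow the paragraph above, and any compliance planning should of course rely on the published text rather than this simplification.

```python
from datetime import date

# Illustrative summary of the application dates mentioned above.
APPLICATION_DATES = {
    "Chapter II - prohibited (unacceptable-risk) practices": date(2025, 2, 2),
    "Chapter V - general-purpose AI models": date(2025, 8, 2),
    "Chapter VII - EU-level governance": date(2025, 8, 2),
    "Most other provisions": date(2026, 8, 2),
}

today = date(2024, 7, 12)  # publication date of this alert
for provision, applicable_from in sorted(APPLICATION_DATES.items(), key=lambda kv: kv[1]):
    days_left = (applicable_from - today).days
    print(f"{provision}: applies from {applicable_from} ({days_left} days away)")
```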

Operators of high-risk AI systems that have been placed on the market or put into service before the general date of application should comply with the AI Act only if, as from that date, those systems are subject to significant changes in their design.4
 

How Deloitte can help

By understanding where your technology aligns with this regulation, your organization can ensure compliance before the general date of application on 2 August 2026 and avoid non-compliance that could lead to the substantial fines defined in the regulation.

Our AI/Data regulatory team offers a wide range of services, from AI regulatory strategy to assessment and remediation, with human-centric and trustworthy AI at the core of what we do.

Drawing on this knowledge, Deloitte’s specialists and dedicated services can help you clarify the impact of the AI Act, identify any gaps, design potential solutions and take the necessary steps to put these solutions in place. We can support you in various critical areas such as AI strategy, business and operating models, regulatory and compliance, technology, and risk management.

Get in touch
 

Georges Wantz
Managing Director | Advisory & Consulting
+352 45145 4363
gwantz@deloitte.lu

 

Marijana Vuksic
Senior Manager | Advisory & Consulting
+352 621412051
mvuksic@deloitte.lu

 

Haley Cover
Senior Manager | Advisory & Consulting
+352 621568512
hcover@deloitte.lu

 

Vusal Mammadzada
Senior Manager | Advisory & Consulting
+352 621821890
vumammadzada@deloitte.lu

 

1 The definition of “profiling” is taken from the GDPR: any form of automated processing of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.

2 Unless this is obvious, taking into account the circumstances and the context of use.

3 "Deep fake" is an AI generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.

4 In the case of high-risk AI systems intended to be used by public authorities, the providers and deployers of such systems shall take the necessary steps to comply with the AI Act requirements by 2030.
