
EU AI Act: Europe’s Comprehensive AI Regulation

Details and Background on the Implementation Requirements

The goal of the AI Act is to improve the functioning of the European internal market with regard to the introduction of human-centric and trustworthy artificial intelligence (AI), while at the same time ensuring a high level of protection for health, safety, and the fundamental rights enshrined in the Charter of Fundamental Rights - including democracy, the rule of law, and environmental protection - against possible harmful effects of AI systems. The AI Act is thus primarily a product safety regulation. Having entered into force on August 1, 2024, it aims to protect European consumers from violations of fundamental rights caused by inappropriate use of AI. In the future, providers of AI systems classified as high-risk must formally confirm compliance with numerous requirements based on the principles of trustworthy AI - from AI governance to AI quality. Non-compliance with these requirements may result in substantial fines, depending on the individual case. Moreover, providers may be forced to remove their AI systems from the market. Despite extensive principles, rules, and procedures, as well as new supervisory structures, the law is not intended to slow down innovation in the EU but rather to promote further development in the AI sector, particularly by start-ups and SMEs, by providing legal certainty.

What is Considered an AI System

 

The definition of AI in the AI Act aligns with the internationally recognized AI definition of the OECD. According to the AI Act, an AI system is "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

An AI system under EU law is characterized by its ability to draw inferences, i.e., to make predictions and decisions that affect real and virtual environments. These capabilities are enabled by techniques such as machine learning and logic-based approaches. AI systems vary in their autonomy and can be used independently or integrated into products, and they may adapt autonomously through use. The definition of AI in the AI Act is quite broad, which means that a large number of systems could fall under the regulation. However, the recitals explaining the text of the AI Act and the guideline on the definition of an AI system published by the EU Commission in February 2025 clarify that the definition does not cover systems limited to mathematical optimization, simple data processing, classical heuristics, or simple forecasts (e.g., based on statistical calculation rules). The guideline also identifies seven elements of an AI system, while clarifying that not all of them need to be present throughout the entire life cycle of an AI system under the AI Act.

Who is Affected by the EU AI Act

 

The AI Act provides that the regulation applies only to use cases that fall within the scope of EU law. The responsibilities of Member States or government authorities regarding national security are not curtailed in any way. Also excluded are AI systems used exclusively for military or defense purposes, systems used solely for research and innovation, AI released under free and open-source licenses (unless they fall into a prohibited, high-risk, or transparency-relevant category), and use for purely private, non-professional purposes.

The AI regulation applies to all providers of AI systems offered on the European market. The term provider covers persons or organizations that develop an AI system and place it on the market. Importers, distributors, and deployers (operators) of AI systems are also subject to the regulation.

General Purpose AI Models (GPAI)

 

As with "conventional" (focused, purpose-oriented) AI models, the AI Act also classifies base models - the engines behind generative AI - based on their risk. Because of their flexibility and potential for widespread use, they are termed "general-purpose AI" (GPAI).

The AI Act provides the following classification, which is based not on the application but on the performance and reach of the underlying base model:

  • Level 1: GPAI models must meet additional transparency obligations. These include technical documentation and detailed information on the use of copyrighted training data, as well as requirements for labeling content generated using AI.
  • Level 2: GPAI models with systemic risk: "very powerful" AI models that can pose systemic risks are subject to additional obligations, such as model evaluation, adversarial testing, and the monitoring and reporting of serious incidents.

The quantitative, non-subjective distinction between GPAI models and GPAI models with systemic risk is made based on the computing power needed to train the underlying base model. This is measured in floating-point operations (FLOPs) used for training. The threshold for GPAI models with systemic risk is 10^25 FLOPs.
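As a minimal sketch of this purely quantitative criterion: only the 10^25 FLOP threshold comes from the AI Act, while the model names and training-compute figures below are invented examples.

```python
# Hypothetical illustration only: the 10^25 FLOP threshold is from the AI Act,
# but the model names and training-compute figures below are invented examples.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute threshold

def is_gpai_with_systemic_risk(training_flops: float) -> bool:
    """A GPAI model is presumed to pose systemic risk once its cumulative
    training compute reaches the threshold."""
    return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

models = {
    "example-model-a": 3.0e24,  # below the threshold -> ordinary GPAI obligations
    "example-model-b": 2.1e25,  # above the threshold -> additional systemic-risk obligations
}

for name, flops in models.items():
    tier = "GPAI with systemic risk" if is_gpai_with_systemic_risk(flops) else "GPAI"
    print(f"{name}: {flops:.1e} FLOPs -> {tier}")
```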

To meet these new requirements, experts from industry, academia, and civil society, along with other relevant stakeholders, will develop codes of practice - and ultimately harmonized EU-wide standards in collaboration with the Commission.

A Risk-Based Approach

 

In classifying AI systems, the AI Act follows a risk-based approach: not all AI systems are treated the same. It first differentiates between "conventional" AI systems and "general-purpose AI" (GPAI). The latter is a relatively new development since the emergence of generative AI systems and is treated as a separate topic (see above).

The risk of so-called Single-Purpose AI (AI systems with a specific use case) is assessed not based on their technology, but on their use case. Risk categories range from "unacceptable" to "high" to "limited or minimal". Systems with unacceptable risk are prohibited, while those with minimal risk are not regulated by the AI Act.
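The tiering can be pictured as a simple lookup from use case to risk category. The following Python sketch uses only example use cases named elsewhere in this article; any real classification requires a careful case-by-case legal assessment.

```python
# Illustrative sketch only: maps example use cases named in this article to the
# AI Act's risk tiers. A real classification requires a case-by-case legal review.

EXAMPLE_CLASSIFICATION = {
    "social scoring":           "unacceptable",  # prohibited practice
    "recruitment / employment": "high",          # area listed in Annex III
    "customer service chatbot": "limited",       # transparency obligations apply
    "spam filter":              "minimal",       # no specific obligations
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier} risk")
```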

Prohibited AI Systems

 

AI systems that pose an unacceptable risk have been completely prohibited since February 2, 2025. The AI Act lists the following applications:

  • Biometric categorization systems based on sensitive characteristics such as political opinion, religious or philosophical beliefs, sexual orientation, or ethnic origin
  • However, so-called biometric real-time remote identification systems remain permitted in narrowly defined exceptional cases. These are classified as high-risk and are subject to strict requirements:
    • Temporal and geographical restrictions
    • For the targeted search for victims (e.g., in cases of kidnapping or human trafficking)
    • To avert the concrete and immediate danger of a terrorist attack
    • To detect or identify a perpetrator or suspect of a serious crime as defined by the regulation
  • Untargeted scraping of facial images from the internet or surveillance footage to create facial recognition databases
  • Emotion recognition in the workplace and education settings
  • Social scoring systems assessing people based on their social behavior or individual characteristics
  • Systems that manipulate people's behavior and circumvent their free will
  • Applications exploiting the weaknesses of certain groups of people - especially due to age, disability, or socio-economic status

On this topic, the EU Commission also published a guideline on prohibited practices in February 2025. It provides numerous examples of all practices prohibited under the AI Act and distinguishes them from high-risk use cases, but it also makes clear that the line between prohibited and high-risk applications can be very fine. A careful case-by-case examination is therefore essential.

High-Risk AI Systems (HRAI) – the Focus of the Regulation

 

The main focus of the regulation is clearly on high-risk AI systems, which are subject to a multitude of compliance regulations. Providers of such systems are required to introduce a quality management and risk management system, meet data quality and integrity requirements, and carry out a conformity assessment before issuing a declaration of conformity. High-risk AI systems are divided into two categories:

  • Systems for products subject to EU safety legislation (Annex I), such as machinery, toys, aviation and vehicle technology, medical devices, and elevators, must undergo a third-party conformity assessment.
  • Providers of systems in the areas listed in Annex III carry out the conformity assessment themselves. These areas include critical infrastructure, education, employment, essential private and public services (including financial services), law enforcement, migration/asylum/border control, and democratic processes (elections). Special rules apply, however, to the use of systems for remote biometric identification (RBI), for example to combat certain crimes.

Before high-risk AI systems can be put into use in the public sector, by banks, or by insurers, a Fundamental Rights Impact Assessment must also be carried out.

Citizens have the right to lodge complaints with national authorities about AI systems and algorithmic decisions affecting their rights.

The Rest

 

There are types of AI systems that pose limited or minimal risk to which fewer or no obligations of the AI Regulation apply:

Administrative or internal AI systems such as spam filters or predictive maintenance systems are not subject to specific obligations under the regulation and are therefore classified as "minimal risk." However, companies can voluntarily adhere to codes of conduct for these AI systems.

Nevertheless, certain transparency obligations apply to any AI system that interacts with people, such as chatbots or recommendation systems, regardless of its risk categorization. Content generated by an AI system, such as conversations (chatbots), deep fakes, or biometric categorizations, must be labeled as such. This obligation is associated with the classification as "limited risk."

Conformity

 

To issue a declaration of conformity, providers of high-risk AI systems must demonstrate compliance with the regulation before market introduction and throughout the life cycle of an AI system:

  • Quality Management System (QMS) - Ensuring appropriate governance in relation to data quality, technical documentation, record-keeping obligations, risk management, human oversight, and model validation against principles of trustworthy AI, especially transparency, robustness, accuracy, and cybersecurity.
  • Risk Management System - anticipating where AI could go wrong, either in general or specifically for each use case, designing controls, and drawing up contingency plans and accountability for resolving issues should they materialize.
  • Lifecycle Management - As part of the QMS, providers are obligated to not only minimize and manage risks before marketing but also throughout the life cycle of the AI system. This includes registration of the high-risk AI system in the EU database and logging incidents over the entire life cycle.

For details on these, see our article dedicated to AI Governance.

Enforcement

 

The legislator generally relies on self-assessment: depending on the type of high-risk AI system, providers either perform the conformity assessment themselves or entrust it to authorized third parties. The AI Regulation provides for an administrative structure with several central government agencies, each entrusted with different tasks regarding the implementation and enforcement of the law.

At the EU Level

 

The EU AI Office, a new authority within the European Commission, coordinates the implementation of the law in all EU member states. In addition, the AI Office supervises GPAI models with systemic risk.

An advisory forum attached to the AI Office, consisting of stakeholders from business and civil society, provides feedback and ensures that a broad range of opinions is represented during the implementation process.

Furthermore, a scientific panel of independent experts is to identify systemic risks of AI, provide guidance on the classification of models, and ensure that the rules and the implementation of the law reflect the latest scientific findings.

At the National Level

 

EU Member States must set up or designate competent national authorities responsible for enforcing the law, so-called market surveillance authorities. They must also ensure that all AI systems comply with the relevant standards and regulations. Their tasks include:

  • Monitoring the proper and timely performance of conformity assessments,
  • Appointing the "notified bodies" (external auditors) authorized to perform external conformity assessments,
  • Coordination with other supervisory authorities at the national level (e.g., for banks, insurance, healthcare, automotive industry, etc.) and with the AI Office at the EU level.

In Germany, according to the current draft of the national implementing legislation for the AI Act, the Federal Network Agency (Bundesnetzagentur, for all other industries) and BaFin (for the financial sector) are to share the task of AI supervision.

Sanctions for Violations

 

AI systems are classified based on their risk. Similarly, the sanctions provided for in the EU AI Act also correspond to the severity of the violation:

  • Infringements related to prohibited AI systems (Article 5) can cost providers up to EUR 35 million or 7% of their global annual turnover in the previous financial year, whichever is higher (the cap rule is illustrated in the sketch after this list).
  • Breaches of various other provisions (e.g., of Articles 10 or 13) can be fined with up to EUR 15 million or 3% of the annual turnover, whichever is higher.
  • Supplying incorrect, incomplete, or misleading information to authorities can be punished with up to EUR 7.5 million or 1% of the annual turnover, whichever is higher.
  • In addition to these monetary penalties, national supervisory authorities can force providers to remove non-compliant AI systems from the market or prohibit the provision of their services.
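To illustrate the "whichever is higher" cap rule, here is a minimal Python sketch; the turnover figure and the helper function are assumptions introduced purely for illustration, not legal advice.

```python
# Hypothetical illustration of the "whichever is higher" cap rule for fines
# (Article 99). The turnover figure is an invented example, not real company data.

def fine_cap(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Upper limit of the fine: the higher of the fixed amount and the turnover share."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

turnover = 2_000_000_000  # assumed global annual turnover of EUR 2 billion

print(f"Prohibited practices (Art. 5): up to EUR {fine_cap(turnover, 35_000_000, 0.07):,.0f}")
print(f"Other violations (e.g., Art. 10/13): up to EUR {fine_cap(turnover, 15_000_000, 0.03):,.0f}")
print(f"Incorrect information to authorities: up to EUR {fine_cap(turnover, 7_500_000, 0.01):,.0f}")
```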

The AI Regulation provides for more moderate fines for SMEs and start-ups.

While the provisions on sanctions do not apply until August 2025, the prohibitions in effect since February 2, 2025 apply immediately, so affected parties could potentially enforce them before national courts and obtain interim injunctions.

Regulation Timetable

The AI Act came into force on August 1, 2024, and has staggered implementation deadlines. Most of the AI Act will be applicable from August 2, 2026. However, some provisions apply earlier, while high-risk AI systems covered by the product legislation in Annex I follow a three-year transition period:

  • February 2, 2025: AI systems posing an unacceptable risk are prohibited.
  • August 2, 2025: The provisions for general-purpose AI and sanctions apply.
  • August 2, 2026: The remaining provisions on high-risk AI systems (Annex III), transparency rules, and AI regulatory sandboxes apply.
  • August 2, 2027: Rules for high-risk AI systems under Annex I and for GPAI models that were already on the market before August 2, 2025 apply.

 

Use AI Systems - Safely

 

Although not all technical details have been clarified yet, the AI Act gives a sufficient idea of the scope and aim of the regulation. Businesses will have to adapt many internal processes and strengthen risk management systems. The European standardization body CEN-CENELEC will translate the principles of the AI Regulation into technical standards and norms to facilitate the testing and certification of AI systems, and the EU Commission will publish guidelines as guidance for the application of the AI Act. However, existing processes in the company can be built upon, and lessons can be learned from the implementation of previous legislation such as the GDPR. We recommend that companies drive implementation within their organization, raise employee awareness of the new law, take stock of their AI systems, ensure appropriate governance measures, and meticulously scrutinize AI systems categorized as high-risk.

At Deloitte, we stand by our clients: we assist you in mastering the complexity and scope of the AI regulation and in preparing for the requirements that will apply in the future. Benefit from Deloitte's thought leadership in the field of trustworthy AI, our extensive expertise in the development of AI systems, and our long-standing experience as an audit firm. Our services are based on the six life cycle phases of AI systems, which are also described in the AI Regulation and correspond to general practice.

Leading Role in the Field of AI

 

Deloitte has extensive expertise in the implementation of AI-based solutions and in the careful development of dedicated audit and monitoring tools for assessing AI models according to the principles of trustworthy AI. Our reputation as a competent consulting company is primarily based on our demanding quality standards. To assess the conformity of your systems, completing a questionnaire is far from sufficient. Deloitte conducts in-depth quantitative analyses and rigorously tests your AI models to identify logic errors, methodological inconsistencies, implementation problems, data risks, and other weak points. We believe that only such a thorough approach meets the requirements of our customers. However, this doesn't mean reinventing the wheel for each analysis. For the sake of efficiency, Deloitte has invested in the development of dedicated tools to streamline the numerous steps of the validation process. A series of white papers (download below) explains why quality guardrails and governance mechanisms are of critical importance in strengthening trust in the AI models and systems that are decisively shaping our present and our future.
