EU AI Act: The AI law of the European Union

Details and background on the implementation requirements

The aim of the AI Act is to improve the functioning of the European single market and promote the introduction of human-centered and trustworthy artificial intelligence (AI), while ensuring a high level of protection for health, safety and the fundamental rights enshrined in the Charter of Fundamental Rights (including democracy, the rule of law and environmental protection) against the potentially harmful effects of AI systems. The AI Act is therefore also a product safety regulation: it is intended to protect European consumers from violations of fundamental rights resulting from the inappropriate use of AI. In future, providers of AI systems classified as high-risk will have to verify and formally confirm compliance with numerous requirements in line with the principles of trustworthy AI, from AI governance to AI quality. Violations of these requirements may result in severe fines, and providers can be forced to withdraw their AI systems from the market. Despite its extensive principles, rules, procedures and new supervisory structures, the law is not intended to slow down innovation in the EU, but rather to promote further development in the AI sector, particularly by start-ups and SMEs, through legal certainty and regulatory sandboxes.

EU Parliament approves AI Act - as of June 2024

On March 13, 2024, the EU Parliament approved the AI Act with a large majority. It will be published in the Official Journal of the EU – following formal approval by the Member States and translation into all official EU languages – in July 2024 with entry into force in August 2024. 

What constitutes an AI System?

The definition of AI in the AI Act is based on the internationally recognized OECD definition of AI. The EU AI Act defines an AI system as follows: "'AI system' means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

An AI system under EU law is characterized by the ability to draw inferences, i.e. it can make predictions and decisions that influence physical and virtual environments. This is made possible by techniques such as machine learning and logic-based approaches. AI systems vary in their degree of autonomy and can be used either on their own or integrated into products, and they may adapt autonomously through use. The AI Act adopts a sweeping definition of AI, implying that a wide range of systems could potentially be regulated. However, the introductory recitals that clarify the regulatory text make clear that the definition is not meant to encompass simple, traditional software systems or programming approaches: systems that rely entirely on rules defined by humans to execute operations automatically are excluded from the scope.
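
To illustrate where this boundary falls, here is a minimal Python sketch (hypothetical names and logic, not taken from the regulation) contrasting purely rule-based software, which is out of scope, with a system that infers its output from learned parameters, which falls under the definition:

    # Illustrative only: the scope boundary in the AI Act's definition of an AI system.

    def credit_limit_rule_based(income: float) -> float:
        """Traditional software: every rule is fixed in advance by a human
        developer, so this falls outside the AI Act's definition."""
        return income * 0.3 if income > 20_000 else 0.0

    class CreditScoringModel:
        """A machine-based system that infers its output from input data via
        parameters learned from training data; this inference capability is
        what brings a system within the AI Act's definition."""

        def __init__(self, learned_weights: list[float]):
            # Weights obtained via machine learning, not hand-coded rules.
            self.weights = learned_weights

        def predict_limit(self, features: list[float]) -> float:
            # Output is derived from learned parameters, not human-designated rules.
            return sum(w * x for w, x in zip(self.weights, features))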

Who is affected by the AI Act?

The EU AI Act stipulates that the regulation applies exclusively to matters that fall within the scope of EU law. The competences of Member States or government authorities with regard to national security are not curtailed in any way. AI systems designed solely for military or defense applications, or those employed strictly for research and innovation, are also outside the purview of the AI Act. Furthermore, open-source models and AI systems used by individuals for non-commercial purposes are not covered either. The AI Act applies to all providers of AI systems that are placed on the market or put into service in the EU. The term 'provider' covers operators that place an AI system on the market or put it into service, usually under their own name or trademark. Importers, distributors and deployers are also subject to the law.

Categorizing AI systems – focus on high-risk systems

The AI Act adopts a risk-based approach to categorizing AI systems. Initially, a distinction is drawn between "conventional" AI systems and "general purpose AI" (GPAI). GPAI, a relatively new phenomenon alongside generative AI systems, is treated as a separate subject (discussed in further detail below). The risk of single-purpose AI (AI with a specific purpose) is assessed not on the basis of its technology, but on the basis of its application. The risk categories range from "unacceptable" to "high" to "limited or minimal". Systems with unacceptable risk are prohibited, while those with minimal risk are not subject to any requirements. The focus of the legislation is therefore clearly on high-risk AI systems, which are subject to numerous compliance regulations; a simplified triage sketch follows below. Providers of such systems are required to implement a risk management system and meet data quality and integrity requirements. Additionally, they must carry out a conformity assessment and then issue a declaration of conformity. High-risk AI systems are divided into two categories:

  • Systems for products that are subject to EU safety regulations - such as machinery, toys, aviation and vehicle technology, medical devices and lifts - must undergo a third-party conformity assessment.
  • Providers of systems in the areas listed in Annex III must carry out the conformity assessment themselves. These include critical infrastructure, education, employment, essential private and public services (including financial services), law enforcement, migration/asylum/border control and democratic processes (elections). However, special provisions have been made for the use of remote biometric identification (RBI) systems, for example to combat certain crimes.

Before high-risk AI systems from the public, banking or insurance sectors are launched on the market, a fundamental rights impact assessment must also be carried out.

Citizens are entitled to submit complaints to national authorities about AI systems and algorithmic decisions that affect their rights.
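
As a rough mental model of the risk-based triage described above, the following sketch encodes the categories as a simple decision function. The category names follow the Act, but the boolean inputs and their ordering are simplifications for illustration, not a legal test:

    from enum import Enum

    class RiskCategory(Enum):
        UNACCEPTABLE = "prohibited"  # banned outright
        HIGH = "high-risk"           # full conformity-assessment duties
        LIMITED = "limited risk"     # transparency obligations only
        MINIMAL = "minimal risk"     # no mandatory obligations

    def categorize(prohibited_practice: bool,
                   safety_component: bool,   # product under EU safety law, e.g. a medical device
                   annex_iii_area: bool,     # e.g. employment, credit scoring, law enforcement
                   interacts_with_humans: bool) -> RiskCategory:
        """Simplified triage mirroring the AI Act's risk pyramid; the real
        assessment involves detailed legal tests, not boolean flags."""
        if prohibited_practice:
            return RiskCategory.UNACCEPTABLE
        if safety_component or annex_iii_area:
            return RiskCategory.HIGH
        if interacts_with_humans:
            return RiskCategory.LIMITED
        return RiskCategory.MINIMAL

    # Example: a CV-screening tool falls under the Annex III employment area.
    print(categorize(False, False, True, True))  # RiskCategory.HIGH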

Prohibited AI systems

AI systems that pose an unacceptable risk will be banned completely just six months after the AI Act enters into force. The AI Act lists the following applications:

  • Systems for biometric categorization based on sensitive characteristics such as political opinion, religious or philosophical beliefs, sexual orientation or ethnic origin.
  • So-called real-time remote biometric identification systems, by contrast, remain permitted only by way of exception; they are classified as high-risk and subject to strict conditions:
    • Time and location restrictions.
    • For the targeted search for victims (e.g. in cases of kidnapping or human trafficking).
    • To avert the concrete and immediate danger of a terrorist attack.
    • To detect or identify a perpetrator or suspect of a serious crime within the meaning of the regulation.
  • Untargeted collection of facial images from the internet or surveillance cameras to create a facial recognition database.
  • Emotion recognition in the workplace and in educational institutions.
  • Social scoring systems that rate people based on their social behavior or personal characteristics.
  • Systems that manipulate people's behavior and impair their free will.
  • Applications that exploit the weaknesses of certain groups of people - in particular due to age, disability or socio-economic status.

AI systems in the lower risk classes

There are two types of AI systems that pose a limited or minimal risk and therefore have few or no obligations under the AI Act:

  • Certain transparency obligations apply to AI systems that interact with people, such as chatbots or recommendation systems. Moreover, content generated by an AI system, such as chatbot conversations, deepfakes or biometric categorizations, must be labeled (see the sketch following this list). These obligations apply to AI classified as "limited risk".
  • Administrative or internal AI systems such as spam filters or predictive maintenance systems fall outside the requirements of the regulation and are therefore categorized as "minimal risk". The AI Act does not stipulate any explicit obligations for AI systems in this risk category. However, companies can apply codes of conduct for these AI systems on a voluntary basis.
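
As a minimal sketch of the labeling obligation, assuming a simple text-based disclosure (the wording and mechanism are illustrative, not prescribed by the Act):

    def with_ai_disclosure(generated_text: str) -> str:
        """Attach a disclosure so users know the content is machine-generated,
        as required for limited-risk systems; the label wording is illustrative."""
        return f"[AI-generated content] {generated_text}"

    print(with_ai_disclosure("Your parcel is expected to arrive on Friday."))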

AI systems with general purpose applications (GPAI) and foundation models

As with "traditional" (focused, single-purpose) AI models, the AI Act also classifies foundation models - the engines behind Generative AI - according to their risk. They are known as "general purpose AI" (GPAI) due to their flexibility and potential for widespread use.

The AI Act provides for the following two-level classification, which is based not on the application, but on the performance and reach of the underlying base model:

  • Level 1: AI with a general purpose of use (GPAI): All base models must fulfill additional transparency obligations. In addition to technical documentation and detailed statements on the use of copyrighted training data, this also includes the above-mentioned requirements for labeling content generated with the help of AI.
  • Level 2: GPAI with significant impact: Additional obligations apply to "very high performing" base models that may pose systemic risks, for example in relation to serious incident monitoring, model assessment and attack testing.

The quantitative, objective distinction between GPAI and GPAI with high-impact capabilities is based, among other things, on the computing power required to train the underlying base model. This is measured in floating-point operations (FLOP), i.e. cumulative training compute rather than operations per second. The threshold for GPAI with significant impact is 10^25 FLOP.
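
This tiering can be expressed as a comparison of cumulative training compute against the 10^25 FLOP threshold. The sketch below assumes the compute figure is already known or estimated:

    SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # cumulative training compute per the AI Act

    def gpai_tier(training_compute_flop: float) -> str:
        """Classify a general-purpose AI model by the total compute used to
        train it (cumulative FLOP, not operations per second)."""
        if training_compute_flop >= SYSTEMIC_RISK_THRESHOLD_FLOP:
            return "Level 2: GPAI with significant impact (additional obligations)"
        return "Level 1: GPAI (baseline transparency obligations)"

    # Example: a model trained with ~2.1e25 FLOP falls into Level 2.
    print(gpai_tier(2.1e25))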

To meet these new requirements in practice, experts from industry, academia, civil society and other relevant stakeholders will work with the Commission to develop codes of conduct and ultimately harmonized EU-wide standards.

Conformity

In order to issue a declaration of conformity, providers of high-risk AI systems must demonstrate compliance with the regulation prior to market launch and throughout the entire life cycle of the systems:

  • Quality management systems (QMS) – ensuring appropriate governance in terms of data quality, technical documentation, record-keeping requirements, risk management, human oversight and the principles of Trustworthy AI, in particular transparency, robustness, accuracy and cybersecurity.
  • Validation of the AI system – ensuring that the development, deployment and operation of the respective systems comply with the principles of Trustworthy AI. In this context, a fundamental rights impact assessment must be carried out to identify possible negative effects resulting from the respective use case of the AI system.
  • Lifecycle management – as part of the QMS, providers are obliged to minimize and manage risks not only before placing AI systems on the market, but throughout their entire lifecycle. This also includes registering high-risk systems in the EU database and logging all incidents over the entire lifecycle (a data-structure sketch follows below).
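
One way to picture the registration and lifecycle-logging duties is as a structured record kept per high-risk system. This sketch is purely illustrative; the field names and database identifier are assumptions, not mandated formats:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Incident:
        timestamp: datetime
        description: str
        severity: str  # e.g. a "serious incident" triggers reporting duties

    @dataclass
    class HighRiskSystemRecord:
        """Hypothetical per-system record supporting registration in the EU
        database and incident logging over the entire lifecycle."""
        system_name: str
        eu_database_id: str  # assigned upon registration; identifier format is illustrative
        incidents: list[Incident] = field(default_factory=list)

        def log_incident(self, description: str, severity: str) -> None:
            self.incidents.append(
                Incident(datetime.now(timezone.utc), description, severity))

    record = HighRiskSystemRecord("cv-screening-v2", "EU-DB-0001")
    record.log_incident("Unexpected score drift on a new applicant cohort", "minor")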

Enforcement

The legislator generally requires providers to self-assess by carrying out a conformity assessment themselves or commissioning authorized third parties to do so, depending on the type of high-risk AI system. The AI Act provides for an administrative structure with several central government authorities, each entrusted with different tasks in relation to the implementation and enforcement of the law.

At the EU level

The European AI Office, recently established by the European Commission, plays a crucial role in overseeing the rollout of the AI Act across all EU member states. This entity places a particular emphasis on the oversight of general-purpose AI systems.

An AI Advisory Board, comprising representatives from both the business community and civil society, offers critical feedback. This board ensures that a diverse range of perspectives are considered throughout the implementation process, facilitating a more inclusive approach to AI regulation.

Additionally, the Scientific Advisory Panel, staffed by independent experts, is tasked with identifying systemic risks associated with AI. This panel also provides recommendations on the categorization of AI models and ensures that the Act's enforcement strategies are consistent with the most current scientific insights.

This structure is designed to ensure that AI regulation in the EU is balanced, informed by a broad spectrum of insights, and aligned with cutting-edge scientific research, all of which are essential for businesses and society to understand as they navigate the evolving landscape of AI regulation.

At the national level 

EU Member States must establish or designate competent national authorities responsible for enforcing the Act within their jurisdictions and for ensuring that all AI systems comply with the applicable standards and regulations. Their duties include:

  • Verifying that conformity assessments are conducted properly and in a timely fashion.
  • Appointing the “notified bodies” (third-party auditors) authorized to perform third-party conformity assessments where required.
  • Coordinating nationally with other supervisory bodies (e.g. for banking, insurance, healthcare and automotive) and across Europe with the EU's "AI Office".

Penalties for non-compliance

AI systems are classified according to their risk. Similarly, the maximum penalties provided for in the AI Act are also based on the severity of the infringement:

  • Infringements relating to prohibited AI systems (Article 5) can cost providers up to EUR 35 million or 7 percent of their global annual turnover in the previous year, whichever is higher.
  • Fines of up to EUR 15 million or 3 percent of annual turnover, whichever is higher, can be imposed for infringements of most other obligations under the Act.
  • Supplying incorrect information to authorities can be penalized with up to EUR 7.5 million or 1 percent of annual turnover, whichever is higher.
  • In addition to these monetary penalties, national supervisory authorities can force providers to remove non-compliant AI systems from the market.

The AI Act provides for more moderate fines for SMEs and start-ups.

In addition to the penalties for high-risk AI, the following penalties apply to violations relating to GPAI:

  • Up to EUR 15 million or 3 percent of annual turnover, whichever is higher (see the sketch below).
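
Because each tier applies a "whichever is higher" rule, the maximum exposure is simply the larger of the fixed cap and the turnover-based amount, as this sketch illustrates:

    def max_fine(fixed_cap_eur: float, turnover_share: float,
                 global_annual_turnover_eur: float) -> float:
        """Maximum fine under the AI Act's "whichever is higher" rule."""
        return max(fixed_cap_eur, turnover_share * global_annual_turnover_eur)

    # Prohibited-practice tier for a company with EUR 2 bn global turnover:
    # max(35,000,000, 0.07 * 2,000,000,000) = EUR 140,000,000
    print(max_fine(35_000_000, 0.07, 2_000_000_000))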

The regulation timeline

The Act enters into force 20 days after its publication in the Official Journal of the European Union, expected in July 2024. The AI Act will apply in its entirety two years after entry into force. Nevertheless, certain provisions take effect sooner, while the high-risk AI systems mentioned in Annex II are subject to an extended three-year transitional period (see the sketch after this list):

  • 6 months: Systems with unacceptable risk are banned.
  • 12 months: The regulations for general purpose AI and foundation models apply.
  • 24 months: The remaining provisions relating to high-risk AI systems (Annex III), transparency regulations and national regulatory sandboxes apply.
  • 36 months: Rules for high-risk systems according to Annex II.
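
Taking the August 2024 entry into force as the reference point, the staggered deadlines follow from simple month offsets. This sketch assumes the first of the month as the reference day, purely for illustration:

    from datetime import date

    ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumed reference day within August 2024

    def add_months(d: date, months: int) -> date:
        """Shift a date by whole months (safe here because the day is the 1st)."""
        total = d.year * 12 + (d.month - 1) + months
        return date(total // 12, total % 12 + 1, d.day)

    milestones = [
        ("Ban on unacceptable-risk systems", 6),
        ("Rules for general-purpose AI and foundation models", 12),
        ("Remaining high-risk provisions (Annex III), transparency, sandboxes", 24),
        ("High-risk systems under Annex II", 36),
    ]

    for label, months in milestones:
        print(f"{label}: applies from {add_months(ENTRY_INTO_FORCE, months)}")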

To bridge the transitional period until the AI Act becomes applicable, the Commission has launched the AI Pact. It is intended to promote uniform global regulation and encourage providers of AI systems to voluntarily commit to implementing the most important requirements before the 2026 legal deadlines. At the end of the trilogue negotiations, around 100 companies signed up to the AI Pact. In addition, the European standardization body CEN-CENELEC will translate the principles of the AI Act into technical norms and standards to facilitate the testing and certification of AI systems.

Implementing AI with confidence

Even if not all the technical details have been clarified yet, the AI Act gives a sufficient impression of the scope and objective of the future regulation. Companies will have to adapt many internal processes and strengthen risk management systems. However, they can build on existing processes within the company and learn from measures from previous laws such as the GDPR. We recommend that companies start preparing now and sensitize their employees to the new law, take stock of their AI systems, ensure appropriate governance measures and meticulously review AI systems classified as high-risk.

Deloitte is here to help its clients navigate the complexity and scope of the AI Act and prepare for the requirements that will apply in the future. Benefit from Deloitte's thought leadership in the area of Trustworthy AI, its extensive expertise in the development of AI systems and its many years of experience as an audit firm. Our services are aligned with the six lifecycle phases of AI systems, which are also described in the AI Act and reflect common practice.

Thought leadership

Deloitte has extensive expertise in the implementation of AI-based solutions and the careful development of dedicated audit tools to assess AI models according to the principles of Trustworthy AI. Our reputation as a competent consulting firm is based in particular on our demanding quality standards. Completing a questionnaire is not enough to assess the compliance of your systems. Deloitte performs an in-depth quantitative analysis and puts your AI models through their paces to identify logic errors, methodological inconsistencies, implementation issues, data risks and other weaknesses. We believe that only such a thorough approach will meet our clients' requirements. However, this does not mean that the wheel has to be reinvented for every analysis. In the interests of efficiency, Deloitte has invested in the development of dedicated tools to optimize the many steps of the validation process. A series of white papers (below) explains how these tools rigorously analyze the quality of AI systems and why this is critical to building our confidence in the AI models and systems that are shaping our present and future.

Download

  • The EU AI Act – 2nd Update 2023 – December Trilogue Edition
