On March 13, 2024, the EU Parliament approved the AI Act with a large majority. It will be published in the Official Journal of the EU – following formal approval by the Member States and translation into all official EU languages – in July 2024 with entry into force in August 2024.
The definition of AI in the AI Act is based on the internationally recognized OECD definition of AI. The EU AI Act defines AI as follows: “‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
An AI system under EU law is characterized by the ability to draw inferences, i.e. it can make predictions and decisions that influence physical and virtual environments. This is made possible by techniques such as machine learning and logic-based approaches. AI systems vary in their degree of autonomy, can be used either on a standalone basis or integrated into products, and may adapt autonomously through use. The AI Act adopts a sweeping definition of AI, implying that a wide range of systems could potentially be regulated. However, the recitals that clarify the regulatory text of the AI Act make clear that the definition is not meant to encompass rudimentary, traditional software systems or programming methodologies. Systems that rely entirely on rules defined by humans to execute operations automatically are excluded from the scope.
The EU AI Act stipulates that the regulation applies exclusively to matters that fall within the scope of EU law. The competences of Member States or government authorities with regard to national security are not curtailed in any way. AI systems designed solely for military or defense applications, or those employed strictly for research and innovation, also fall outside the purview of the AI Act. Furthermore, open-source models and AI systems used by individuals for non-commercial purposes are not covered either. The AI Act applies to all providers of AI systems that are placed or deployed on the European market. The term 'provider' covers operators that place an AI system on the market or put it into service, usually under their own name or trademark. Importers, distributors and deployers are also subject to the law.
The AI Act adopts a risk-based approach to categorizing AI systems. First, a distinction is drawn between "conventional" AI systems and "general purpose AI" (GPAI). GPAI, which has come to prominence with the rise of generative AI systems, is treated as a separate subject (discussed in more detail below). The risk of single-purpose AI (AI with a specific purpose) is assessed not on the basis of its technology, but on the basis of its application. The risk categories range from "unacceptable" to "high" to "limited or minimal". Systems posing an unacceptable risk are prohibited, while those posing minimal risk are not subject to any requirements. The focus of the legislation is therefore clearly on high-risk AI systems, which are subject to numerous compliance obligations. Providers of such systems are required to implement a risk management system and meet data quality and integrity requirements. Additionally, they must carry out a conformity assessment and then issue a declaration of conformity. High-risk AI systems are divided into two categories:
Before high-risk AI systems from the public, banking or insurance sectors are launched on the market, a fundamental rights impact assessment must also be carried out.
Citizens are entitled to submit complaints to national authorities about AI systems and algorithmic decisions that affect their rights.
AI systems that pose an unacceptable risk will be banned outright just six months after the AI Act enters into force. The AI Act lists the following applications:
There are two types of AI systems that pose a limited or minimal risk and therefore have few or no obligations under the AI Act:
As with "traditional" (focused, single-purpose) AI models, the AI Act also classifies foundation models - the engines behind Generative AI - according to their risk. They are known as "general purpose AI" (GPAI) due to their flexibility and potential for widespread use.
The AI Act provides for the following classification, which is based not on the application but on the performance and scope of the underlying base model.
The quantitative, objective distinction between GPAI and GPAI with high-impact capabilities is based, among other criteria, on the computing power used to train the underlying base model. This is measured in floating-point operations (FLOP). The threshold for GPAI with high-impact capabilities is 10^25 FLOP of cumulative training compute.
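For a sense of scale, training compute can be estimated with the widely used rule of thumb of roughly 6 x (number of parameters) x (number of training tokens) FLOP. The short sketch below applies this approximation to two hypothetical models; the formula comes from the scaling-laws literature, and the model sizes and token counts are illustrative assumptions, not figures taken from the AI Act.

```python
# Back-of-the-envelope comparison against the AI Act's 10^25 FLOP threshold.
# Assumption: training compute ~ 6 * parameters * training tokens, a common
# approximation for dense transformer training (not an official EU method).

THRESHOLD_FLOP = 1e25  # threshold for GPAI with high-impact capabilities

def estimated_training_flop(parameters: float, tokens: float) -> float:
    """Rough total training compute in floating-point operations."""
    return 6 * parameters * tokens

# Hypothetical models, for illustration only.
examples = [
    ("7B parameters, 2T training tokens", 7e9, 2e12),
    ("1.8T parameters, 10T training tokens", 1.8e12, 1e13),
]

for label, params, tokens in examples:
    flop = estimated_training_flop(params, tokens)
    status = "above" if flop >= THRESHOLD_FLOP else "below"
    print(f"{label}: ~{flop:.1e} FLOP ({status} the 1e25 threshold)")
```

On this estimate, the first model lands around 8.4 x 10^22 FLOP, well below the threshold, while the second exceeds 10^25 FLOP and would therefore be presumed to have high-impact capabilities.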
To meet these new requirements in practice, experts from industry, academia, civil society and other relevant stakeholders will work with the Commission to develop codes of conduct and ultimately harmonized EU-wide standards.
In order to issue a declaration of conformity, providers of high-risk AI systems must demonstrate compliance with the regulation prior to market launch and throughout the entire life cycle of the systems:
Depending on the type of high-risk AI system, the legislator generally requires providers either to carry out the conformity assessment themselves (self-assessment) or to commission authorized third parties to do so. The AI Act provides for an administrative structure with several central government authorities, each entrusted with different tasks in relation to the implementation and enforcement of the law.
At the EU level
The European AI Office, recently established by the European Commission, plays a crucial role in overseeing the rollout of the AI Act across all EU member states. This entity places a particular emphasis on the oversight of general-purpose AI systems.
An AI Advisory Board, comprising representatives from both the business community and civil society, offers critical feedback. This board ensures that a diverse range of perspectives are considered throughout the implementation process, facilitating a more inclusive approach to AI regulation.
Additionally, the Scientific Advisory Panel, staffed by independent experts, is tasked with identifying systemic risks associated with AI. This panel also provides recommendations on the categorization of AI models and ensures that the Act's enforcement strategies are consistent with the most current scientific insights.
This structure is designed to ensure that AI regulation in the EU is balanced, informed by a broad spectrum of insights, and aligned with cutting-edge scientific research, all of which are essential for businesses and society to understand as they navigate the evolving landscape of AI regulation.
At the national level
EU Member States must establish or designate competent national authorities responsible for enforcing the Act within their jurisdictions and for ensuring that all AI systems comply with the applicable standards and regulations. Their duties include:
Just as AI systems are classified according to their risk, the maximum penalties provided for in the AI Act are graduated according to the severity of the infringement:
The AI Act provides for more moderate fines for SMEs and start-ups.
In addition to the penalties for high-risk AI, the following penalties apply to violations relating to GPAI:
The Act enters into force 20 days after its publication in the Official Journal of the European Union, expected in July 2024. The AI Act will apply in full two years after its entry into force. Nevertheless, certain provisions will take effect sooner, while high-risk AI systems covered by Annex II will benefit from an extended three-year transitional period:
To bridge the transitional period until the AI Act becomes applicable, the Commission has launched the AI Pact. It is intended to promote uniform global regulation and encourage providers of AI systems to voluntarily commit to implementing the most important requirements before the 2026 legal deadlines. At the end of the trilogue negotiations, around 100 companies signed up to the AI Pact. In addition, the European standardization body CEN-CENELEC will translate the principles of the AI Act into technical norms and standards to facilitate the testing and certification of AI systems.
Even if not all the technical details have been clarified yet, the AI Act gives a sufficient impression of the scope and objectives of the future regulation. Companies will have to adapt many internal processes and strengthen their risk management systems. However, they can build on existing processes and draw on lessons learned from earlier legislation such as the GDPR. We recommend that companies start preparing now: raise employee awareness of the new law, take stock of the AI systems in use, ensure appropriate governance measures and meticulously review AI systems classified as high-risk.
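As a starting point for such a stocktake, the sketch below shows how an internal AI inventory might record each system with a provisional risk tier and the headline obligations that tier implies. The tier names mirror the Act's risk categories described above, but the data structure, the obligation summaries and the example systems are illustrative assumptions rather than an official taxonomy.

```python
# Illustrative AI inventory for a first AI Act triage. The risk tiers follow
# the Act's categories; the obligation notes are simplified summaries and the
# example systems are hypothetical.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Simplified, non-exhaustive mapping of tiers to headline obligations.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["phase out before the ban takes effect"],
    RiskTier.HIGH: [
        "risk management system",
        "data quality and integrity requirements",
        "conformity assessment and declaration of conformity",
    ],
    RiskTier.LIMITED: ["transparency duties"],
    RiskTier.MINIMAL: ["no mandatory requirements"],
}

@dataclass
class AISystem:
    name: str
    intended_purpose: str
    tier: RiskTier  # provisional classification, to be reviewed by experts

def triage(systems: list[AISystem]) -> None:
    """Print each system with the obligations its provisional tier implies."""
    for system in systems:
        print(f"{system.name} ({system.intended_purpose}): {system.tier.value}")
        for duty in OBLIGATIONS[system.tier]:
            print(f"  - {duty}")

triage([
    AISystem("CV screening model", "pre-selection of job applicants", RiskTier.HIGH),
    AISystem("Support chatbot", "customer service", RiskTier.LIMITED),
])
```

Such an inventory does not replace a legal assessment, but it gives compliance teams a shared view of which systems need the most attention.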
Deloitte is here to help its clients navigate the complexity and scope of the AI Act and prepare for the requirements that will apply in the future. Benefit from Deloitte's thought leadership in the area of Trustworthy AI, its extensive expertise in the development of AI systems and its many years of experience as an audit firm. Our services are aligned with the six lifecycle phases of AI systems, which are also described in the AI Act and reflect common practice:
Deloitte has extensive expertise in the implementation of AI-based solutions and the careful development of dedicated audit tools to assess AI models according to the principles of Trustworthy AI. Our reputation as a competent consulting firm is based in particular on our demanding quality standards. Completing a questionnaire is not enough to assess the compliance of your systems. Deloitte performs an in-depth quantitative analysis and puts your AI models through their paces to identify logic errors, methodological inconsistencies, implementation issues, data risks and other weaknesses. We believe that only such a thorough approach will meet our clients' requirements. However, this does not mean that the wheel has to be reinvented for every analysis. In the interests of efficiency, Deloitte has invested in the development of dedicated tools to optimize the many steps of the validation process. A series of white papers (below) explains how these tools rigorously analyze the quality of AI systems and why this is critical to building our confidence in the AI models and systems that are shaping our present and future.