The goal of the AI Act is to promote the functioning of the European internal market through the introduction of human-centric and trustworthy artificial intelligence (AI), while at the same time ensuring a high level of protection for health, safety, and the fundamental rights enshrined in the Charter of Fundamental Rights - including democracy, the rule of law, and environmental protection - against possible harmful effects of AI systems. The AI Act is thus primarily a product safety regulation. Having entered into force on August 1, 2024, it aims to protect European consumers from violations of fundamental rights caused by inappropriate use of AI. Going forward, providers of AI systems classified as high-risk must formally confirm compliance with numerous requirements based on the principles of trustworthy AI - from AI governance to AI quality. Non-compliance may result in substantial fines, depending on the individual case, and providers may be forced to withdraw their AI systems from the market. Despite its extensive principles, rules, and procedures, and the new supervisory structures it creates, the law is not intended to slow down innovation in the EU but rather to promote further development in the AI sector, particularly by start-ups and SMEs, by providing legal certainty.
The definition of AI in the AI Act aligns with the internationally recognized AI definition of the OECD. According to the AI Act, an AI system is "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
An AI system under EU law is characterized by its ability to infer, i.e., to derive predictions and decisions that influence physical and virtual environments. This capability is enabled by techniques such as machine learning and logic- and knowledge-based approaches. AI systems vary in their degree of autonomy and can be used on their own or integrated into products, and they may adapt autonomously through use. The definition of AI in the AI Act is quite broad, which means that a large number of systems could fall under the regulation. However, the recitals accompanying the AI Act and the guidelines on the definition of an AI system published by the EU Commission in February 2025 clarify that the definition does not cover systems limited to mathematical optimization, simple data processing, classical heuristics, or simple forecasts (e.g., based on statistical calculation rules). The guidelines also identify seven elements of an AI system, but clarify that not all of them need to be present throughout the entire life cycle of an AI system under the AI Act.
The AI Act applies only to use cases that fall within the scope of EU law. The responsibilities of Member States and government authorities regarding national security must not be curtailed in any way. Also excluded are AI systems used exclusively for military or defense purposes, AI developed and used solely for scientific research and development, AI systems released under free and open-source licenses (unless they fall into a prohibited or high-risk category), and use for purely private, non-professional purposes.
The AI Regulation applies to all providers of AI systems offered on the European market. The term provider covers natural or legal persons that develop an AI system and place it on the market. Importers, distributors, and deployers are also subject to the regulation.
As with "conventional" (focused, purpose-oriented) AI models, the AI Act also classifies base models - the engines behind Generative AI - based on their risk. Thanks to their flexibility and potential for widespread use, they are termed as "general-purpose AI" (GPAI).
The AI Act provides the following classification: It is based not on the application but the performance and reach of the underlying base model.
The quantitative, objective distinction between GPAI models and GPAI models with systemic risk is based on the cumulative computing power used to train the underlying base model, measured in floating-point operations (FLOPs). The threshold for GPAI models with systemic risk is 10^25 FLOPs.
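To give a sense of the order of magnitude involved, the following sketch (purely illustrative, not part of the AI Act) estimates a model's cumulative training compute using the widely used rule of thumb of roughly 6 FLOPs per parameter per training token and compares it with the 10^25 threshold; the parameter and token counts in the example are hypothetical.

```python
# Illustrative estimate only: the AI Act sets the threshold in cumulative
# training compute (FLOPs); the "6 FLOPs per parameter per token" rule of
# thumb used here is a common engineering approximation, not part of the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold for GPAI models with systemic risk

def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * num_parameters * num_training_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the estimated cumulative training compute reaches the threshold."""
    return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs, below the threshold
print(f"Estimated compute: {flops:.2e} FLOPs; systemic risk presumed: {presumed_systemic_risk(flops)}")
```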
To flesh out these new requirements, experts from industry, academia, and civil society, along with other relevant stakeholders, will develop codes of practice in collaboration with the Commission - and ultimately harmonized EU-wide standards.
In classifying AI systems, the AI Act follows a risk-based approach: not all AI systems are treated the same. It first differentiates between "conventional" AI systems and "general-purpose AI" (GPAI). The latter is a relatively recent development arising from the emergence of generative AI systems and is treated as a separate topic (see above).
The risk of so-called single-purpose AI (AI systems with a specific use case) is assessed not on the basis of the underlying technology but of the use case. Risk categories range from "unacceptable" through "high" to "limited or minimal". Systems with unacceptable risk are prohibited, while those with minimal risk are not regulated by the AI Act.
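To make this risk-based logic easier to grasp, the following minimal sketch (purely illustrative, not a legal classification tool) maps the four risk categories to the obligations the AI Act attaches to them, as described in this article; the category names and example obligations are simplifications.

```python
# Purely illustrative sketch of the risk-based approach described in this
# article; the category names and example obligations are simplifications,
# not a legal classification tool.
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # extensive compliance obligations
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # no explicit obligations

OBLIGATIONS = {
    RiskCategory.UNACCEPTABLE: "Prohibited (since February 2, 2025)",
    RiskCategory.HIGH: "Quality and risk management, data governance, conformity assessment",
    RiskCategory.LIMITED: "Label AI-generated content and disclose AI interaction",
    RiskCategory.MINIMAL: "No explicit obligations; voluntary codes of conduct",
}

for category in RiskCategory:
    print(f"{category.value}: {OBLIGATIONS[category]}")
```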
AI systems that pose an unacceptable risk have been completely prohibited since February 2, 2025. The AI Act lists the following applications:
On this topic, too, the EU Commission published guidelines in February 2025, this time on prohibited practices. They provide a host of examples for all practices prohibited under the AI Act and distinguish them from high-risk use cases, but they also make clear that the dividing line between prohibited and high-risk applications can be very fine. An exact examination of the individual case is therefore essential.
The main focus of the regulation is clearly on high-risk AI systems, which are subject to a multitude of compliance obligations. Providers of such systems are required to introduce a quality management and risk management system, meet data quality and integrity requirements, and carry out a conformity assessment before issuing a declaration of conformity. High-risk AI systems are divided into two categories:
Before high-risk AI systems are deployed in the public sector, by banks, or by insurers, a Fundamental Rights Impact Assessment must also be carried out.
Citizens have the right to lodge complaints with national authorities about AI systems and algorithmic decisions affecting their rights.
There are also AI systems that pose only limited or minimal risk, to which few or no obligations under the AI Regulation apply:
Administrative or internal AI systems such as spam filters or predictive maintenance systems are classified as "minimal risk"; the AI Regulation does not impose explicit obligations on them. However, companies can voluntarily adhere to codes of conduct for these AI systems.
Nevertheless, certain transparency obligations apply to all AI systems that interact with people, such as chatbots or recommendation systems, regardless of their risk categorization. Content generated by an AI system, such as chatbot conversations, deep fakes, or biometric categorizations, must be labeled as such. This obligation is associated with the classification as "limited risk."
To issue a declaration of conformity, providers of high-risk AI systems must demonstrate compliance with the regulation before market introduction and throughout the life cycle of an AI system:
For details on these, see our article dedicated to AI Governance.
The legislator generally relies on providers' self-assessment: depending on the type of high-risk AI system, they either carry out the conformity assessment themselves or entrust it to authorized third parties. The AI Regulation also provides for an administrative structure with several central government bodies, each entrusted with different tasks in implementing and enforcing the law.
The EU AI Office, a new authority within the European Commission, coordinates the implementation of the law in all EU member states. In addition, the AI Office supervises GPAI with significant impacts.
An Advisory Forum attached to the AI Office, consisting of stakeholders from business and civil society, provides feedback and ensures that a broad range of opinions is represented during the implementation process.
Furthermore, a scientific panel of independent experts is to identify systemic risks of AI, provide guidance on the classification of models, and ensure that the rules and the enforcement of the law reflect the latest scientific findings.
EU Member States must set up or designate competent national authorities responsible for enforcing the law, so-called market surveillance authorities. These authorities must ensure that all AI systems comply with the relevant standards and regulations. Their tasks include:
In Germany, according to the current draft of the national implementing act, the Federal Network Agency (for all other industries) and BaFin (for the financial sector) are to share the task of AI supervision.
AI systems are classified based on their risk. Similarly, the sanctions provided for in the EU AI Act also correspond to the severity of the violation:
The AI Regulation provides for more moderate fines for SMEs and start-ups.
While the provisions on sanctions do not apply until August 2025, the bans in force since February 2, 2025 have immediate effect, so affected parties may already be able to enforce them before the national courts and obtain interim injunctions.
The AI Act entered into force on August 1, 2024, and provides for staggered implementation deadlines. It will be almost entirely applicable from August 2, 2026. However, some provisions apply earlier, while high-risk AI systems under Annex II follow a three-year transition period:
Although not all technical details have been clarified yet, the AI Act gives a sufficient idea of the scope and aim of the regulation. Businesses will have to adapt many internal processes and strengthen their risk management systems. The European standardization body CEN-CENELEC will translate the principles of the AI Regulation into technical standards and norms to facilitate the testing and certification of AI systems, and the EU Commission will publish guidelines to support the application of the AI Act. However, companies can build on existing processes and draw lessons from the implementation of previous legislation such as the GDPR. We recommend that companies drive implementation within their organization, raise employee awareness of the new law, take stock of their AI systems, ensure appropriate governance measures, and meticulously scrutinize AI systems categorized as high-risk.
At Deloitte, we stand by our clients: we assist you in mastering the complexity and scope of the AI Regulation and in preparing for the requirements that will apply in the future. Benefit from Deloitte's thought leadership in the field of trustworthy AI, our extensive expertise in the development of AI systems, and our long-standing experience as an audit firm. Our services are based on the six life-cycle phases of AI systems, which are also described in the AI Regulation and correspond to general practice.
Deloitte has extensive expertise in the implementation of AI-based solutions and in the careful development of dedicated audit and monitoring tools for assessing AI models according to the principles of trustworthy AI. Our reputation as a competent consulting company is primarily based on our demanding quality standards. To assess the conformity of your systems, completing a questionnaire is far from sufficient. Deloitte conducts an in-depth quantitative analysis and rigorously tests your AI models to identify logic errors, methodological inconsistencies, implementation problems, data risks, and other weak points. We believe that only such a thorough approach meets the requirements of our customers. However, this does not mean reinventing the wheel for each analysis: for the sake of efficiency, Deloitte has invested in the development of dedicated tools to streamline the numerous steps of the validation process. A series of white papers (download below) explains why quality guardrails and governance mechanisms are of critical importance in strengthening trust in the AI models and systems that are decisively shaping our present and our future.