The EU Artificial Intelligence Act (EU AI Act) is a pioneering effort to regulate artificial intelligence (AI) technologies through a harmonized framework. It aims to ensure their ethical use and a high level of protection of health, safety, fundamental rights, and the environment, and to promote the development, use, and uptake of trustworthy and secure AI in the internal market. The Act entered into force across all 27 EU Member States on 1 August 2024, and the majority of its provisions will become applicable on 2 August 2026.
The EU AI Act regulates AI systems based on the level of risk they pose, applying stricter requirements to those with higher risks. It establishes rules covering the placing on the market, deployment, and use of certain AI systems; sets specific requirements for high-risk AI systems and obligations for their operators; and prohibits harmful AI practices. It also introduces rules on market surveillance and governance for individual EU Member States, along with measures to encourage innovation in the AI sector.
The following section provides an overview of how AI systems are categorized based on their associated risk levels.
The regulation affects providers, deployers, importers, and distributors of AI systems, including those based outside the EU, if their AI systems or the outputs of those systems are used in the EU. It is particularly relevant for businesses operating in high-risk sectors with large customer bases, including telecommunications, banking & insurance, IT & technology, critical infrastructure, and education. Nonetheless, not all provisions of the EU AI Act apply uniformly: the scope of requirements depends on the risk categorization of each AI solution and on the role of each entity in the AI value chain. While some cases are exempted from the Act entirely, including AI systems used for research or prototyping and for military, defence, or national security purposes, entities in the affected sectors and categories must fulfill a series of requirements, such as:
Although the EU AI Act becomes generally applicable on 2 August 2026, some provisions take effect earlier: the prohibitions on unacceptable-risk AI practices and the AI literacy obligations apply from 2 February 2025, and the rules for general-purpose AI models from 2 August 2025.
Conversely, the obligations for high-risk AI systems embedded in products covered by existing EU product-safety legislation, which include requirements on data governance, risk management, and human oversight, will become applicable only on 2 August 2027.
Non-compliance with the AI Act can lead to significant financial penalties, depending on the nature of the violation: from fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher, for engaging in prohibited AI practices, down to EUR 7.5 million or 1% of global annual turnover for supplying incorrect information to authorities. For small and medium-sized enterprises (SMEs), each fine is capped at the lower of the two amounts.
Our team of experts in regulatory compliance, IT & cyber security, and risk management offers comprehensive services to ensure your preparedness for and full compliance with the EU AI Act, including