Unpacking the EU AI Act: The future of AI governance

Understanding compliance requirements and strategic insights

The European Union (EU) Artificial Intelligence (AI) Act aims to safeguard the safety and fundamental rights of people and businesses while fostering AI innovation and adoption within the EU.[1] The landmark legislation has drawn considerable attention for its onerous obligations, broad scope, and impact across industries. Along with the EU's recently applicable Digital Services Act (DSA), the EU AI Act is part of a broader European approach to balancing innovation and digital transformation with ethical considerations and user safety. The Act formally entered into force on August 1, 2024, and will be fully applicable two years later, with some exceptions.[2]

The Act adopts a risk-based approach to regulating AI, imposing progressively stricter restrictions based on the level of risk an AI application poses, across sectors such as music and entertainment, technology, health care, education, and manufacturing. It classifies AI systems into four categories based on their potential risk to rights and safety: unacceptable risk, high risk, limited risk, and minimal risk. A sketch of how this hierarchy might be encoded follows the list:

  • The Act prohibits applications deemed to pose unacceptable risk, including systems that manipulate human behavior, systems that classify people based on social behavior (social scoring), and certain uses of real-time remote biometric identification.
  • High-risk systems, such as those used in safety components for aviation, cars, medical devices, and critical infrastructure management, must meet specified regulatory requirements before they can be deployed.
  • AI applications that pose limited risk, such as chatbots and AI-generated content like deepfakes, must meet specific transparency obligations so that users are aware they are interacting with an AI system.
  • The vast majority of AI applications, such as AI-driven video games and AI-enabled virtual assistants, fall under the minimal risk category and can operate with minimal regulatory constraints, though providers are encouraged to adhere to voluntary codes of conduct.
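To make the hierarchy concrete, here is a minimal Python sketch of how an organization might encode the four tiers when triaging its systems. The RiskTier enum and the example classifications are illustrative assumptions, not terminology or mappings prescribed by the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # regulatory requirements before deployment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct

# Hypothetical classifications drawn from the examples in the list above
example_systems = {
    "social-scoring engine": RiskTier.UNACCEPTABLE,
    "medical-device safety component": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "AI-driven video game": RiskTier.MINIMAL,
}

for system, tier in example_systems.items():
    print(f"{system}: {tier.value} risk")
```

In practice, classification requires legal analysis of each system's intended purpose; a lookup table like this is only a starting point for inventory work.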

The Act specifies compliance requirements for permitted uses, which vary with the risk classification of the AI system. These requirements focus on governance, technical documentation, human oversight, risk management, and transparency, with the goal of helping ensure AI systems' accuracy, robustness, and security. Additionally, providers established outside the EU must appoint an authorized representative within the EU before placing a high-risk AI system or general-purpose AI model on the EU market, giving EU authorities access to someone with the required information on the compliance of those systems.

Noncompliance with the AI Act carries significant penalties, including but not limited to the tiers below (a short calculation sketch follows the list):

  • Infringements involving prohibited AI practices can lead to fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. Lesser infringements can result in fines of up to EUR 15 million or 3% of global annual turnover.
  • Supplying incorrect information to authorities can attract penalties of up to EUR 7.5 million or 1% of global annual turnover.
  • Beyond monetary fines, authorities can also mandate the withdrawal of noncompliant AI systems from the market.
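Because each ceiling is expressed as the higher of a fixed amount and a share of global annual turnover, the applicable maximum reduces to a simple max() calculation. The sketch below is a simplified illustration of that general rule; the function and tier names are assumptions, and special provisions (for example, for small and medium-sized enterprises) are not modeled.

```python
def max_fine_eur(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given penalty tier.

    Each ceiling is the higher of a fixed amount and a percentage of
    global annual turnover, per the tiers listed above.
    """
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),   # EUR 35M or 7%
        "other_infringements": (15_000_000, 0.03),    # EUR 15M or 3%
        "incorrect_information": (7_500_000, 0.01),   # EUR 7.5M or 1%
    }
    fixed_amount, turnover_share = tiers[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# For a firm with EUR 2 billion turnover, a prohibited-practice violation
# could cost up to EUR 140 million: 7% of turnover exceeds the EUR 35M amount.
print(max_fine_eur("prohibited_practices", 2_000_000_000))  # 140000000.0
```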

Wondering what you can do to prepare?

Gather information on existing AI ethics and governance processes; identify and inventory AI systems across the organization; understand each system's use; and evaluate how each might be classified under the EU AI Act's risk hierarchy.
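As a starting point for such an inventory, the sketch below shows one possible record structure. The field names and example entries are assumptions for illustration, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an organization-wide AI inventory (illustrative fields)."""
    name: str
    business_use: str   # what the system does and for whom
    owner: str          # accountable team or individual
    risk_tier: str      # provisional EU AI Act classification
    notes: str = ""     # e.g., open questions pending legal review

inventory = [
    AISystemRecord("resume-screening model", "HR candidate triage",
                   "People Analytics", "high",
                   "employment-related uses are treated as high risk"),
    AISystemRecord("support chatbot", "customer self-service",
                   "Digital Channels", "limited",
                   "requires AI-interaction disclosure"),
]

for record in inventory:
    print(f"{record.name} -> {record.risk_tier}")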

Develop a coordinated, integrated strategy for managing AI risks: monitor the development of new AI-focused regulations, manage emerging obligations, establish a rationalized obligation management register to track new requirements, and map those requirements to risks and controls.
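A rationalized obligation register can be as simple as a mapping from each obligation to the risks and controls it touches. The structure below is a hedged illustration under that assumption; the identifiers and fields are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Obligation:
    """A single requirement tracked in the obligation register (illustrative)."""
    obligation_id: str
    source: str                  # regulation and provision it derives from
    summary: str
    mapped_risks: list[str] = field(default_factory=list)
    mapped_controls: list[str] = field(default_factory=list)

register = [
    Obligation(
        obligation_id="EUAIA-TRANSPARENCY-01",
        source="EU AI Act, transparency obligations for limited-risk systems",
        summary="Disclose to users that they are interacting with an AI system",
        mapped_risks=["user deception", "regulatory noncompliance"],
        mapped_controls=["chatbot disclosure banner", "release checklist item"],
    ),
]

# Obligations with no mapped controls are gaps that need remediation
gaps = [o.obligation_id for o in register if not o.mapped_controls]
print("control gaps:", gaps or "none")
```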

Establish a dedicated, cross-functional team to manage risk, compliance, and decision-making; leverage existing processes and build out operating and governance models; and engage with senior leaders across the enterprise to build consensus on the importance of a broad, coordinated approach.

Choose a trusted framework based on existing principles, consider externally published frameworks and regulatory requirements, and configure the framework to manage AI-related risks and compliance effectively across the enterprise.

With some controls in place, AI itself can become a useful tool in this endeavor. It can be used to identify regulations, decompose their obligations, and rationalize them against your existing obligation management register; it can help identify control gaps, evaluate and manage risks when products change, and monitor compliance with regulations; and it can be leveraged to evaluate data, draft reports, and inform risk identification.

The EU AI Act is perceived as a pivotal regulatory benchmark and is expected to set a precedent for comparable regulations worldwide. Notably, regulatory proposals have already taken shape in other jurisdictions, accompanied by the publication of various frameworks. Many of these measures share overlapping principles and themes, reinforcing the global trend toward stricter AI governance.

End Notes:

[1] Tammy Whitehouse, "How EU AI Act may accelerate compliance regime for U.S. enterprises," Deloitte Risk & Compliance Journal for The Wall Street Journal, February 13, 2024.
[2] European Commission, "AI Act | Shaping Europe's digital future," europa.eu.
