
Political agreement on the EU AI Act

Discover key insights and how to deal with the implications of the EU AI Act for your organisation

The European Parliament and the EU's Council of Ministers recently reached a political agreement on the EU AI Act, an important moment in artificial intelligence regulation. With the Act finally moving forward, now is the time to start thinking about what it means for your organisation.

On 9 December 2023, the European Parliament and the EU's Council of Ministers reached a political agreement on the EU AI Act. This is an important step towards a groundbreaking and ambitious regulation of artificial intelligence, described by the EU legislator as a rapidly evolving 'family of technologies'. Now that the end is (finally) in sight, organisations that work with, or plan to work with, AI systems can start preparing for the AI Act. In this blog, we explain how.


Webinar: The AI Act explained

On 19 March 2024, the webinar "The AI Act explained" took place. Keynote speaker Angeliki Dedopoulou, Public Policy Manager at Meta, and two speakers from Deloitte, Jan-Jan Lowijs and Sebastiaan ter Wee, discussed what the AI Act means for companies.

What does the AI Act entail?

Before we get into the 'who', let's take a quick step back: what does the AI Act entail? While the final text has not yet been released, we have identified some key features of the AI Act, partly based on the European Commission's recently published (and very insightful) Q&A.

  • The AI Act is heavily inspired by the GDPR, with similar features such as principles, user rights, transparency obligations and self-assessments.
  • The AI Act has a broad scope:
        (1) It has a potentially global reach, as it also applies to providers of AI systems that are established outside the EU but place their systems on the EU market;
        (2) It uses a (very) broad definition of an AI system. The latest and expected definition is: "An AI system is a machine-based system that, for explicit or implicit purposes, infers from the inputs it receives how to generate outputs such as predictions, content, recommendations or decisions that may affect physical or virtual environments. Different AI systems vary in their degree of autonomy and adaptability after deployment."
        (3) It introduces requirements and obligations for stakeholders throughout the supply chain: providers, importers, distributors and deployers (users).

  • The AI Act adopts a risk-based approach, which means that requirements and obligations depend on the level of risk of the (use of the) AI system:
        (1) Unacceptable risk: e.g. social scoring and biometric categorisation;
        (2) High risk: e.g. AI systems that are safety components of products covered by sectoral Union legislation, or AI systems used in specific use cases such as employment or education;
        (3) Specific transparency risk: e.g. chatbots and deepfakes; and
        (4) Minimal risk: for which providers of such systems may choose to adhere to voluntary codes of conduct.
    The high-risk classification is based on the intended purpose and function of the AI system, in accordance with existing product safety legislation. Some of the key obligations for high-risk AI systems are conformity assessments, quality and risk management systems, registration in a public EU database, and access to information by authorities. In addition, high-risk systems must be technically robust and minimise the risk of unfair bias.
  • As for general-purpose AI models (including generative AI), the AI Act requires providers of such models to provide information to downstream system providers and to have policies in place to ensure that they respect copyright when training their models. In addition, general-purpose AI models trained with a total computing power of more than 10^25 FLOPs are considered to pose 'systemic risk', which entails obligations to assess and mitigate risks, report serious incidents, ensure cybersecurity and provide information on the energy consumption of these models (see the back-of-the-envelope sketch after this list). Many of these obligations are similar to those under the GDPR.
  • Deployers that are bodies governed by public law or private operators providing public services, as well as deployers of certain high-risk AI systems, are required to carry out a fundamental rights impact assessment ('FRIA'). An FRIA can be carried out in conjunction with a Data Protection Impact Assessment ('DPIA') under the GDPR.
  • The AI Act provides for huge fines, almost twice as high as under the GDPR. For the most serious infringements, such as the use of prohibited applications, fines can be up to 7% of global annual turnover or €35 million (GDPR: 4%/€20 million), and up to 1.5% for not cooperating with the authorities and/or not providing accurate information. The AI Act will be overseen by national authorities, complemented by a European AI Board and a European AI Office (within the European Commission); the AI Office will oversee general-purpose AI models, work with the European AI Board and be supported by a scientific panel of independent experts.
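To make the 10^25 FLOPs threshold a bit more tangible, below is a minimal back-of-the-envelope sketch in Python. It relies on the widely used '6 × parameters × training tokens' rule of thumb for estimating the training compute of a dense model; this heuristic and the example figures are our own illustration, not a calculation methodology prescribed by the AI Act.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOPs threshold for
# general-purpose AI models presumed to pose systemic risk.
# Uses the common "6 * parameters * training tokens" heuristic for dense model
# training compute -- a rough rule of thumb, not a method defined by the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training compute for a dense model."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the 10^25 FLOPs threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
    params, tokens = 70e9, 2e12
    print(f"Estimated training compute: {estimated_training_flops(params, tokens):.1e} FLOPs")  # ~8.4e+23
    print("Presumed systemic risk?", presumed_systemic_risk(params, tokens))  # False
```

Under this heuristic, only the very largest training runs come near the threshold, which fits the Act's intention to reserve the systemic-risk obligations for the most capable general-purpose models.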

Who in your organisation will be involved in these new rules?

The AI Act is complicated legislation because it aims to regulate a 'family' of complex technologies with a comprehensive set of requirements and obligations that apply to specific AI systems or to AI systems used in specific use cases. In addition, the AI Act interacts with other influential regulations, such as the GDPR and copyright law, and it has some features of consumer law (on the transparency side). This means that stakeholders from different functions within your organisation need to join forces and look at the AI Act together.

As the exact content and timing of the AI Act will become clearer in the coming months, now is the perfect time to start preparing and to focus on the 'who'. Who do I need in the AI Act team? Who should be responsible for complying with this law? Who is already dealing with other legislation related to the digital technologies we use, and are there overlapping requirements and obligations?

In other words, start working now on a Target Operating Model that integrates the AI Act into your existing organisation. Determine which capabilities you already have, which new ones you need and what training people require. Tackling technology, data and cyber legislation in general, and the AI Act in particular, calls for a multidisciplinary approach: not a one-off exercise, but sustainable and clear governance in which subject matter experts from different backgrounds and functions work together. This also ensures that your approach to the AI Act is aligned with other (upcoming) laws and regulations on, for example, data and cyber, avoiding duplication of effort and other inefficiencies while increasing knowledge sharing.

Other no-regret activities you can start undertaking include: mapping out which AI systems you use and for what purpose, determining the risk level of those AI systems and establishing your role in the supply chain (a minimal example of such a mapping is sketched below). When it comes to compliance, you can learn from the GDPR: which processes, tools, guidelines and self-assessments can you 'refurbish' for compliance with the AI Act?
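As an illustration of what such a mapping could look like in practice, below is a minimal sketch in Python of an AI system inventory record. The field names, enumerations and example entry are our own simplifications for illustration purposes; they are not an official taxonomy from the AI Act.

```python
# Illustrative sketch of a minimal AI system inventory record. The field names
# and enumerations below are our own simplification for mapping purposes,
# not an official taxonomy from the AI Act.
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"            # prohibited practices
    HIGH = "high"                            # e.g. safety components, listed use cases
    SPECIFIC_TRANSPARENCY = "transparency"   # e.g. chatbots, deepfakes
    MINIMAL = "minimal"                      # voluntary codes of conduct


class SupplyChainRole(Enum):
    PROVIDER = "provider"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"
    DEPLOYER = "deployer"


@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    risk_level: RiskLevel
    our_role: SupplyChainRole
    reusable_gdpr_controls: list[str] = field(default_factory=list)


# Hypothetical example entry in the inventory.
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        intended_purpose="Rank incoming job applications",
        risk_level=RiskLevel.HIGH,           # employment is a high-risk use case
        our_role=SupplyChainRole.DEPLOYER,
        reusable_gdpr_controls=["DPIA template", "records of processing"],
    ),
]

high_risk = [s.name for s in inventory if s.risk_level is RiskLevel.HIGH]
print("High-risk systems to prioritise:", high_risk)
```

Even a simple inventory like this makes it easier to decide where conformity assessments, FRIAs and 'refurbished' GDPR artefacts are needed first.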

Next steps for the AI Act

The next step for the AI Act is a series of technical meetings, expected to last until at least mid-February 2024, to work out various details in line with the political agreement. By mid-February 2024, we expect a near-final ('95%') text. The European Parliament and the Council are expected to formally adopt the final regulation before the European Parliament elections on 6-9 June 2024. The AI Act will apply two years after its entry into force, probably in mid-2026, with the exception of the prohibitions, which will take effect after six months. The requirements for general-purpose AI models, the conformity assessment bodies and the governance chapter will apply one year after entry into force.

Too early to talk about this? We don't think so. In fact, for us, this feels like the right time to start. If you want to know more, don't hesitate to contact us!
