
EU AI Act: navigating the EU's new AI Regulation

Ensuring AI compliance, risk management, and governance in an era of ever-increasing use of AI systems

The European Union Artificial Intelligence Act (EU AI Act) marks a significant milestone in AI regulation, setting the stage for how artificial intelligence technologies are governed and deployed across member states. As the world's first comprehensive AI law, the regulation emphasises AI compliance, risk management, and governance. It aims to ensure the ethical use of AI, to protect people's fundamental rights, health, and safety, and to provide transparency when AI is used. In this detailed analysis, we explore the key components of the EU AI Act, its implications for organisations, and how it aligns with existing digital regulations to foster a secure, innovative AI landscape in Europe.

Download: EU AI Act Deep Dive

Background of the EU AI Act

The AI Act introduces a framework aimed at regulating the deployment and use of AI within the EU. It establishes a standardised process for placing single-purpose AI (SPAI) systems on the market and putting them into service, ensuring a cohesive approach across EU Member States.

The AI Act adopts a risk-based approach, categorising AI systems based on their use case and establishing compliance requirements according to the level of risk they pose to users. This includes bans on certain AI applications deemed unethical or harmful, along with detailed requirements for AI applications considered high-risk, to manage potential threats effectively. Further, it outlines transparency guidelines for AI use cases that have the potential to mislead people. With this risk-based approach, AI ethics are at the heart of the AI Act.
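
To illustrate the risk-based approach, the sketch below shows how an organisation might tag AI use cases by risk tier. The tier names follow the Act's categories, but the example use cases and the mapping logic are illustrative assumptions, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act's risk-based approach (simplified)."""
    UNACCEPTABLE = "prohibited"          # banned outright (e.g. social scoring)
    HIGH = "high-risk"                   # strict compliance requirements apply
    LIMITED = "transparency-risk"        # transparency duties (e.g. chatbots)
    MINIMAL = "minimal-risk"             # no AI Act-specific obligations

# Hypothetical internal register mapping use cases to tiers; a real
# classification requires legal assessment against the Act's annexes.
use_case_register = {
    "cv-screening-for-hiring": RiskTier.HIGH,
    "customer-support-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

def obligations_for(tier: RiskTier) -> str:
    """Return a one-line summary of the obligations attached to a tier."""
    if tier is RiskTier.UNACCEPTABLE:
        return "Do not deploy: the practice is prohibited."
    if tier is RiskTier.HIGH:
        return "Risk management, data governance, documentation, human oversight."
    if tier is RiskTier.LIMITED:
        return "Inform users that they are interacting with AI or viewing AI output."
    return "No AI Act-specific obligations; voluntary codes of conduct may apply."

for use_case, tier in use_case_register.items():
    print(f"{use_case}: {tier.value} -> {obligations_for(tier)}")
```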

Its focus on principles aims to keep the AI Act adaptable to as-yet-unknown iterations of AI technologies. However, the widespread public use of general-purpose AI prompted the legislator to differentiate between single-purpose AI systems and general-purpose AI models. The Act recognises that general-purpose AI models can introduce systemic risks in our society. Therefore, it regulates the market entry of general-purpose AI models regardless of the risk-based categorisation of use cases, setting forth comprehensive rules for market oversight, governance, and enforcement to maintain integrity and public trust in AI innovations.

The AI Act applies to the following actors:

  • Providers: those who develop AI systems or general-purpose AI models in order to place them on the EU market.
  • Importers & Distributors:
    • Importers place on the EU market, or put into service, an AI system developed by a provider established outside the EU.
    • Distributors are those in the supply chain who make AI systems available on the EU market.
  • Deployers: entities using a (high-risk) AI system in the course of their professional activities.
  • Manufacturers: those who place products with embedded AI systems on the EU market under their own trademark.

The EU AI Act came into force on August 1, 2024. The implementation is phased, with various parts of the AI Act becoming applicable at different times, allowing organisations to adapt and comply with the new requirements. Some of the key dates are listed below, with a small applicability sketch after the list:

  • August 1, 2024: The AI Act comes into force.
  • February 2, 2025: Chapters I and II, including the articles on scope, definitions, prohibited AI practices, and AI literacy, become applicable.
  • August 2, 2025: Obligations for general-purpose AI providers begin.
  • February 2, 2026: Post-market monitoring and penalties take effect.
  • August 2, 2026: High-risk AI systems (Annex III) obligations start.
  • August 2, 2027: High-risk AI safety components (Annex I) obligations come into effect.
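
The phased timeline lends itself to a simple applicability check. The sketch below is illustrative only: the dates are taken from the list above, but the milestone labels are simplified summaries, not legal definitions.

```python
from datetime import date

# Key application dates from the AI Act's phased timeline (simplified labels).
MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Scope, definitions, prohibited practices, AI literacy"),
    (date(2025, 8, 2), "Obligations for general-purpose AI providers"),
    (date(2026, 2, 2), "Post-market monitoring and penalties"),
    (date(2026, 8, 2), "High-risk AI systems (Annex III) obligations"),
    (date(2027, 8, 2), "High-risk AI safety components (Annex I) obligations"),
]

def applicable_on(today: date) -> list[str]:
    """Return the milestones that already apply on the given date."""
    return [label for when, label in MILESTONES if when <= today]

for label in applicable_on(date(2026, 1, 1)):
    print("applies:", label)
```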

This phased approach ensures that organisations, both private and public, have sufficient time to align their operations with the new regulatory standards, promoting responsible AI deployment while protecting fundamental rights and public trust.

The EU AI Act aligns with several existing EU regulations to ensure a comprehensive and consistent legal framework for AI technologies. Key areas of alignment include:

  • Privacy: The AI Act complements the General Data Protection Regulation (GDPR), enhancing digital trust and security by safeguarding personal data in AI applications.
  • Product Safety: It works alongside regulations such as the Machinery Regulation and the Medical Devices Regulation to ensure AI systems meet stringent safety standards.
  • Competition Law: The Digital Services Act (DSA) and the Digital Markets Act (DMA) are aligned with the AI Act to promote fair competition and transparent digital markets.
  • Cybersecurity: The Network and Information Systems (NIS2) Directive complements the AI Act by addressing cybersecurity risks associated with AI systems.
  • Regulatory Commonalities: The AI Act shares common regulatory features with other digital regulations, including requirements for registers, reporting, notifications, governance, organisation, assessments, and transparency.

Requirements

The requirements of the EU AI Act vary depending on the classification of the AI system. A system can be a single-purpose AI (SPAI) or a general-purpose AI (GPAI). The requirements laid out in the AI Act can differ according to the role that one fulfils with regard to the AI system.

Single-Purpose AI (SPAI) Systems

  • Unacceptable Risk AI Systems: These AI systems are prohibited entirely due to their potential to cause harm or violate fundamental values such as human dignity, freedom, equality, democracy, data privacy, and the rule of law.
  • High-Risk AI Systems:
    • Providers: Implement a risk management system and ensure data governance, maintain quality management and technical documentation, ensure transparency, accuracy, and cybersecurity, provide information to deployers, implement human oversight and maintain automatically generated logs.
    • Deployers: Apply the provider's instructions and guarantee human oversight, validate input data and monitor AI system activity, report malfunctions, save logs if possible and conduct fundamental rights impact assessments where necessary.
    • Importers and Distributors: Verify AI system compliance with the EU AI Act, maintain conformity certifications for ten years, withdraw non-compliant AI systems, and cooperate with authorities.

General-Purpose AI (GPAI) Models

  • General-Purpose AI Models: Maintain up-to-date technical documentation, protect intellectual property rights, ensure transparency about the model's limitations and capabilities, comply with EU copyright laws, and provide detailed summaries of training content.
  • High-Impact General-Purpose AI Models: Comply with all general-purpose requirements, conduct comprehensive model evaluations and adversarial testing, assess and mitigate systemic risks, ensure robust cybersecurity protection, report serious incidents to the EU Commission, and report on energy efficiency and consumption during training.

Overall, the EU AI Act aims to ensure that AI systems are developed and used responsibly, with requirements tailored to the potential risks and impacts of each category.

The EU AI Act represents an important step forward in the governance of artificial intelligence, offering organisations a clear framework for deploying AI systems responsibly. Organisations have the following responsibilities:

  • Governance Frameworks: Implement robust governance structures that encompass risk management, data transparency, and human oversight.
  • Risk Management: Conduct thorough assessments of AI risk classifications and maintain a comprehensive inventory of AI assets to identify and mitigate potential risks.
  • System Accuracy and Cybersecurity: Ensure that AI systems are accurate and secure, implementing necessary cybersecurity measures to protect data and operations.
  • Quality Management Systems: Maintain high standards through quality management systems that oversee the entire AI lifecycle.
  • Impact Assessments: Conduct impact assessments to evaluate the effects of AI systems on fundamental rights, ensuring they align with ethical and legal standards.
  • Role Definition: Clearly define the roles and responsibilities of each operator involved in the AI deployment process.

By adhering to these responsibilities, organisations can navigate the regulatory landscape with a robust governance framework and a proactive approach to risk management, ensuring that AI systems are deployed responsibly and ethically.

Enforcement will be a collective effort shared by the Member States and the European Commission, with the Commission holding exclusive power to supervise and enforce the rules for general-purpose AI models. Non-compliance can lead to significant penalties and enforcement actions: the higher the risk, the higher the penalty. Member states must also consistently consider the interests of small and medium-sized enterprises (SMEs) and start-ups. When determining the amount of a fine, national authorities are mandated to assess the nature, gravity, and duration of each infringement, as well as whether the entity in question is a repeat offender. The AI Act's penalty regime is thus structured on the basis of the severity and nature of the violation, with fines increasing according to the risk category of the AI system.

Fines:

  • Unacceptable Risk AI Systems: Fines up to 35 million EUR or 7% of the company's global annual turnover (GAT) for deploying prohibited AI systems.
  • High-Risk AI Systems: Fines up to 15 million EUR or 3% of the GAT for infringements related to high-risk AI obligations.
  • General-Purpose AI Systems: Fines up to 15 million EUR or 3% of the GAT for failing to meet transparency obligations.
  • Misinformation/Supplying Incorrect Information: Fines up to 7.5 million EUR or 1% of the GAT for providing incorrect, incomplete, or misleading information.
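
The caps above operate as "whichever is higher" thresholds for companies: the ceiling is the greater of the fixed amount and the stated percentage of global annual turnover (for SMEs and start-ups, the lower of the two applies). A minimal worked sketch, with illustrative figures:

```python
# Fine ceilings as (fixed amount in EUR, share of global annual turnover).
# Simplified sketch of the caps listed above; not legal advice.
FINE_CAPS_EUR = {
    "unacceptable_risk": (35_000_000, 0.07),
    "high_risk": (15_000_000, 0.03),
    "gpai_transparency": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Ceiling for companies: the higher of the fixed cap and the GAT share."""
    fixed, pct = FINE_CAPS_EUR[violation]
    return max(fixed, pct * global_annual_turnover_eur)

# Example: a company with 2 billion EUR turnover deploying a prohibited
# system faces a ceiling of max(35M, 7% of 2B) = 140 million EUR.
print(f"{max_fine('unacceptable_risk', 2_000_000_000):,.0f} EUR")
```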

Enforcement Actions:

  • Investigation and Remediation: Authorities can investigate AI systems, halt their operations, and demand necessary changes. Non-compliant AI systems may be forcibly removed from the market.
  • Complaints and Transparency: Individuals can lodge complaints about any aspect of the AI Act. Affected persons have the right to clear and meaningful explanations regarding the role of AI systems in decision-making processes.
  • Collective Action and Whistleblower Protection: Collective action by groups representing affected persons is possible, and whistleblowers who report violations of the AI Act are protected.

Preparing for compliance

Complying with the EU AI Act offers multiple benefits for organisations. Firstly, early adoption positions your organisation as an industry leader, setting standards for ethical and responsible AI use. This leadership enhances your reputation and builds trust with stakeholders, including customers, partners, and regulators.

Complying with the EU AI Act also reduces the risk of legal penalties and enforcement actions. This ensures that your AI technologies align with European legal standards. By following the Act’s requirements, your company can avoid fines and sanctions.

In addition, compliance minimizes risks in implementing AI systems. Establishing strong governance frameworks and risk management processes improves operational efficiency and data management. It ensures that AI systems are accurate, secure, and transparent. This proactive approach helps avoid disruptions in business operations and supports continuous improvement, keeping your organisation ahead of regulatory changes and market demands.

For organisations applying or planning to apply AI systems, a proactive approach is essential to guarantee compliance by the relevant deadlines: entities should have an implementation plan and start as early as possible.

Even if not all the technical details have been clarified yet, the scope and objectives of the Act are sufficiently clear. Companies will have to adapt many internal processes and strengthen risk management systems. However, they can build on existing processes within the company and learn from measures taken for earlier laws such as the GDPR.

We recommend that companies start preparing now: sensitise employees to the new law, take stock of AI systems, ensure appropriate governance measures, install proper risk classification and risk management for AI, and meticulously review AI systems classified as high-risk.

  • Identify and categorise your AI systems
    • Design and build an inventory system. Use it to log the AI systems your organisation uses and their ownership (a minimal record sketch follows this list).
    • Categorise your AI systems based on risk:
      • Which risk category do your AI systems belong to according to the AI Act?
      • Which AI systems pose a high risk for the organisation?
      • What is the role of your organisation in relation to these AI use cases?
  • Examine your GDPR activities
    • The EU AI Act and the GDPR are rather similar in nature; organisations can implement the EU AI Act efficiently by aligning AI Act and GDPR activities.
    • Look back at the activities you carried out for the GDPR since 2016 and at the way you currently deal with privacy. What do you like, and what do you not like?
    • Copy what you liked in that strategy, while simplifying and scaling.
  • Define your next steps; create a roadmap
    • With this many details and timelines of the AI Act now known, you can start creating an implementation roadmap. Start with awareness: inform stakeholders who will at some point be part of the implementation about what is coming and about your plans.
    • Establish a governance structure and risk management processes.
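
As a starting point for the inventory step above, a record per AI system might look like the following sketch. The field names and example values are our illustrative assumptions; the AI Act does not prescribe an inventory format.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Minimal inventory entry for one AI system (illustrative fields)."""
    name: str
    owner: str                    # accountable business owner
    purpose: str                  # intended use case
    role: str                     # provider / deployer / importer / distributor
    risk_tier: str                # unacceptable / high / limited / minimal
    high_risk_reviewed: bool = False
    notes: list[str] = field(default_factory=list)

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="cv-screening-for-hiring",
        owner="HR",
        purpose="Rank job applications",
        role="deployer",
        risk_tier="high",
    ),
]

# Flag high-risk systems that still need a detailed compliance review.
for record in inventory:
    if record.risk_tier == "high" and not record.high_risk_reviewed:
        print(f"Review needed: {record.name} (owner: {record.owner})")
```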

Choosing Deloitte for EU AI Act compliance ensures a smooth transition to the new regulations. We have deep knowledge of the AI Act and extensive expertise in compliance. Our multidisciplinary approach combines legal, technological, and strategic insights to apply the AI Act's provisions accurately.

Our global network provides local insights for seamless implementation across regions. We offer tailored support and strategic guidance to help you navigate compliance while aligning with your business goals.

Partnering with Deloitte provides robust governance frameworks, risk management processes, and stakeholder engagement strategies. This support helps your organisation meet regulatory demands, foster innovation, and maintain a competitive edge.

The AI Act is part of the EU's Digital Regulations

The AI Act is one of more than ten digital regulations the EU has introduced to create a cohesive and secure digital environment. These include the General Data Protection Regulation (GDPR), the Data Act, NIS2, the Digital Services Act (DSA), the Digital Markets Act (DMA), and the Machinery Regulation, among others. Together, they form a comprehensive framework addressing areas such as the data economy, cybersecurity, and platform regulation. The AI Act is a crucial piece within this framework, which strives to address the complexities and potential risks associated with AI systems. While this article focused on the AI Act, the Act should always be viewed within the broader context of the entire EU digital regulatory landscape.

Connect with us

We hope this information has given you the context around the AI Act that you were looking for. We understand that every organisation and situation is unique. With our extensive experience, global network, and state-of-the-art technology, we are ready to help you find a fitting solution. Please feel free to contact us. We are ready to discuss how we can help you with your specific needs.

For a more in-depth explanation of the AI Act, please refer to our EU AI Act deep dive: EU AI Act Deep Dive [PDF].
