The European Union Artificial Intelligence Act (EU AI Act) marks a significant milestone in AI regulation, setting the stage for how artificial intelligence technologies are governed and deployed across member states. As the world's first comprehensive AI law, this regulation emphasises the importance of AI compliance, risk management, and governance. It aims to ensure the ethical use of AI, protect people's fundamental rights, health, and safety, and provide transparency in the use of AI. In this detailed analysis, we explore the key components of the EU AI Act, its implications for organisations, and how it aligns with existing digital regulations to foster a secure, innovative AI landscape in Europe.
The AI Act introduces a framework aimed at regulating the deployment and usage of AI within the EU. It establishes a standardised process for placing single-purpose AI (SPAI) systems on the market and putting them into service, ensuring a cohesive approach across EU Member States.
The AI Act adopts a risk-based approach, categorising AI systems by their use case and setting compliance requirements according to the level of risk they pose to users. This includes bans on certain AI applications deemed unethical or harmful, along with detailed requirements for AI applications considered high-risk so that potential threats can be managed effectively. It also sets out transparency obligations for AI use cases that have the potential to mislead people. With this risk-based approach, AI ethics lie at the heart of the AI Act.
Its focus on principles is intended to keep the AI Act adaptable to as yet unknown iterations of AI technologies. However, the public use of general-purpose AI technology prompted the legislator to differentiate between single-purpose AI systems and general-purpose AI models. The Act recognises that general-purpose AI models can introduce systemic risks in our society. Therefore, it regulates the market entry of general-purpose AI models regardless of the risk-based categorisation of use cases, setting forth comprehensive rules for market oversight, governance, and enforcement to maintain integrity and public trust in AI innovations.
The AI Act applies to the following actors: providers, deployers, importers, and distributors of AI systems, as well as manufacturers placing AI-enabled products on the EU market. It also reaches actors established outside the EU where the output of their AI system is used within the EU.
The EU AI Act came into force on August 2, 2024. The implementation is phased, with various parts of the AI Act becoming applicable at different times, allowing organisations to adapt and comply with the new requirements. Some of the key dates are: February 2, 2025 (prohibitions on unacceptable-risk AI practices apply), August 2, 2025 (obligations for general-purpose AI models and governance provisions apply), August 2, 2026 (the majority of the Act's provisions, including most high-risk requirements, apply), and August 2, 2027 (requirements for high-risk AI systems embedded in regulated products apply).
This phased approach ensures that organisations, both private and public, have sufficient time to align their operations with the new regulatory standards, promoting responsible AI deployment while protecting fundamental rights and public trust.
The EU AI Act aligns with several existing EU regulations to ensure a comprehensive and consistent legal framework for AI technologies. Key areas of alignment include:
The requirements of the EU AI Act vary depending on the classification of the AI system: a system is either a single-purpose AI (SPAI) system or a general-purpose AI (GPAI) model. The requirements laid out in the AI Act also differ according to the role an organisation fulfils with regard to the AI system, such as provider or deployer.
Single-Purpose AI (SPAI) Systems
General-Purpose AI (GPAI) Models
Overall, the EU AI Act aims to ensure that AI systems are developed and used responsibly, with requirements tailored to the potential risks and impacts of each category.
The EU AI Act represents an important step forward in the governance of artificial intelligence, offering organisations a clear framework for deploying AI systems responsibly. Organisations have the following responsibilities:
By adhering to these responsibilities, organisations can navigate the regulatory landscape with a robust governance framework and a proactive approach to risk management, ensuring that AI systems are deployed responsibly and ethically.
Enforcement will be a collective effort shared by the Member States and the European Commission, with the Commission holding exclusive power to supervise and enforce the rules for general-purpose AI models. Non-compliance can lead to significant penalties and enforcement actions: the higher the risk, the higher the penalty. Furthermore, Member States must consistently consider the interests of small and medium-sized enterprises (SMEs) and start-ups. When determining the amount of each fine, national authorities are mandated to assess the nature, gravity, and duration of the infringement, as well as whether the entity in question is a repeat offender.
The AI Act’s penalty regime is structured on the basis of the severity and nature of the violation, with fines increasing according to the risk category of the AI system.
Fines: up to €35 million or 7% of global annual turnover (whichever is higher) for violations of the prohibited AI practices; up to €15 million or 3% of global annual turnover for non-compliance with other obligations under the Act; and up to €7.5 million or 1% of global annual turnover for supplying incorrect, incomplete, or misleading information to authorities.
Enforcement Actions:
Complying with the EU AI Act offers multiple benefits for organisations. Firstly, early adoption positions your organisation as an industry leader, setting standards for ethical and responsible AI use. This leadership enhances your reputation and builds trust with stakeholders, including customers, partners, and regulators.
Complying with the EU AI Act also reduces the risk of legal penalties and enforcement actions. This ensures that your AI technologies align with European legal standards. By following the Act’s requirements, your company can avoid fines and sanctions.
In addition, compliance minimizes risks in implementing AI systems. Establishing strong governance frameworks and risk management processes improves operational efficiency and data management. It ensures that AI systems are accurate, secure, and transparent. This proactive approach helps avoid disruptions in business operations and supports continuous improvement, keeping your organisation ahead of regulatory changes and market demands.
For organisations applying or planning to apply AI systems, a proactive approach is essential. To guarantee compliance by the applicable deadlines, entities should have an implementation plan and start as early as possible.
Even though not all the technical details have been clarified yet, the scope and objectives of the Act are sufficiently clear. Companies will have to adapt many internal processes and strengthen their risk management systems. However, they can build on existing processes within the company and learn from measures taken under previous laws such as the GDPR.
We recommend that companies start preparing now: sensitise employees to the new law, take stock of their AI systems, ensure appropriate governance measures, install proper risk classification and risk management for AI, and meticulously review AI systems classified as high-risk.
Choosing Deloitte for EU AI Act compliance ensures a smooth transition to the new regulations. We have deep knowledge of the AI Act and extensive expertise in compliance. Our multidisciplinary approach combines legal, technological, and strategic insights to apply the AI Act's provisions accurately.
Our global network provides local insights for seamless implementation across regions. We offer tailored support and strategic guidance to help you navigate compliance while aligning with your business goals.
Partnering with Deloitte provides robust governance frameworks, risk management processes, and stakeholder engagement strategies. This support helps your organisation meet regulatory demands, foster innovation, and maintain a competitive edge.
The AI Act is part of more than 10 digital regulations that the EU has introduced, aimed at creating a cohesive and secure digital environment. These include the General Data Protection Regulation (GDPR), the Data Act, NIS2, the Digital Services Act (DSA), the Digital Markets Act (DMA), and the Machinery Regulation, among others. Together, they form a comprehensive framework addressing areas such as the data economy, cybersecurity, and platform regulation. The AI Act is a crucial puzzle piece within this complex landscape of EU digital regulation, which strives to address the complexities and potential risks associated with AI systems. While this article focused on the AI Act, it should always be viewed within the broader context of the entire EU digital regulatory landscape.
We hope this information has given you the context around the AI Act that you were looking for. We understand that every organisation and situation is unique. With our extensive experience, global network, and state-of-the-art technology, we are ready to help you find a fitting solution. Please feel free to contact us. We are ready to discuss how we can help you with your specific needs.
For a more in-depth explanation of the AI Act, please refer to our EU AI Act deep dive: EU AI Act Deep Dive [PDF].