
EU AI Act: Regulatory Readiness & Risk Management

The EU Artificial Intelligence Act (EU AI Act) represents a pioneering effort to regulate artificial intelligence (AI) technologies through a harmonized framework. It aims to ensure their ethical use and a high level of protection of health, safety, fundamental rights, and the environment, and to promote the development, use, and uptake of trustworthy and secure AI in the internal market. It entered into force across all 27 EU Member States on 1 August 2024, and the majority of its provisions will become enforceable on 2 August 2026.

What Is the EU AI Act About?

The EU AI Act aims to regulate AI systems based on the level of risk they pose, with stricter requirements applied to those with higher risks. It thus establishes rules covering the placing on the market, deployment, and use of certain AI systems, sets specific requirements for high-risk AI systems, outlines the obligations of their operators, and prohibits harmful AI practices. Additionally, it introduces rules for market surveillance and governance applicable to individual EU Member States, while also including measures to encourage innovation in the AI sector.

The following section provides an overview of how AI systems are categorized based on their associated risk levels.

  • Unacceptable risk: Prohibited AI systems that pose a threat to people’s safety, livelihoods, or fundamental rights. Examples include systems that manipulate human behavior, enable social scoring, or conduct real-time biometric identification without consent.
  • High risk: AI systems used in sensitive and critical areas, such as recruitment, credit scoring, and law enforcement. These must comply with strict requirements, including risk assessments and appropriate human oversight.
  • Limited risk: AI systems like chatbots that interact with users and may generate misleading content without proper context. In these cases, providers are required to clearly inform users that they are interacting with an AI system.
  • Minimal risk: AI systems with minimal impact, such as those used for inventory management or productivity tools. While these are not subject to binding requirements, adopting voluntary measures is encouraged to promote trust and accountability.

Who Must Comply with What?

The regulation affects providers, deployers, importers, and distributors of AI systems – even those based outside the EU – if their AI systems or their outputs are used in the EU. It is particularly relevant to businesses operating in high-risk sectors with large customer bases, including telecommunications, banking & insurance, IT & technology, critical infrastructure, and education. Not all provisions of the EU AI Act apply uniformly: the scope of requirements depends on the risk categorization of each AI solution and on the role each entity plays in the AI value chain. Some cases are exempted from the Act entirely – notably AI systems used for research or prototyping and for military, defence, or national security purposes – while entities in affected sectors and categories must fulfill a series of requirements, such as:

  • Implementation of a continuous risk management process running throughout the entire lifecycle of AI systems, requiring regular systematic review, updating, and reduction of risks through implementation of adequate mitigation and control measures.
  • Testing to ensure that AI systems perform consistently for their intended purpose and that they adhere to applicable rules.
  • Having a data governance framework in place, covering data collection methods, procedures, bias detection and mitigation mechanisms.
  • Use of only such training, validation, and testing data sets as are relevant, representative, free of errors, and pre-processed and cleaned to reduce bias and minimize the risk of discriminatory outcomes.
  • Maintenance of technical documentation describing the datasets used, including their origin, characteristics, and preprocessing steps.
  • Enablement of automatic recording of events (logs) over the lifetime of the AI systems.
  • Oversight by natural persons during the period in which the AI systems are in use.
  • Alignment with GDPR provisions.
  • Security and resilience against attempts to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning) and to alter the AI systems’ use, outputs or performance.
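The logging obligation above can be illustrated with a minimal sketch. The helper name `log_inference` and the JSON record layout are hypothetical assumptions for illustration – structured, timestamped records per event are one common way to support traceability, not a format prescribed by the Act:

```python
import json
import logging
from datetime import datetime, timezone

# Minimal audit logger: one structured JSON record per inference event.
logger = logging.getLogger("ai_audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())  # a real system would persist these logs

def log_inference(model_id: str, input_ref: str, output: dict, operator: str) -> dict:
    """Record an inference event with a UTC timestamp for later traceability."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "input_ref": input_ref,  # a reference to the input, not the raw data itself
        "output": output,
        "operator": operator,    # the natural person responsible for oversight
    }
    logger.info(json.dumps(event))
    return event

event = log_inference("credit-scoring-v2", "application-1234",
                      {"score": 0.82}, "analyst@example.com")
```

Keeping a reference to the input rather than the raw data itself helps keep the audit trail aligned with data-minimization expectations under the GDPR.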

Timeline & Sanctions

Although the EU AI Act becomes generally applicable in August 2026, some provisions take effect earlier.

  • February 2025: The ban on AI systems posing unacceptable risks.
  • August 2025: Transparency and other obligations for general-purpose AI models.

On the other hand, for high-risk AI systems embedded in products covered by existing EU harmonisation legislation, the core obligations – including requirements on data, risk management, and oversight – will become applicable only in August 2027.

Non-compliance with the AI Act can lead to significant financial penalties, depending on the nature of the violation: from up to EUR 35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices, down to EUR 7.5 million or 1% of global annual turnover for supplying incorrect information to authorities. For small and medium-sized enterprises (SMEs), each fine is capped at the lower of the two amounts.
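The cap mechanics can be sketched as simple arithmetic. This is purely illustrative – `fine_cap` is a hypothetical helper, and actual fines are set by supervisory authorities within these ceilings:

```python
def fine_cap(turnover_eur: float, fixed_eur: float, pct: float,
             sme: bool = False) -> float:
    """Maximum fine for a violation tier.

    Standard rule: the higher of the fixed amount and pct * turnover.
    SME rule: the lower of the two applies instead.
    """
    amounts = (fixed_eur, pct * turnover_eur)
    return min(amounts) if sme else max(amounts)

# Prohibited-practice tier (EUR 35 million or 7% of global annual turnover):
# for a firm with EUR 1 billion turnover, the percentage-based cap dominates.
fine_cap(1_000_000_000, 35_000_000, 0.07)            # 70 million
fine_cap(1_000_000_000, 35_000_000, 0.07, sme=True)  # SME rule: 35 million
```

For smaller firms the fixed amount dominates instead, which is why the SME rule materially lowers exposure only above a certain turnover.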

How Deloitte Can Help

Our team of experts in regulatory compliance, IT & cyber security, and risk management offers comprehensive services to support your preparedness for and full compliance with the EU AI Act, including:

  • Applicability & Impact Assessment
  • AI System Risk Classification Support
  • As-Is State Mapping & Gap Analysis
  • AI Bias, Fairness & Ethical Risk Analysis
  • Security & Cyber Threat Analysis
  • Compliance Action Plan Design
  • Implementation of Regulatory Requirements
  • Documentation Design
  • Risk Management – Methodology, Process Design & Redesign, Policy Development
  • Data Governance Framework
  • Conformity Assessment Readiness Evaluation
  • Third-Party AI Vendor Compliance Assessments
  • AI Consulting – Development of AI Strategies & Roadmaps
  • AI Model Validation & Performance Testing
  • Technical Documentation Guidelines
  • Training & Awareness Programs on AI Compliance & Ethics