On 21 May 2024, the EU Council formally adopted the Artificial Intelligence Act (AIA), marking another significant step towards the EU's ambition to 'become a global leader in trustworthy AI'. The AIA represents the first comprehensive and legally binding cross-sector framework for AI, including General Purpose AI (GPAI), from a major global economy. It sets out a risk-based, prescriptive approach focusing on the potential risks arising from specific models and applications.
The definitive legal text of the AIA is expected to be published in the EU Official Journal (OJ) in June, but the version adopted by both the Parliament and Council can be considered essentially final. It provides further clarity on crucial components and details concerning banned AI practices, obligations for high-risk AI systems, and the overall approach to regulating GPAI, among other matters.
Most importantly, all hurdles to the AIA becoming law have now been cleared.
Please note: This article does not cover the regulation of AI used i) by public or law enforcement authorities, ii) as safety products or components, or iii) in industries subject to harmonised EU law (e.g., boats, motor vehicles, rail, and aircraft).
With the formal adoption of the AIA legislative text by both the Parliament and Council, the AIA is set to "go live" 20 days after its publication in the OJ. Publication is expected in June or early July 2024.
Organisations will have two years to comply with the AIA's provisions before they become fully enforceable by mid-2026. However, a limited number of provisions will apply sooner. Bans on prohibited AI systems will apply six months after the AIA enters into force, while requirements for GPAI systems and models will apply 12 months after.1
Figure 1 – AI Act timeline
During the implementation period, the EU Commission will develop and adopt secondary legislation and guidance to provide more granular rules and instructions on what organisations must do to be deemed compliant with the AIA. At the behest of the Commission, the European Standardisation Organisations (ESOs) will also develop several standards – known as ‘harmonised standards’. Although harmonised standards are industry-led and voluntary, once adopted by the EU Commission and published in the OJ, conformity with them will provide a presumption of compliance with the relevant obligations of the AIA.
The AIA’s definition of an AI system broadly aligns with that of the Organisation for Economic Co-operation and Development (OECD)2, to ensure legal certainty and facilitate international alignment. The EU believes that this will provide sufficient and clear criteria for differentiating AI systems from simpler software systems, ensuring a proportionate regulatory approach.
AIA definition of an AI system
An AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Yet, in our view, the AIA’s definition of an AI system is still very broad. It could include decision-making software with inference capabilities that has been in use for decades – such as standard credit scoring models in financial services. However, the Commission will develop further guidance on the application of the definition of an AI system after the AIA has entered into force. This should also help avoid divergent interpretations among National Competent Authorities (NCAs), which could create inconsistencies or loopholes in the AIA’s application across the EU.
Similarly, while the AIA exempts AI systems developed for the sole purpose of scientific research and development activities, it does not seem to provide an exact definition of these terms. Ensuring clarity will be crucial, including on whether commercial research will be covered and under what conditions. This will help avoid regulatory uncertainty, as seen in the definition of scientific research in the General Data Protection Regulation (GDPR), and support investment in AI that aligns with public policy objectives.
What we do know is that the AIA will differentiate between single purpose and GPAI systems. Single purpose AI systems – or simply “AI systems” – are designed for specific tasks. By contrast, GPAI systems can serve a wider range of tasks and are often integrated into downstream AI systems3. Large Language Models, which serve as the foundation for many generative AI systems, are an example of GPAI models.
Given that AI systems are developed and distributed through intricate value chains, the AIA assigns clear roles and responsibilities to the various actors involved. These include providers, importers, distributors, and deployers, collectively referred to as AI operators (see Figure 2). However, organisations may assume multiple roles within the chain. This article, and the AIA itself, focuses on the obligations of the two primary actors in the chain: providers and deployers.
Figure 2 – Key actors in the AI value chain
The AIA classifies AI systems based on their potential risk to individuals’ fundamental rights, health, or safety, as well as to society as a whole. The AIA will completely ban a limited number of AI applications due to the unacceptable risk they pose. However, most of the legislation focuses on high-risk AI systems, such as those used in areas of employment, education, and access to essential private services.
Figure 3 – AI systems classification
Although high-risk AI systems will be permitted, they will be subject to strict conditions. To minimise potential risks, providers and deployers must adhere to a stringent set of standards.
Figure 4 – High-level view of AI systems key requirements
Complying with requirements for high-risk AI systems will, for most organisations, require significant investment to put in place enhanced product governance, risk management frameworks, compliance, and internal audit capabilities for conformity assessments. Providers will be responsible for fulfilling some of the most challenging requirements of the AIA, including conducting a Conformity Assessment and registering high-risk AI systems in a new EU database before putting any high-risk AI system on the market. For some specific use cases, independent external audits for conformity assessments by so-called "notified bodies"4 will be required.
Fundamental Rights Impact Assessments
Certain deployers of high-risk AI systems, such as public bodies, private operators providing public services, and financial services firms, will have to conduct a Fundamental Rights Impact Assessment (FRIA) before use. The FRIA is a comprehensive process that evaluates the potential impact of AI on fundamental rights such as privacy, non-discrimination, and freedom of expression. The results must inform risk management strategies to ensure compliance and respect for fundamental rights.
Conducting a FRIA will be a complex task – from defining the scope of the assessment to accessing and analysing information related to AI system design and development. In many cases, FRIAs will also intersect with similar requirements under other applicable regulations, such as GDPR Data Protection Impact Assessments. Many organisations may lack the expertise to conduct FRIAs, including knowledge of fundamental rights, how to balance potential benefits and risks to individuals, and how to access or assess quantitative and qualitative information about their AI systems across the value chain.
Proportionality measures
To support innovation, the AIA includes specific provisions for Small and Medium-sized Enterprises (SMEs), as well as broader proportionality measures. For example, Member States will have to establish appropriate channels to provide guidance and respond to SMEs’ queries about AIA implementation, should such channels not already exist. The AIA also includes a series of filtering conditions to ensure that only genuine high-risk applications are captured. For example, AI systems that are designed to perform narrow procedural tasks or enhance the outcome of a task previously executed by humans will not be categorised as high-risk. Several significant exemptions also apply to AI systems and models provided under free and open-source licences, and those that were put into service before the entry into force of the AIA.
Extraterritoriality
The AIA will have implications for organisations around the world. The AIA will apply not only to EU AI providers and developers, but also to those located in other jurisdictions – such as the UK and US – if their AI systems are marketed or intended to be used in the EU. This extraterritorial impact has led some to compare the AIA to the GDPR in its likely impact.
Multinational firms will have to decide whether to adopt AIA standards globally, develop EU-specific AI systems, or, in some scenarios, scale back their use of higher-risk AI in the EU. For example, if an organisation adopts a high-risk solution developed outside the EU and deploys it in an EU entity or in a manner which affects individuals residing in the EU, the full scope of the requirements will need to be complied with. This could include conformity assessment tests and registration in the EU database, if substantial modifications are made.
At a minimum, firms should start by assessing which of their current and planned AI systems are likely to fall within the Act's definition of an AI system and, of those, which are high-risk or prohibited (a simple triage of this kind is sketched below). This will enable a high-level gap analysis against the key requirements, providing insight into the scale and challenge of any compliance efforts required, including any necessary enhancements to their risk management frameworks. Lower-risk AI solutions will also need to comply with certain transparency requirements.
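To make this concrete, the minimal sketch below shows one way such a triage could be recorded in a simple internal inventory and used to prioritise a gap analysis. The field names, risk categories, and example systems are illustrative assumptions for this article, not terminology or tooling mandated by the AIA.

```python
# A minimal sketch of the AI system inventory and triage exercise described
# above. Field names, risk categories, and example systems are illustrative
# assumptions, not terminology mandated by the AIA.

from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited risk (transparency obligations)"
    MINIMAL_RISK = "minimal risk"


@dataclass
class AISystemRecord:
    name: str
    description: str
    meets_aia_definition: bool  # infers outputs from inputs with some autonomy?
    risk_category: RiskCategory
    in_scope_for_eu: bool       # marketed or used in the EU, or affecting people in the EU


def priority_systems(inventory: list) -> list:
    """Return the systems that warrant detailed gap analysis first."""
    return [
        s for s in inventory
        if s.in_scope_for_eu
        and s.meets_aia_definition
        and s.risk_category in (RiskCategory.PROHIBITED, RiskCategory.HIGH_RISK)
    ]


inventory = [
    AISystemRecord("CV screening model", "Ranks job applicants", True,
                   RiskCategory.HIGH_RISK, in_scope_for_eu=True),
    AISystemRecord("FAQ chatbot", "Answers routine customer queries", True,
                   RiskCategory.LIMITED_RISK, in_scope_for_eu=True),
]

for system in priority_systems(inventory):
    print(f"Prioritise: {system.name} ({system.risk_category.value})")
```

In practice, an inventory of this kind would typically sit within an organisation's existing model or asset register and feed directly into the gap analysis described above.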
One of the thorniest issues of the legislative negotiations was the classification and regulation of GPAI models and systems and ensuring fair allocation of responsibilities across the value chain. The final agreement reached by the EU institutions involves a tiered approach, where a provider’s GPAI models and systems are regulated based on the level of risk their products pose.
Figure 5 – GPAI classification and key requirements for providers
The newly established AI Office within the EU Commission will oversee GPAI models, enforce common rules, and develop secondary legislation. A scientific panel of independent experts will advise the AI Office on evaluating GPAI models, including capabilities, high-impact designations, and safety risks.
While the more nuanced approach to regulating GPAI is welcome, it remains unclear whether it can balance AI safety with innovation and growth. Both definitions and procedures for GPAI designation in the AIA text remain high-level and will only be clarified in secondary legislation. We do know that the key threshold for high-impact GPAI model designation is based on the cumulative amount of compute used in training, set at more than 10²⁵ floating point operations (FLOPs). Yet, the EU recognises that the FLOP threshold may need to be updated by the time the AIA becomes applicable and has granted the Commission the authority to do so. Additionally, the Commission will have the power to consider other quantitative and qualitative criteria, such as the number of business users, when evaluating high-impact GPAI models.
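As an illustration of what the threshold means in practice, the sketch below estimates a model's training compute using the widely cited "6 × parameters × training tokens" heuristic and compares it with the 10²⁵ FLOP threshold. The heuristic and the example figures are assumptions for illustration only; they are not part of the AIA, which leaves the measurement methodology to secondary legislation.

```python
# A rough check of whether a model's estimated training compute would cross
# the AIA's 10^25 FLOP threshold for presuming a GPAI model to be high-impact.
# The "6 x parameters x training tokens" approximation is a common industry
# heuristic, and the example figures are illustrative assumptions; neither
# comes from the AIA text itself.

AIA_HIGH_IMPACT_THRESHOLD_FLOPS = 1e25  # cumulative training compute


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute with the ~6 * N * D heuristic."""
    return 6 * parameters * training_tokens


def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute exceeds the AIA threshold."""
    return estimate_training_flops(parameters, training_tokens) > AIA_HIGH_IMPACT_THRESHOLD_FLOPS


# Example: a hypothetical 70-billion-parameter model trained on 2 trillion tokens
flops = estimate_training_flops(70e9, 2e12)  # ~8.4e23 FLOPs, below the threshold
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed high-impact under the AIA:", presumed_high_impact(70e9, 2e12))
```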
Until harmonised standards are published, high-impact GPAI models that pose systemic risks can comply with the AIA by adhering to Codes of Practice approved by the Commission. The AI Office will develop the Codes of Practice in collaboration with industry, the scientific community, civil society, and other stakeholders5. The Codes of Practice should be available at least three months before the application date of the GPAI provisions.
The overall strategic and compliance implications of the proposed requirements for GPAI providers are likely to be substantial. While regulation can help providers to demonstrate that their products are trustworthy and reliable, compliance will demand significant effort and investment. For example, a preliminary study from Stanford University suggests that all major providers would fall well short of most, if not all, of the draft AIA requirements that were initially proposed by the EU Parliament6. According to the study, the most significant shortfalls concern copyrighted data, transparency, testing and evaluation, and data governance.
Providers will need to invest in strengthening their capabilities to assess GPAI model risks, including reviewing their approaches to testing, evaluation, and risk mitigation. Enhanced data governance will be crucial, requiring improved methods for data collection, storage, and lawful use. Investing in increased transparency and reporting capabilities will also be inevitable.
While we have discussed some of the main aspects of the AIA, there are many other provisions into which we have not delved. One example is the set of measures aimed at promoting innovation among SMEs and start-ups.
These include encouraging individual Member States to establish regulatory sandboxes, according to a common set of rules to promote standardised approaches across the EU and facilitate cooperation between NCAs. SMEs and start-ups will have priority access to the sandboxes, with the aim of removing some of the barriers they may face when launching their products.
Spain is playing a leading role in this space by launching the first AI Regulatory Sandbox pilot. The initiative aims to operationalise AIA requirements, including conformity assessments and post-market monitoring activities. As part of the pilot, the Spanish government is developing technical guidelines, policies, and procedures for high-risk AI systems that will serve as a framework for the Regulatory Sandbox. Other EU countries are likely to follow suit over the next two years to support the growth of their own AI sectors, while the Commission will facilitate cooperation at EU level.
The proposed measures in the AIA will have a far-reaching impact on firms and their AI innovation strategies, both in the EU and globally. Like the GDPR, the AI Act has a cross-sector remit and imposes hefty fines, with penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for the most significant infringements.
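To illustrate the "whichever is higher" rule, the short sketch below computes the fine ceiling for a hypothetical firm; the turnover figure is an assumption for illustration only.

```python
# Illustrative calculation of the maximum fine ceiling for the most serious
# infringements, assuming the "whichever is higher" rule; the turnover figure
# below is hypothetical.

def max_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most significant infringements."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A firm with EUR 10bn turnover: 7% (EUR 700m) exceeds the EUR 35m floor
print(f"EUR {max_fine_ceiling(10_000_000_000):,.0f}")
```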
Even more significantly, organisations may need to cease deploying certain AI systems or make significant product changes to comply with the AIA requirements.
However, while the impending finalisation of the AIA is a significant milestone, it is only one piece of the puzzle in terms of the regulatory landscape for organisations developing or deploying AI systems.
Secondary legislation, guidance, and harmonised technical standards
Organisations will have to wait for secondary legislation, guidance, and technical standards to emerge between the AIA's entry into force and the end of the implementation period before they can fully finalise their compliance plans. For example, harmonised standards will cover critical elements of the legislation such as risk and quality management systems, data quality, accuracy, transparency, robustness, and human oversight, among others. This underscores the importance of organisations preparing in advance for compliance, as they will have a narrow window to ensure alignment with technical specifications and complete their conformity assessments.
Interaction with other technology-neutral EU regulatory frameworks
Technology-neutral cross-sector regulations, such as GDPR, and sector-specific regulations, such as those governing financial services or digital markets, will also be applicable depending on the specific AI use case. However, the interaction between these regulations and the AIA raises some questions that remain unanswered for now. For example, we have already highlighted the interplay between FRIAs and GDPR Data Protection Impact Assessments as a potential challenge. In addition, the responsibilities of different actors in the AI value chain may not always align with those of the organisation that is primarily responsible for protecting personal data under GDPR, i.e., the data controller.
Another example is the link between the AIA and the Digital Services Act (DSA). To ensure a coordinated approach to regulating the digital landscape, the AIA indicates that Very Large Online Platforms (VLOPs) that comply with the DSA may also be considered compliant with selected AIA requirements – e.g., in relation to risk management systems – by default.
These interactions raise important questions around cooperation and alignment between NCAs responsible for the AIA and those responsible for other horizontal and sector-level regulations.
With the final text of the AIA essentially in place, organisations looking to automate in low-risk areas, such as simple chatbots, now have clarity to scale up with confidence. The overall AIA risk-based classification of AI systems will be helpful in determining the greater robustness required around more complex “black box” models. Organisations now need to develop their overall AI strategy, refining it as more details emerge through the implementation phase.
The impact of the EU AIA in shaping global AI regulations will depend on the approaches adopted by the UK, US, and other key global regulators. We already see a degree of divergence in detail between countries, which will be challenging for organisations to navigate. Within the EU itself, diverging national-level interpretations may add a further layer of complexity. Maintaining a comprehensive horizon scanning capability which feeds into the AI strategy will be key to deploying trustworthy and compliant AI systems.
[1] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473
[2] https://oecd.ai/en/ai-principles
[3] AI systems that are built using other AI systems or components.
[4] Independent third parties designated by EU Member States.
[5] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473