Since the AI Act entered into force on 1 August 2024, the European Commission (EC) has been working on supplementary documents that aim to detail and clarify the application of the AI Act, including the definitions used in that regulation.
On 4 February 2025, the EC published guidelines on prohibited AI practices, a topic we at Deloitte Luxembourg covered in a recent article. Two days later, on 6 February 2025, the EC issued additional guidelines on the definition of an AI system established by the AI Act.1 In these guidelines, the EC attempts to break down the definition of artificial intelligence so that providers and all relevant stakeholders can determine whether a given system qualifies as an AI system, and is thus subject to the regulation. Before delving into this complex definition, it is worth clarifying why it matters in practice: whether a system qualifies as an AI system determines whether the AI Act’s requirements apply to it at all.
Building on this context, this article aims to clarify what an AI system is and how its definition has evolved.2 This is followed by a description of the elements necessary for a system to be considered AI, as well as examples of systems (e.g., software, tools) that fall outside that scope.
There are arguably as many definitions of AI as there are actors discussing it.3 The OECD, NIST, the Council of Europe, the EU High-Level Expert Group on AI and ISO standards have all defined AI on their own terms. These definitions nevertheless share a common element: the ability to reason4 and learn autonomously.5
The definition in the AI Act itself has undergone multiple changes. The initial proposal for the AI Act was criticised for being overly broad, prompting the EU Parliament to work toward narrowing down the scope and clarifying the essential capabilities an AI system should have.6 After several years of amendments and consultations with organisations such as the OECD, the EU finalised the AI Act in 2024, establishing the following definition:
“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”7
Based on the new guidelines, we can further break down this definition into seven elements that must occur together:

1. A machine-based system;
2. Designed to operate with varying levels of autonomy;
3. That may exhibit adaptiveness after deployment;
4. Operating for explicit or implicit objectives;
5. Inferring, from the input it receives, how to generate outputs;
6. Producing outputs such as predictions, content, recommendations or decisions;
7. Whose outputs can influence physical or virtual environments.
The guidelines also list several types of systems that fall outside the AI system definition:

- Systems for improving mathematical optimisation;
- Basic data processing systems;
- Systems based on classical heuristics;
- Simple prediction systems (e.g., those estimating an outcome using a basic statistical baseline such as an average).
The definition of AI has thus been broken down into separate elements, yet uncertainties remain about the exact point at which a system qualifies as an AI system. The guidelines identify the ability to infer as a core element of an AI system. However, they also provide examples of non-AI systems that possess inferential capabilities, such as expert systems that draw conclusions from encoded expert knowledge, models based on Bayes’ theorem, or regression models that predict an outcome based on training data. Moreover, some of the examples of AI systems in the guidelines include models employing deep learning techniques primarily because these are used on a larger scale than simple computational techniques.17
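To make this borderline concrete, below is a minimal sketch of the kind of regression model the guidelines mention: it is fitted to training data and then used to infer a prediction for unseen input, yet it may still fall outside the AI system definition. The data, variable names and prediction task are hypothetical and purely illustrative.

```python
import numpy as np

# Hypothetical training data: floor area (m^2) vs. asking price (kEUR).
# Purely illustrative values, not real market data.
area = np.array([30.0, 45.0, 60.0, 75.0, 90.0, 120.0])
price = np.array([210.0, 290.0, 380.0, 450.0, 520.0, 700.0])

# Ordinary least-squares fit of a straight line: price ~ slope * area + intercept.
# This is the kind of "regression model that predicts an outcome based on
# training data" that the guidelines discuss as a borderline case.
slope, intercept = np.polyfit(area, price, deg=1)

def predict_price(m2: float) -> float:
    """Infer an asking price for an unseen floor area from the fitted line."""
    return slope * m2 + intercept

print(f"Fitted line: price = {slope:.2f} * area + {intercept:.2f}")
print(f"Predicted price for 100 m^2: {predict_price(100.0):.0f} kEUR")
```

Such a model plainly infers how to generate an output from the input it receives, which illustrates why inferential capability alone does not settle the classification question.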
Some suggest that, instead of defining AI through computational techniques, the definition could focus on just two elements: adaptivity and autonomy. Adaptivity refers to the ability of an AI system to change its behaviour over time, including the complexity of that change (e.g., how unpredictable it can become). Autonomy, on the other hand, reflects the extent to which the system can operate without human oversight.18
This approach, based on the UK Government’s 2022 proposal for regulating AI,19 is risk-based and distinguishes between systems with high levels of adaptivity and autonomy and those with lower capabilities in these areas. The higher a system’s adaptivity and autonomy, the greater the risk that it becomes unpredictable, and the more scrutiny and regulation it warrants; systems with low adaptivity and autonomy could even remain unregulated.20 A simple sketch of this two-dimensional logic follows below.
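As a purely illustrative sketch of this approach (the scoring scale, thresholds and example profiles below are our own assumptions, not part of the UK proposal or the AI Act):

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    adaptivity: float  # 0.0 (fully static behaviour) .. 1.0 (behaviour changes freely over time)
    autonomy: float    # 0.0 (fully human-operated) .. 1.0 (no human oversight)

def scrutiny_level(profile: SystemProfile, threshold: float = 0.5) -> str:
    """Map an adaptivity/autonomy profile to a hypothetical regulatory scrutiny tier.

    Assumed rule: high on both axes -> high scrutiny; high on one axis ->
    moderate scrutiny; low on both -> potentially unregulated.
    """
    high_adaptivity = profile.adaptivity >= threshold
    high_autonomy = profile.autonomy >= threshold
    if high_adaptivity and high_autonomy:
        return "high scrutiny"
    if high_adaptivity or high_autonomy:
        return "moderate scrutiny"
    return "potentially unregulated"

# Invented example profiles for demonstration:
print(scrutiny_level(SystemProfile("rule-based spam filter", adaptivity=0.1, autonomy=0.3)))
print(scrutiny_level(SystemProfile("self-learning trading agent", adaptivity=0.9, autonomy=0.8)))
```

The point of such a scheme is that regulatory attention scales with the two properties that make a system unpredictable, rather than with the computational technique it uses.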
Regulatory attempts to define AI have struggled with precision from the start. After years of discussions and trilogues, the final form of the definition was adopted and became applicable with the AI Act. However, even with the guidelines published by the European Commission, the definition is still not precise enough to always yield a binary ‘yes’ or ‘no’ answer to the question of whether a given system belongs to the AI category. Whether such a clear-cut definition is even possible remains an open question. On this point, at least, the guidelines dispel any doubt: point 62 states that it is impossible to create an automatic determination or an exhaustive list to establish whether a system falls within the definition of an AI system.21
This definition covers a wide spectrum of systems, so from a practical point of view the best way forward is to assess each system directly against the provisions of the AI Act and to document that assessment properly.
As briefly highlighted in our introduction, the main purpose of the AI Act is to safeguard the fundamental rights and freedoms of individuals, and the cases in which its requirements apply are pre-defined in Articles 5, 6 and 50 of the AI Act. Fortunately, the guidelines provide a helpful clarification: most systems, even if categorised as AI systems, will most likely not be covered by any regulatory requirements under the AI Act, due to its risk-based classification.22