
Defining AI under the EU AI Act: Clarity for compliance and innovation

Understanding the European Commission’s latest guidelines on AI systems

Authors: 

  • Georges Wantz | Managing Director – Digital Privacy & Trust
  • Hatice Baskaya | Director – Cyber Governance & Compliance
  • Vusal Mammadzada | Senior Manager - Cyber Strategy & Transformation
  • Michal Arendarski | Consultant – Digital Privacy & Trust

 

Introduction

Since the AI Act entered into force on 1 August 2024, the European Commission (EC) has been working on supplementary documents that aim to detail and clarify the application of the AI Act, including the definitions used in that regulation.

On 4 February 2025, the EC published guidelines on prohibited AI practices, a topic we covered in a recent Deloitte Luxembourg article. Two days later, on 6 February 2025, the European Commission issued additional guidelines on the definition of an AI system established by the AI Act.1 In these guidelines, the EC breaks down the definition of artificial intelligence so that providers and all relevant stakeholders can determine whether a given system qualifies as an AI system, and is thus subject to the regulation. Before delving into the complex definition of AI systems, it is critical to clarify why it matters in practice:

  1. A clear definition forms the foundation of legal certainty for all parties involved, from developers to end users. Without a coherent definition, EU Member States could establish their own national AI regulations, leading to a fragmented market and potentially hindering the free movement of AI-based products and services across the EU.
  2. Another key aspect is the need to protect public interests against the potential harms posed by AI. In other words, a clear definition should serve as a stepping stone for classifying systems that fall under AI rules, ensuring that AI, regardless of its risk level, is properly regulated, and that potential dangers to health, safety, privacy and fundamental rights are effectively governed, monitored and mitigated.

Building on this context, this article aims to clarify what an AI system is and how its definition has evolved.2 This will be followed by a description of the elements necessary for a system to be considered AI, as well as examples of systems (e.g., software, tools) that fall outside that scope.

What is an AI system?

The number of AI definitions is likely equal to the number of actors discussing it.3 The OECD, NIST, the Council of Europe, the EU High-Level Expert Group on AI and ISO standards have all defined AI on their own terms. However, these definitions share a common element: the ability to reason4 and learn autonomously.

The definition in the AI Act itself has undergone multiple changes. The initial proposal for the AI Act was criticised for being overly broad, prompting the EU Parliament to work toward narrowing down the scope and clarifying the essential capabilities an AI system should have.6 After several years of amendments and consultations with organisations such as the OECD, the EU finalised the AI Act in 2024, establishing the following definition:

“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”7

Based on the new guidelines, this definition can be broken down into seven elements that must occur together:

The first, machine-based element refers to the fact that an AI system is developed and operates through a machine. This includes both hardware (e.g., CPU, GPU, HDD) and software (e.g., operating systems, programs) components. It highlights that an AI system must be able to perform computations, and it covers even the most advanced quantum computing systems.8

The second element requires that the system operates with varying levels of autonomy. This means it must have some degree of independence from human involvement: a system that functions only through full manual human control does not qualify. According to the guidelines, a qualifying AI system may still involve direct human interaction (e.g., clicking buttons, adjusting controls) or indirect involvement (e.g., oversight through other systems).9

The third element allows AI systems to exhibit adaptiveness after deployment. Adaptiveness, in this context, refers to a system’s ability to learn autonomously and modify its behaviour after it is already in use.10 The guidelines highlight the phrase “may exhibit”, which renders adaptiveness optional. This reinforces the point that systems without self-learning capabilities can still fall within the scope of the AI Act, as long as they meet the remaining elements of the definition.11
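To make the mechanism of adaptiveness concrete, the toy sketch below (an illustrative assumption, not an example from the guidelines) shows a predictor whose internal state, and therefore its output, shifts with every observation it receives while in use. Note that a running mean like this is itself only a basic statistical estimator and would not qualify as an AI system; it illustrates only what “behaviour changing after deployment” means mechanically.

```python
# A hypothetical sketch of "adaptiveness after deployment": the predictor
# updates its internal state with every new observation received while in
# use, so its outputs drift as the environment changes. The class and
# method names are illustrative, not drawn from the guidelines.

class AdaptivePredictor:
    """Predicts the next value as the running mean of everything seen so far."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def observe(self, value):
        # Behaviour changes after deployment: each observation shifts the model.
        self.total += value
        self.count += 1

    def predict(self):
        return self.total / self.count if self.count else 0.0

p = AdaptivePredictor()
for reading in [10, 20, 30]:
    p.observe(reading)
print(p.predict())  # 20.0
```

A genuinely adaptive AI system would update far richer internal parameters than a single mean, but the post-deployment feedback loop is the same in principle.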

The fourth element requires an AI system to be designed to achieve objectives. These can be explicit, meaning clearly stated and directly embedded in the system, or implicit, inferred from the system’s behaviour.12 Recital 12 of the AI Act introduces the related concept of “intended purpose”, distinguishing it from the AI system’s internal objectives. The objective is internal (a GPT model aims to answer questions with high accuracy), while the intended purpose is externally defined by the provider (a GPT model’s intended purpose may be to help an HR department answer employees’ questions).13 The distinction is further illustrated by facial recognition technology (FRT). The objective of FRT is to identify, verify or authenticate individuals by comparing their faces against a database.14 The intended purpose of an FRT system, on the other hand, depends on the context of use, for example unlocking a smartphone or controlling access to a secure facility.

The fifth element is essential in distinguishing AI systems from traditional, non-AI systems. It refers to the system’s ability to infer, from the input it receives, how to generate output. This means AI systems do not just follow rules encoded by humans, but can deduce how to respond to a query using AI techniques, such as machine learning and logic- and knowledge-based approaches.15 Machine learning covers a group of techniques that include:

  • Supervised learning, where AI systems learn from labelled data (e.g., spam detection systems learning from emails labelled as spam/not spam).
  • Unsupervised learning, where AI systems learn from unlabelled data to generate output without human guidance (e.g., as used in drug discovery).
  • Self-supervised learning, where an AI system creates its own labels from unlabelled data (e.g., speech recognition).
  • Reinforcement learning, where an AI system learns from its own experience using a trial-and-error method (e.g., a robot arm learning how to pick up items).
  • Deep learning, where AI systems use multi-layered artificial neural networks, loosely modelled on the human brain, to learn from vast amounts of data and generate predictions or other outputs (e.g., GPT models16).

In contrast to machine learning, logic- and knowledge-based approaches rely on knowledge encoded by human experts (e.g., early medical diagnosis systems).

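To make the inference element tangible, the sketch below contrasts hand-coded rules with supervised learning in the spam-detection sense: the system derives its own decision logic from labelled examples rather than following rules written by a human. The training data, word-frequency scoring and function names are illustrative assumptions, not anything prescribed by the guidelines.

```python
from collections import Counter

# A hypothetical supervised-learning sketch: the classifier is not told
# which words indicate spam; it infers that from labelled training data.

def train_word_counts(examples):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(text, counts):
    """Score a message by how often its words appeared under each label."""
    scores = {}
    for label, counter in counts.items():
        total = sum(counter.values())
        score = 1.0
        for word in text.lower().split():
            # Laplace-smoothed relative frequency of the word under this label
            score *= (counter[word] + 1) / (total + len(counter) + 1)
        scores[label] = score
    return max(scores, key=scores.get)

training_data = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the project team", "ham"),
]

model = train_word_counts(training_data)
print(classify("free prize inside", model))  # prints "spam"
```

Changing the training data changes the decision logic with no change to the code, which is precisely the inference capability that the guidelines treat as separating AI systems from rule-following software.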
The sixth element requires AI systems to be able to generate outputs such as predictions, content, recommendations or decisions (e.g., self-driving cars, ChatGPT, HR hiring tools, fraud detection systems).

The seventh and final element requires that AI systems can influence physical or virtual environments (e.g., a chatbot shaping a conversation, a robotic arm moving objects, a self-driving car navigating traffic).

Systems outside of the scope of the AI definition 

The guidelines list several types of systems that are not AI-driven:

  • Systems for improving mathematical optimisation: These systems are designed to enhance existing optimisation methods, such as predicting the probability of a binary outcome. While they may use machine learning techniques, their primary goal is to improve computational performance, not to influence decisions or environments.
  • Basic data processing systems: These follow predefined instructions without using AI techniques to generate outputs. Examples include database management systems, spreadsheet software and survey analytics platforms. Such systems do not infer or learn; rather, they present or process data in a structured and understandable way.
  • Systems based on classical heuristics: Like basic data processing systems, these rely on predefined rules or experience-based methods to derive solutions. Certain chess engines that use heuristic evaluation fall outside the scope of AI.
  • Simple prediction systems using basic statistical estimators: These systems often serve as benchmarks for more advanced machine learning models. Examples include forecasting prices based on historical data or predicting daily product demand in a store.
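As a counterpoint to the learning examples earlier in the article, the sketch below (an illustrative assumption, not an example given verbatim in the guidelines) shows what a basic statistical estimator looks like in code. The forecasting rule is fixed entirely by its designer; the system infers nothing about how to produce its output, which is why such systems fall outside the AI definition.

```python
# A hypothetical sketch of a "basic statistical estimator": forecasting
# tomorrow's product demand as the arithmetic mean of historical sales.
# The rule is hand-coded and never changes, regardless of the data.

def forecast_demand(history):
    """Predict the next value as the mean of past observations."""
    return sum(history) / len(history)

daily_sales = [120, 135, 128, 140, 132]
print(forecast_demand(daily_sales))  # prints 131.0
```

Estimators like this are routinely kept as baselines against which machine learning models are benchmarked, which is exactly the role the guidelines describe.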

 

The challenges in defining AI 

The definition of AI has been broken down into separate elements, yet uncertainty remains about the exact point at which a system qualifies as an AI system. The guidelines identify the ability to infer as a core element of an AI system. However, they also provide examples of non-AI systems that possess inferential capabilities, such as expert systems that draw conclusions from encoded expert knowledge, models based on Bayes’ theorem, or regression models that predict an outcome based on training data. Moreover, some systems in the guidelines’ examples appear to qualify as AI primarily because they employ deep learning at a larger scale than simple computational techniques.17

Some suggest that instead of defining AI through computational techniques, it could be approached by focusing on just two elements: adaptivity and autonomy. Adaptivity refers to the ability of an AI system to change its behaviour over time, including the complexity of that change (e.g., how unpredictable it can become). Autonomy, on the other hand, reflects the extent to which the system can operate without human oversight.18

This approach, based on the UK Government’s 2022 proposal for regulating AI,19 is risk-based and distinguishes between systems with high levels of adaptability and autonomy and those with lower capabilities in these areas. Higher adaptability and autonomy pose a greater risk of the system becoming unpredictable, which is why such systems should be closely scrutinised and regulated, while systems with low adaptability and autonomy could even remain unregulated.20

Conclusion

Since regulatory attempts to define AI began, establishing a precise definition has proved challenging. After years of discussions and trilogues, the final form of the definition was adopted and became applicable with the AI Act. However, even with the guidelines published by the European Commission, the definition of AI is still not precise enough to always yield a binary ‘yes’ or ‘no’ answer to the question of whether a given system belongs to the AI category. It remains an open question whether such a clear-cut definition is even possible. On this point, at least, the guidelines dispel any doubt: point 62 states that no automatic determination or exhaustive list can establish whether a system falls within the definition of an AI system.21

This definition covers a wide spectrum of systems and, from a practical point of view, the best way to move forward is to directly apply the provisions from the AI Act and document them properly.

As briefly highlighted in our introduction, it should be borne in mind that the main purpose of the AI Act is to safeguard the fundamental rights and freedoms of individuals. The regulated cases are pre-defined in Articles 5, 6 and 50 of the AI Act. Fortunately, the guidelines provide helpful clarification that most systems, even if categorised as AI systems, will most likely not be subject to any regulatory requirements under the AI Act, due to their risk classification.22

1 The Commission publishes guidelines on AI system definition to facilitate the first AI Act’s rules application | Shaping Europe’s digital future

2 Please note that this article intentionally does not cover the definition of general-purpose AI (GPAI) models and their key distinction from “AI systems”. These topics, along with the GPAI code of practice published by the European Commission on 10 July 2025, will be covered in a dedicated, subsequent article.

3 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3696519

4 Although AI systems are constantly evolving, their ability to reason at a human level has not yet been achieved. This is evident when AI systems are tested on logic puzzles such as the Tower of Hanoi (When talking about AI, definitions matter).


6 https://www.europeanlawblog.eu/pub/eu-draft-artificial-intelligence-regulation-extraterritorial-application-and-effects/release/1

7 Guidelines, point 8

8 Guidelines, section 1

9 Guidelines, section 2

10 Guidelines, section 3

11 Guidelines, section 3

12 Guidelines, section 4

13 AI Act, Recital 12

14 https://doi.org/10.3390/jimaging11020058

15 Guidelines, section 5.1

16 Learning the Basics of Deep learning, ChatGPT, and Bard AI


17How many neurons must a system compute before you can call it AI? Unpicking the guidelines on the AI Act’s definition of artificial intelligence | Technology's Legal Edge

18How many neurons must a system compute before you can call it AI? Unpicking the guidelines on the AI Act’s definition of artificial intelligence | Technology's Legal Edge

19A pro-innovation approach to AI regulation - GOV.UK

20How many neurons must a system compute before you can call it AI? Unpicking the guidelines on the AI Act’s definition of artificial intelligence | Technology's Legal Edge

21 Guidelines, point 62

22 Guidelines, point 63, last sentence

 
