
Know Your AI: Understanding the Definition of "AI System" as the Cornerstone of Compliance

As the EU AI Act comes into force, one fundamental question is shaping compliance efforts across industries: what qualifies as an AI system? This isn't just legal fine print. The definition of an AI system determines whether the regulation applies, how a system is classified in terms of risk, and what governance measures must be in place. Misclassification can lead to costly mistakes—either from unnecessary compliance efforts or from falling short of legal obligations.

In this article, we'll explore how the EU defines AI systems and how organizations can integrate the definition into their AI compliance efforts.

The Legal Definition: Broad, Functional, and Evolving

According to Article 3(1) of the AI Act, an AI system is "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

This broad, technology-agnostic definition shifts the focus from how a system is built to what it does. It emphasizes inference, autonomy, and the ability to impact the world around it—qualities that distinguish AI from traditional rule-based software.

Given how wide-ranging this definition is, organizations might struggle to determine whether their systems are covered by the EU AI Act. To help navigate this uncertainty, the European Commission has issued non-binding guidance clarifying how key elements should be interpreted in practice. However, this guidance should be treated with caution, as full legal certainty will develop gradually through binding court decisions and regulatory enforcement. In the meantime, various industry associations and national authorities have developed their own interpretative tools, which companies should consider carefully when building internal policies and deciding whether their systems qualify as AI under the Act.

Applying the Definition: What to Look For

The EU AI Act distinguishes seven elements that together define what qualifies as an AI system. Some of these criteria are relatively straightforward. For instance, an AI system must be (1) machine-based, meaning it runs on machines by design. It must also be developed (2) for explicit or implicit objectives, reflecting the specific goals or tasks it is intended to carry out. Moreover, it must be capable of generating (3) predictions, content, recommendations, or decisions, which differentiates AI from other types of software and highlights its functional impact. Additionally, an AI system must have the ability to (4) influence physical or virtual environments, demonstrating that it is not merely passive but actively affects the context in which it operates.

Other elements of the definition require a more nuanced assessment, especially when evaluating hybrid systems that combine various technologies. For instance, an AI system should be (5) designed to operate with varying levels of autonomy, indicating that it has the capacity to function independently from constant human input to a reasonable degree; this aspect is also relevant for determining how much human oversight is needed in practice. Another characteristic is that AI systems (6) may exhibit adaptiveness after deployment, which means the system can learn or adjust its behavior once in operation. Although adaptiveness is optional, its presence often suggests that the system displays other traits typical of AI, such as operating with a degree of autonomy.

Finally, an AI system must be able to (7) infer, from the input it receives, how to generate outputs. This element refers to the system’s ability to derive results or actions based on input data. According to the European Commission’s guidance, certain techniques enable such inference, including various machine learning approaches — such as supervised or unsupervised learning — as well as logic and knowledge-based methods like knowledge representation, expert systems, or search and optimization techniques. In contrast, methods that do not enable this kind of inference — such as tools for improving mathematical optimization, basic data processing systems, or systems based solely on classical heuristics and simple prediction rules — should generally not be classified as AI systems under the Act.
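To make the seven elements concrete, the assessment above can be sketched as a simple checklist. The following Python sketch is purely illustrative—the class name, field names, and the rule that only element (6) is optional are assumptions drawn from the Commission's guidance as summarized here, not an official assessment tool.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical checklist mirroring the seven elements of Article 3(1)."""
    machine_based: bool               # (1) developed with and runs on machines
    has_objectives: bool              # (2) explicit or implicit objectives
    generates_outputs: bool           # (3) predictions, content, recommendations, decisions
    influences_environment: bool      # (4) affects physical or virtual environments
    operates_autonomously: bool       # (5) varying levels of autonomy
    adaptive_after_deployment: bool   # (6) optional: may learn once in operation
    infers_outputs: bool              # (7) infers outputs from the input it receives

    def meets_definition(self) -> bool:
        # Element (6), adaptiveness, is optional under the Commission's
        # guidance, so it does not gate the result.
        return all([
            self.machine_based,
            self.has_objectives,
            self.generates_outputs,
            self.influences_environment,
            self.operates_autonomously,
            self.infers_outputs,
        ])

# Example: a fixed-rule calculator produces outputs but neither infers them
# nor operates autonomously, so it falls outside the definition.
calculator = SystemProfile(True, True, True, True, False, False, False)
print(calculator.meets_definition())  # False
```

In practice, each boolean would be the outcome of a documented legal and technical assessment rather than a quick yes/no, but the structure highlights which elements are cumulative and which (adaptiveness) merely indicative.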

While the Commission’s guidance offers helpful clarifications and examples, it remains non-binding and should be interpreted with caution. Businesses aiming to comply with the AI Act should consider this guidance alongside industry interpretations, national authorities’ clarifications, and evolving case law to ensure a sound and defensible approach when determining whether their systems fall within the scope of the regulation.

A Practical Approach to Compliance

To navigate the definition, organizations should build a structured assessment process: a consistent methodology for internal system evaluation and future-proof governance.

The first step is system mapping. This step involves identifying both standalone systems and their components within the IT architecture. It is important to consider these components not only individually, but also in the way they interact and function as a system. Some elements of the definition may only emerge in combination, and recognizing this early is useful for accurate classification.

The next step is self-assessment. This is where teams examine whether systems meet the definition's characteristics, such as inference, autonomy, and adaptiveness. The European Commission, in its guidance, proposes a binary approach: a system either exhibits the characteristic, therefore qualifying as AI, or it does not. Accordingly, in their preliminary conclusions, organizations should determine with clarity which systems qualify as AI and which do not, based on current functionality. For systems identified as AI, organizations would move forward to risk classification under the Act.

However, it's important to note that many systems exist in a state of gradual development, and this binary lens may not capture future potential. This leads to a broader observation: any IT system is subject to change over time. A tool that currently performs a narrow task without inference, or does not interact with users, might be upgraded with machine learning capabilities or become integrated into decision-making workflows. Therefore, by the time the AI Act's full obligations take effect, some systems that currently fall outside the scope may require full compliance.

To address this, it is prudent for organizations to identify and monitor such ‘grey zone’ systems now, even if they do not yet meet the definition of AI. Proactively including these systems in internal governance or quality management frameworks strengthens oversight, supports long-term planning, and reduces the risk of future disruption or non-compliance as capabilities evolve and additional regulatory guidance emerges.
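The triage logic described in these steps—classify now, but keep borderline systems under watch—can be summarized in a short sketch. The three bucket labels and the `may_gain_ai_capabilities` flag below are hypothetical conventions for illustration, not terminology from the Act or the guidance.

```python
def triage(meets_definition: bool, may_gain_ai_capabilities: bool) -> str:
    """Route a system into one of three hypothetical governance buckets.

    meets_definition: result of the seven-element self-assessment.
    may_gain_ai_capabilities: flag set during system mapping when a
    planned upgrade or integration could bring the system into scope.
    """
    if meets_definition:
        # In scope today: continue to risk classification under the Act.
        return "AI system: proceed to risk classification"
    if may_gain_ai_capabilities:
        # Not in scope yet, but worth tracking in internal governance.
        return "grey zone: monitor under governance framework"
    # Out of scope for now; revisit after material changes.
    return "out of scope: re-assess after significant upgrades"

print(triage(False, True))  # grey zone: monitor under governance framework
```

The key design point is the middle branch: a purely binary in/out decision would silently drop exactly the systems most likely to need compliance work later.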

These three steps are only a starting point on the path to compliance, as both the regulatory and technological landscapes will continue to evolve. To stay prepared, organizations should maintain a flexible, well-documented internal framework that can adapt to new developments. Monitoring additional guidelines issued by national regulators and industry bodies — and incorporating these insights into internal policies and procedures where relevant — may also strengthen governance and ensure that compliance efforts remain aligned with emerging best practices and advancements in AI capabilities.

Key Takeaway

The AI system definition under the EU AI Act is the gateway to compliance—and it’s broader and more dynamic than many assume. Organizations should act now by identifying AI systems, assessing borderline technologies, and building adaptable governance processes. This isn’t just about following rules—it’s about building resilience for the future of AI regulation.


Author

Corina Damaschin, Senior Associate, Reff & Associates,

Deloitte Legal Romania

Email: cdamaschin@reff-associates.ro
