
A closer look at the Guidelines on prohibited AI practices as published by the Commission

Authors:

  • Georges Wantz | Managing Director – Digital Privacy & Trust
  • Haley Cover | Senior Manager – Cyber Defense & Resilience
  • Vusal Mammadzada | Senior Manager – Cyber Strategy & Transformation
  • Michal Arendarski | Consultant – Digital Privacy & Trust

Regulation (EU) 2024/1689, better known as the AI Act, was designed to regulate the use of artificial intelligence across the EU and to safeguard fundamental rights and freedoms.

On 4 February 2025, the European Commission issued draft guidelines on prohibited AI practices to streamline the practical implementation of the Act. Although non-binding and yet to be formally adopted, they offer critical insights that help organisations comply with the AI Act.1

The guidelines themselves are not authoritative: only the Court of Justice of the EU (CJEU)2 is competent to give a binding interpretation of the AI Act. Penalties for non-compliance may reach up to 7% of total worldwide annual turnover or €35 million, whichever is higher.3 However, stakeholders such as Data Protection Officers, Chief Risk Officers, and legal and compliance professionals still have time to protect their companies from penalties, as these become applicable from 2 August 2025.
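As a purely illustrative sketch (not legal advice; the turnover figures and the helper name max_fine_eur are hypothetical, not taken from the Act or the guidelines), the Article 99 cap is simply the higher of two numbers, as this minimal Python example shows:

  # Illustrative only: Article 99 of the AI Act caps fines for
  # prohibited-practice violations at EUR 35 million or 7% of total
  # worldwide annual turnover, whichever is HIGHER.

  def max_fine_eur(global_annual_turnover_eur: float) -> float:
      """Upper bound of a fine for a prohibited-practice violation."""
      return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

  # Hypothetical companies:
  print(max_fine_eur(1_000_000_000))  # 70000000.0 -> the 7% limb applies
  print(max_fine_eur(100_000_000))    # 35000000.0 -> the EUR 35m floor applies

In other words, the fixed €35 million floor binds smaller undertakings, while the 7% limb takes over once worldwide annual turnover exceeds €500 million.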

Why yet another regulation?

In today's digital age, artificial intelligence holds immense potential to revolutionize industries, drive innovation, and improve daily life. However, this transformation must align with the ethical principles that underpin our society.

The AI Act, through its meticulously crafted prohibitions, seeks to prevent practices that could undermine fundamental human rights. For instance, prohibitions against social scoring prevent the creation of digital caste systems, ensuring that every individual's dignity and privacy are upheld against arbitrary and unjust evaluations.

Similarly, restrictions on biometric mass identification protect citizens from unlawful surveillance, safeguarding personal freedom and autonomy. These prohibitions reflect the conviction that while AI can be a force for good, its deployment must be guarded against exploitation, bias, and disenfranchisement.

By embedding these safeguards, the AI Act paves the way for a future where technological progress is harmonized with ethical integrity, a future where innovation serves humanity, not vice versa.

Who is covered by the AI Act?

Before delving into the prohibited practices, it is important to identify the main actors covered by the AI Act:

  • Providers: natural or legal persons who develop an AI system, or have one developed, and place it on the market or put it into service under their own name or trademark.
  • Deployers: natural or legal persons established or located in the EU who use an AI system under their authority, except in the course of a personal, non-professional activity.
  • Importers and distributors: natural or legal persons responsible for making AI systems available within the EU.
  • Product manufacturers: natural or legal persons who place an AI system on the market, or put it into service, together with their product and under their own name or trademark.

Providers and deployers of AI systems that are established or located outside the EU may also fall within the scope of the AI Act, provided that the output produced by their systems is used within the EU.4

Prohibited Practices

Article 5 of the AI Act prohibits eight categories of AI practices, which the guidelines examine in turn.

Harmful manipulation and subliminal techniques

The first prohibition targets AI systems that deploy subliminal, manipulative, or deceptive techniques to distort behaviour. The guidelines present examples of subliminal messages, which can be visual (flashing images), auditory (masked or low-volume sounds), or tactile (subtle physical sensations). Other techniques include subvisual and subaudible cueing, embedded images (hidden within other visual content), misdirection (drawing attention to one element so that another goes unnoticed), and temporal manipulation (altering the perception of time). All these techniques raise ethical concerns about human autonomy and free choice. For instance, machine-brain interfaces such as Neuralink5 could be trained to infer highly sensitive data (intimate information, bank details, etc.) from a person’s neural signals without their awareness.6

Exploitation of vulnerabilities

The second prohibition addresses AI systems that exploit human vulnerabilities. The term “vulnerability” should be understood as a state of being susceptible to harm, manipulation, or exploitation based on age, disability, or socio-economic status.7 The AI Act protects children from manipulative AI toys and from games that encourage risky behaviour that can lead to harm.

In the case of the elderly, the focus is on preventing deceptive personalised offers or coercive tactics by AI systems that exploit declining cognitive capabilities. Such practices can result in financial or psychological harm.8 For example, AI systems designed to support financial decision-making must not target economically vulnerable individuals.

While harmful exploitation is prohibited, the AI Act does not prohibit practices that have a beneficial effect. Hence, AI systems providing learning tools for children, or assistive robots for the elderly or for people with impairments, remain allowed.

Social scoring

Prohibited social scoring practices include the unfair evaluation or classification of individuals or groups based on their social behaviour or personal characteristics over a certain period of time. To fall under the prohibition, the assessment must lead to (i) detrimental or unfavourable treatment in social contexts unrelated to the context in which the data was originally collected, and/or (ii) detrimental or unfavourable treatment that is unjustified or disproportionate to the social behaviour in question.9 The prohibition applies to both the public and private sectors and targets systems that assign scores, rankings, and the like. Its main objective is to prevent discriminatory and unfair outcomes and to protect human dignity, privacy, and non-discrimination, among other rights.

Criminal offence risk prediction

The next prohibition covers AI systems that assess the risk of an individual committing a criminal offence, where the prediction is based solely on profiling or on personal traits and characteristics.10 The objective is to ensure that no one is judged on AI-predicted behaviour. It targets crime-prediction systems that use historical data to estimate the likelihood of a future crime, potentially reinforcing bias and compromising public trust. The prohibition contains a few exclusions: (i) it does not cover petty or administrative offences; (ii) it does not cover predictions based on location; (iii) the risk assessment does not apply to legal entities; and (iv) it does not cover AI systems that support a human assessment based on objective facts directly linked to a criminal activity.11

Untargeted scraping of facial images

Untargeted scraping of facial images means gathering them without any restriction (such as consent). In 2017, the US company Clearview AI developed software that could identify an individual from a photograph by matching it against images scraped from the publicly available Internet. As of 2 February 2025, this type of practice is banned, with a few exceptions. For example, if the scraping of facial images is focused on individuals belonging to a specific group, the activity constitutes “targeted scraping” and is not covered by the prohibition. Similarly, facial images collected with explicit consent, not commercialised, and used only for research purposes are exempted.

Emotion recognition

AI systems that infer emotions are prohibited in the workplace and in education and training institutions, unless used for medical or safety reasons. The mere detection of expressions, gestures, or movements is not considered emotion recognition unless it is used to infer or identify emotions on the basis of biometric data. For example, inferring physical states such as pain or fatigue is not emotion recognition. However, AI systems that infer emotions such as happiness from specific movements or (in)voluntary motions fall within the scope of the prohibition.12

Biometric categorisation

AI systems that infer, on the basis of biometric data, a characteristic belonging to a special category of personal data under the GDPR13 are prohibited. Systems that deduce a person’s religious beliefs from their tattoos, or that infer an individual’s race from their tone of voice, would be prohibited. However, categorising images of patients according to their skin colour in order to make a proper cancer diagnosis would be permitted.14

Real-time remote biometric identification

The final prohibition concerns the use of real-time remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes. Deployers are the only group within scope, since only the “use” of such systems is prohibited. The rationale behind the prohibition is to protect individual rights and freedoms from the intrusive nature of these systems. In 2020, a Spanish supermarket chain used real-time RBI through its CCTV cameras to collect facial images of individuals and compare them against a police database.15 This is an example of a practice prohibited under Article 5(1)(h). The AI Act introduces three exceptions under which real-time RBI systems may be used, provided the use is strictly necessary: (i) searching for missing persons, (ii) preventing an imminent threat, and (iii) locating and identifying a criminal suspect for the purpose of prosecution. Member States must regulate such use to ensure appropriate safeguards.16

Exceptions and interplay with other laws

The AI Act provides a list of exclusions from its scope, ensuring that certain sectors are not unduly constrained by the regulation:

  • AI systems developed and used exclusively for national security, defence, or military purposes. This allows Member States to retain control over AI advancements relating to their security.
  • AI systems used in the framework of international judicial and law enforcement cooperation with the Union or Member States, provided that appropriate safeguards are in place.
  • Practices that are part of research and development activities, such as testing AI systems before they are placed on the market or put into service.
  • AI systems used for personal, non-professional activities.
  • Free and open-source AI systems, unless they fall under any of the prohibitions in Article 5.

These exclusions ensure that beneficial AI innovation, and systems essential for security, are not unduly restricted.

What comes next

The ban on unacceptable AI practices came into effect on 2 February 2025. Although penalties will only apply from 2 August 2025, stakeholders should start taking measures now to prevent banned AI practices from being introduced.

A few steps can help organisations navigate the regulatory landscape:

  • Ensure thorough understanding of the AI Act;
  • Audit existing AI systems in terms of potential compliance issues;
  • Update internal policies and procedures to align with the regulation;
  • Engage legal departments;
  • Continuously monitor compliance.

Conclusion

The AI Act is a robust piece of legislation that regulates AI across the EU and ensures that fundamental rights and freedoms are safeguarded. The Commission guidelines, although non-binding, greatly help to clarify which elements are crucial to mitigating AI risks. Stakeholders must ensure alignment with the Act to avoid the penalties that become applicable in August 2025.

1 Commission publishes the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act | Shaping Europe’s digital future <https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act> [accessed 02/03/2025]
2 See Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act), Brussels, 4.2.2025, C(2025) 884 final, Annex, (5)
3 See Article 99 of the AI Act
4 See Guidelines (16)
5 Jim Reed and Joe McFadden, “Neuralink: Can Musk’s brain technology change the world?”, BBC News, 4 February 2024 <https://www.bbc.com/news/health-68169082> [accessed 03/03/2025]
6 See Guidelines (66)
7 See Guidelines (102)
8 See Guidelines (105); (106)
9 See Art 5(1)(c) of the AI Act
10 See recital 42 of the AI Act
11 See Guidelines (215)
12 See Guidelines (251)
13 Race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
14 See Guidelines (285)
15 AEPD (Spain) - PS/00120/2021 | GDPRhub [accessed 28/02/2025]
16 See Article 5(1)(h) of the AI Act

