Regulation (EU) 2024/1689, known as the AI Act, was designed to regulate the use of artificial intelligence across the EU and to safeguard fundamental human rights and freedoms.
On 4 February 2025, the European Commission issued draft guidelines on prohibited AI practices to streamline the practical implementation of the Act. Although non-binding and yet to be formally adopted, they offer critical insights that support compliance with the AI Act1.
The Court of Justice of the EU (CJEU)2 is the only authority that may authoritatively interpret the guidelines. Penalties for non-compliance may reach up to 7% of global annual turnover or €35 million, whichever is higher3. However, stakeholders such as Data Protection Officers, Chief Risk Officers, and legal and compliance professionals still have time to protect their companies from penalties, as these become applicable from 2 August 2025.
In today's digital age, artificial intelligence holds immense potential to revolutionize industries, enhance innovation, and improve daily life. However, this transformation must align with the ethical paradigms that underpin our society.
The AI Act, through its meticulously crafted prohibitions, seeks to prevent practices that could undermine fundamental human rights. For instance, prohibitions against social scoring prevent the creation of digital caste systems, ensuring that every individual's dignity and privacy are upheld against arbitrary and unjust evaluations.
Similarly, restrictions on mass biometric identification protect citizens from unlawful surveillance, safeguarding personal freedom and autonomy. These prohibitions reflect the conviction that, while AI can be a force for good, its deployment must be guarded against exploitation, bias, and disenfranchisement.
By embedding these safeguards, the AI Act paves the way for a future in which technological progress is harmonized with ethical integrity, a future where innovation serves humanity, not vice versa.
Before delving into the prohibited practices, it is important to identify the main actors covered by the AI Act: primarily providers, who develop AI systems or place them on the market, and deployers, who use AI systems under their authority.
Both providers and deployers of AI systems that are established or located outside the EU may also fall within the scope of the AI Act, provided that the output produced by the system is used within the EU.4
The AI Act provides a list of exclusions from its scope, ensuring that certain sectors are not unduly constrained by the regulation, such as AI systems used exclusively for military, defence, or national security purposes, or solely for scientific research and development.
These exclusions ensure that AI development that is beneficial, or essential for security, is not unduly restricted.
On 2 February 2025, the ban on AI practices posing unacceptable risk came into effect. Although penalties will apply only from August 2025, stakeholders should take measures now to prevent banned AI practices from being introduced.
Several steps are helpful in navigating the regulatory landscape, such as:
The AI Act is a robust piece of legislation that regulates AI across the EU and ensures that fundamental rights and freedoms are safeguarded. The Commission guidelines, although non-binding, greatly help in understanding which elements are crucial for mitigating AI risks. Stakeholders must ensure alignment with the Act to avoid the penalties that take effect in August 2025.