As of 2 February 2025, the prohibitions on unacceptable AI practices under the EU's Artificial Intelligence Act (AI Act) have taken effect. As AI adoption grows, businesses must understand these prohibitions and ensure compliance. The AI Act follows a risk-based approach, banning AI systems that conflict with fundamental rights and Union values.
Unacceptable AI practices include systems that manipulate users, exploit vulnerable populations, or infringe on privacy. Industries such as social media, gaming, healthcare, financial services, and law enforcement are under particular scrutiny due to the potential for discrimination and data misuse.
Organizations in these sectors must review their AI systems to comply with the new regulations.
To prepare, businesses should conduct AI risk assessments, establish governance policies, and implement training programs. Staying informed and fostering ethical AI practices will help organizations align with the EU's evolving legal landscape.
Download our publication now to gain expert insights and practical guidance on navigating EU AI Act compliance with confidence.
If you have any questions or would like to discuss further, please feel free to get in touch.
Through its multidimensional approach to trusted AI, Deloitte helps organizations create safeguards for developing and deploying trusted AI tools at all levels of the supply chain.
Our multi-disciplinary legal, risk, ethics, audit, business and technology advisory services provide companies with tailored, efficient and effective support at all stages of the AI systems lifecycle, on a global basis and with a deep understanding of local specificities.
Deloitte's expertise spans advanced AI management and operational improvement, as well as regulatory support for accessing different markets and supply chain alignment for specific applications. We help clients close gaps, develop tailored solutions, and assess the value of designs and implementations.