While the use of unacceptable AI practices has been prohibited under the AI Act since February 2025, the full penalties regime will be enforced by the regulators from August 2, 2025. Such practices are already unlawful and can result in direct legal liability for their operators. Beyond legal and regulatory risk, engaging in them can lead to financial and reputational damage.
Unacceptable AI Practices
The AI Act categorizes AI systems based on their potential risks, with unacceptable risk practices being outright prohibited. These practices include:
- Subliminal Techniques: Using subliminal, manipulative, or deceptive techniques beyond a person's conscious awareness to distort behavior and impair informed decision-making, causing significant harm. For example, ads using subliminal messages, or exploiting a user's emotional state, to push impulsive purchases or change habits without the user's awareness.
- Social Scoring: Classifying people based on their behavior or characteristics, leading to unfair treatment in unrelated contexts. An example would be denying someone a loan due to negative social media behavior or past unrelated actions.
- Exploiting Vulnerabilities: Exploiting age, disability, or economic status to distort behavior and cause harm. This could involve manipulative marketing targeting children or elderly people to purchase unnecessary products.
- Biometric Categorization: Using biometric data to deduce or infer sensitive categories. For instance, classifying people based on facial features to infer their religion or political beliefs.
- Real-time Biometric Identification by Law Enforcement: Using real-time biometric systems (like facial recognition) in public spaces for law enforcement, except for specific situations. An example would be using facial recognition at a public event for surveillance without a genuine imminent threat.
- Emotion Recognition in the Workplace and Education: Using AI to infer people's emotions in workplaces or educational institutions, except for medical or safety purposes. This could involve AI detecting employees' emotions to feed productivity ratings, or monitoring students' engagement, without their knowledge.
- Image Scraping: Creating or expanding facial recognition databases by scraping images from the internet or CCTV without consent. This might involve collecting images from social media or public cameras, without users' permission, for surveillance purposes.
- Predictive Policing: AI predicting criminal behavior solely based on personality traits or profiling, without objective facts. An example would be an AI system predicting someone will commit a crime based on their personality test results.
Liability: Impacts and Effects
Unacceptable practices can affect all groups of people, though the impacts and effects differ:
- Subliminal Manipulation: Teenagers and young adults, politically uninformed individuals, compulsive shoppers, and vulnerable individuals with mental health issues may be more susceptible to subliminal techniques that could materially distort their behavior.
- Exploitation of Vulnerabilities: Children and adolescents, elderly individuals, people with addictions, and economically vulnerable populations may be more easily exploited due to their cognitive, social, or financial vulnerabilities.
- Biometrics: All citizens are potentially affected, with disproportionate effects on low-income individuals, political dissidents, ethnic and religious minorities, and individuals with mental health or addiction issues.
- Social Scoring: This practice has a chilling effect on the general public, with potentially disproportionate impacts on political activists, marginalized communities, and individuals with sensitive personal information or criminal records.
Identification of Unacceptable Approaches
Unacceptable practices can be intentional, but they can also result from decisions or effects along the AI lifecycle. Inadequate AI governance or management practices can introduce unacceptable elements into otherwise lower-risk AI systems, inadvertently converting them into prohibited practices.
Organizations must take proactive steps to identify and address any potentially unacceptable practices throughout their AI inventory. Adequate governance, quality, and risk management processes are crucial for ensuring compliance with the AI Act and for maintaining ethical and acceptable AI practices across all AI systems, not only high-risk ones.
Often, classification of a particular system as prohibited can be avoided by using appropriate alternative approaches. Remediation of unacceptable practices can lead to reclassification of the system into a lower risk category, depending on its other characteristics. By implementing adequate governance and appropriate alternative approaches, organizations can ensure compliance with the AI Act while still leveraging the benefits of AI technology.
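As a practical starting point, the AI inventory can be screened against simple indicators derived from the prohibited-practice categories above. The sketch below is a minimal, illustrative example of such a screening step, assuming a hypothetical `AISystem` record and indicator flags; it is not official AI Act tooling, and real assessments require legal and expert review.

```python
# Minimal sketch of an AI-inventory screening step (all names are illustrative,
# not part of any official AI Act tooling). Each system in the inventory is
# checked against yes/no indicators of potentially prohibited practices.
from dataclasses import dataclass, field

# Indicators loosely mirroring the prohibited-practice categories above.
PROHIBITED_INDICATORS = {
    "uses_subliminal_techniques": "Subliminal or manipulative techniques",
    "exploits_vulnerable_groups": "Exploitation of vulnerabilities",
    "performs_social_scoring": "Social scoring",
    "infers_sensitive_traits_from_biometrics": "Biometric categorization",
    "infers_emotions_at_work_or_school": "Emotion inference in workplace/education",
    "scrapes_facial_images": "Untargeted facial image scraping",
    "predicts_crime_from_profiling": "Predictive policing based on profiling",
}

@dataclass
class AISystem:
    name: str
    characteristics: dict = field(default_factory=dict)  # indicator -> bool

def screen_inventory(inventory: list[AISystem]) -> dict[str, list[str]]:
    """Return, per system, the prohibited-practice indicators it triggers."""
    findings = {}
    for system in inventory:
        hits = [
            label
            for flag, label in PROHIBITED_INDICATORS.items()
            if system.characteristics.get(flag, False)
        ]
        if hits:
            findings[system.name] = hits
    return findings

if __name__ == "__main__":
    inventory = [
        AISystem("hr-evaluation", {"performs_social_scoring": True}),
        AISystem("demand-forecasting", {}),
    ]
    for name, issues in screen_inventory(inventory).items():
        print(f"{name}: review required -> {', '.join(issues)}")
```

The examples that follow show how individual systems might be identified as potentially problematic and remediated through alternative approaches.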
1. Age-Targeted Social Media Algorithms
These could exploit younger users' vulnerabilities by promoting content that encourages harmful behaviors or excessive platform use. Algorithms may identify and target users who are more susceptible to peer pressure or have shown interest in dangerous trends.
- Identification: Assess how the algorithm tailors and delivers content to different age groups, with a focus on protecting minors.
- Alternative: Implement age-appropriate content recommendation systems with parental oversight options, as sketched below.
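A minimal sketch of this alternative follows, assuming hypothetical names (`ContentItem`, `recommend_for_minor`) and a guardian-managed category blocklist; a real system would also need verified age signals and content moderation.

```python
# Minimal sketch of an age-appropriate recommendation filter with a parental
# oversight option. All names are hypothetical and only illustrate the
# alternative approach described above.
from dataclasses import dataclass

@dataclass
class ContentItem:
    item_id: str
    min_age: int              # minimum age rating assigned during moderation
    categories: set           # e.g. {"fitness", "challenge", "gambling"}
    engagement_score: float   # neutral popularity signal, not susceptibility data

def recommend_for_minor(candidates, user_age, blocked_categories):
    """Rank by a neutral popularity signal; never by inferred vulnerability."""
    allowed = [
        c for c in candidates
        if c.min_age <= user_age and not (c.categories & blocked_categories)
    ]
    return sorted(allowed, key=lambda c: c.engagement_score, reverse=True)

# Parental oversight: a guardian-managed blocklist applied on top of age ratings.
catalog = [
    ContentItem("a1", 13, {"fitness"}, 0.8),
    ContentItem("a2", 18, {"gambling"}, 0.9),
    ContentItem("a3", 13, {"challenge"}, 0.7),
]
print(recommend_for_minor(catalog, user_age=14, blocked_categories={"challenge"}))
```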
2. AI-Driven Employee Evaluation Systems
These assess workers based on multiple data points of social behavior, potentially leading to unfair treatment in unrelated contexts. AI may analyze social media activity, spending habits, or personal relationships to make decisions about promotions or job assignments, even when these factors are not relevant to job performance.
- Identification: Evaluate the use of social behavior data in employee assessments, focusing on relevance to job performance.
- Alternative: Implement skills-based assessment tools with transparent evaluation criteria, as sketched below.
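Below is a minimal sketch of such a skills-based evaluation, assuming illustrative criteria and weights that would be published to employees; it only demonstrates that data unrelated to job performance never enters the score.

```python
# Minimal sketch of a skills-based evaluation with transparent, job-relevant
# criteria only. Feature names and weights are illustrative assumptions, not a
# production scoring model; no social-behavior data influences the result.
ALLOWED_CRITERIA = {          # criterion -> weight, published to employees
    "certified_skills": 0.4,
    "objective_delivery_metrics": 0.4,
    "peer_review_score": 0.2,
}

def evaluate(employee_record: dict) -> dict:
    """Score only whitelisted, job-relevant criteria and explain each term."""
    breakdown = {
        criterion: employee_record.get(criterion, 0.0) * weight
        for criterion, weight in ALLOWED_CRITERIA.items()
    }
    # Anything outside the whitelist (social media activity, spending habits,
    # personal relationships, ...) is never read, so it cannot affect the outcome.
    return {"total": round(sum(breakdown.values()), 3), "breakdown": breakdown}

print(evaluate({"certified_skills": 0.9, "objective_delivery_metrics": 0.7,
                "peer_review_score": 0.8, "social_media_sentiment": 0.1}))
```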
3. AI-Powered Insurance Risk Assessment
This uses social behavior data unrelated to health for insurance decisions, potentially creating discriminatory insurance practices based on lifestyle choices. It can lead to unfair denial of coverage or increased premiums based on non-health-related factors, discrimination against certain social groups, erosion of risk pooling principles, and privacy concerns regarding personal data use.
- Identification: Examine the types of social behavior data used in risk assessments and their relevance to actual health risks.
- Alternative: Develop risk assessment models based on verified health data and objective risk factors, as sketched below.
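A minimal sketch of restricting model inputs to verified health data follows; the feature list is an assumption for illustration only and is not actuarially validated.

```python
# Minimal sketch of limiting an insurance risk model to verified health data
# and objective risk factors. The allowed features are illustrative assumptions.
VERIFIED_HEALTH_FEATURES = {"age", "bmi", "blood_pressure", "smoker"}

def build_model_input(applicant_data: dict) -> dict:
    """Drop any attribute that is not a verified, health-related risk factor."""
    dropped = set(applicant_data) - VERIFIED_HEALTH_FEATURES
    if dropped:
        print(f"excluded non-health attributes: {sorted(dropped)}")
    return {k: v for k, v in applicant_data.items() if k in VERIFIED_HEALTH_FEATURES}

applicant = {"age": 42, "bmi": 27.5, "smoker": False,
             "social_media_activity": "high", "shopping_pattern": "late-night"}
print(build_model_input(applicant))   # only health-related factors remain
```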
4. Social Media Engagement Algorithms
Imperceptible visual or auditory cues in videos could influence users' emotional states and increase platform engagement. For example, an algorithm might subtly alter the rhythm or tone of video content to trigger specific emotional responses, leading to potentially harmful decisions or effects without the user's conscious awareness.
- Identification: Analyze the use of subtle visual and auditory cues in video content, focusing on their potential to trigger specific emotional responses.
- Alternative: Implement transparent recommendation systems with user-defined preferences, as sketched below.
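The sketch below illustrates a transparent, preference-driven recommender; the scoring logic and explanation format are assumptions for illustration only.

```python
# Minimal sketch of a transparent recommender driven by explicit user-defined
# preferences rather than covert emotional cues. Names are hypothetical.
def recommend(videos, user_preferences):
    """Score videos by declared topic preferences and return an explanation."""
    results = []
    for video in videos:
        score = sum(user_preferences.get(topic, 0) for topic in video["topics"])
        matched = sorted(set(video["topics"]) & set(user_preferences))
        results.append({
            "id": video["id"],
            "score": score,
            "why": f"matches your chosen topics: {matched}",
        })
    return sorted(results, key=lambda r: r["score"], reverse=True)

prefs = {"cooking": 2, "travel": 1}        # set explicitly by the user
videos = [{"id": "v1", "topics": ["cooking", "travel"]},
          {"id": "v2", "topics": ["news"]}]
for rec in recommend(videos, prefs):
    print(rec)
```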