The European Union’s Artificial Intelligence Act (EU AI Act) prohibits certain AI practices that pose unacceptable risks. The law considers these practices to significantly undermine fundamental rights, distort human behaviour, or cause harm to individuals or society. The prohibited practices, with illustrative examples, include:
- Deploying subliminal, manipulative, or deceptive techniques that distort individuals’ behaviour, impair their ability to make informed decisions, and cause significant harm. Examples include:
  - AI-driven adverts with subliminal cues that exploit consumers’ unconscious desires, leading to overspending or unhealthy consumption habits.
  - Virtual assistants that undermine informed decision-making by steering users toward specific political agendas or products.
  - AI in computer games that manipulates player behaviour into making excessive in-game purchases.
- Exploiting vulnerabilities related to age, disability, or socioeconomic circumstances to distort behaviour in a way that causes or is likely to cause significant harm. Examples include:
  - Using AI to encourage low-income individuals to take out high-interest loans.
  - Educational software that manipulates children’s choices or limits their learning potential based on stereotypes or biases.
  - Targeting elderly individuals with deceptive offers for unnecessary products or services, capitalising on cognitive impairments or isolation.
- Using biometric systems to infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation. Examples include:
  - AI systems profiling individuals based on facial features to infer their religious beliefs, leading to discriminatory treatment in public services or employment.
  - Systems that categorise people by sexual orientation or political views, resulting in exclusion or targeting in advertising or societal participation.
- Evaluating or classifying individuals or groups based on social behaviour or personal traits in ways that result in detrimental or unfavourable treatment unrelated to the original purpose of data collection. Examples include:
  - Scoring individuals based on social media activity to determine access to housing, loans, or educational opportunities.
  - Monitoring employees’ behaviour and punishing them for perceived misalignment with organisational culture.
  - Allocating public services based on AI-generated social scores that disadvantage vulnerable groups.
- Assessing or predicting an individual’s likelihood of committing criminal offences solely on the basis of profiling or personality traits, except when augmenting human assessments grounded in objective and verifiable facts directly linked to criminal activity. Examples include:
  - Identifying potential offenders based on socioeconomic background, place of residence, or prior associations.
  - Predictive policing models that disproportionately target ethnic minorities or marginalised communities, reinforcing existing biases.
  - Systems that flag individuals as risks based on psychological assessments rather than direct criminal behaviour.
- Compiling facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. Examples include:
  - AI systems that harvest facial images from social media platforms without consent to create mass surveillance databases.
  - Using in-store cameras to collect and store facial images without proper notification or consent.
- Using real-time remote biometric identification in publicly accessible spaces for law enforcement purposes. Examples include:
  - Scanning crowds at peaceful protests to identify participants for potential targeting.
  - Using remote biometric identification in shopping malls to monitor and track individuals in real time without evidence of criminal activity.
  - Deploying real-time facial recognition for general surveillance, rather than specific investigations, at public events.

  Exceptions include:
  - Searching for missing persons, abduction victims, or individuals subjected to human trafficking or sexual exploitation.
  - Preventing a substantial and imminent threat to life, such as an act of terrorism.
  - Identifying suspects in serious crimes, including murder, rape, armed robbery, drug and weapons trafficking, and organised crime.
The EU AI Act places compliance obligations on businesses that use AI systems, even when those systems are licensed from third-party providers. Businesses remain responsible for ensuring the AI complies with the law when it is offered to EU customers, and it is unlawful to develop or procure a prohibited AI system outside the EU and then use it within the EU.
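For teams screening their own AI use cases against the categories above, a simple inventory check can help structure the exercise. The sketch below is purely illustrative: the category keys, descriptions, and screening logic are my own assumptions for demonstration, not definitions from the Act, and any real assessment needs human legal review.

```python
# Illustrative screening helper. The seven category keys below are an
# informal summary of the prohibited practices discussed above; they are
# an assumption for this sketch, not official Article 5 terminology.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation": "Subliminal, manipulative, or deceptive techniques",
    "exploiting_vulnerabilities": "Exploiting vulnerabilities of age, disability, or socioeconomic circumstances",
    "biometric_categorisation": "Inferring sensitive attributes from biometric data",
    "social_scoring": "Social scoring causing detrimental or unfavourable treatment",
    "predictive_criminality": "Predicting criminal offences from profiling or personality traits alone",
    "facial_scraping": "Untargeted scraping of facial images",
    "realtime_rbi": "Real-time remote biometric identification in public spaces for law enforcement",
}

def screen_use_case(flagged_categories: set) -> list:
    """Map categories flagged during a human-led review to descriptions.

    Raises ValueError for unrecognised category keys so that typos in an
    inventory spreadsheet do not silently pass the screen.
    """
    unknown = flagged_categories - PROHIBITED_PRACTICES.keys()
    if unknown:
        raise ValueError(f"Unknown categories: {sorted(unknown)}")
    return [PROHIBITED_PRACTICES[c] for c in sorted(flagged_categories)]

# Example: an in-game purchase nudging feature flagged during review.
hits = screen_use_case({"subliminal_manipulation"})
print(hits)
```

The point of the structure is simply that every use case in an AI inventory gets checked against every category, with unknown flags rejected rather than ignored.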

An information security, risk management, internal audit, and governance professional with over 25 years of post-graduate experience gained across a diverse range of private and public sector projects in banking, insurance, telecommunications, health services, charities and more, both in the UK and internationally.