This article examines what qualifies as an Emotion Recognition System (ERS) under the EU AI Act. Because the Act prohibits ERSs in certain contexts, with the ban taking effect on February 2, 2025, the article aims to help businesses understand how the definition is drawn and how forthcoming regulatory guidance could quickly bring their systems within the Act’s scope.
Prohibited AI Practices in ERS
According to Article 5 of the EU AI Act, the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer the emotions of a natural person in the areas of the workplace and educational institutions is prohibited, except where the use of the AI system is intended for medical or safety reasons.
Purpose of the ERS Prohibition
According to Recital 44, the EU AI Act prohibits Emotion Recognition Systems (ERSs) in workplaces and educational institutions to protect privacy, prevent manipulation, and safeguard fundamental rights.
The prohibition aims to limit unethical surveillance and AI-driven decision-making that could negatively affect individuals. Exceptions apply only for medical and safety purposes.
What is an Emotion Recognition System?
According to Article 3(39) of the EU AI Act, an Emotion Recognition System (ERS) is defined as follows:
An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.
Three main components define this concept:
- an AI system for the purpose of identifying or inferring;
- emotions or intentions of natural persons;
- on the basis of their biometric data.
What are emotions or intentions of natural persons?
According to Recital 18 of the EU AI Act, emotions or intentions of natural persons refer to the following:
Emotions or intentions like happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, and amusement. Those expressions can be basic facial expressions, such as a frown or a smile, or gestures such as the movement of hands, arms or head, or characteristics of a person’s voice, such as a raised voice or whispering.
The EU AI Act excludes physical states such as pain or fatigue from this notion; for example, systems designed to detect pilot or driver fatigue to prevent accidents are out of scope. It also does not cover the mere detection of readily apparent expressions, gestures, or movements, unless these are used to identify or infer emotions. However, the Act does not clearly define what constitutes “intention” in this context.
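To make the definitional test concrete, the following is a minimal illustrative sketch, in Python, of how the three components of Article 3(39) and the Recital 18 carve-outs might be combined in a compliance checklist. The field names are our own assumptions, not terms from the Act, and this is a simplification for illustration, not legal advice.

```python
from dataclasses import dataclass

# Hypothetical, simplified model of the Article 3(39) test combined with
# the Recital 18 exclusions. Field names are illustrative assumptions.
@dataclass
class SystemProfile:
    is_ai_system: bool                    # component 1: an AI system...
    identifies_or_infers: bool            # ...for identifying or inferring
    targets_emotions_or_intentions: bool  # component 2: emotions or intentions
    detects_physical_states_only: bool    # e.g. pain or fatigue (excluded)
    mere_expression_detection: bool       # detects expressions/gestures without
                                          # inferring emotions (excluded)
    uses_biometric_data: bool             # component 3: biometric data

def is_emotion_recognition_system(p: SystemProfile) -> bool:
    """Simplified reading of Article 3(39) with the Recital 18 carve-outs."""
    if p.detects_physical_states_only:  # pain/fatigue detection is out of scope
        return False
    if p.mere_expression_detection:     # raw expression detection alone is out of scope
        return False
    return (p.is_ai_system
            and p.identifies_or_infers
            and p.targets_emotions_or_intentions
            and p.uses_biometric_data)

# Example: a driver-fatigue monitor is not an ERS under this reading.
fatigue_monitor = SystemProfile(
    is_ai_system=True, identifies_or_infers=True,
    targets_emotions_or_intentions=False,
    detects_physical_states_only=True,
    mere_expression_detection=False,
    uses_biometric_data=True,
)
print(is_emotion_recognition_system(fatigue_monitor))  # False
```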
Examples of Systems Detecting or Inferring Emotions
- wearable technologies (e.g. immersive headsets or smartwatches), which can infer excitement from heart rate, measure pupil dilation against visual prompts to infer attraction, or analyse moods by detecting body movement via motion recognition systems.
- retail sentiment analysis systems that analyse the facial expressions of in-store customers looking at products.
- out-of-home advertising billboards assessing the facial expressions of passers-by to analyse whether they like the advertisement shown.
- customer helplines integrating voice-based emotion analysis to determine customer satisfaction with products or services.
- (potentially in the future, when consumer-grade electroencephalography (EEG) systems are deployed on a large scale) measuring a consumer’s sentiments towards products or brands based on brainwave analysis (i.e. neuromarketing).
Source: Bird & Bird
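As a toy illustration of the kind of inference the first example above describes, the sketch below flags “excitement” from wearable heart-rate readings. The baseline window and the +25% threshold are invented assumptions with no relation to any real product; the point is that the inference step from a biometric signal to an emotion is what brings such a system within Article 3(39).

```python
import statistics

# Toy illustration only: a wearable-style "excitement" flag based on
# heart-rate readings (biometric data). Window size and threshold are
# invented assumptions, not a real algorithm.
def infer_excitement(heart_rates_bpm: list[float], baseline_window: int = 10) -> bool:
    """Return True if recent heart rate rises well above the resting baseline."""
    baseline = statistics.mean(heart_rates_bpm[:baseline_window])
    recent = statistics.mean(heart_rates_bpm[-3:])
    return recent > baseline * 1.25  # inference: biometric signal -> emotion

# Ten resting readings followed by a spike while viewing a product.
readings = [62, 64, 63, 61, 65, 63, 62, 64, 63, 62, 88, 92, 95]
print(infer_excitement(readings))  # True
```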
Examples of Systems Detecting or Inferring Intentions
- an intention to commit a crime (e.g. aggression detection systems inferring violent intent from facial expressions or body language, or gaze-tracking and social behaviour analysis systems predicting shoplifting intention).
- employees’ intentions to resign from their jobs based on how happy they seem on video calls (through facial expression analysis).
- suicidal intentions in suicide hotspots such as railways, high-rise buildings or bridges, detected through CCTV and computer vision analysis (e.g. relying on behavioural indicators such as individuals’ repeated transitions between walking and standing, or leaning against a railway fence with their head facing down…).
Source: Bird & Bird
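To show how behavioural indicators like those in the last example might be operationalised, here is a hypothetical sketch that counts walking/standing transitions in a sequence of pose labels. The labels would come from an upstream computer-vision model (not shown), and the threshold is an assumption for illustration only.

```python
# Hypothetical sketch: flag repeated walking/standing transitions, one of the
# behavioural indicators mentioned above. The threshold is an invented value.
def count_transitions(pose_labels: list[str]) -> int:
    """Count changes between consecutive pose labels (e.g. walking <-> standing)."""
    return sum(1 for a, b in zip(pose_labels, pose_labels[1:]) if a != b)

def flags_risk(pose_labels: list[str], threshold: int = 4) -> bool:
    """Inferring an intention from behaviour is what makes such a system an ERS."""
    return count_transitions(pose_labels) >= threshold

track = ["walking", "standing", "walking", "standing",
         "standing", "walking", "standing"]
print(flags_risk(track))  # True: 5 transitions >= threshold of 4
```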
Implications of Biometric Data
According to Article 3(34) of the EU AI Act, biometric data is defined as follows:
personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data;
However, Recital 14 of the EU AI Act further clarifies the interpretation of biometric data:
The notion of ‘biometric data’ used in this Regulation should be interpreted in light of the notion of biometric data as defined in Article 4, point (14) of Regulation (EU) 2016/679, Article 3, point (18) of Regulation (EU) 2018/1725 and Article 3, point (13) of Directive (EU) 2016/680. Biometric data can allow for the authentication, identification or categorisation of natural persons and for the recognition of emotions of natural persons.
The AI Act defines biometric data more broadly than the GDPR, as it does not require the data to uniquely identify an individual. Had that requirement been carried over, many Emotion Recognition Systems (ERSs) would fall outside the definition, since they process information derived from a person’s physical traits or behaviour without necessarily identifying them.
Failure to comply with the EU AI Act’s rules on emotion recognition systems can result in significant penalties, as the Act imposes strict enforcement measures. For prohibited practices, fines can reach up to €35 million or 7% of global annual turnover, whichever is higher. To understand the full scope of penalties and compliance requirements, refer to our detailed breakdown of EU AI Act penalties.
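As a quick worked example of how the “whichever is higher” cap in Article 99(3) operates, assuming a company with €1 billion in worldwide annual turnover:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Cap for prohibited-practice violations under Article 99(3):
    EUR 35M or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# 7% of EUR 1 billion is EUR 70M, which exceeds the EUR 35M floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```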