1. AI System Identification
Basic Characteristics
Is it a 'machine-based system'?
A machine-based system is any system that runs on a computer, server, or other device and is built from hardware and software. This covers virtually all digital systems, from simple applications to complex cloud solutions.
The system must run on a machine and be composed of hardware and software components
Is it designed to operate with 'varying levels of autonomy'?
Autonomy means the system's ability to make decisions or perform tasks without direct human intervention. For example, a system that automatically sorts emails or recommends products has a certain level of autonomy.
The system operates with a certain degree of independence from human control
Does it infer (perform inference)? 🔑 KEY TEST
Inference is the system's ability to derive new information - its outputs - from the input data it receives. This is the key difference between AI and conventional software: AI can 'deduce' something that was not explicitly programmed.
This is a mandatory condition that distinguishes AI from traditional software.
Check directly or perform the inference test below.
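To make the inference test concrete, here is a minimal, illustrative sketch contrasting explicitly programmed logic with a system that infers. The spam-filter scenario, the training data, and every identifier are hypothetical examples, not taken from the Act; it assumes scikit-learn is available.

```python
# Illustrative contrast between rule-based software and a system that infers.
# The spam-filter scenario and all names here are hypothetical examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def rule_based_filter(subject: str) -> bool:
    # Traditional software: every decision path is explicitly programmed.
    return "free money" in subject.lower()

# A system that infers: it learns how to generate outputs from example data
# rather than following only hand-written rules.
subjects = ["win free money now", "meeting agenda attached",
            "claim your free prize", "quarterly report draft"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(subjects)
model = LogisticRegression().fit(features, labels)

# The model classifies a subject line no rule was ever written for -
# it 'deduces' the output, which is the essence of the inference test.
print(model.predict(vectorizer.transform(["free prize tickets inside"])))
```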
Does it generate relevant "outputs"?
Outputs are specific results that the system produces - for example, a weather forecast, a movie recommendation, translated text, or a credit approval decision. They must be useful information, not just technical logs.
Outputs must be predictions, content, recommendations or decisions
Does the output 'influence physical or virtual environments'?
This includes any impact on the physical world (robotic actions, device control) or virtual environments (digital spaces, software systems, data flows). It also includes influencing human decisions through recommendations or ratings.
Outputs can influence physical or virtual environments
Does the system exhibit "adaptability"? (Optional characteristic)
The system's ability to learn and change its behaviour after deployment
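Taken together, the checks above form a conjunctive test in which adaptability is the only optional element. Below is a minimal sketch of how the test could be encoded; the SystemProfile class and its field names are illustrative assumptions, not terms defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Answers to the identification checks above (illustrative names)."""
    machine_based: bool           # runs on hardware and software components
    has_autonomy: bool            # operates with some independence from humans
    performs_inference: bool      # the key test: derives outputs from inputs
    generates_outputs: bool       # predictions, content, recommendations, decisions
    influences_environment: bool  # affects physical or virtual environments
    is_adaptable: bool = False    # optional: may learn after deployment

def is_ai_system(p: SystemProfile) -> bool:
    # All characteristics except adaptability must be present;
    # adaptability is optional and does not change the outcome.
    return all([
        p.machine_based,
        p.has_autonomy,
        p.performs_inference,
        p.generates_outputs,
        p.influences_environment,
    ])

# Example: a product recommender ticks every mandatory box.
recommender = SystemProfile(True, True, True, True, True, is_adaptable=True)
assert is_ai_system(recommender)
```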
3. Prohibited Practices (Unacceptable Risk)
WARNING: If you check any of these options, the system is PROHIBITED in the EU!
Does the system deploy subliminal techniques (beyond a person's consciousness) or purposefully manipulative or deceptive techniques that materially distort a person's behaviour in a way that causes or is reasonably likely to cause significant harm?
This prohibition requires ALL conditions to be met: 1) Use of subliminal/manipulative/deceptive techniques, 2) Material distortion of behaviour, 3) Causing or likely to cause significant harm.
Does the system exploit vulnerabilities of a person or group due to their age, disability, or a specific social or economic situation, with the objective or effect of materially distorting their behaviour in a way that causes or is reasonably likely to cause significant harm?
This requires: 1) Exploitation of vulnerabilities based on age, disability, or social/economic situation, 2) Material distortion of behaviour, 3) Causing or likely to cause significant harm to that person or another person.
Is it used for social scoring (evaluating or classifying persons based on social behaviour or personal characteristics) that leads to detrimental or unfavourable treatment in social contexts unrelated to where the data was generated, OR treatment that is unjustified or disproportionate to the social behaviour?
Social scoring is prohibited only when it leads to detrimental treatment that is either: 1) In unrelated social contexts (e.g., denying healthcare based on traffic violations), or 2) Unjustified/disproportionate to the behaviour.
Does it use emotion recognition in the workplace or in education institutions (except for medical or safety reasons)?
This prohibition covers emotion recognition systems in both workplaces AND educational institutions. Exceptions apply only for medical or safety reasons (e.g., detecting medical conditions or ensuring safety).
Does it use biometric categorization to deduce or infer a person's race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation?
This prohibition is limited to these specific sensitive attributes. It does not cover other biometric categorization like age or gender estimation.
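Structurally, each prohibition above is conjunctive (all of its listed conditions must be met), while the section as a whole is disjunctive (meeting any one prohibition makes the system prohibited). A sketch of that logic; the function names and condition flags are my own illustration, not wording from the Act.

```python
# Illustrative encoding of the prohibition checks above.
def manipulation_prohibited(uses_subliminal_or_manipulative: bool,
                            materially_distorts_behaviour: bool,
                            significant_harm_caused_or_likely: bool) -> bool:
    # Each individual prohibition is conjunctive:
    # ALL of its conditions must be met.
    return (uses_subliminal_or_manipulative
            and materially_distorts_behaviour
            and significant_harm_caused_or_likely)

def is_prohibited(prohibition_checks: list[bool]) -> bool:
    # The section as a whole is disjunctive: ANY single prohibited
    # practice makes the system prohibited in the EU.
    return any(prohibition_checks)

# Example: manipulative design that distorts behaviour but is not
# reasonably likely to cause significant harm does not, on its own,
# satisfy the first prohibition.
print(manipulation_prohibited(True, True, False))  # False
```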
4. High-Risk Systems
Part A: Safety Component
Is the AI system intended to be used as a safety component of a product (or is the system itself a product) that is covered by the Union harmonisation legislation in Annex I, AND is that product required to undergo a third-party conformity assessment?
Both conditions must be met: 1) The system must be a safety component of (or be) a product covered by laws in Annex I (e.g., machinery, medical devices, automotive), AND 2) That product must require third-party conformity assessment.
Part B: Areas in Annex III
1. Biometric identification and categorization of persons
Includes facial recognition, fingerprint matching, and systems processing other biometric data. The exception applies to simple identity verification (e.g., unlocking a phone with your face).
(EXCEPTION: identity verification for service access, device unlocking)
2. Management and operation of critical infrastructure
Road traffic safety and the supply of water, gas, and electricity
3. Education and vocational training
Access to education, student assessment, exam proctoring
4. Employment and worker management
Covers AI used in HR processes - automatic CV screening, performance evaluation, turnover prediction, or productivity monitoring.
5. Access to essential services - creditworthiness assessment or credit scoring
Key for the banking sector
6. Law enforcement
E.g., an anti-money-laundering (AML) model used by law enforcement authorities
7. Migration, asylum and border control
Risk assessment, document authenticity verification
8. Administration of justice and democratic processes
E.g., AI assisting judicial authorities in researching and interpreting facts and the law, or systems intended to influence the outcome of elections or referenda
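Putting section 4 together: Part A is a two-condition conjunctive test, and Part B asks whether the system's intended purpose falls within one of the eight Annex III areas, subject to the noted exceptions (such as simple biometric identity verification). A sketch under those assumptions; the enum and function names are illustrative, not taken from the Act.

```python
from enum import Enum, auto

class AnnexIIIArea(Enum):
    # The eight Annex III areas listed above (illustrative names).
    BIOMETRICS = auto()
    CRITICAL_INFRASTRUCTURE = auto()
    EDUCATION = auto()
    EMPLOYMENT = auto()
    ESSENTIAL_SERVICES = auto()
    LAW_ENFORCEMENT = auto()
    MIGRATION_BORDER = auto()
    JUSTICE_DEMOCRACY = auto()

def high_risk_part_a(safety_component_of_annex_i_product: bool,
                     product_needs_third_party_assessment: bool) -> bool:
    # Part A is conjunctive: BOTH conditions must be met.
    return (safety_component_of_annex_i_product
            and product_needs_third_party_assessment)

def high_risk_part_b(area: AnnexIIIArea | None,
                     biometric_verification_only: bool = False) -> bool:
    # Part B: the intended purpose falls within an Annex III area,
    # minus the noted exception for simple identity verification
    # (e.g., device unlocking, access to a service).
    if area is None:
        return False
    if area is AnnexIIIArea.BIOMETRICS and biometric_verification_only:
        return False
    return True

# Example: a CV-screening tool falls under area 4 (employment).
assert high_risk_part_b(AnnexIIIArea.EMPLOYMENT)
# Example: face unlock on a phone benefits from the verification exception.
assert not high_risk_part_b(AnnexIIIArea.BIOMETRICS,
                            biometric_verification_only=True)
```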