EU AI Act Annex III: High-Risk AI Systems Referred to in Article 6(2)
Introduction
Before diving into the specifics of high-risk AI systems, let's briefly situate the EU AI Act. The Act is a comprehensive legislative framework designed to govern the development, deployment, and use of AI across the European Union. It establishes rules for AI systems with the aim of ensuring they are safe, respect existing laws on fundamental rights, and are trustworthy. By setting these rules, the Act seeks to prevent the misuse of AI technologies while fostering a secure environment for their development and integration into society. Within this framework, high-risk AI systems are those that could threaten safety or fundamental rights.

The EU AI Act provides a detailed list of AI systems that are considered high-risk. These systems typically include:
- Biometric Identification and Categorization: AI systems used for biometric identification and categorization of natural persons in public spaces are considered high-risk. This includes facial recognition technology used by law enforcement agencies. The use of such technology raises significant privacy concerns, as it involves the collection and processing of sensitive personal data. Furthermore, inaccuracies in these systems can lead to wrongful identification, potentially resulting in legal and social consequences for individuals.
- Critical Infrastructure: AI systems employed in managing critical infrastructure, such as water, energy, and transportation, fall under the high-risk category due to the potential for significant harm if they malfunction. A failure in these systems could disrupt essential services, affecting large populations and leading to economic losses. Therefore, ensuring the reliability and security of AI applications in these sectors is paramount to safeguarding public welfare and maintaining societal functions.
- Education and Vocational Training: AI systems that determine access to educational or vocational training, such as those used in grading exams, are considered high-risk as they can significantly affect a person's career prospects. These systems must be free from bias to ensure fair and equitable treatment of all individuals. Inaccuracies or biases in AI-driven assessments could lead to unjust outcomes, impacting a person's educational trajectory and future employment opportunities.
- Employment and Worker Management: AI systems used for recruitment or managing workers, such as AI-driven interview analysis tools, are high-risk due to their potential impact on employment opportunities and worker rights. These tools must be designed to uphold fairness and transparency to avoid discriminatory practices. Moreover, they should complement human decision-making rather than replace it, ensuring that employment decisions are made with a comprehensive understanding of each candidate's capabilities and potential.
- Access to Essential Services: AI systems that determine access to essential services like credit scoring or social services are classified as high-risk because they can significantly impact individuals' lives. For instance, inaccuracies in credit scoring algorithms can lead to unjust denial of loans, affecting a person's financial stability. Similarly, biases in AI systems used for social services could result in unequal access to support, exacerbating social inequalities.
- Law Enforcement: AI technologies used in law enforcement, such as predictive policing systems, are high-risk due to their potential to affect individuals' fundamental rights. These systems must be carefully regulated to prevent misuse and ensure they do not perpetuate or exacerbate existing biases. Transparency and accountability are essential in the deployment of AI in law enforcement to maintain public trust and protect individual freedoms.
- Migration, Asylum, and Border Control: AI systems used in managing migration and border control, like lie detection tools at border crossings, are considered high-risk. These tools must be scrutinized to ensure they respect human rights and operate accurately. Errors or biases in such systems can have severe implications for individuals seeking asylum or migration, potentially leading to wrongful detention or denial of entry.
- Justice and Democratic Processes: AI applications in the justice system, such as those assisting in legal decision-making, are classified as high-risk due to their potential influence on democratic processes and individuals' rights. These systems must be transparent and fair to ensure they do not undermine judicial integrity. AI in the justice sector should enhance, rather than replace, human judgment, supporting legal professionals in delivering fair and just outcomes.
Key Use-Cases in Annex III Under Article 6(2)
Here are the main categories of AI systems that Annex III considers high-risk:
1. Biometric & Sensitive Attribute Systems
- Remote biometric identification systems (e.g., facial recognition from a distance), insofar as permitted under Union or national law.
- Systems intended to categorize people based on sensitive or protected attributes (race, gender, sexual orientation, health) via inference.
- Emotion recognition systems.

These systems are high-risk because they can impact fundamental rights (privacy, non-discrimination, human dignity).
2. Critical Infrastructure
- AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity.

Failures in such systems can have major safety or public-order consequences.
3. Education And Vocational Training
Use-cases include:
- AI systems determining access or admission to educational or vocational training institutions.
- Systems evaluating learning outcomes, steering learning, or assigning educational levels.
- Systems monitoring and detecting prohibited behaviour of students during tests.

The impact here includes fairness, equality of opportunity, and bias in educational outcomes.
4. Employment, Worker Management & Access to Self-Employment
Examples:
- AI for recruitment or selection of individuals (targeted job ads, resume filtering, candidate evaluation).
- AI for decisions affecting terms of work (promotions, terminations), task allocation based on behaviour or personal traits, and monitoring of employee performance or behaviour.

These use-cases are high-risk because of the potential for discrimination, lack of transparency, and unfair treatment.
5. Access to and Enjoyment of Essential Private Services and Public Services and Benefits
Includes:
- AI systems used by or on behalf of public authorities to evaluate eligibility for essential public assistance, healthcare services, and similar benefits.
- AI systems used to evaluate the creditworthiness of natural persons, and risk assessment and pricing for life and health insurance.
- AI systems used to evaluate and classify emergency calls, or to dispatch first responders.
- AI systems used by law-enforcement authorities for profiling natural persons in the detection, investigation, or prosecution of offences (in Annex III itself, this use-case sits under the separate law-enforcement point rather than point 5).

These use-cases touch on fundamental rights (equality, justice, life and health) and are therefore high-risk. A minimal screening sketch follows below.
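
To make the screening step concrete, here is a minimal, hypothetical Python sketch of how a team might map a system's intended purpose against the Annex III categories above. The category keys, trigger tags, and the `screen_system` helper are illustrative assumptions for this article, not part of the Act or any official tooling.

```python
# Hypothetical Annex III screening helper; illustrative only, not legal advice.
# Category keys and trigger tags are simplified paraphrases of the list above.
ANNEX_III_CATEGORIES = {
    "biometrics": {"remote biometric identification", "sensitive-attribute categorisation", "emotion recognition"},
    "critical_infrastructure": {"safety component", "road traffic", "water supply", "gas supply", "electricity supply"},
    "education": {"admissions decisions", "learning-outcome evaluation", "exam proctoring"},
    "employment": {"recruitment", "promotion or termination decisions", "task allocation", "worker monitoring"},
    "essential_services": {"public-assistance eligibility", "creditworthiness scoring", "insurance pricing", "emergency-call dispatch"},
}

def screen_system(intended_purpose_tags: set[str]) -> list[str]:
    """Return the Annex III categories whose trigger tags overlap the system's purpose tags."""
    return [
        category
        for category, triggers in ANNEX_III_CATEGORIES.items()
        if triggers & intended_purpose_tags  # non-empty set intersection
    ]

# Example: a hiring tool that filters CVs and monitors employee performance.
print(screen_system({"recruitment", "worker monitoring"}))  # -> ['employment']
```

In practice, this kind of tag matching is only a first filter: the actual classification under Article 6(2) turns on the system's intended purpose as documented by the provider, not on keyword overlap.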
Classifying AI systems as high-risk under the EU AI Act is crucial for several reasons:
- High-risk AI systems are subject to stringent requirements to ensure they are safe and do not infringe on individuals' fundamental rights. This includes conducting risk assessments, ensuring transparency in AI operations, and maintaining accountability. These measures are designed to prevent potential harms and ensure that AI technologies do not compromise individual safety or freedoms. By mandating rigorous checks and balances, the EU aims to create an environment where AI systems can be trusted to operate fairly and ethically. A simple checklist sketch after this list illustrates the idea.
- By regulating high-risk AI systems, the EU aims to build public trust in AI technologies. When people know that AI systems affecting critical aspects of their lives are carefully regulated, they are more likely to trust and accept these technologies. Trust is a fundamental component of successful AI integration into society, and clear regulations can help demystify AI operations, making them more accessible and understandable to the general public. This trust is essential for the widespread adoption and acceptance of AI technologies, allowing their benefits to be fully realized.
- While the EU AI Act imposes regulations, it also encourages innovation by setting clear guidelines. Developers and companies can innovate within a framework that prioritizes safety and ethical considerations, leading to responsible AI development. By providing a stable regulatory environment, the Act reduces uncertainty and fosters a competitive market where innovation thrives. Companies can focus on creating cutting-edge technologies that meet high ethical standards, positioning Europe as a leader in responsible AI development.
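
As a rough illustration of the first reason above, the sketch below models the kinds of obligations a provider of a high-risk system might track (risk assessment, transparency, accountability). The class name, field names, and `outstanding_items` helper are assumptions made for this sketch; they are not terms defined by the Act.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskComplianceChecklist:
    """Illustrative obligation tracker; field names are assumptions, not Act terminology."""
    risk_assessment_completed: bool = False      # risk assessments (first bullet above)
    transparency_docs_ready: bool = False        # transparency in AI operations
    accountability_owner_assigned: bool = False  # maintaining accountability
    human_oversight_defined: bool = False        # human oversight of the system

    def outstanding_items(self) -> list[str]:
        """Return the names of obligations not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = HighRiskComplianceChecklist(risk_assessment_completed=True)
print(checklist.outstanding_items())
# -> ['transparency_docs_ready', 'accountability_owner_assigned', 'human_oversight_defined']
```

A structure like this is useful mainly as an internal audit aid; actual conformity assessment under the Act involves documentation and procedures well beyond a boolean checklist.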
Conclusion
Annex III of the EU Artificial Intelligence Act serves as the backbone of the EU's high-risk AI classification system, ensuring that AI technologies impacting safety, rights, and public trust are held to the highest regulatory standards. By clearly defining what constitutes a "high-risk AI system" under Article 6(2), the EU aims to strike a balance between innovation and accountability. For businesses, developers, and policymakers, this annex is more than just a legal list; it is a compliance roadmap. If your AI solution falls within these categories (such as biometric identification, education, employment, or justice systems), early alignment with the AI Act's requirements is essential.