EU AI Act Annex II: List of Criminal Offences Referred to in Article 5(1), First Subparagraph, Point (h)(iii)
Introduction
With the adoption of the EU AI Act, organizations developing or deploying AI systems—especially in law enforcement or public-space surveillance—must pay attention to a lesser-known yet critical component: Annex II – List of Criminal Offences Referred to in Article 5(1), First Subparagraph, Point (h)(iii). This Annex defines the specific serious crimes in connection with which certain otherwise-prohibited AI practices (for example, real-time remote biometric identification in publicly accessible spaces) may be justified under strict conditions. Understanding this list is key to compliance and risk management.

What Is Article 5(1)(h)(iii)?
Article 5 of the AI Act lists prohibited AI practices. Point (h) of its first paragraph prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law-enforcement purposes, subject to three narrowly drawn exceptions. The third of these, point (h)(iii), permits such use to locate or identify a person suspected of having committed one of the criminal offences listed in Annex II, provided the offence is punishable in the Member State concerned by a custodial sentence or detention order with a maximum of at least four years. Annex II therefore does not define high-risk AI systems—that is the role of Annex III—but instead draws the boundary of a narrow exception to an outright prohibition. Understanding these offences is essential for law-enforcement bodies and the providers that serve them, and it underlines the need for continuous dialogue between legal experts and technologists as threats and technologies evolve.
Why Does This Matter For AI Applications?
- Scope of exception: If an AI system seeks to perform real-time biometric identification in a public space, the use is only justifiable if it targets individuals suspected of committing one of the offences in Annex II.
- Risk of non-compliance: Using such AI systems outside the scope of these listed crimes falls outside the exception and thus is likely prohibited, exposing providers and deployers to enforcement.
- Clarity for law enforcement and providers: Annex II gives a definitive inventory of offences; organizations can map whether their AI use case falls within or outside that list.
- Intersection with fundamental rights: Because biometric identification and surveillance raise serious privacy and rights issues, the AI Act limits exceptions to a narrow set of grave crimes, strengthening rights protection.
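The mapping exercise described above can be sketched as a simple pre-deployment gate. The abbreviated offence labels, the four-year sentence threshold, and the prior-authorisation condition reflect Article 5(1)(h)(iii) and Annex II; the function and data structures themselves are illustrative, not an official compliance tool.

```python
# Illustrative sketch only: a pre-deployment gate that checks whether a
# proposed real-time biometric identification (RBI) use could fall inside
# the narrow Article 5(1)(h)(iii) exception. Names are hypothetical.

# Offences drawn from Annex II (abbreviated labels, not legal definitions).
ANNEX_II_OFFENCES = {
    "terrorism",
    "trafficking_in_human_beings",
    "sexual_exploitation_of_children",
    "illicit_drug_trafficking",
    "illicit_weapons_trafficking",
    "murder_or_grievous_bodily_injury",
    "illicit_trade_in_human_organs",
    "illicit_trafficking_nuclear_materials",
    "kidnapping_or_hostage_taking",
    "icc_jurisdiction_crimes",
    "unlawful_seizure_of_aircraft_or_ships",
    "rape",
    "environmental_crime",
    "organised_or_armed_robbery",
    "sabotage",
    "participation_in_criminal_organisation",
}

def rbi_exception_may_apply(suspected_offence: str,
                            max_sentence_years: int,
                            prior_authorisation: bool) -> bool:
    """Return True only if every cumulative condition could be met;
    False means the use case is outside the exception."""
    return (
        suspected_offence in ANNEX_II_OFFENCES   # offence must be listed in Annex II
        and max_sentence_years >= 4              # punishable by at least four years
        and prior_authorisation                  # prior authorisation required
    )

# A use case targeting a minor offence fails the gate outright:
print(rbi_exception_may_apply("shoplifting", 2, True))   # False
print(rbi_exception_may_apply("terrorism", 10, True))    # True
```

A real assessment would of course rest on legal analysis of the national offence definitions, not string matching; the sketch only shows that the conditions are cumulative.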
Categories Of Criminal Offences
The list in Annex II comprises sixteen serious offences:

- Terrorism
- Trafficking in human beings
- Sexual exploitation of children and child sexual abuse material
- Illicit trafficking in narcotic drugs or psychotropic substances
- Illicit trafficking in weapons, munitions or explosives
- Murder and grievous bodily injury
- Illicit trade in human organs or tissues
- Illicit trafficking in nuclear or radioactive materials
- Kidnapping, illegal restraint or hostage-taking
- Crimes within the jurisdiction of the International Criminal Court
- Unlawful seizure of aircraft or ships
- Rape
- Environmental crime
- Organised or armed robbery
- Sabotage
- Participation in a criminal organisation involved in one or more of the offences listed above

The common thread is gravity: these are offences the legislator considered serious enough to justify an exception to the general prohibition on real-time remote biometric identification, and even then only where the offence carries a maximum custodial sentence of at least four years in the Member State concerned. Offences outside this list, however harmful, cannot ground the exception.
Importance for CIOs
For Chief Information Officers (CIOs), understanding these offences is vital for aligning IT processes with the organization's strategic goals. Ensuring that AI systems operate only within the boundaries the Act draws around these offences not only shields the organization from legal ramifications but also upholds ethical standards. CIOs play a critical role in integrating these regulations into the organization's IT strategy, driving the adoption of ethical AI practices that align with broader business objectives.
Aligning IT Processes With AI Regulations
The integration of AI into business operations requires a strategic approach that considers both technological and ethical dimensions, anchored in robust IT governance frameworks. Effective alignment ensures that AI technologies are not only compliant with regulations but also contribute positively to the organization's mission and values.
Strategic Alignment
To align AI initiatives with organizational objectives, CIOs should:
- Conduct Risk Assessments: Regularly evaluate AI systems for potential risks and ensure they comply with the EU AI Act. This involves identifying areas where AI could potentially violate regulations and developing strategies to mitigate these risks. By proactively addressing potential compliance issues, organizations can avoid costly penalties and reputational damage.
- Develop Ethical Guidelines: Establish clear ethical guidelines for AI usage, reflecting both legal obligations and the organization's values. These guidelines should be communicated across the organization to ensure a consistent understanding and application of ethical AI practices. By embedding ethical considerations into the AI development process, organizations can foster a culture of responsibility and integrity.
- Enhance Cross-Departmental Collaboration: Foster communication between IT and business teams to ensure a unified approach to AI governance. This collaboration is essential for aligning AI initiatives with business goals and ensuring that all stakeholders are engaged in the compliance process. By breaking down silos, organizations can create a cohesive strategy that leverages AI's potential while adhering to regulatory requirements.
Practical Implementation
Implementing these strategies requires practical steps:
- Training and Education: Provide training sessions for IT staff and business leaders on AI ethics and compliance. Education initiatives should focus on the specific requirements of the EU AI Act, as well as broader ethical considerations in AI development. By equipping teams with the knowledge and skills needed to navigate AI regulations, organizations can ensure adherence to legal and ethical standards.
- Monitoring and Reporting: Set up systems for continuous monitoring of AI operations and regular reporting on compliance status. These systems should track AI system performance, identify deviations from regulatory standards, and facilitate timely interventions. Regular reporting ensures transparency and accountability, enabling organizations to demonstrate their commitment to ethical AI practices.
- Policy Development: Develop and implement policies that address data protection, transparency, and accountability in AI systems. These policies should outline the organization's approach to AI governance, providing a clear framework for compliance and ethical decision-making. By codifying best practices into formal policies, organizations can establish a strong foundation for responsible AI use.
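As one hedged illustration of the monitoring-and-reporting step, deployments could be logged as structured records and periodically screened against an internal allow-list maintained by the legal team. The record fields and the allow-list below are assumptions made for the sketch, not anything prescribed by the Act.

```python
# Illustrative monitoring sketch: screen logged RBI deployments against an
# internal allow-list of offence labels. All field names are assumptions.
from dataclasses import dataclass

@dataclass
class DeploymentRecord:
    system_id: str
    justifying_offence: str   # offence label cited in the authorisation request
    authorised: bool          # prior judicial/administrative authorisation obtained

# Internal allow-list of abbreviated offence labels (maintained by legal).
ALLOWED_OFFENCES = {"terrorism", "kidnapping_or_hostage_taking", "rape"}

def compliance_report(records: list[DeploymentRecord]) -> list[str]:
    """Return human-readable flags for records needing escalation."""
    flags = []
    for r in records:
        if r.justifying_offence not in ALLOWED_OFFENCES:
            flags.append(f"{r.system_id}: offence '{r.justifying_offence}' not on allow-list")
        if not r.authorised:
            flags.append(f"{r.system_id}: missing prior authorisation")
    return flags

records = [
    DeploymentRecord("rbi-001", "terrorism", True),
    DeploymentRecord("rbi-002", "shoplifting", True),
    DeploymentRecord("rbi-003", "rape", False),
]
for flag in compliance_report(records):
    print(flag)
```

The point of the sketch is the workflow, not the data model: every deployment leaves an auditable record, and deviations surface automatically in the regular report rather than waiting for an external audit.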
Challenges and Opportunities
While aligning IT processes with the EU AI Act presents challenges, it also offers opportunities for innovation and competitive advantage. Navigating these challenges effectively can position organizations as leaders in ethical AI development, enhancing their reputation and market position.
Challenges
- Complexity of Regulations: The technical and legal complexity of the AI Act can be daunting. Simplifying communication and breaking down jargon is essential for effective implementation. Organizations must invest in resources and expertise to translate regulatory requirements into actionable strategies that are easily understood by all stakeholders.
- Resource Allocation: Ensuring sufficient resources, both human and technological, are available to meet compliance requirements. This includes investing in specialized personnel, technology solutions, and ongoing training programs. Balancing resource allocation with other organizational priorities is a key challenge that requires strategic planning and foresight.
Opportunities
- Enhanced Reputation: Demonstrating compliance with the AI Act can enhance an organization's reputation, building trust with consumers and stakeholders. By positioning themselves as ethical leaders in AI, organizations can differentiate themselves in the market and attract customers who value responsible business practices.
- Innovation Potential: Adopting ethical AI practices encourages innovation, driving business growth and differentiation in the market. By prioritizing ethics in AI development, organizations can explore new opportunities for creating value and solving complex problems, ultimately contributing to their long-term success.
Conclusion
While many organizations focus on the “high-risk AI systems” or “CE-marking-style” obligations under the AI Act, Annex II may be overlooked—but it is critically important. It delineates the boundary for one of the few permitted uses of intrusive AI surveillance: real-time biometric identification in public spaces. By anchoring the exception to a defined set of serious crimes, the AI Act ensures that such powerful AI tools are only used for the gravest offences.