EU AI Act Chapter II, Article 5: Prohibited AI Practices

Oct 9, 2025 by Maya G

Introduction 

The EU AI Act is a groundbreaking regulatory framework aimed at ensuring the safe and ethical use of AI in the European Union. It sets the standard for AI governance, addressing ethical concerns, risk management, and the protection of fundamental rights. The Act classifies AI systems based on their risk levels, from minimal to high risk, and imposes requirements accordingly.

The Importance Of AI Ethics Guidelines

The EU AI Act's focus on prohibited practices highlights the importance of adhering to AI ethics guidelines. These guidelines serve as a moral compass for developers and organizations, ensuring that AI technologies are designed and deployed in ways that are fair, transparent, and accountable.

Key Principles Of AI Ethics

  1. Transparency: AI systems should operate transparently, allowing users to understand how decisions are made and ensuring accountability. Transparency is fundamental to building trust and ensuring that AI systems are used responsibly.

  2. Fairness: AI technologies must be free from biases and discrimination, treating all individuals equitably. Ensuring fairness requires continuous evaluation and mitigation of biases within AI algorithms.

  3. Privacy: Protecting individuals' privacy is paramount, and AI systems should be designed to safeguard personal data. Privacy considerations must be integrated into the design and deployment of AI systems to prevent unauthorized data access.

  4. Accountability: Organizations and developers must be accountable for the AI systems they create, taking responsibility for their impact on society. Accountability involves establishing clear lines of responsibility and ensuring that ethical standards are upheld.

Historical Context

The development of the EU AI Act is rooted in growing concerns about AI's impact on society. As AI systems became more prevalent, instances of misuse and ethical dilemmas emerged. This historical backdrop provided the impetus for the EU to create a comprehensive framework to address these challenges.

Objectives of the EU AI Act

The primary objective of the EU AI Act is to create a harmonized regulatory environment across member states. By establishing clear guidelines and standards, the Act seeks to prevent the fragmentation of AI regulations within the EU. This harmonization is crucial for fostering innovation while safeguarding fundamental rights.

Classification of AI Systems

The EU AI Act classifies AI systems into four risk categories: minimal, limited, high, and unacceptable risk. This classification helps tailor regulatory requirements to the level of risk associated with each AI system. High-risk systems, for instance, face stringent requirements to ensure they do not pose threats to individuals or society.
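The tiered structure described above can be illustrated with a small sketch. The four category names follow the Act, but the mapping and example obligations below are hypothetical, chosen only to show how a tiered compliance model might be encoded, not how the Act itself must be implemented:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical mapping of tiers to example obligations; the Act itself
# spells these out in far more detail (e.g. Articles 5 and 6-15).
OBLIGATIONS = {
    RiskTier.MINIMAL: ["no mandatory requirements"],
    RiskTier.LIMITED: ["transparency disclosures to users"],
    RiskTier.HIGH: ["risk management system", "conformity assessment",
                    "human oversight", "logging and traceability"],
    RiskTier.UNACCEPTABLE: ["prohibited outright (Article 5)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.UNACCEPTABLE))  # → ['prohibited outright (Article 5)']
```

The point of the sketch is the shape of the regime: requirements scale with risk, and the top tier carries no requirements at all because the practice is simply banned.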

Article 5: Prohibited AI Practices

Article 5 of the EU AI Act is a crucial component of the legislation, as it specifies the AI practices that are outright prohibited due to their potential to cause harm or violate human rights. These prohibitions are designed to ensure that AI technologies are used ethically and responsibly, safeguarding the interests of individuals and society.

Subliminal Techniques

Subliminal manipulation is the first of the practices deemed too risky to be allowed within the EU:

  1. Definition and Risks: Subliminal techniques refer to AI systems that manipulate human behavior without the individual's awareness. The primary risk is the potential for these systems to influence decisions in non-transparent ways, leading to ethical concerns about autonomy and consent.

  2. Examples and Implications: Examples of subliminal techniques include AI-driven advertising that subtly affects consumer choices. The implications of such practices are significant, as they could undermine individuals' ability to make informed decisions, eroding trust in AI technologies.

  3. Rationale for Prohibition: The prohibition of subliminal techniques is rooted in the desire to protect individuals' autonomy and ensure transparency in AI interactions. By banning these practices, the EU aims to uphold ethical standards in AI applications.

Exploitation Of Vulnerabilities

AI systems that exploit the vulnerabilities of specific groups, such as children or individuals with disabilities, to materially distort their behavior are banned. This prohibition aims to protect vulnerable populations from being taken advantage of by AI technologies.

  1. Targeted Groups: Vulnerable groups, including children and individuals with disabilities, are particularly susceptible to exploitation by AI systems. These groups may lack the capacity to fully understand or resist manipulative AI-driven tactics.

  2. Potential Harms: The exploitation of vulnerabilities can lead to significant harms, including manipulation of behavior, loss of privacy, and even psychological distress. These harms necessitate robust protections to ensure the well-being of vulnerable populations.

  3. Protective Measures: The EU AI Act's prohibition on exploiting vulnerabilities underscores the importance of protective measures. These measures include rigorous oversight and ethical guidelines to prevent AI systems from taking advantage of those who are most at risk.

Social Scoring Systems

The use of AI to evaluate or score individuals based on their social behavior, which could lead to discrimination or unfair treatment, is forbidden. This practice is reminiscent of dystopian scenarios and poses a threat to personal freedoms and privacy.

  1. Concept and Concerns: Social scoring systems evaluate or classify individuals based on their social behavior or personal characteristics, which can lead to detrimental treatment in contexts unrelated to the data originally collected. These systems raise concerns about discrimination, as individuals may be unfairly judged or penalized based on arbitrary criteria.

  2. Dystopian Parallels: The concept of social scoring has been likened to dystopian scenarios depicted in literature and media. Such parallels highlight the potential dangers of allowing AI to dictate social standing and personal rights.

  3. Privacy Implications: Social scoring systems pose significant privacy concerns, as they require the collection and analysis of vast amounts of personal data. The prohibition of these systems reflects the EU's commitment to protecting individual privacy rights.

Remote Biometric Identification

The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited, except under narrowly defined conditions. This restriction is intended to prevent mass surveillance and protect individuals' privacy rights.

  1. Definition and Usage: Remote biometric identification involves using AI to recognize individuals in public spaces. While potentially useful for law enforcement, such systems also raise concerns about intrusive surveillance and privacy violations.

  2. Mass Surveillance Risks: The deployment of biometric identification systems in public spaces could lead to mass surveillance, eroding the anonymity that individuals expect in their daily lives. This risk is particularly concerning in democratic societies that value privacy.

  3. Law Enforcement Exceptions: While the general prohibition stands, the EU AI Act allows narrow exceptions for law enforcement, such as the targeted search for victims of abduction or the prevention of an imminent threat to life. These exceptions are tightly regulated and subject to prior authorization, ensuring that any use of biometric identification is necessary, proportionate, and respects fundamental rights.

Ethical AI and Risk Management

The prohibitions outlined in Article 5 align with the broader goals of the EU AI Act to promote ethical AI development and implementation. By banning these high-risk practices, the EU aims to foster trust in AI technologies and encourage innovation that respects human rights and values.

Building Trust in AI

Trust is a cornerstone of ethical AI. By prohibiting practices that could undermine trust, the EU AI Act seeks to create an environment where individuals feel confident in interacting with AI systems. This trust is essential for widespread adoption and acceptance of AI technologies.

Encouraging Responsible Innovation

The prohibition of high-risk AI practices does not stifle innovation but rather channels it towards responsible development. By setting ethical boundaries, the EU encourages innovators to explore AI applications that align with societal values and contribute positively to society.

Harmonizing Ethical Standards

The EU AI Act serves as a benchmark for ethical standards in AI governance. By harmonizing these standards across member states, the Act ensures that ethical considerations are consistently applied, fostering a unified approach to AI ethics and risk management.


Implementing AI Ethics In Practice

Organizations can implement AI ethics by:

  • Conducting Ethical Impact Assessments: These assessments evaluate the potential consequences of AI systems on individuals and society, identifying risks and recommending mitigation strategies.

  • Establishing Clear Guidelines and Protocols: Organizations should develop comprehensive guidelines and protocols for AI development and deployment, ensuring that ethical considerations are embedded in every stage of the process.

  • Training and Resources: Providing training and resources for employees is essential to ensure that they understand and apply AI ethics principles. Continuous education and awareness-raising initiatives can reinforce the importance of ethical AI practices.
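As a thought experiment, the first of these practices, the ethical impact assessment, could be tracked as structured data so that open risks are visible rather than buried in documents. Everything below (field names, severity labels, the example system and findings) is hypothetical, a sketch of one possible record-keeping scheme rather than a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class EthicsFinding:
    """One identified risk and its proposed mitigation (hypothetical schema)."""
    risk: str
    severity: str      # e.g. "low", "medium", "high"
    mitigation: str    # empty string means no mitigation yet

@dataclass
class EthicalImpactAssessment:
    system_name: str
    findings: list[EthicsFinding] = field(default_factory=list)

    def add_finding(self, risk: str, severity: str, mitigation: str) -> None:
        self.findings.append(EthicsFinding(risk, severity, mitigation))

    def unmitigated_high_risks(self) -> list[str]:
        """Flag high-severity findings that still lack a mitigation."""
        return [f.risk for f in self.findings
                if f.severity == "high" and not f.mitigation.strip()]

# Usage: record findings for a hypothetical candidate-screening system.
eia = EthicalImpactAssessment("candidate-screening-v2")
eia.add_finding("bias against protected groups", "high",
                "quarterly fairness audit with disaggregated metrics")
eia.add_finding("opaque ranking criteria", "medium",
                "publish model cards and decision explanations")
print(eia.unmitigated_high_risks())  # → []
```

A query like `unmitigated_high_risks()` gives a simple, auditable signal: an assessment is not complete while the list it returns is non-empty.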

Challenges in Upholding AI Ethics

Despite the emphasis on AI ethics, organizations may face challenges in implementation. These challenges include aligning ethical guidelines with business objectives, navigating complex ethical dilemmas, and ensuring that ethical considerations are consistently applied across all AI initiatives.

The Path Forward

As the EU AI Act is implemented, ongoing dialogue and collaboration between regulators, industry stakeholders, and civil society are essential. By working together, these groups can ensure that the Act effectively addresses emerging challenges and continues to promote ethical AI practices.

Global Implications

The EU AI Act sets a precedent for AI regulation worldwide. Other regions and countries may look to the Act as a model for their own regulatory frameworks, potentially leading to a more consistent global approach to AI governance.

A Commitment to Ethical AI

Ultimately, the EU AI Act reflects a commitment to ethical AI development and deployment. By prioritizing human rights and societal values, the Act paves the way for a future where AI technologies enhance, rather than undermine, individual and collective well-being.

Conclusion

The EU AI Act, particularly Article 5 on prohibited AI practices, represents a significant step forward in regulating AI technologies and ensuring their ethical use. By prohibiting high-risk practices, the EU aims to protect individuals' rights and promote responsible AI development. As organizations navigate the challenges and opportunities presented by the Act, they have the chance to lead the way in ethical AI innovation, ultimately contributing to a safer and more equitable digital future.