EU AI Act Chapter XIII - Article 110: Amendment To Directive (EU) 2020/1828

Oct 17, 2025 by Shrinidhi Kulkarni

Introduction 

The EU has consistently been at the forefront of regulating emerging technologies such as AI to ensure they align with ethical standards and do not compromise the rights and safety of its citizens. Directive (EU) 2020/1828, the Representative Actions Directive, allows qualified entities to bring collective redress actions on behalf of consumers harmed by infringements of Union law. Article 110 of the EU AI Act amends this directive so that infringements of the AI Act can also be pursued through such representative actions, reflecting the EU's proactive stance in shaping a digital future that prioritizes human dignity and societal well-being.


The Importance Of Regulation

AI regulation is essential for several reasons. Firstly, it ensures that AI technologies are developed and used in a manner that respects human rights and ethical principles. This is particularly important in applications involving sensitive data or decision-making processes that can significantly impact individuals' lives. Secondly, regulation fosters trust among consumers and businesses, encouraging the adoption of AI technologies by providing assurances that these technologies are safe and reliable. Moreover, regulations serve as a safeguard against the misuse of AI, preventing scenarios where AI systems could be used to infringe upon privacy or perpetuate biases. By establishing clear guidelines and accountability measures, the EU aims to preempt potential abuses and ensure that AI contributes positively to society. 

The Role Of Article 110 In EU AI Act Chapter XIII

Article 110 sits in Chapter XIII of the EU AI Act (Regulation (EU) 2024/1689), the chapter that amends other Union legislation. Its text is brief: it adds the AI Act to the Annex of Directive (EU) 2020/1828, the list of Union law provisions whose infringement can be challenged through representative actions. The practical effect is significant, because consumers harmed by breaches of the AI Act can now seek collective redress, and it reflects the EU's commitment to maintaining a regulatory environment capable of adapting to rapid technological advancements.

Key AI Act Obligations Enforced Via Article 110

  1. Clarification Of Scope: The AI Act defines precisely what constitutes an AI system, which is crucial for ensuring that all relevant technologies are covered, both by the Act itself and by the collective-redress mechanism that Article 110 extends to it. A precise definition ensures that emerging technologies do not fall through regulatory gaps, maintaining comprehensive oversight.

  2. Risk Assessment Requirements: Providers of high-risk AI systems must establish a risk management process, evaluating the potential impact of their systems on individuals and society and identifying and mitigating the associated risks. These assessments are fundamental in predicting and preventing negative outcomes, keeping organizations accountable for their AI systems' behavior.

  3. Transparency And Accountability: The AI Act emphasizes the need for transparency in AI systems. Organizations must provide clear information about how AI technologies are used and the decision-making processes involved, and they are held accountable for the outcomes of AI-driven decisions. Transparency not only builds trust but also empowers users by making them aware of how decisions affecting them are made.

  4. Data Protection And Privacy: AI systems must comply with existing data protection law, in particular the General Data Protection Regulation (GDPR). This alignment underscores the EU's commitment to safeguarding personal data, ensuring that AI systems do not become tools for infringing privacy rights.

  5. Ethical Guidelines: The Act calls for AI development and deployment to follow ethical principles such as fairness, non-discrimination, and the avoidance of bias. By embedding these considerations into AI design and deployment, the EU aims to foster AI systems that are not only effective but also equitable and just.
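For teams tracking these obligations internally, the themes above can be sketched as a simple compliance record. This is a minimal illustration only: the class and field names are hypothetical and invented for this sketch, not terminology drawn from the regulation.

```python
from dataclasses import dataclass

@dataclass
class AISystemComplianceRecord:
    """Hypothetical internal record tracking the themes above for one AI system."""
    system_name: str
    risk_assessment_done: bool = False       # documented risk assessment (theme 2)
    transparency_notice: str = ""            # user-facing explanation (theme 3)
    gdpr_reviewed: bool = False              # data-protection review (theme 4)
    ethics_guidelines_applied: bool = False  # fairness / bias review (theme 5)

    def open_gaps(self) -> list[str]:
        """List themes that still need attention before deployment."""
        gaps = []
        if not self.risk_assessment_done:
            gaps.append("risk assessment")
        if not self.transparency_notice:
            gaps.append("transparency notice")
        if not self.gdpr_reviewed:
            gaps.append("GDPR review")
        if not self.ethics_guidelines_applied:
            gaps.append("ethical guidelines")
        return gaps

record = AISystemComplianceRecord("loan-scoring-model", risk_assessment_done=True)
print(record.open_gaps())  # -> ['transparency notice', 'GDPR review', 'ethical guidelines']
```

A record like this makes the remaining work visible at a glance; in practice each gap would link to the evidence (assessment reports, notices, audit results) that closes it.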

Implications For Businesses And Developers

Article 110 has significant implications for businesses and developers working with AI technologies: non-compliance with the AI Act can now be challenged not only by regulators but also through representative actions brought on behalf of consumers. Meeting these obligations requires a proactive approach to risk management, transparency, and ethical considerations, and organizations must integrate them into their core business strategies to remain competitive.

Challenges And Opportunities

While compliance with AI regulations can present challenges, it also offers opportunities for businesses to differentiate themselves in the market. By demonstrating a commitment to ethical AI practices, organizations can build trust with consumers and gain a competitive edge. Companies that prioritize ethical AI use are likely to attract customers who value transparency and responsibility in their technology providers.

However, meeting these regulatory requirements can be resource-intensive, requiring investments in new processes, technologies, and personnel. Yet, these investments can yield long-term benefits, such as enhanced brand reputation and customer loyalty. Furthermore, by leading in compliance, businesses can influence industry standards and practices, positioning themselves as pioneers in ethical AI deployment.

Steps For Compliance

To prepare for the AI Act obligations that Article 110 makes enforceable through representative actions, organizations should consider the following steps:

  1. Conduct Comprehensive Risk Assessments: Evaluate the potential risks associated with AI technologies and implement measures to mitigate them. This involves not only technical assessments but also considering social and ethical implications.

  2. Enhance Transparency: Provide clear and accessible information about AI systems and their decision-making processes. Transparency should extend to all stakeholders, including customers, employees, and regulators, fostering an environment of openness and trust.

  3. Prioritize Data Protection: Ensure that AI applications comply with data protection laws and prioritize the privacy of individuals. This may involve implementing advanced data security measures and conducting regular audits to ensure compliance.

  4. Develop Ethical Guidelines: Establish and adhere to ethical guidelines that align with the principles outlined in Article 110. These guidelines should be integrated into every stage of the AI lifecycle, from design to deployment, ensuring that ethical considerations are at the forefront of AI development.

  5. Continuous Monitoring And Evaluation: Regularly monitor AI systems to ensure ongoing compliance and address any emerging risks or ethical concerns. Continuous evaluation allows organizations to adapt to new challenges and maintain their commitment to ethical AI use.
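Steps 1 and 5 above can be sketched as a simple risk register that is re-evaluated over time. The likelihood-times-impact scoring and the attention threshold below are common risk-management conventions, not requirements taken from the regulation itself, and the example risks are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a hypothetical AI risk register."""
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    mitigated: bool = False

    @property
    def score(self) -> int:
        # Conventional likelihood x impact scoring, range 1..25.
        return self.likelihood * self.impact

def needs_attention(register: list[Risk], threshold: int = 9) -> list[Risk]:
    """Flag unmitigated risks at or above the threshold, highest score first.

    Re-running this over an updated register is a crude form of the
    continuous monitoring described in step 5.
    """
    flagged = [r for r in register if not r.mitigated and r.score >= threshold]
    return sorted(flagged, key=lambda r: r.score, reverse=True)

register = [
    Risk("biased outcomes in credit decisions", likelihood=3, impact=5),
    Risk("personal data retained too long", likelihood=2, impact=4, mitigated=True),
    Risk("opaque decision logic", likelihood=4, impact=3),
]
for risk in needs_attention(register):
    print(f"[{risk.score}] {risk.description}")
```

The mitigated data-retention risk drops out of the report, while the two open risks are surfaced in priority order; a real register would also record owners, mitigation plans, and review dates.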

The Broader Context Of AI Regulation In The EU

Article 110 is part of a larger effort by the EU to establish a robust regulatory framework for AI. It is itself a provision of the AI Act (Regulation (EU) 2024/1689), adopted in 2024, which creates a unified approach to AI regulation across member states. The AI Act complements existing instruments, such as Directive (EU) 2020/1828, by providing detailed guidelines and requirements for AI technologies.

The AI Act

The AI Act introduces a risk-based approach to AI regulation, categorizing AI systems into tiers and imposing regulatory requirements proportional to each system's potential impact: practices posing unacceptable risk are prohibited outright, high-risk systems face strict obligations, limited-risk systems carry transparency duties, and minimal-risk systems face no additional requirements. This tailored approach allows for more effective regulation by focusing resources and oversight on the systems that pose the greatest potential harm.
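The tiered structure can be illustrated with a small sketch. The tier names follow the AI Act's risk-based approach, but the use cases and the mapping below are simplified illustrations invented for this example; real classification requires legal analysis against the Act's prohibited-practice list and its high-risk categories.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited"            # practices banned outright
    HIGH = "strict obligations"            # e.g. conformity assessment, risk management
    LIMITED = "transparency obligations"   # e.g. disclosing that a user faces an AI
    MINIMAL = "no additional obligations"  # everything below the other tiers

# Illustrative mapping only -- not a legal classification tool.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the regulatory burden associated with a use case's tier."""
    tier = EXAMPLE_USE_CASES[use_case]
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_USE_CASES:
    print(obligations_for(case))
```

The point of the tiering is visible in the mapping: the heavier the potential harm, the heavier the obligations, with the bulk of everyday AI systems landing in the minimal tier.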

The AI Act also emphasizes innovation, seeking to strike a balance between regulation and technological advancement. By fostering an environment where innovation can thrive within a clear regulatory framework, the EU aims to position itself as a leader in AI development. This approach not only protects citizens but also ensures that the EU remains competitive on the global stage, attracting investment and talent in the AI sector.

Conclusion

The EU's commitment to AI regulation, as evidenced by Article 110 of the AI Act and its amendment of Directive (EU) 2020/1828, reflects a broader effort to ensure that AI technologies are developed and deployed in a manner that aligns with ethical standards and protects the rights of individuals. Businesses and developers must navigate this regulatory landscape carefully, embracing the opportunities it presents while addressing the challenges of compliance. By adhering to the principles of the AI Act, organizations can contribute to the responsible and ethical use of AI, fostering trust and promoting innovation across the EU. As AI continues to shape our world, the EU's regulatory framework serves as a guiding light, ensuring that technological progress does not come at the expense of fundamental rights and values.