EU AI Act - Article 109: Amendment to Regulation (EU) 2019/2144

Oct 20, 2025 by Alex.

Introduction

The EU AI Act (Regulation (EU) 2024/1689) is a comprehensive piece of legislation designed to create a legal framework for AI technologies. It addresses various aspects of AI, including safety, accountability, transparency, and fairness. By doing so, the EU aims to build trust in AI systems and foster innovation within a regulated environment. The act is seen as a pioneering step, setting a precedent for AI governance globally, and is expected to influence similar legislative efforts in other regions.

The EU AI Act: Article 109

The legislation's emphasis on a unified approach to AI regulation across all EU member states reflects a strategic move to harmonize standards, thereby reducing fragmentation and confusion among businesses operating in multiple countries. This harmonization is crucial for creating a level playing field, where companies can innovate without fear of conflicting regulations. The EU AI Act positions the EU as a leader in ethical AI development, potentially serving as a model for international AI legislation.

Key Objectives Of The EU AI Act

The primary objectives of the EU AI Act are:

1. Ensuring Safety and Compliance

AI systems must meet specific safety standards to prevent harm to individuals and society. This involves rigorous testing and validation processes, ensuring that AI technologies are reliable and function within safe parameters. The emphasis on safety also extends to protecting users from unintended consequences that might arise from AI systems.

2. Promoting Innovation

By providing clear guidelines, the act encourages the development and deployment of AI technologies across various sectors. These guidelines aim to reduce uncertainty for businesses, fostering an environment where innovation can thrive. The act also supports research and development by setting clear expectations for compliance, which in turn accelerates technological advancements.

3. Protecting Fundamental Rights

The act seeks to prevent AI from infringing on fundamental rights, such as privacy and non-discrimination. This is achieved by implementing safeguards that ensure AI systems operate transparently and equitably. Protecting these rights is essential to maintain public trust in AI technologies and ensure that they are used for the greater good.

Article 109: Amendment to Regulation (EU) 2019/2144

Article 109 amends Regulation (EU) 2019/2144, the General Safety Regulation, which sets type-approval requirements for motor vehicles, including automated driving systems. The amendment ensures that where AI systems serve as safety components of vehicles, the AI Act's requirements for high-risk AI systems are taken into account within the vehicle type-approval framework. It is one of a series of amendments in the act that align existing Union product legislation with the new AI rules.

Beyond its automotive subject matter, the amendment illustrates the AI Act's broader regulatory approach: embedding AI requirements into the sectoral legislation that already governs product safety, rather than building a parallel regime. This keeps the governance framework coherent as AI technologies evolve and spread into new regulated products, and it lets existing approval procedures remain the single point of compliance for manufacturers.

4. Scope of Article 109

Article 109 shapes the scope of AI regulation in two main ways:

  • Connecting Vehicle Safety Law to the AI Act: While the AI Act as a whole reaches sectors such as healthcare, finance, and manufacturing, Article 109 specifically links the act to the automotive domain, ensuring that AI used in vehicle safety systems is subject to the same regulatory scrutiny as other high-impact applications. This underlines the EU's commitment to comprehensive AI governance across its existing product-safety legislation.
  • Defining High-Risk AI Applications: Under the AI Act, AI systems that serve as safety components of regulated products are treated as high-risk, alongside systems involved in critical decision-making such as medical diagnostics or financial assessments. Categorizing these systems as high-risk prioritizes their oversight, requiring them to meet the act's most stringent standards of safety and ethical practice.

5. Compliance Requirements

For AI systems falling under Article 109, compliance is mandatory. The requirements include:

  • Risk Assessment and Mitigation: Organizations must conduct thorough risk assessments and implement measures to mitigate identified risks. This involves evaluating the potential impact of AI systems on users and society, identifying vulnerabilities, and developing strategies to address them. Risk mitigation is an ongoing process, requiring regular updates and adaptations to new threats.
  • Transparency and Accountability: AI systems need to be transparent in their operation, with mechanisms for accountability in case of failures or adverse outcomes. Transparency involves clear documentation of AI decision-making processes, enabling users and regulators to understand how outcomes are determined. Accountability mechanisms ensure that organizations are held responsible for AI-related incidents, promoting trust and confidence in AI technologies.
  • Regular Audits: Continuous monitoring and auditing of AI systems are necessary to ensure ongoing compliance with the regulation. Audits provide an objective assessment of AI performance, helping identify areas for improvement and ensuring adherence to regulatory standards. Regular audits also serve as a mechanism for demonstrating compliance to regulators, reinforcing an organization's commitment to ethical AI practices.
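As a purely illustrative sketch (the regulation prescribes no particular tooling, and every field name and scoring rule here is a hypothetical convention), the risk-assessment and audit duties above are often tracked in something as simple as a risk register with a severity-times-likelihood score:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    """One identified risk for an AI system (hypothetical schema)."""
    description: str
    severity: int    # 1 (negligible) .. 5 (critical)
    likelihood: int  # 1 (rare) .. 5 (frequent)
    mitigation: str
    last_audited: date

    @property
    def score(self) -> int:
        # Common risk-matrix convention: severity multiplied by likelihood
        return self.severity * self.likelihood

def high_priority(register: list[RiskEntry], threshold: int = 12) -> list[RiskEntry]:
    """Return risks whose score meets the review threshold, worst first."""
    return sorted(
        (r for r in register if r.score >= threshold),
        key=lambda r: r.score,
        reverse=True,
    )

register = [
    RiskEntry("Biased outcomes in credit scoring", 4, 4,
              "Quarterly fairness evaluation on held-out data", date(2025, 9, 1)),
    RiskEntry("Model drift after deployment", 3, 3,
              "Automated monitoring with retraining trigger", date(2025, 9, 1)),
]

for risk in high_priority(register):
    print(risk.score, risk.description)
```

In this sketch the register doubles as audit evidence: the `last_audited` date supports the regular-audit requirement, and re-scoring entries over time reflects the ongoing nature of risk mitigation described above.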

Impact on IT and AI Governance

The amendment has significant implications for the IT industry, particularly in the realm of AI governance.

6. Strengthening AI Governance

Article 109 reinforces the importance of robust AI governance frameworks. Companies will need to implement governance structures that align with the regulation's requirements, ensuring responsible AI development and deployment. This includes establishing clear policies and procedures for AI oversight, designating roles and responsibilities, and fostering a culture of accountability. Strengthened governance frameworks enhance an organization's ability to manage AI risks effectively, promoting ethical and sustainable AI innovation.

The focus on governance also encourages organizations to adopt best practices in AI management, fostering a proactive approach to compliance. By embedding governance principles into their operations, companies can navigate the complex regulatory landscape with greater ease and confidence. This proactive stance positions organizations as leaders in responsible AI use, enhancing their reputation and competitive advantage.

7. Encouraging Ethical AI Use

The focus on safety and fundamental rights will encourage the ethical use of AI technologies. IT professionals will be tasked with ensuring that AI systems are designed and implemented with ethical considerations in mind. This involves incorporating ethical principles into AI development processes, such as fairness, transparency, and accountability. By prioritizing ethical considerations, organizations can create AI systems that respect user rights and contribute positively to society.

Ethical AI use also requires ongoing education and training for IT professionals, equipping them with the skills and knowledge needed to navigate ethical dilemmas. Organizations that invest in ethical training demonstrate their commitment to responsible AI practices, fostering a culture of integrity and trust. This commitment to ethics not only benefits society but also enhances an organization's long-term success and sustainability.

8. Innovation Within Boundaries

While the regulation imposes certain restrictions, it also provides a clear framework within which innovation can thrive. By defining the boundaries of acceptable AI use, the EU AI Act fosters a secure environment for technological advancement. This clarity allows organizations to innovate with confidence, knowing that their efforts align with regulatory expectations. The act encourages creative problem-solving and experimentation, driving progress within a framework of safety and responsibility.

The balance between regulation and innovation is crucial for maintaining a dynamic and competitive AI landscape. By setting clear expectations, the EU AI Act enables organizations to explore new frontiers in AI development while safeguarding societal interests. This approach ensures that technological advancements contribute to economic growth and societal well-being, creating a win-win scenario for all stakeholders.

Preparing for Compliance

Organizations in the IT sector need to take proactive steps to comply with Article 109. Here are some key actions:

9. Conducting Comprehensive Risk Assessments

Assess the risks associated with your AI systems. Identify potential safety concerns and fundamental rights implications, and develop strategies to mitigate these risks effectively. This means evaluating AI applications along both technical and ethical dimensions; identifying and addressing risks early prevents adverse outcomes and makes AI systems more reliable.

  • Implementing Transparent AI Practices: Ensure that AI systems are transparent in their operation. Provide clear documentation on how AI decisions are made, and establish accountability mechanisms for any negative outcomes. Transparency involves making AI processes understandable to users, enabling them to make informed decisions and trust the technology. By fostering transparency, organizations can enhance user confidence and promote positive engagement with AI systems.
  • Establishing Governance Frameworks: Develop robust governance frameworks to oversee AI development and deployment. Assign responsibility for compliance to dedicated teams or individuals and establish regular auditing processes. Governance frameworks provide a structured approach to managing AI risks, ensuring that organizations adhere to regulatory requirements and ethical standards. By embedding governance principles into their operations, organizations can navigate the complexities of AI compliance with confidence.
  • Fostering a Culture of Compliance: Promote a culture of compliance within your organization. Educate employees about the EU AI Act and its implications, and encourage them to prioritize ethical considerations in AI development. This involves providing training and resources to help employees understand the regulatory landscape and their role in maintaining compliance. By fostering a culture of compliance, organizations can ensure that all employees are aligned with their ethical and regulatory commitments.
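The transparency and governance actions above are usually anchored in written documentation for each system. As an illustrative sketch only (the AI Act requires technical documentation for high-risk systems but does not prescribe this format, and every field name here is hypothetical), such a record might start as simply as:

```python
import json

# Hypothetical minimal transparency record for one AI system; real
# documentation under the AI Act is far more extensive than this.
transparency_record = {
    "system_name": "loan-eligibility-scorer",
    "intended_purpose": "Pre-screening of consumer credit applications",
    "risk_category": "high-risk",
    "training_data_summary": "Anonymised application records, 2018-2024",
    "human_oversight": "Final decisions reviewed by a credit officer",
    "accountability_contact": "compliance@example.com",
}

print(json.dumps(transparency_record, indent=2))
```

Keeping such records in a structured, machine-readable form makes them easy to version, audit, and hand to regulators on request, which supports the auditing and accountability practices described above.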

Conclusion

The act's focus on harmonizing AI regulations across the EU provides a model for international cooperation, setting a standard for global AI governance. By balancing safety, innovation, and fundamental rights, the EU AI Act supports technological advancement while protecting societal interests. As organizations adapt to these requirements, including sector-specific provisions such as Article 109, they have the opportunity to lead the way in ethical AI development, ensuring that their AI technologies contribute positively to society and drive sustainable growth.