EU AI Act Chapter III - High-Risk AI Systems, Article 25: Responsibilities Along the AI Value Chain

Oct 10, 2025 by Maya G

Article 25 of the EU AI Act sets out the responsibilities of stakeholders along the AI value chain, emphasizing the need for risk management and responsible AI development. These responsibilities are crucial not only for compliance with the law but also for fostering innovation that aligns with ethical standards. By delineating specific duties for developers, manufacturers, distributors, and users, the EU AI Act seeks to create a cohesive framework that ensures the safety and transparency of AI systems from conception to deployment.

High-Risk AI Systems

High-risk AI systems are those that pose significant risks to health, safety, or fundamental rights. These systems are often deployed in critical areas such as healthcare, transportation, and law enforcement, where their malfunction or misuse could lead to serious, and potentially life-threatening, consequences. For instance, an AI system used in autonomous vehicles must operate with precision to prevent accidents, while AI in healthcare settings must ensure accurate diagnoses to protect patient well-being.

Responsibilities Along the AI Value Chain

Article 25 of the EU AI Act outlines the specific responsibilities of the different stakeholders involved in the development and deployment of high-risk AI systems. (The Act itself frames these actors as providers, importers, distributors, and deployers; this article groups the roles informally as developers, manufacturers, distributors, and users.) These responsibilities are designed to ensure that AI systems are safe, ethical, and transparent throughout their lifecycle. A clear understanding of these duties is vital for all parties, both to ensure compliance and to foster a culture of accountability and trust in AI technologies.

Developers

Developers are at the forefront of AI creation, and they have a critical role in ensuring that AI systems are designed responsibly. Their responsibilities include:

  • Risk Assessment: Conducting thorough risk assessments to identify potential hazards associated with the AI system. This involves evaluating the system's impact on safety and fundamental rights, and considering various scenarios in which the system could fail. By anticipating potential issues, developers can design systems that are resilient and adaptable to diverse conditions.

  • Data Quality: Ensuring that the data used to train AI systems is accurate, relevant, and as free from bias as possible. High-quality data is essential for developing reliable and fair AI systems. Developers must implement rigorous data verification processes and use diverse data sets to minimize biases that could lead to unfair or inaccurate outcomes; a minimal sketch of one such check appears after this list.

  • Transparency: Designing AI systems that are transparent and explainable. Users should be able to understand how the system makes decisions and what factors influence its outputs. This transparency is crucial for building trust and allows users to make informed decisions about the system's deployment. Developers should prioritize creating user-friendly interfaces that clearly communicate how AI systems function and their limitations.
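
To make the data-quality point concrete, here is a minimal sketch of a disparity check on a labeled dataset. It assumes records carry a group attribute to audit ("region" here) and a binary outcome label; the field names, the example data, and the 20% threshold are all illustrative assumptions, not requirements drawn from the Act.

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key, label_key):
    """Share of positive labels within each group of a dataset.

    A large gap between groups is a simple red flag for label bias:
    not proof of unfairness, but a signal of where to look closer.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for record in records:
        group = record[group_key]
        counts[group][0] += int(bool(record[label_key]))
        counts[group][1] += 1
    return {group: pos / total for group, (pos, total) in counts.items()}

# Hypothetical training records; 'region' is the attribute being audited.
data = [
    {"region": "north", "approved": 1},
    {"region": "north", "approved": 1},
    {"region": "north", "approved": 0},
    {"region": "south", "approved": 1},
    {"region": "south", "approved": 0},
    {"region": "south", "approved": 0},
]

rates = positive_rate_by_group(data, "region", "approved")
gap = max(rates.values()) - min(rates.values())
print(rates)       # {'north': 0.667, 'south': 0.333}
if gap > 0.2:      # illustrative threshold, not a regulatory one
    print(f"Warning: positive-rate gap of {gap:.0%} across regions")
```

A check like this belongs early in the data pipeline, so that skewed training sets are flagged before they shape the model's behavior.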

Manufacturers

Manufacturers are responsible for producing and distributing AI systems. Their responsibilities include:

  • Compliance: Ensuring that AI systems comply with the requirements set forth in the EU AI Act. This includes adhering to safety standards and undergoing necessary evaluations before market release. Manufacturers must stay informed about evolving regulatory requirements and integrate compliance checks into their production processes.

  • Documentation: Providing comprehensive documentation that outlines the AI system's functionalities, limitations, and potential risks. This documentation should be easily accessible to users and stakeholders. By offering clear and detailed information, manufacturers help users understand the system's capabilities and constraints, enabling safer and more effective use.

  • Monitoring: Implementing mechanisms to monitor the AI system's performance and identify issues that arise during operation. Continuous monitoring helps detect and address problems early, preventing potential harm. Manufacturers should establish robust feedback loops and employ real-time analytics to maintain ongoing system integrity; a minimal sketch of such a monitor follows this list.
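
As a concrete illustration of the monitoring point, the following is a minimal sketch of a rolling-window performance monitor. The baseline accuracy, window size, and tolerance are hypothetical values a manufacturer would calibrate for its own system, not figures prescribed by the Act.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window monitor that flags a drop in a quality metric.

    Feed it one score per prediction (e.g. 1.0 for a correct outcome,
    0.0 for an incorrect one); it alerts when the recent average falls
    below the release-time baseline by more than the tolerance.
    """

    def __init__(self, baseline, window=100, tolerance=0.05):
        self.baseline = baseline      # accuracy measured at release time
        self.tolerance = tolerance    # acceptable degradation
        self.scores = deque(maxlen=window)

    def record(self, score):
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        window_full = len(self.scores) == self.scores.maxlen
        if window_full and avg < self.baseline - self.tolerance:
            return f"ALERT: rolling accuracy {avg:.2f} below baseline {self.baseline:.2f}"
        return None

monitor = PerformanceMonitor(baseline=0.92, window=50)
# In production this would be fed by the system's outcome feedback loop;
# here we simulate a gradual degradation.
for outcome in [1] * 30 + [0] * 20:
    alert = monitor.record(outcome)
    if alert:
        print(alert)
        break
```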

Distributors

Distributors play a key role in making AI systems available to users. Their responsibilities include:

  • Information Sharing: Ensuring that users have access to all relevant information about the AI system, including its intended use, limitations, and potential risks. Distributors should facilitate open communication channels to provide users with timely updates and clarifications.

  • User Support: Providing adequate support to users to help them understand and effectively use the AI system. This includes offering training and assistance as needed. Distributors should prioritize customer service and develop comprehensive support resources to enhance user experience and satisfaction.

  • Feedback Mechanism: Establishing a feedback mechanism that allows users to report issues or concerns related to the AI system. Feedback is crucial for identifying areas for improvement and addressing user needs. Distributors should implement easy-to-use feedback channels and ensure that user input is promptly addressed and incorporated into system improvements; a minimal sketch of such a reporting channel follows this list.
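
To illustrate what a lightweight feedback channel might look like in practice, here is a minimal sketch of an append-only incident log. The schema, severity levels, and identifiers are assumptions made for illustration; the Act imposes reporting and traceability obligations but does not prescribe a data format.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class IncidentReport:
    """One user-submitted issue about a deployed AI system.

    Field names and severity levels are illustrative, not mandated.
    """
    system_id: str
    reporter: str
    description: str
    severity: str = "minor"  # e.g. minor / major / serious
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_report(report: IncidentReport, path: str = "incidents.jsonl") -> None:
    # An append-only JSON Lines file keeps an auditable trail of feedback.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(report)) + "\n")

log_report(IncidentReport(
    system_id="triage-model-v3",   # hypothetical system identifier
    reporter="clinic-42",
    description="Confidence scores missing from the explanation panel.",
))
```

The append-only layout matters more than the specific fields: an immutable, timestamped trail is what lets issues be traced back and escalated to the manufacturer.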

Users

Users are the end recipients of AI systems, and their responsibilities include:

  • Understanding Limitations: Being aware of the AI system's limitations and using it within its intended scope. Users should not rely on the system for tasks beyond its capabilities. By recognizing the system's boundaries, users can prevent misuse and enhance safety.

  • Compliance: Adhering to any guidelines or regulations associated with the use of the AI system. Users should ensure that their use aligns with legal and ethical standards. Awareness and adherence to these standards are vital for maintaining compliance and avoiding potential legal issues.

  • Reporting Issues: Reporting any malfunctions or unexpected behaviors of the AI system to the manufacturer or distributor. Timely reporting helps address issues and prevent further problems. Users play a critical role in the feedback loop that drives continuous improvement and ensures system reliability.

Risk Management in AI

Effective risk management is a cornerstone of responsible AI development and deployment. The EU AI Act emphasizes the importance of identifying, assessing, and mitigating risks associated with high-risk AI systems. A structured approach to risk management ensures that potential issues are addressed proactively, minimizing negative impacts on users and society.

  • Risk Identification: Identifying potential risks and hazards associated with the AI system. This involves analyzing the system's impact on users, the environment, and society. A comprehensive risk identification process considers a wide range of factors, including technical, operational, and societal risks.

  • Risk Assessment: Evaluating the likelihood and severity of identified risks. This assessment helps prioritize risks and allocate mitigation resources. By understanding the risk landscape, stakeholders can develop targeted strategies to address the most pressing concerns first; a toy example of this scoring appears after this list.

  • Risk Mitigation: Implementing measures to reduce or eliminate risks. This may involve adjusting the AI system's design, improving data quality, or enhancing user training. A proactive approach to mitigation involves ongoing evaluation and adaptation of risk management strategies to address emerging challenges.

  • Continuous Monitoring: Regularly monitoring the AI system's performance to detect new risks or changes in existing risks. Continuous monitoring ensures that risk management strategies remain effective. By maintaining vigilance, stakeholders can quickly adapt to evolving circumstances and maintain system integrity.
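
As a toy illustration of the assessment step, the sketch below scores each identified risk by likelihood times severity and ranks the register for mitigation. The 1-to-5 scales and the example risks are illustrative assumptions, not a methodology mandated by the Act.

```python
# A toy risk register: each risk gets a likelihood and a severity
# score from 1 (low) to 5 (high); their product gives a simple
# priority ranking for the mitigation step.

risks = [
    {"risk": "training data drifts from deployment population", "likelihood": 4, "severity": 3},
    {"risk": "sensor failure feeds corrupt input to the model",  "likelihood": 2, "severity": 5},
    {"risk": "operators over-trust low-confidence outputs",      "likelihood": 3, "severity": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["severity"]

# Address the highest-scoring risks first.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["risk"]}')
```

Continuous monitoring then closes the loop: as conditions change, likelihood and severity scores are revised and the register is re-ranked.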

The Importance of Responsible AI Development

Responsible AI development is crucial for building trust and ensuring the safe and ethical use of AI systems. By adhering to the responsibilities outlined in Article 25 and implementing effective risk management practices, stakeholders can contribute to the development of AI systems that benefit society while minimizing potential harms. This approach fosters an environment where innovation can thrive alongside ethical considerations.

The EU AI Act serves as a framework for promoting responsible AI development and ensuring that high-risk AI systems are subject to rigorous scrutiny. By fostering collaboration among developers, manufacturers, distributors, and users, the Act aims to create a safer and more transparent AI ecosystem. A collective commitment to responsibility and ethics in AI development is essential for harnessing the transformative potential of AI technologies in a way that aligns with societal values.

Conclusion

The EU AI Act's Chapter III, Article 25, highlights the importance of shared responsibility along the AI value chain. By understanding and fulfilling their roles, stakeholders can ensure that high-risk AI systems are developed and deployed responsibly. Through effective risk management and adherence to ethical standards, we can harness the potential of AI while safeguarding public safety and fundamental rights. As AI continues to evolve, it is essential for all parties involved to remain vigilant and committed to responsible AI practices. By doing so, we can build a future where AI technologies are trusted allies in improving our lives and advancing society.