EU AI Act Chapter III - High-Risk AI Systems - Section 3: Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties

Oct 8, 2025 by Rahul Savanur

Introduction

Before delving into the obligations, it is essential to understand what constitutes a high-risk AI system. Under the EU AI Act, an AI system is classified as high-risk based on its intended purpose and its potential impact on individuals' rights, safety, and well-being. Such systems are typically deployed in sectors like healthcare, transportation, and law enforcement, where the consequences of errors can be severe: a flawed AI diagnostic tool could lead to incorrect treatment, while an error in a transport system could cause an accident. The classification therefore considers both the function of the AI system and the context in which it is deployed.


Obligations Of Deployers Of High-Risk AI Systems

Deployers, the natural or legal persons who use AI systems under their authority, also have specific obligations under the EU AI Act. These responsibilities ensure that AI systems are used appropriately and ethically.

  • Implementing Safety Measures: Deployers must implement adequate safety measures to prevent harm from AI systems. This includes regular monitoring and maintenance to ensure the system operates as intended and does not pose risks to users or the public.

  • Regular Monitoring and Maintenance: Continuous monitoring and maintenance keep AI systems safe and effective over time. Deployers must establish processes for regularly checking the system's performance, identifying deviations from expected behavior, and taking corrective action; a minimal monitoring sketch follows this list. Regular maintenance helps prevent malfunctions and keeps the system operating as documented.

  • Addressing System Vulnerabilities: Deployers must proactively address vulnerabilities within AI systems. This involves conducting security assessments to identify potential weaknesses that could be exploited by malicious actors. Deployers should implement robust security measures, such as firewalls and encryption, to protect the system from unauthorized access and data breaches.

  • Ensuring System Resilience: System resilience is critical for minimizing disruption and harm in the event of a failure. Deployers should design AI systems with redundancy and failover mechanisms so they can continue functioning, even in a degraded mode, when part of the system fails (see the failover sketch after this list). This resilience helps maintain trust in AI systems and minimizes the impact of unexpected issues.

  • User Training and Support: Providing adequate training and support to users is essential. Deployers must ensure that users understand how to interact with the AI system safely and effectively. This reduces the likelihood of misuse or accidents resulting from improper operation.

  • Developing Comprehensive Training Programs: Deployers should develop comprehensive training programs tailored to different user groups. These programs should cover the AI system's functionality, safe operation practices, and troubleshooting procedures. By equipping users with the necessary knowledge and skills, deployers can reduce the risk of errors and enhance user confidence in the system.

  • Offering Ongoing User Support: Ongoing user support is crucial for addressing any issues or questions that arise during the use of AI systems. Deployers should provide accessible support channels, such as help desks or online forums, where users can seek assistance. Prompt and effective support helps resolve user concerns quickly, ensuring a smooth and safe experience.

  • Encouraging Responsible Use: Deployers should encourage responsible use of AI systems by promoting ethical guidelines and best practices. This involves educating users about the potential impacts of their actions and encouraging them to consider the broader implications of their use of AI. By fostering a culture of responsibility, deployers can help ensure that AI systems are used in ways that benefit society.

  • Reporting and Addressing Issues: Deployers are responsible for reporting any incidents or malfunctions involving their AI systems to the relevant authorities. Prompt reporting allows for timely intervention and resolution of issues, minimizing potential harm.

  • Establishing Incident Reporting Protocols: Deployers must establish clear protocols for reporting incidents involving AI systems. These protocols should outline the steps for identifying, documenting, and communicating issues to the relevant authorities; a sketch of a structured incident record appears after this list. A structured approach to incident reporting ensures that issues are addressed quickly and effectively.

  • Collaborating with Regulatory Authorities: Collaboration with regulatory authorities is essential for addressing issues with AI systems. Deployers should work closely with these authorities to investigate incidents, share relevant data, and implement corrective actions. This collaboration ensures that issues are resolved in compliance with regulatory requirements and helps improve the overall safety of AI systems.

  • Learning from Incidents: Deployers should view incidents as opportunities to learn and improve their AI systems. After addressing an issue, deployers should conduct a thorough analysis to identify the root cause and implement measures to prevent similar incidents in the future. This continuous improvement process helps enhance the safety and reliability of AI systems over time.
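
To make the monitoring obligation above concrete, the following is a minimal sketch of a periodic performance check in Python. The metric, thresholds, and sample data are hypothetical placeholders; a real deployer would compare observed behavior against the performance characteristics documented in the provider's instructions for use.

    # Minimal sketch of a periodic performance check for a deployed model.
    # The metric, thresholds, and sample data are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class MonitoringResult:
        accuracy: float          # observed accuracy over the review window
        within_tolerance: bool   # True if no corrective action is needed

    def check_performance(labels: list, predictions: list,
                          expected_accuracy: float = 0.95,
                          tolerance: float = 0.02) -> MonitoringResult:
        """Compare observed accuracy against the level documented at deployment."""
        correct = sum(1 for y, p in zip(labels, predictions) if y == p)
        accuracy = correct / len(labels) if labels else 0.0
        return MonitoringResult(accuracy, accuracy >= expected_accuracy - tolerance)

    result = check_performance(labels=[1, 0, 1, 1], predictions=[1, 0, 0, 1])
    if not result.within_tolerance:
        # Deviation from expected behavior: trigger the deployer's
        # corrective-action process (investigation, retraining, escalation).
        print(f"Performance deviation detected: accuracy={result.accuracy:.2f}")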
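
One common resilience pattern, consistent with the redundancy point above, is to route around a failing primary component. The sketch below assumes a hypothetical primary inference service and a conservative fallback; all function names are illustrative, not part of any prescribed architecture.

    # Illustrative failover pattern: if the primary AI component fails,
    # fall back to a simpler, conservative path so service degrades
    # gracefully instead of stopping. All names are hypothetical.
    def primary_model(features):
        # Simulated outage of the primary inference service.
        raise TimeoutError("primary inference service unreachable")

    def fallback_model(features):
        # Conservative default: defer the decision to a human reviewer.
        return "refer_to_human_review"

    def resilient_predict(features):
        try:
            return primary_model(features)
        except Exception as exc:
            # Log the failure for the monitoring process, then degrade gracefully.
            print(f"Primary path failed ({exc}); using fallback")
            return fallback_model(features)

    print(resilient_predict({"age": 42}))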
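
An incident-reporting protocol can be supported by a structured, machine-readable record. This sketch uses hypothetical field names and is not the Act's prescribed reporting format; it only illustrates capturing the facts a deployer would need when notifying an authority.

    # Sketch of a structured incident record to support a reporting protocol.
    # Field names are illustrative, not the Act's prescribed reporting format.
    import json
    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class IncidentReport:
        system_id: str
        description: str
        severity: str            # e.g. a "serious" incident triggers notification
        corrective_actions: list = field(default_factory=list)
        detected_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def serialize_for_authority(report: IncidentReport) -> str:
        """Produce a machine-readable record to accompany a notification."""
        return json.dumps(asdict(report), indent=2)

    report = IncidentReport(
        system_id="credit-scoring-v3",                  # hypothetical system
        description="Unexpected score drift for one applicant group",
        severity="serious",
        corrective_actions=["rolled back to v2", "root-cause analysis opened"],
    )
    print(serialize_for_authority(report))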

Other Parties Involved With High-Risk AI Systems

Apart from providers and deployers, other parties also play a role in the lifecycle of high-risk AI systems. These parties include importers, distributors, and authorized representatives.

  • Importers and Distributors: Importers and distributors must ensure that the AI systems they handle comply with the requirements of the EU AI Act. They are responsible for verifying that the systems meet legal standards before making them available on the market.

  • Verifying Compliance Before Distribution: Before distributing AI systems, importers and distributors must verify that the products meet all applicable regulatory standards. This involves reviewing compliance documentation, conducting product inspections, and ensuring that systems have the necessary certifications. Verifying compliance helps prevent non-compliant AI systems from entering the market and causing harm.

  • Ensuring Product Traceability: Product traceability is critical for managing the distribution of high-risk AI systems. Importers and distributors must maintain detailed records of the products they handle, including information on suppliers, customers, and distribution channels, so that any issue can be traced back to its source quickly and corrected promptly (see the sketch after this list).

  • Providing Accurate Product Information: Importers and distributors must provide accurate and comprehensive product information to customers. This includes details about the AI system's capabilities, limitations, and compliance status. By ensuring that customers have the information they need, importers and distributors help promote responsible and informed use of AI systems.

  • Authorized Representatives: Authorized representatives act within the EU on behalf of providers established outside the Union to ensure compliance with the EU AI Act. They must have a clear understanding of the regulations and assist in ensuring that all obligations are met.

  • Acting as Compliance Liaisons: Authorized representatives serve as liaisons between providers, deployers, and regulatory authorities. They facilitate communication and coordination to ensure that all parties meet their compliance obligations. By acting as intermediaries, authorized representatives help streamline the compliance process and reduce the risk of regulatory issues.

  • Conducting Regular Compliance Audits: Regular compliance audits are essential for ensuring that AI systems continue to meet regulatory standards. Authorized representatives should conduct audits to review documentation, assess system performance, and verify adherence to safety and data protection requirements. These audits provide valuable insights into the system's compliance status and help identify areas for improvement.

  • Supporting Continuous Improvement Efforts: Authorized representatives play a key role in supporting continuous improvement efforts for AI systems. They work with providers and deployers to implement changes based on audit findings, incident reports, and stakeholder feedback. By fostering a culture of continuous improvement, authorized representatives help enhance the safety, reliability, and ethical use of AI systems.
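
As an illustration of the traceability obligation above, the sketch below keeps a simple in-memory ledger of supplier-to-customer links. The schema is hypothetical; real records would live inside the importer's or distributor's quality-management documentation rather than a script like this.

    # Sketch of a traceability ledger for imported or distributed AI systems.
    # The schema is hypothetical; real records would live in the importer's
    # or distributor's quality-management documentation.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TraceRecord:
        product_id: str
        supplier: str
        customer: str
        batch: str

    class TraceLedger:
        def __init__(self):
            self._records = []

        def register(self, record: TraceRecord) -> None:
            self._records.append(record)

        def trace_back(self, product_id: str) -> list:
            # Return every supplier/customer link for a product so an issue
            # can be traced to its source quickly.
            return [r for r in self._records if r.product_id == product_id]

    ledger = TraceLedger()
    ledger.register(TraceRecord("sys-001", "Example AI GmbH", "Clinic North", "B-17"))
    print(ledger.trace_back("sys-001"))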

The Importance Of An AI Governance Framework

The EU's approach to AI governance emphasizes accountability, transparency, and safety. By establishing clear obligations for all parties involved with high-risk AI systems, the EU aims to foster trust in AI technologies and protect individuals' rights and safety.

  • Promoting Ethical AI Use: An effective AI governance framework promotes the ethical use of AI technologies. It ensures that AI systems are designed and deployed with consideration for their impact on society and individuals. This ethical approach is crucial for gaining public trust and acceptance of AI innovations.

  • Designing AI with Ethical Considerations: Designing AI systems with ethical considerations involves evaluating potential impacts on society and individuals throughout the development process. This includes assessing biases, ensuring fairness, and minimizing negative consequences. By prioritizing ethics in design, developers can create AI systems that align with societal values and contribute positively to the world.

  • Encouraging Transparency and Accountability: Transparency and accountability are key components of ethical AI use. Governance frameworks should require AI systems to operate transparently, with clear documentation of decision-making processes and data usage. Accountability ensures that parties involved with AI systems take responsibility for their actions and decisions, fostering trust and confidence in AI technologies.

  • Building Public Trust in AI: Building public trust in AI is essential for its widespread adoption and acceptance. Governance frameworks should prioritize engaging with the public, addressing concerns, and promoting the benefits of AI technologies. By demonstrating a commitment to ethical use and transparency, stakeholders can build trust and encourage positive perceptions of AI.

  • Encouraging Innovation While Ensuring Safety: Balancing innovation with safety is a key goal of the EU AI Act. By setting clear obligations, the EU provides a structured environment where AI can thrive while minimizing potential risks. This balance encourages continued innovation while safeguarding the public.

  • Fostering a Supportive Regulatory Environment: A supportive regulatory environment is crucial for encouraging innovation in AI. By providing clear guidelines and support, regulatory bodies can help developers navigate compliance requirements and focus on innovation. This environment encourages experimentation and creativity while maintaining a focus on safety and ethics.

  • Promoting Research and Development: Research and development play a key role in driving AI innovation. Governance frameworks should support R&D efforts by providing funding, resources, and collaboration opportunities. By investing in R&D, stakeholders can drive technological advancements and create AI systems that offer new solutions to complex challenges.

  • Balancing Risk and Reward: Balancing risk and reward involves evaluating the potential benefits of AI systems against the associated risks. Governance frameworks should prioritize high-impact applications while ensuring that adequate safeguards are in place. By carefully weighing risks and rewards, stakeholders can promote innovation that benefits society without compromising safety.

Conclusion

The obligations outlined in Chapter III of the EU AI Act for providers, deployers, and other parties involved with high-risk AI systems are crucial for ensuring these technologies are used responsibly and safely. By adhering to them, stakeholders contribute to a trustworthy and ethical AI ecosystem. As AI continues to evolve, maintaining compliance with the EU AI Act will be essential for fostering innovation while protecting individuals' rights and safety.