EU AI Act Chapter III - Article 28: Notifying Authorities For High-Risk AI Systems

Oct 10, 2025 by Shrinidhi Kulkarni

Introduction 

High-risk AI systems are those that pose significant risks to the rights and safety of individuals or have a major impact on society. These systems are often used in critical sectors such as healthcare, finance, and transportation. The EU AI Act categorizes certain AI systems as high-risk based on their potential to affect fundamental rights, safety, and the well-being of individuals.


Characteristics Of High-Risk AI Systems - EU AI Act Chapter III - Article 28

High-risk AI systems are defined by specific characteristics that differentiate them from lower-risk technologies. These systems often operate in environments where mistakes can lead to severe consequences, such as loss of life or significant financial damage. They are usually characterized by their complexity, autonomy, and the critical nature of the tasks they perform. Understanding these characteristics helps in assessing their risks and implementing appropriate safeguards.

a) Regulatory Framework For High-Risk AI
The EU has established a comprehensive regulatory framework to govern high-risk AI systems. This framework includes strict guidelines on data usage, algorithm transparency, and system accountability. By setting clear standards, the EU aims to ensure that these AI systems operate safely and ethically. The regulations are designed to protect individual rights and prevent the misuse of AI technologies in critical sectors.

b) Impact On Various Industries
High-risk AI systems have a profound impact on several industries, transforming operations and decision-making processes. In healthcare, for instance, AI assists in diagnosing diseases and personalizing treatment plans, significantly improving patient outcomes. In finance, AI-driven algorithms enhance fraud detection and streamline credit evaluations. These advancements come with increased responsibility to manage risks effectively and ensure compliance with regulatory standards.

Examples Of High-Risk AI Systems

a) Healthcare AI Systems: AI technologies used in diagnosing diseases, recommending treatments, or managing patient care. These systems require rigorous validation to ensure accuracy and prevent misdiagnoses.

b) Financial AI Systems: Algorithms used in credit scoring, loan approvals, or fraud detection. They must be transparent and fair to avoid discrimination and ensure equitable access to financial services.

c) Transportation AI Systems: Autonomous vehicles or AI systems managing public transportation. These systems must be reliable and safe to protect passengers and reduce accidents.

These examples illustrate the diverse applications of AI systems that can be classified as high-risk. Ensuring their safety and reliability is crucial to protect users and society at large.

The Role Of Notifying Authorities - EU AI Act Chapter III - Article 28

Article 28 of the EU AI Act requires each Member State to designate or establish at least one notifying authority responsible for setting up and carrying out the procedures needed to assess, designate, notify, and monitor conformity assessment bodies. Through this oversight, notifying authorities play a critical role in ensuring that high-risk AI systems comply with EU regulations. Their primary functions include:

1) Assessment And Monitoring Duties
Notifying authorities are tasked with the ongoing assessment and monitoring of the conformity assessment bodies they designate. This involves continuous review of those bodies' competence and procedures to ensure that the high-risk AI systems they evaluate align with regulatory standards. Regular audits and evaluations are conducted to identify potential risks or non-compliance issues. Through these activities, notifying authorities help maintain the integrity and safety of AI technologies on the market.

2) Certification Responsibilities
Certification of high-risk AI systems is carried out by notified bodies, and designating and overseeing those bodies is a key responsibility of notifying authorities. They evaluate whether a candidate body meets the competence, independence, and procedural criteria needed to test AI systems rigorously. By notifying only bodies that satisfy these criteria, notifying authorities give users and stakeholders assurance that certified systems are reliable and secure.

3) Reporting And Documentation Processes
Notifying authorities are required to maintain detailed records of high-risk AI systems. This involves documenting system specifications, compliance reports, and any incidents of non-compliance. They must also report significant risks or breaches to relevant bodies for further investigation. This documentation is crucial for transparency and accountability, ensuring that any issues are promptly addressed and rectified.

4) Importance Of Notifying Authorities
The role of notifying authorities is crucial in maintaining trust in AI technologies. By ensuring that high-risk AI systems adhere to strict standards, they help prevent potential harm to individuals and society. Their work contributes to a safer and more transparent AI landscape.

AI Risk Management - EU AI Act Chapter III - Article 28

Effective AI risk management is essential for minimizing the potential dangers associated with high-risk AI systems. It involves identifying, assessing, and mitigating risks throughout the lifecycle of an AI system. Key components of AI risk management include:

a) Comprehensive Risk Assessment
Risk assessment is the foundation of AI risk management. It involves a thorough evaluation of potential risks associated with an AI system, including its classification as high-risk. This process requires identifying vulnerabilities and assessing the potential impact on safety and fundamental rights. A detailed risk assessment helps organizations understand the scope of risks and prioritize mitigation efforts effectively.

b) Implementing Risk Mitigation Measures
Risk mitigation involves developing strategies to reduce or eliminate identified risks. This includes implementing robust security measures, improving system reliability, and ensuring compliance with ethical standards. Organizations must adopt proactive approaches to address vulnerabilities before they lead to significant issues. Effective risk mitigation strategies enhance the safety and reliability of high-risk AI systems.

c) Continuous Monitoring And Adaptation
Continuous monitoring is essential for maintaining the effectiveness of risk management strategies. It involves regularly reviewing AI systems and updating risk mitigation plans to address emerging threats. Organizations must remain vigilant and adapt their strategies to evolving technological and regulatory landscapes. Continuous monitoring ensures that high-risk AI systems remain compliant and safe throughout their lifecycle.
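One way to make continuous monitoring concrete is to compare a system's live quality metrics against the values recorded at validation time and flag significant degradation for review. The following sketch is illustrative only: the metric names and the 5% tolerance are assumptions, not figures from the EU AI Act.

```python
# Minimal drift check: flag metrics that degraded past a tolerance,
# triggering a review of the risk mitigation plan. Thresholds and
# metric names are hypothetical, not regulatory requirements.

def check_drift(baseline: dict[str, float],
                live: dict[str, float],
                tolerance: float = 0.05) -> list[str]:
    """Return the metrics that degraded by more than `tolerance`."""
    return [name for name, base in baseline.items()
            if base - live.get(name, 0.0) > tolerance]

baseline = {"accuracy": 0.94, "recall": 0.91}   # recorded at validation
live = {"accuracy": 0.87, "recall": 0.90}       # e.g. last week's traffic
flagged = check_drift(baseline, live)
print(flagged)  # -> ['accuracy']
```

A flagged metric would then feed back into the risk assessment and mitigation steps, closing the monitoring loop.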

Strategies For AI Risk Management - EU AI Act Chapter III - Article 28

1) Thorough Testing And Validation: Conduct extensive testing to identify potential weaknesses and address them before deployment. This ensures that AI systems function as intended and reduces the likelihood of errors.

2) Transparency And Documentation: Maintain clear documentation of AI system processes, decisions, and data sources. Transparency fosters trust among users and stakeholders, and documentation provides a basis for accountability.

3) Stakeholder Engagement: Involve stakeholders in the risk management process to gain diverse perspectives and insights. Collaborating with stakeholders helps identify potential risks and develop comprehensive mitigation plans.

By adopting these strategies, organizations can effectively manage the risks associated with high-risk AI systems and ensure their safe and ethical use.
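Strategy (1) is often implemented as an automated release gate: a system ships only if every validation metric clears its threshold. The sketch below is a hypothetical example; the metric names and thresholds are assumptions for illustration, not values prescribed by the Act.

```python
# Hypothetical pre-deployment gate: approve release only when all
# validation metrics meet their thresholds. The thresholds below are
# illustrative assumptions, not regulatory figures.

THRESHOLDS = {
    "accuracy": 0.90,             # correctness on a held-out test set
    "worst_group_recall": 0.85,   # fairness: recall on the worst-served group
}

def release_gate(results: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (approved, failing_metrics) for a set of validation results."""
    failures = [m for m, t in THRESHOLDS.items() if results.get(m, 0.0) < t]
    return (not failures, failures)

approved, failures = release_gate({"accuracy": 0.93, "worst_group_recall": 0.81})
print(approved, failures)  # -> False ['worst_group_recall']
```

Recording each gate decision alongside its inputs also serves strategy (2): the documentation provides an audit trail for why a system was, or was not, deployed.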

AI Risk Assessment - EU AI Act Chapter III - Article 28

AI risk assessment is a critical component of the EU's regulatory framework for high-risk AI systems. It involves evaluating the potential impact of an AI system on individuals and society. The assessment considers factors such as:

1) Evaluating Potential Harm
Assessing potential harm involves analyzing the likelihood and severity of damage that an AI system may cause. This includes evaluating scenarios where system failures could lead to significant consequences, such as threats to health, safety, or financial stability. Understanding potential harm is vital for prioritizing risk mitigation efforts and ensuring the protection of individuals and society.

2) Identifying System Vulnerabilities
Identifying vulnerabilities is a crucial step in AI risk assessment. This process involves analyzing the AI system's architecture, algorithms, and data sources to detect weaknesses that could be exploited. Addressing these vulnerabilities is essential to prevent unauthorized access, data breaches, and other security issues. Proactive measures help safeguard AI systems from potential threats.

3) Assessing Impact On Fundamental Rights
AI systems can significantly impact fundamental rights such as privacy, freedom, and non-discrimination. Risk assessment involves evaluating how an AI system may affect these rights and identifying measures to mitigate negative impacts. Ensuring compliance with ethical standards and legal regulations is essential to protect individual rights and promote fair use of AI technologies.
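A common way to combine likelihood and severity in practice is a simple risk matrix that maps their product to a qualitative band. The band boundaries below are illustrative assumptions; the AI Act itself classifies systems by their intended use, not by a numeric score.

```python
# Illustrative likelihood x severity matrix. Scales (1-5) and band
# boundaries are assumptions for this sketch, not figures from the Act.

def risk_level(likelihood: int, severity: int) -> str:
    """Map likelihood and severity (each 1-5) to a qualitative risk band."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be in 1..5")
    score = likelihood * severity
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

print(risk_level(4, 5))  # a likely, critical harm -> 'high'
print(risk_level(2, 3))  # an unlikely, moderate harm -> 'low'
```

Scoring of this kind helps prioritize mitigation effort, but impacts on fundamental rights typically still require a qualitative, case-by-case analysis alongside any numeric band.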

Conducting A Comprehensive AI Risk Assessment

To conduct a comprehensive AI risk assessment, organizations should:

a) Define Objectives: Clearly outline the purpose and goals of the AI system. Understanding the objectives helps align risk assessment efforts with organizational priorities.

b) Identify Risks: List potential risks associated with the AI system and its deployment. Comprehensive risk identification provides a basis for developing targeted mitigation strategies.

c) Analyze Impact: Evaluate the potential consequences of identified risks on individuals and society. Analyzing impact helps prioritize risks and allocate resources effectively.

d) Develop Mitigation Plans: Create strategies to address and minimize identified risks. Well-developed mitigation plans enhance the safety and reliability of high-risk AI systems.
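The four steps above can be sketched as a single assessment record that ties each identified risk to its impact and mitigation plan. The field names mirror the steps but are an illustrative schema of this post's own devising, not a format mandated by the Act.

```python
from dataclasses import dataclass, field

# Sketch of the four-step assessment as one record. The schema is
# illustrative, not a mandated format.

@dataclass
class RiskAssessment:
    objective: str                                         # a) define objectives
    risks: list[str] = field(default_factory=list)         # b) identify risks
    impacts: dict[str, str] = field(default_factory=dict)  # c) analyze impact
    mitigations: dict[str, str] = field(default_factory=dict)  # d) plans

    def unmitigated(self) -> list[str]:
        """Risks that still lack a documented mitigation plan."""
        return [r for r in self.risks if r not in self.mitigations]

a = RiskAssessment(
    objective="Triage support for radiology scans",
    risks=["misdiagnosis", "biased training data"],
    impacts={"misdiagnosis": "patient safety",
             "biased training data": "non-discrimination rights"},
    mitigations={"misdiagnosis": "clinician review of all outputs"},
)
print(a.unmitigated())  # -> ['biased training data']
```

Keeping the record machine-readable makes gaps easy to surface: any risk without a mitigation plan blocks sign-off until step (d) is completed for it.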

A thorough AI risk assessment ensures that high-risk AI systems are deployed responsibly and ethically, minimizing potential harm to users and society.

Collaborative Efforts For Responsible AI

As AI continues to evolve, it is crucial for all stakeholders—governments, organizations, and individuals—to collaborate in creating a regulatory environment that promotes innovation while safeguarding rights and safety. Collaborative efforts ensure that diverse perspectives are considered in the development and implementation of AI regulations. By working together, stakeholders can address challenges effectively and promote the responsible use of AI technologies.

Continuous Monitoring And Adaptation

Continuous monitoring and adaptation are essential to keep pace with technological advancements and emerging risks. Organizations must remain vigilant and update their risk management and compliance strategies regularly. This proactive approach ensures that AI systems remain safe and effective, even as the technological landscape evolves.

Harnessing AI For Societal Benefit 

Through continuous monitoring and adaptation, we can harness the power of AI to benefit society while minimizing its risks. Leveraging AI responsibly drives innovation and improves quality of life across sectors, and a focus on ethical deployment, combined with the collaborative strategies above, helps ensure that technological advances contribute positively to society and protect individual rights.

Conclusion

The EU's regulation of high-risk AI systems through Chapter III, Article 28 of the AI Act highlights the importance of responsible AI deployment. Notifying authorities play a vital role in ensuring compliance with regulatory standards, while effective risk management and assessment strategies help mitigate potential risks. By adhering to these guidelines, organizations can build trust in AI technologies and contribute to a safer, more ethical technological future.