EU AI Act - Chapter IX - Post-Market Monitoring, Information Sharing and Market Surveillance - Article 73: Reporting of Serious Incidents

Oct 16, 2025 by Maya G

Introduction

The European Union's AI Act is a groundbreaking regulatory framework designed to govern the development and deployment of artificial intelligence across the EU. As AI technologies permeate ever more aspects of daily life, the need for a robust regulatory framework becomes increasingly critical. The legislation aims to balance innovation with safety, ensuring that AI advances do not come at the expense of public welfare or fundamental rights. One of the key components of the AI Act is Chapter IX, which covers post-market monitoring, information sharing, and market surveillance. Within this chapter, Article 73 stands out because it addresses the reporting of serious incidents. This article aims to provide a clear understanding of Article 73 and its implications for AI developers and users.

An Overview Of The EU AI Act

The EU AI Act is a comprehensive governance framework that aims to protect the safety and fundamental rights of individuals and businesses while fostering innovation and competitiveness in AI technologies. It categorizes AI systems by risk level, ranging from minimal to unacceptable, and establishes requirements corresponding to each category.

Market Surveillance And Its Role

Market surveillance plays a crucial role in the effective implementation of the EU AI Act. It involves monitoring AI systems to ensure they comply with the established requirements and do not pose significant risks. Through market surveillance, regulatory authorities can identify potential issues and enforce corrective actions as needed. This proactive approach allows for the early detection of non-compliance and the prevention of harm, ensuring that AI systems operate within safe and ethical boundaries. Furthermore, market surveillance helps maintain a level playing field for businesses by ensuring that all market participants adhere to the same standards, thus fostering fair competition and innovation.

Article 73: Reporting Of Serious Incidents

Article 73 of the EU AI Act requires providers of high-risk AI systems to report serious incidents involving those systems to the competent market surveillance authorities; deployers who identify a serious incident must, in turn, inform the provider. This provision is critical for maintaining transparency and accountability in the use of AI technologies. By mandating the reporting of incidents, the Act ensures that AI-related risks are systematically documented and addressed, thus preventing systemic issues and promoting continuous improvement.

What Constitutes A Serious Incident?

A serious incident, as defined in Article 3(49) of the Act and used in Article 73, is an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

  1. The death of a person, or serious harm to a person's health.

  2. A serious and irreversible disruption of the management or operation of critical infrastructure.

  3. The infringement of obligations under Union law intended to protect fundamental rights.

  4. Serious harm to property or the environment.

Reporting such incidents is mandatory and time-bound, enabling timely intervention and resolution. Prompt reporting is crucial for mitigating further harm and allows authorities to respond effectively, safeguarding the public interest and reinforcing trust in AI systems. By defining a serious incident precisely, the Act gives all stakeholders a common understanding of when the reporting obligation is triggered.
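To make the definition concrete, here is a minimal Python sketch of how a compliance team might encode the Article 3(49) categories in an internal incident-triage tool. The class and field names are illustrative assumptions, not anything the Act prescribes.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class SeriousIncidentCategory(Enum):
    """The four categories of serious incident listed in Article 3(49)."""
    DEATH_OR_SERIOUS_HARM_TO_HEALTH = auto()
    CRITICAL_INFRASTRUCTURE_DISRUPTION = auto()
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = auto()
    HARM_TO_PROPERTY_OR_ENVIRONMENT = auto()

@dataclass
class IncidentAssessment:
    """Hypothetical internal triage record; field names are illustrative."""
    description: str
    category: Optional[SeriousIncidentCategory] = None  # None = not serious

    @property
    def triggers_article_73(self) -> bool:
        # Any event matching an Article 3(49) category is reportable.
        return self.category is not None

# Example: a malfunction that destroyed stock in a warehouse.
event = IncidentAssessment(
    description="Robotic picker misclassified loads and crushed inventory",
    category=SeriousIncidentCategory.HARM_TO_PROPERTY_OR_ENVIRONMENT,
)
print(event.triggers_article_73)  # True
```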

The Reporting Process

The reporting process under Article 73 involves several key steps:

  1. Immediate Notification: As soon as a serious incident is identified and a causal link to the AI system is established (or is reasonably likely), the provider must notify the market surveillance authorities of the Member State where the incident occurred. The Act sets hard outer limits on this notification: no later than 15 days after the provider becomes aware of the incident in the general case, 10 days where the incident involves the death of a person, and two days for a widespread infringement or a serious disruption of critical infrastructure (see the sketch after this list). Immediate notification is vital for mobilizing resources and expertise to manage the incident effectively.

  2. Detailed Report Submission: Following the initial notification, a detailed report must be submitted, outlining the nature of the incident, the affected AI system, and any preliminary findings; where necessary to report on time, the provider may file an initial, incomplete report and follow up with a complete one. This gives authorities the information they need to assess the situation and determine appropriate actions.

  3. Investigation and Follow-up: The provider must, without delay, investigate the incident and the AI system concerned, including a risk assessment and corrective action, cooperating with the competent authorities throughout; the system must not be altered in a way that could affect a later evaluation before the authorities have been informed. Based on the findings, corrective measures are implemented to prevent future occurrences, and lessons learned are disseminated to prevent similar incidents across the industry.
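The following minimal Python sketch illustrates those outer reporting windows. The function name and signature are assumptions made for illustration; only the 15-, 10-, and two-day limits come from Article 73(2)-(4), and in every case the report is due immediately once a causal link is established, so these dates are ceilings rather than targets.

```python
from datetime import date, timedelta

def reporting_deadline(aware_on: date, *, death: bool = False,
                       widespread_infringement: bool = False,
                       critical_infrastructure: bool = False) -> date:
    """Latest permissible notification date under Article 73(2)-(4).

    Illustrative sketch only, not legal advice: the report must in any
    case be made immediately once a causal link between the AI system
    and the incident (or its reasonable likelihood) is established.
    """
    if widespread_infringement or critical_infrastructure:
        max_days = 2    # Article 73(3)
    elif death:
        max_days = 10   # Article 73(4)
    else:
        max_days = 15   # Article 73(2), the general case
    return aware_on + timedelta(days=max_days)

# A provider becomes aware of a non-fatal serious incident on 1 March 2025:
print(reporting_deadline(date(2025, 3, 1)))  # 2025-03-16
```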

Implications For AI Developers and Users

The introduction of Article 73 has several implications for AI developers and users. Understanding these implications is crucial for staying compliant and maintaining the integrity of AI systems: the Act not only imposes reporting duties but also encourages a proactive approach to risk management and ethical practice.

1. Enhanced Accountability And Transparency: By mandating the reporting of serious incidents, Article 73 enhances accountability and transparency in AI system deployment. Developers and users are required to take responsibility for their AI systems and ensure they operate safely and ethically, reassuring the public of their commitment to safety and ethical standards. Moreover, transparency in incident reporting fosters an environment of openness and continuous learning, enabling the AI community to collectively address challenges and improve practices.

2. Strengthening Risk Management Practices: The requirement to report serious incidents encourages AI developers to adopt robust risk management practices, including thorough risk assessments, preventive measures, and continuous monitoring of AI systems for potential issues (a monitoring sketch follows this list). By fostering a proactive approach, the Act helps prevent incidents before they occur, safeguarding users and the public. Robust risk management also enables developers to build resilient systems that adapt to changing circumstances and threats, supporting long-term sustainability.

3. Fostering Trust In AI Technologies: Transparency and accountability are key to building trust in AI technologies. By adhering to Article 73 and proactively addressing serious incidents, developers and users demonstrate their commitment to safe and responsible AI deployment. Trust is a critical factor in the adoption and acceptance of AI technologies, and as AI continues to permeate various sectors, maintaining that trust will be essential for realizing the full potential of these technologies for society.
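As a concrete illustration of the continuous monitoring described in point 2, below is a hypothetical sliding-window monitor that escalates outcomes to human triage once a harm-signal rate crosses a threshold. The class name, window size, and threshold are illustrative assumptions; the Act does not prescribe any particular tooling.

```python
from collections import deque

class HarmSignalMonitor:
    """Hypothetical post-market monitoring hook.

    Tracks the rate of potentially harmful outcomes over a sliding
    window and flags the system for human incident triage when the
    rate crosses a configured threshold.
    """
    def __init__(self, window: int = 1000, alert_rate: float = 0.01):
        self.outcomes = deque(maxlen=window)  # True = potentially harmful
        self.alert_rate = alert_rate

    def record(self, potentially_harmful: bool) -> bool:
        """Log one outcome; return True if the window breaches the threshold."""
        self.outcomes.append(potentially_harmful)
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate >= self.alert_rate

monitor = HarmSignalMonitor(window=500, alert_rate=0.02)
for harmful in [False] * 490 + [True] * 10:
    if monitor.record(harmful):
        print("Escalate to incident triage")  # feeds the Article 73 workflow
        break
```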

Conclusion

The EU AI Act, particularly Article 73, represents a significant step forward in establishing a comprehensive governance framework for AI technologies. By emphasizing the reporting of serious incidents, the Act ensures that AI systems are used responsibly and safely. For AI developers and users, understanding and complying with Article 73 is essential for maintaining accountability, transparency, and trust in their AI systems. As the AI landscape continues to evolve, the EU AI Act serves as a crucial guide for navigating the complex challenges and opportunities presented by artificial intelligence. This framework not only protects individuals and society from potential harms but also paves the way for sustainable innovation and growth in the AI sector.