EU AI ACT - Article 111: AI Systems Already Placed on the Market or Put into Service and General-Purpose AI Models Already Placed on the Market

Oct 21, 2025, by Alex

Introduction

The European Union's Artificial Intelligence Act (EU AI Act), which entered into force on 1 August 2024, is a landmark legislative framework for regulating artificial intelligence across Europe. As AI technologies evolve rapidly, the Act aims to ensure they are developed and used safely, transparently, and ethically. Article 111 is particularly important because it addresses AI systems already placed on the market or put into service before the Act's application dates, along with general-purpose AI models already on the market. For businesses and developers, understanding Article 111 is essential for ensuring compliance and navigating the new AI governance landscape.

What is the EU AI Act?

The EU AI Act is a comprehensive regulatory framework, adopted by the EU in 2024, that governs the development, deployment, and use of AI systems within the Union. It takes a risk-based approach, classifying AI systems into four risk levels (minimal, limited, high, and unacceptable), with specific requirements for each. The primary goal is to foster innovation while safeguarding fundamental rights and public safety.
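To make the tiering concrete, here is a minimal Python sketch of how a compliance team might encode the four tiers and their headline obligations for internal triage. The enum, the mapping, and the one-line summaries are illustrative assumptions, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the EU AI Act's categories."""
    MINIMAL = "minimal"            # e.g. spam filters
    LIMITED = "limited"            # e.g. chatbots
    HIGH = "high"                  # e.g. credit scoring
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring

# Hypothetical tier-to-obligation summaries for internal triage;
# paraphrased for the example, not legal text.
OBLIGATIONS = {
    RiskTier.MINIMAL: "No mandatory requirements; voluntary codes of conduct.",
    RiskTier.LIMITED: "Transparency duties (e.g. disclose AI interaction).",
    RiskTier.HIGH: "Risk management, data governance, documentation, human oversight.",
    RiskTier.UNACCEPTABLE: "Prohibited practice; must not be placed on the market.",
}

def triage(tier: RiskTier) -> str:
    """Return the obligation summary for a given risk tier."""
    return OBLIGATIONS[tier]

print(triage(RiskTier.HIGH))
```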

Key Objectives of the EU AI Act

  • Risk-Based Classification: AI systems are categorized by risk, determining the stringency of regulatory requirements.

  • Transparency and Accountability: Mandates clear documentation of AI capabilities, limitations, and decision-making processes.

  • Human Oversight and Control: Ensures AI systems are designed to allow for effective human intervention.

  • Safety and Robustness: Requires AI to be reliable, secure, and resilient against errors and manipulation.

  • Ethical Standards: Obliges AI systems to adhere to ethical principles, preventing discrimination and protecting fundamental rights.

Article 111: Compliance for AI Systems Already on the Market

Article 111 sets out transitional arrangements for AI systems that were already on the market or in use before the relevant dates of the Act's application. In broad terms: AI systems that are components of the large-scale EU IT systems listed in Annex X and placed on the market or put into service before 2 August 2027 must be brought into compliance by 31 December 2030; operators of high-risk AI systems placed on the market or put into service before 2 August 2026 fall under the Act only if those systems subsequently undergo significant changes in design (providers and deployers of such systems used by public authorities must comply by 2 August 2030); and providers of general-purpose AI models placed on the market before 2 August 2025 must comply by 2 August 2027. These transition periods define when and how developers and users must bring existing systems into line.

Key Compliance Requirements:

  • Retrospective Assessment and Documentation: Businesses must review existing AI systems to determine their risk classification under the new law and compile documentation covering each system's functionality, data sources, and the measures taken to ensure compliance (a minimal sketch of such an inventory record follows this list).

  • Ongoing Monitoring and Reporting: Companies need to implement continuous monitoring mechanisms to track AI performance and societal impact. Regular reporting to relevant authorities may be necessary to demonstrate ongoing adherence to the Act.

  • System Updates and Modifications: Existing AI systems may require technical updates or modifications to meet new standards. This could involve enhancing transparency features, improving data quality, integrating human oversight capabilities, or reinforcing security protocols.
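As one way to operationalize the retrospective assessment above, the sketch below models a hypothetical inventory record plus a simplified reading of the Article 111(2) rule for legacy high-risk systems. All field names and the `caught_by_act` helper are invented for illustration and are no substitute for legal analysis.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for an AI system already on the market."""
    name: str
    placed_on_market: date
    risk_tier: str                        # e.g. "minimal", "limited", "high"
    data_sources: list[str] = field(default_factory=list)
    significant_design_change: bool = False

    def caught_by_act(self) -> bool:
        """Simplified reading of Article 111(2): a high-risk system placed on
        the market before 2 August 2026 is subject to the Act only if its
        design later changes significantly. Other systems are treated here
        as subject to the Act's standard application dates."""
        if self.risk_tier == "high" and self.placed_on_market < date(2026, 8, 2):
            return self.significant_design_change
        return True

legacy = AISystemRecord("credit-scoring-v1", date(2023, 5, 1), "high",
                        data_sources=["loan_history_db"])
print(legacy.caught_by_act())  # False until a significant design change occurs
```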

Potential Challenges:

  • Resource Allocation: Achieving compliance demands significant investment in time, finances, and skilled personnel.

  • Technical Adaptations: Retrofitting legacy AI systems to meet new regulatory standards can be complex and technically challenging.

  • Data Governance: Ensuring existing systems comply with strict data privacy and cybersecurity requirements is paramount.

General-Purpose AI Models Under the EU AI Act

General-purpose AI models (GPAI), designed for a wide range of tasks, present unique regulatory challenges due to their versatility and broad applicability.

Regulatory Implications for GPAI:

  • Broad Applicability: The Act’s requirements apply to GPAI models across all industries and use cases, creating a unified compliance standard throughout the EU. This prevents regulatory arbitrage and ensures consistent safety and ethical benchmarks.

  • Comprehensive Risk Assessment: Organizations must perform in-depth risk assessments to identify potential harms, biases, and security vulnerabilities. This process is vital for categorizing the model's risk level and implementing appropriate mitigation strategies (a simple illustration follows this list).

  • Adaptability and Flexibility: The regulatory framework is designed to be technology-neutral and adaptable, allowing it to remain relevant as AI technology evolves. This ensures that compliance mechanisms can adjust to new innovations without hindering progress.
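As a rough illustration of how risk-assessment findings for a GPAI model might be tracked, the sketch below rolls hypothetical hazard entries up into an overall severity. The categories and the 1-5 scale are invented for the example and are not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    """One identified risk for a general-purpose model (illustrative)."""
    description: str
    category: str      # e.g. "bias", "security", "misuse" (invented taxonomy)
    severity: int      # 1 (low) .. 5 (critical), invented scale
    mitigation: str

def overall_severity(hazards: list[Hazard]) -> int:
    """Naive roll-up: the assessment is as severe as its worst finding."""
    return max((h.severity for h in hazards), default=0)

findings = [
    Hazard("Training data skews toward English sources", "bias", 3,
           "Augment corpus; document limitation in model card"),
    Hazard("Prompt injection via tool-use interface", "security", 4,
           "Input filtering and sandboxed tool execution"),
]
print(overall_severity(findings))  # -> 4
```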

Best Practices for Ensuring Compliance

To navigate the requirements of Article 111 effectively, businesses should adopt the following best practices:

  • Foster Interdisciplinary Collaboration: Establish a cross-functional team involving legal, technical, ethical, and operational experts. This ensures a holistic approach to compliance, covering both the technical aspects of AI and its legal and societal impacts.

  • Implement Continuous Monitoring and Improvement: Treat compliance as an ongoing process. Regularly audit AI systems, conduct post-deployment impact assessments, and document all activities to demonstrate accountability and support continuous improvement (a minimal audit-log sketch follows this list).

  • Engage Proactively with Stakeholders: Build trust through transparency. Engage with regulators, customers, and the public to explain your AI systems and compliance measures. Collaboration with industry peers can also help standardize best practices.
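One lightweight way to make "document all activities" concrete is an append-only audit trail. The sketch below uses only Python's standard library; the event vocabulary and the file name are assumptions made for the example.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal append-only audit trail for compliance activities; the event
# names ("impact_assessment", ...) and log file are hypothetical.
logging.basicConfig(filename="ai_compliance_audit.log",
                    level=logging.INFO, format="%(message)s")

def record_event(system: str, event: str, details: str) -> None:
    """Append one timestamped compliance event as a JSON line."""
    logging.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,
        "details": details,
    }))

record_event("credit-scoring-v1", "impact_assessment",
             "Quarterly post-deployment review completed; no drift found.")
```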

Conclusion

The EU AI Act, and specifically Article 111, marks a pivotal shift towards responsible AI governance in Europe. By proactively understanding and implementing the requirements for existing AI systems and general-purpose AI models, businesses can not only ensure compliance but also build more trustworthy and robust AI solutions. Adhering to these regulations is no longer just a legal obligation but a crucial step for fostering sustainable innovation that protects fundamental rights and public safety.