The European Commission launched a public consultation on June 6, 2025, seeking feedback on the implementation of the AI Act's rules for high-risk artificial intelligence systems. The six-week consultation, running until July 18, 2025, aims to gather practical examples and clarify outstanding questions about high-risk AI systems; the feedback collected will inform upcoming Commission guidelines on classifying these systems and on the requirements and obligations that apply to them.
The AI Act identifies two types of 'high-risk' AI systems: those that are relevant to the safety of products covered by the Union's harmonised product-safety legislation, and those deployed in specific use cases listed in the Act that can significantly affect people's health, safety, or fundamental rights. The consultation will also address the distribution of responsibilities along the entire AI value chain, including defining accountability for all parties involved, from developers and providers to end-users.
The Commission encourages a wide array of stakeholders to participate, including AI developers, providers, businesses, public authorities, academia, research institutions, civil society, governments, supervisory authorities, and the general public. This initiative gives all interested parties an opportunity to shape the future of AI regulation in Europe; the feedback collected will be crucial for further developing and clarifying the requirements for high-risk AI systems and for building a trustworthy AI ecosystem.