In March 2023, Amsterdam launched a €4.2 million, ethically designed AI system to detect welfare fraud, which its developers claimed would be one of the world's fairest algorithmic systems. In May 2025, however, the city council suspended the program after it was revealed to still discriminate against residents with migrant backgrounds and lower incomes. The project aimed to replace a previous algorithmic system, SyRI, which the Hague court banned in 2020 after finding that it violated human rights and discriminated against certain population groups. During development of the new system, the city established an 11-member ethics committee of experts from diverse backgrounds, including data privacy lawyers, ethicists, and community activists, who jointly developed the system's ethical framework.
The Amsterdam experiment failed despite numerous safeguards implemented by its developers: excluding potentially discriminatory data (such as postcodes and names), publishing transparency reports, and introducing regular human oversight. An independent audit conducted after the 18-month testing period nonetheless found that the system was still 37% more likely to flag individuals of Moroccan and Turkish descent as potential fraudsters, and flagged residents of lower-income districts 28% more often. According to Marieke Koekkoek, a legal expert at the University of Amsterdam, the main reason for the failure was that the algorithm remained trained on historical data containing embedded biases, which technical solutions alone could not completely neutralise.
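How bias can survive the removal of protected attributes is straightforward to demonstrate. The sketch below is purely illustrative and not based on Amsterdam's actual pipeline: it trains a simple classifier on synthetic data in which the protected attribute is withheld but a correlated proxy (a hypothetical district-income feature) remains, then audits the flag rates per group. All feature names and numbers here are assumptions made for the example.

```python
# Illustrative sketch (not Amsterdam's actual system): even after dropping a
# protected attribute, a model can reproduce bias through correlated proxies.
# All data is synthetic; feature names and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical protected attribute (never shown to the model).
group = rng.integers(0, 2, n)  # 0 = majority, 1 = minority

# Proxy feature correlated with group membership (e.g., district income level).
district_income = rng.normal(loc=np.where(group == 1, -1.0, 1.0), scale=1.0)
hours_worked = rng.normal(30, 10, n)  # uninformative noise feature

# Historical "fraud" labels encode past over-scrutiny of the minority group,
# so the training data itself is biased.
past_flag_rate = 0.05 + 0.05 * group
y = rng.random(n) < past_flag_rate

# Train without the protected attribute; only the proxy and noise remain.
X = np.column_stack([district_income, hours_worked])
model = LogisticRegression().fit(X, y)

# Audit: compare flag rates per group at a fixed decision threshold.
flags = model.predict_proba(X)[:, 1] > 0.08
for g in (0, 1):
    print(f"group {g}: flagged {flags[group == g].mean():.1%}")
```

The pattern mirrors the auditors' finding: because the historical labels already encode who was scrutinised in the past, removing names and postcodes changes the inputs but not the signal the model learns to reproduce.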
The Amsterdam case highlights broader issues with the fairness of AI systems and has significantly influenced European regulation. The European Union's AI Act, shaped in part by the Amsterdam experience, tightened the rules for AI systems related to social benefits, categorising them as "high risk" and requiring increased transparency and regular audits. In June 2025, the Dutch government announced a new national framework to review, and potentially suspend, all algorithmic decision-making systems used in the public sector by 2026, affecting 47% of government AI applications; five other European countries, including Denmark and Belgium, have since followed its example.