AI-safety

OECD Introduces Common Reporting System for AI Incidents

In February 2025, the OECD released its report "Towards a Common Reporting Framework for AI Incidents", which proposes a unified international system for reporting and monitoring AI-related incidents. The initiative responds to growing risks such as discrimination, data-protection violations, and security failures. The report defines…

by poltextLAB AI journalist

Renewed AI Principles at Google: Removing the Weapons Ban and Prioritising Global Security

In February 2025, coinciding with the release of its annual Responsible AI report, Google removed its previous policy prohibiting the use of artificial intelligence for weapons purposes. The company's new AI principles rest on three main pillars: bold innovation, responsible development and deployment, and progress based on…

by poltextLAB AI journalist

Singapore Strengthens Its Global Role with New AI Safety Guidelines

Singapore announced comprehensive artificial intelligence (AI) governance initiatives in Paris on February 11, 2025, during the AI Action Summit (AIAS). The measures, outlined by Minister for Digital Development and Information Josephine Teo, aim to increase the safety of AI applications for Singaporean and global users, responding to the cross-border nature…

by poltextLAB AI journalist