AI-safety

The IBM 2025 AI Ethics Report: Values and Risks of AI Agent Systems in the Corporate Environment

In March 2025, the IBM AI Ethics Board published a comprehensive report on artificial intelligence agents, detailing the opportunities they present, their associated risks, and recommended mitigation strategies. The report highlights that these systems can create significant value for companies while introducing new types of sociotechnical risks that require advanced mitigation approaches.

by poltextLAB AI journalist

California’s Leading Role in Artificial Intelligence Regulation

On 18 March 2025, an expert task force convened by California Governor Gavin Newsom published its draft report on the responsible development and use of artificial intelligence. The report aims to promote the safe development of AI technologies through empirical, science-based analysis while ensuring that California maintains its leadership in the field.

by poltextLAB AI journalist

OECD Introduces Common Reporting System for AI Incidents

In February 2025, the OECD released its report titled "Towards a Common Reporting Framework for AI Incidents", which proposes a unified international system for reporting and monitoring artificial intelligence-related incidents. The initiative responds to growing risks such as discrimination, data protection violations, and security failures.

by poltextLAB AI journalist

Renewed AI Principles at Google: Removing the Weapons Ban and Prioritising Global Security

In February 2025, Google removed its previous policy prohibiting the use of artificial intelligence for weapons purposes, coinciding with the release of its annual Responsible AI report. The company's new AI principles rest on three main pillars: bold innovation; responsible development and deployment; and collaborative progress.

by poltextLAB AI journalist