The First International AI Safety Report: Risks and Recommendations

The first International AI Safety Report was published on 29 January 2025 by 96 international experts led by Yoshua Bengio, documenting artificial intelligence's social, economic, and environmental impacts with concrete data. Commissioned at the 2023 Bletchley Park AI Safety Summit, the document provides a scientific foundation for policymakers worldwide and identifies three risk categories: malicious use (e.g. cyberattacks), malfunctions (e.g. unreliability), and systemic risks (e.g. labour-market and environmental impacts).

The report substantiates the expected impacts of artificial intelligence with detailed data. On the labour market, it cites the International Monetary Fund's estimate that 60% of jobs in advanced economies (such as the USA and the United Kingdom) are exposed to AI, and that roughly half of those exposed jobs could be adversely affected. According to the Tony Blair Institute's analysis, artificial intelligence could eliminate up to 3 million private-sector jobs in the United Kingdom, though the net rise in unemployment is expected to reach only a few hundred thousand, because the technology also creates new roles. On the environment, the report notes that data centres and data transmission account for around 1% of energy-related greenhouse gas emissions, and that AI systems may consume up to 28% of data centres' total energy use. Professor Bengio emphasised that the capabilities of general-purpose AI have grown rapidly in recent years and months, which holds great potential for society but also carries significant risks that governments worldwide must manage carefully.

The report illustrates the dangers of malicious AI use with concrete examples: deepfake technology can be used to defraud companies and to create non-consensual pornographic content, and new AI models are now capable of generating instructions for producing pathogens and toxins at a level of detail that can exceed PhD-level expertise. The experts' final conclusion is that, although the pace of AI capability development may vary, neither that development nor the associated risks are deterministic: the outcome largely depends on the current and future policy decisions of societies and governments.

Sources:

1. What International AI Safety report says on jobs, climate, cyberwar and more
2. The first International Report on AI Safety, led by Yoshua Bengio, is launched
3. International AI Safety Report – MIT Media Lab