Google DeepMind's AlphaGeometry2 artificial intelligence system has achieved a significant breakthrough in solving Mathematical Olympiad problems, reaching an 84% success rate on geometry problems set between 2000 and 2024 and surpassing the performance of the average gold medallist. The results were published by the research team in February 2025, demonstrating the growing capability of AI systems to solve complex mathematical problems.
The new system significantly outperformed its predecessor, AlphaGeometry, which achieved a 54% solution rate on the same set of problems. AlphaGeometry2 successfully solved 42 of the 50 Olympiad geometry problems, whilst the average gold medallist solves 40.9 of them. The key to this improvement was the integration of the Gemini language model, along with new capabilities such as the manipulation of geometric objects in the plane and the solving of linear equations. According to Kevin Buzzard, a mathematician at Imperial College London, it won't be long before computers achieve maximum scores at the International Mathematical Olympiad (IMO).
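The headline figures above are straightforward to verify; a minimal arithmetic check, using only the numbers reported in the article:

```python
# Figures as reported: AlphaGeometry2 solved 42 of 50 IMO geometry
# problems (2000-2024); the average gold medallist solves 40.9;
# the original AlphaGeometry had a 54% solve rate.
solved_ag2 = 42
total = 50
gold_avg = 40.9
prior_rate = 0.54

ag2_rate = solved_ag2 / total
print(f"AlphaGeometry2 solve rate: {ag2_rate:.0%}")            # 84%
print(f"Average gold medallist rate: {gold_avg / total:.1%}")  # 81.8%
print(f"Gain over AlphaGeometry: {ag2_rate - prior_rate:.0%} points")  # 30% points
```

This confirms that 42/50 corresponds to the stated 84%, and that the average gold medallist's 40.9 solved problems equates to roughly 81.8%, just below the system's rate.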
The remaining unsolved problems fall into two main categories: six problems that the system in its current form cannot formalise, owing to limitations of the language model, and two problems (IMO 2018 P6, IMO 2023 P6) that require advanced geometric techniques such as inversion, projective geometry, or the radical axis. Solving these would require longer inference time, lengthier proofs, and additional auxiliary constructions.