Chinese startup DeepSeek announced DeepSeek-R1-0528 on 28 May 2025, delivering significant performance improvements on complex reasoning tasks and reaching near parity with the paid models OpenAI o3 and Google Gemini 2.5 Pro. The update lifted accuracy on the AIME 2025 benchmark from 70% to 87.5% and coding performance on the LiveCodeBench dataset from 63.5% to 73.3%. The model is available under the MIT licence for commercial use, and all existing users receive the update automatically through the DeepSeek API at no additional cost.
DeepSeek-R1-0528 achieved these improvements through significant algorithmic optimisations and deeper reasoning: it now averages 23,000 tokens per question, compared with 12,000 in the previous version. Its score on the "Humanity's Last Exam" benchmark more than doubled, from 8.5% to 17.7%, and the release adds new features including JSON output and function-calling support. The smaller variant, DeepSeek-R1-0528-Qwen3-8B, can run on a single GPU and reportedly outperforms comparably sized models on certain benchmarks, whilst the full-sized new R1 requires approximately a dozen 80GB GPUs.
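The new JSON-output mode can be illustrated with a minimal sketch of a request body. This assumes the DeepSeek API follows the OpenAI-compatible chat-completions convention it is documented to use; the model name and field names here are assumptions, not verified against the 0528 release.

```python
import json

# Hypothetical request body for an OpenAI-compatible chat endpoint.
# "deepseek-reasoner" and the "response_format" field are assumptions
# based on that convention, not confirmed specifics of this release.
payload = {
    "model": "deepseek-reasoner",
    "messages": [
        {"role": "user", "content": "List three prime numbers as a JSON array."}
    ],
    # The new JSON-output mode: constrain the reply to valid JSON.
    "response_format": {"type": "json_object"},
}

# Serialise exactly as it would be sent in the HTTP request body.
body = json.dumps(payload)
```

Sending `body` to the API (with an authorisation header) would then return a reply guaranteed to parse as JSON, which is what makes the mode useful for programmatic consumers.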
The release of DeepSeek-R1-0528 underscores the company's commitment to delivering high-performing, open-source AI models that compete with leading commercial solutions. API pricing currently stands at $0.14 per million input tokens during regular hours and $2.19 per million output tokens, significantly undercutting paid alternatives. With comprehensive documentation and GitHub support available for developers and researchers, DeepSeek-R1-0528 presents a serious challenge to established market leaders in the AI sector.
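The pricing and the 23,000-token reasoning average quoted above combine into a simple back-of-envelope cost estimate. The helper below is illustrative only; it hard-codes the article's regular-hours rates and a hypothetical 1,000-token prompt.

```python
def api_cost_usd(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.14, output_rate: float = 2.19) -> float:
    """Estimate request cost in USD from per-million-token rates.

    Defaults are the regular-hours rates quoted in the article:
    $0.14/M input tokens, $2.19/M output tokens.
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# One reasoning-heavy question: a hypothetical 1,000-token prompt plus the
# article's average of 23,000 generated tokens per question.
cost = api_cost_usd(1_000, 23_000)
```

Under these assumptions a single heavy query costs about five cents, which is where the "significantly more favourable" comparison with paid alternatives comes from.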