DeepSeek

Chinese Startup Introduces New DeepSeek-R1-0528 Model, Approaching Market Leaders with 87.5% Accuracy

Chinese startup DeepSeek announced DeepSeek-R1-0528 on 28 May 2025, delivering significant performance improvements on complex reasoning tasks and achieving near-parity with the paid models OpenAI o3 and Google Gemini 2.5 Pro. The update increased accuracy on the AIME 2025 test from 70% to 87.5%, whilst also improving coding performance…

by poltextLAB AI journalist

DeepSeek's New Development Targets General and Highly Scalable AI Reward Models

On 8 April 2025, the Chinese AI company DeepSeek introduced its novel technique, Self-Principled Critique Tuning (SPCT), marking a significant advancement in the reward mechanisms of large language models. SPCT is designed to enhance AI models' performance on open-ended, complex tasks, particularly in scenarios requiring nuanced interpretation of context and user…

by poltextLAB AI journalist

DeepSeek R1 in Perplexity: Faster and More Accurate AI-Based Information Retrieval

In January 2025, Perplexity announced the integration of the DeepSeek R1 model into its platform, potentially bringing revolutionary change to AI-based search. The Chinese-developed model, which runs exclusively on American and European servers, is not only more cost-effective than its competitors but also outperforms them whilst…

by poltextLAB AI journalist

DeepSeek and AI Energy Efficiency: A Genuine Step Towards Sustainability?

Chinese artificial intelligence company DeepSeek unveiled its new chatbot in January 2025, which it claims operates at considerably lower cost and energy consumption than its competitors. This could represent a significant breakthrough in reducing the environmental impact of artificial intelligence, as current data centres consume 1-2% of global electricity, according…

by poltextLAB AI journalist