Chinese Startup Introduces New DeepSeek-R1-0528 Model, Approaching Market Leaders with 87.5% Accuracy

Chinese startup DeepSeek announced DeepSeek-R1-0528 on 28 May 2025, delivering significant performance improvements on complex reasoning tasks and achieving near parity with the paid models OpenAI o3 and Google Gemini 2.5 Pro. The update raised accuracy on the AIME 2025 test from 70% to 87.5%, whilst also improving coding performance

by poltextLAB AI journalist

Persona-based Prompt Patterns: Mega Prompts, Expert Prompts, and Tree of Thoughts

Persona-based approaches contextualise AI responses within specific roles, expertise domains, or cognitive frameworks, thereby improving both the relevance and the quality of generated outputs (Kong et al. 2024). These techniques encompass mega prompts providing extensive contextual information, expert prompts assigning specific professional roles, and advanced reasoning frameworks such as Tree of Thoughts
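The three pattern families above can be sketched as simple prompt-construction helpers. This is a minimal illustration, not a method from the article: the function names, template wording, and example roles are all assumptions chosen for clarity.

```python
# Illustrative sketch of persona-based prompt patterns.
# All names and templates here are hypothetical, not from the article.

def expert_prompt(role: str, task: str) -> str:
    """Expert prompt: assign the model a specific professional role."""
    return f"You are {role}. {task}"

def mega_prompt(context: str, role: str, task: str, constraints: list[str]) -> str:
    """Mega prompt: bundle extensive context, a role, and explicit constraints."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return (f"Context:\n{context}\n\n"
            f"Role: {role}\n\n"
            f"Task: {task}\n\n"
            f"Constraints:\n{rules}")

def tree_of_thoughts_prompt(task: str, n_branches: int = 3) -> str:
    """Tree of Thoughts (prompt-only variant): ask the model to explore
    several intermediate reasoning paths, evaluate them, then commit."""
    return (f"Task: {task}\n"
            f"Propose {n_branches} distinct solution approaches, briefly "
            "evaluate each, then continue only with the most promising "
            "one and give the final answer.")

# Example usage: building an expert prompt for a research task.
prompt = expert_prompt("a senior political scientist",
                       "Summarise the key arguments of this bill.")
print(prompt)
```

Each helper only assembles a prompt string; in practice the result would be sent to an LLM API of the researcher's choice.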

Sakana AI Introduces the Continuous Thought Machine: A New AI Architecture Built on Synchronised Neurons

Tokyo-based Sakana AI, co-founded by former top Google AI scientists, unveiled the Continuous Thought Machine (CTM) on May 12, 2025. The CTM is the first artificial neural network that uses neuron synchronisation as its core reasoning mechanism, enabling AI to "think" through problems step-by-step. This represents a significant

by poltextLAB AI journalist

Perplexity Launches Advanced AI Research Platform for Business

On 23 May 2025, Perplexity officially launched Perplexity Labs, an early-access environment developed specifically for business users, enabling them to create customised AI solutions with their own organisational data. The system is built on a proprietary 100-billion-parameter language model called Perplexity Labs LLM, which offers functionality significantly

by poltextLAB AI journalist

Zero-Shot, One-Shot, and Few-Shot Prompting

Example-driven prompts leverage demonstrations to guide large language models (LLMs) towards precise and contextually relevant outputs, forming a critical component of prompt engineering. This category includes zero-shot, one-shot, and few-shot prompting, each offering a different degree of guidance for research tasks. These techniques enable researchers to tailor LLM
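The difference between the three variants is simply how many (input, output) demonstrations precede the query. A minimal sketch, with hypothetical template wording and function names chosen for illustration:

```python
# Illustrative sketch of zero-, one-, and few-shot prompt construction.
# The template format and names are assumptions, not from the article.

def build_prompt(instruction: str,
                 examples: list[tuple[str, str]],
                 query: str) -> str:
    """Assemble a prompt with zero or more (input, output) demonstrations."""
    parts = [instruction]
    for x, y in examples:                        # each demonstration guides the model
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {query}\nOutput:")     # the model completes this line
    return "\n\n".join(parts)

instruction = "Classify the sentiment of each sentence as positive or negative."
demos = [("The plot was gripping.", "positive"),
         ("The acting felt wooden.", "negative")]

zero_shot = build_prompt(instruction, [], "A wonderful film.")        # no examples
one_shot = build_prompt(instruction, demos[:1], "A wonderful film.")  # one example
few_shot = build_prompt(instruction, demos, "A wonderful film.")      # several examples
```

Zero-shot relies entirely on the instruction, while each added demonstration constrains the expected output format and label set more tightly.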