NextGenAI: OpenAI's New Consortium Advances AI-driven Research and Education

On 4 March 2025, OpenAI launched the NextGenAI consortium, partnering with 15 leading research institutions to accelerate the use of artificial intelligence in research and education. The company is investing $50 million in research grants, computational resources, and API access to support researchers, educators, and students.

by poltextLAB AI journalist

Tencent Unveils a New Model: 44% Faster Response Time and Double the Word Generation Speed

On 27 February 2025, Chinese tech giant Tencent unveiled its latest “fast-thinking” artificial intelligence model, Hunyuan Turbo S. Compared to the DeepSeek R1 model, it boasts a 44% reduction in response time and twice the word generation speed. The new model adopts an innovative Hybrid-Mamba-Transformer architecture.

by poltextLAB AI journalist

Cost Optimisation Strategies: Token Usage Optimisation, Batch Processing, and Prompt Compression Algorithms

Contemporary researchers face unprecedented financial barriers when working with state-of-the-art language models, particularly through API-based services, where costs are directly proportional to token consumption and computational resource use. The challenge is compounded by the increasing complexity of research tasks, which require extensive prompt engineering, iterative model interactions, and large-scale data processing operations.
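Because API costs scale roughly linearly with token counts, even small reductions in prompt length compound across a large corpus. The following minimal sketch illustrates this relationship; the per-token prices, the 4-characters-per-token heuristic, and the example prompts are illustrative assumptions, not the pricing or tokenisation of any specific provider.

```python
# Minimal sketch: rough API cost estimation for comparing prompt variants.
# Prices and the characters-per-token heuristic are illustrative assumptions.

PRICE_PER_1K_INPUT_TOKENS = 0.0025   # hypothetical USD rate for input tokens
PRICE_PER_1K_OUTPUT_TOKENS = 0.0100  # hypothetical USD rate for output tokens


def estimate_tokens(text: str) -> int:
    """Very rough heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)


def estimate_cost(prompt: str, expected_output_tokens: int) -> float:
    """Estimate the cost of a single API call in USD."""
    input_tokens = estimate_tokens(prompt)
    return (
        (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS
        + (expected_output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    )


verbose_prompt = (
    "You are a helpful assistant. Please read the following news article very "
    "carefully and then provide a detailed, thorough classification of its main "
    "topic, considering politics, economy, technology and culture: ..."
)
compressed_prompt = "Classify the article's topic (politics/economy/tech/culture): ..."

for label, prompt in [("verbose", verbose_prompt), ("compressed", compressed_prompt)]:
    per_call = estimate_cost(prompt, expected_output_tokens=50)
    # Costs scale linearly with corpus size, so per-prompt savings add up quickly.
    print(f"{label:>10}: ~${per_call:.5f} per call, ~${per_call * 10_000:.2f} for 10k documents")
```

The same arithmetic motivates batch processing: grouping several documents into one request amortises the fixed instruction tokens over many items, which is typically where the largest savings appear.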

Microsoft Phi-4: Compact Model with Multimodal Capabilities

In February 2025, Microsoft introduced two new members of the Phi-4 model family, with Phi-4-multimodal-instruct being particularly noteworthy. Despite having just 5.6 billion parameters, it can simultaneously process text, images, and audio, while its performance on certain tasks remains competitive with models twice its size.

by poltextLAB AI journalist