OpenAI Has Submitted a Proposal Package for the United States' New AI Action Plan

On 13 March 2025, OpenAI submitted a comprehensive proposal package to the White House Office of Science and Technology Policy (OSTP) for the United States’ forthcoming Artificial Intelligence Action Plan. The company’s document clearly advocates for maintaining U.S. AI dominance, while emphasising that Chinese advancements, particularly the emergence…

by poltextLAB AI journalist

Google Has Introduced a New Model Family: Gemini 2.5, the Company’s Most Advanced Reasoning Model to Date

On 25 March 2025, Google unveiled the Gemini 2.5 artificial intelligence model family, the company’s most advanced reasoning AI system to date. The first released version, Gemini 2.5 Pro Experimental, can reason before responding, significantly improving performance and accuracy. The model is already available…

by poltextLAB AI journalist

The Environmental Costs of Artificial Intelligence: A Growing Concern

The rapid integration of Artificial Intelligence (AI) into global economies has driven transformative advancements in sectors such as healthcare and agriculture. However, this technological revolution incurs significant environmental costs, particularly through substantial energy consumption and greenhouse gas (GHG) emissions. The carbon footprint of AI, stemming from energy-intensive processes like hardware…

NextGenAI: OpenAI's New Consortium Advances AI-Driven Research and Education

On 4 March 2025, OpenAI launched the NextGenAI consortium, partnering with 15 leading research institutions to accelerate the use of artificial intelligence in education and research. The company is investing $50 million in research grants, computational resources, and API access to support researchers, educators, and students. Through this initiative, OpenAI’s…

by poltextLAB AI journalist

Tencent Has Unveiled a New Model: 44% Faster Response Time and Double the Word Generation Speed

On 27 February 2025, Chinese tech giant Tencent unveiled its latest “fast-thinking” artificial intelligence model, the Hunyuan Turbo S. Compared to the DeepSeek R1 model, it boasts a 44% reduction in response time and twice the word generation speed. The new model adopts an innovative Hybrid-Mamba-Transformer architecture, which significantly reduces…

by poltextLAB AI journalist

Cost Optimisation Strategies: Token Usage Optimisation, Batch Processing, and Prompt Compression Algorithms

Contemporary researchers face unprecedented financial barriers when engaging with state-of-the-art language models, particularly through API-based services where costs are directly proportional to token consumption and computational resource utilisation. The challenge is compounded by the increasing complexity of research tasks, which require extensive prompt engineering, iterative model interactions, and large-scale data processing operations.
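
As a rough illustration of how token consumption drives API cost, the sketch below estimates per-request and batch costs and applies a trivial prompt-compression pass. It is a minimal sketch under stated assumptions, not code from the article: the per-1,000-token prices, the four-characters-per-token heuristic, and the helper names are placeholders chosen for illustration only.

```python
# Illustrative sketch: how token counts translate into API cost, plus a
# trivial prompt-compression pass. All figures below are placeholder
# assumptions, not any provider's actual pricing or tokeniser.

CHARS_PER_TOKEN = 4            # rough heuristic for English text (assumption)
PRICE_PER_1K_INPUT = 0.005     # hypothetical USD per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.015    # hypothetical USD per 1,000 output tokens


def estimate_tokens(text: str) -> int:
    """Approximate the number of tokens in a text using a character heuristic."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def estimate_cost(prompt: str, expected_output_tokens: int) -> float:
    """Estimate the cost of a single request: input tokens plus expected output."""
    input_tokens = estimate_tokens(prompt)
    return (
        (input_tokens / 1000) * PRICE_PER_1K_INPUT
        + (expected_output_tokens / 1000) * PRICE_PER_1K_OUTPUT
    )


def compress_prompt(prompt: str) -> str:
    """Very simple compression: collapse whitespace and drop duplicate lines."""
    seen, kept = set(), []
    for line in prompt.splitlines():
        line = " ".join(line.split())
        if line and line not in seen:
            seen.add(line)
            kept.append(line)
    return "\n".join(kept)


if __name__ == "__main__":
    # Hypothetical batch of 100 identical summarisation requests.
    prompts = ["Summarise the following document ..."] * 100
    total = sum(
        estimate_cost(compress_prompt(p), expected_output_tokens=300)
        for p in prompts
    )
    print(f"Estimated cost for a batch of {len(prompts)} requests: ${total:.2f}")
```

In practice, a provider's own tokeniser and published price list should replace these placeholders, and some providers offer discounted batch endpoints for non-interactive workloads, which is where the batch-processing strategy named in the title pays off.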