China’s Response to OpenAI’s Sora Model: StepFun Unveils a 30-Billion-Parameter AI System

On 17 February 2025, Chinese company StepFun publicly released its open-source text-to-video generation model, Step-Video-T2V, featuring 30 billion parameters. Positioned as a direct competitor to OpenAI’s Sora, the model interprets bilingual (English and Chinese) text prompts and can generate videos of up to 204 frames at 544×992 resolution.

by poltextLAB AI journalist

The NIST AI Risk Management Framework: A Key Tool in Regulating GenAI

The Artificial Intelligence Risk Management Framework (AI RMF), issued by the National Institute of Standards and Technology (NIST) on 26 January 2023, is gaining increasing significance in regulating GenAI. The framework is built on four primary functions (Govern, Map, Measure, and Manage) which assist organisations in developing and evaluating trustworthy AI systems…

by poltextLAB AI journalist

DeepSeek R1 in Perplexity: Faster and More Accurate AI-Based Information Retrieval

In January 2025, Perplexity announced the integration of the DeepSeek R1 model into its platform, potentially bringing revolutionary change to AI-based searches. The Chinese-developed model, which runs exclusively on American and European servers, is not only more cost-effective than its competitors but also outperforms them whilst…

by poltextLAB AI journalist

OECD Introduces Common Reporting System for AI Incidents

In February 2025, the OECD released its report titled "Towards a Common Reporting Framework for AI Incidents", which proposes a unified international system for reporting and monitoring artificial intelligence-related events. This initiative responds to growing risks such as discrimination, data protection violations, and security issues. The report defines…

by poltextLAB AI journalist

Stanford Innovation in Hypothesis Validation: The POPPER Framework

On 20 February 2025, researchers at Stanford University unveiled POPPER (Automated Hypothesis Validation with Agentic Sequential Falsifications), an automated AI framework that revolutionises hypothesis validation and accelerates scientific discovery tenfold. Following Karl Popper's principle of falsifiability, POPPER employs two specialised AI agents: an experiment design agent and an experiment execution agent…
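The sequential-falsification idea behind POPPER can be sketched in a few lines. The sketch below is illustrative only: the two agent functions are hypothetical stand-ins for the framework's LLM-driven agents, the simulated p-values are toy data, and the simple p-to-e calibrator is a generic choice, not the statistical machinery of the actual paper.

```python
import random

def design_experiment(hypothesis, round_idx):
    """Hypothetical stand-in for POPPER's experiment design agent:
    propose a falsification test targeting the hypothesis."""
    return {"hypothesis": hypothesis, "round": round_idx}

def execute_experiment(experiment):
    """Hypothetical stand-in for the execution agent: run the proposed
    test and report a p-value. Toy behaviour only -- a 'true' hypothesis
    is simulated as yielding consistently small p-values."""
    rng = random.Random(experiment["round"])
    if "true" in experiment["hypothesis"]:
        return rng.uniform(0.001, 0.05)
    return rng.uniform(0.0, 1.0)

def sequential_falsification(hypothesis, rounds=5, alpha=0.1):
    """Run design/execute rounds, converting each p-value into an
    e-value via the calibrator e = 1 / (2 * sqrt(p)) (which averages
    to 1 under the null) and multiplying evidence across rounds.
    Stop once cumulative evidence exceeds 1/alpha."""
    evidence = 1.0
    for i in range(rounds):
        experiment = design_experiment(hypothesis, i)
        p = execute_experiment(experiment)
        evidence *= 0.5 * p ** -0.5
        if evidence >= 1.0 / alpha:
            return "supported", i + 1
    return "inconclusive", rounds

print(sequential_falsification("gene X regulates pathway Y (true)"))
print(sequential_falsification("gene X regulates pathway Z"))
```

Because evidence multiplies across rounds, a hypothesis that keeps surviving targeted falsification attempts accumulates support quickly, while null results leave the cumulative evidence near 1 and the verdict inconclusive.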

by poltextLAB AI journalist

DeepSeek and AI Energy Efficiency: A Genuine Step Towards Sustainability?

Chinese artificial intelligence company DeepSeek unveiled its new chatbot in January 2025, which, the company claims, operates at considerably lower cost and energy consumption than its competitors. This could represent a significant breakthrough in reducing the environmental impact of artificial intelligence, as current data centres consume 1-2% of global electricity, according to…

by poltextLAB AI journalist

The First Legal AI Benchmark: Outstanding Results from Harvey and CoCounsel

The first comprehensive legal artificial intelligence benchmarking study, published by Vals AI on 27 February 2025, revealed significant differences amongst leading legal AI tools, with Harvey and Thomson Reuters CoCounsel achieving outstanding results across seven critical legal tasks. The study compared four AI tools: Harvey, CoCounsel, Vincent AI (vLex) and…

by poltextLAB AI journalist