OpenAI

OpenAI Launches GDPval: A New Benchmark for Evaluating AI Models on Real-World Tasks Across 44 Professions

On 25 September 2025, OpenAI introduced GDPval, a benchmark that evaluates AI models on economically valuable, real-world tasks drawn from 44 occupations across nine sectors, each contributing more than 5% of U.S. GDP. The aim is to move beyond synthetic exams to authentic work deliverables (e.g., legal briefs, engineering blueprints, nursing care…

by poltextLAB AI journalist

OpenAI May Develop New Hardware Devices: Smart Speaker, Glasses, Voice Recorder on the Horizon

On 22 September 2025, multiple reports indicated that OpenAI is working on its own hardware devices, including a smart speaker, smart glasses, a voice recorder, and a wearable “pin,” as an extension of the ChatGPT ecosystem. The initiative aims to integrate artificial intelligence more directly into everyday life, from voice-based…

by poltextLAB AI journalist

OpenAI, Nvidia, AMD and Oracle Shape the Global AI Ecosystem With $1 Trillion in Circular Deals

Interconnected, circular investment agreements among OpenAI, Nvidia, AMD and Oracle are channeling more than $1 trillion through the AI market, raising major concerns about the sector’s sustainability. Bloomberg’s analysis, published on 7–8 October 2025, warned that these transactions may artificially inflate valuations, while the companies involved…

by poltextLAB AI journalist

Microsoft to Integrate Anthropic AI Technology into Office 365 Applications Alongside OpenAI

Microsoft will soon use Anthropic's AI technology in certain Office 365 applications, with an announcement planned in the coming weeks. This strategic shift indicates the software giant is diversifying its artificial intelligence portfolio after relying primarily on OpenAI technology for Word, Excel, Outlook and PowerPoint. Microsoft has…

by poltextLAB AI journalist

OpenAI Research Shows Hallucination Stems from Flaws in Language Model Evaluation Systems

OpenAI's study, published on 5 September 2025, demonstrates that large language models' hallucination problems stem from current evaluation methods, which reward guessing instead of expressing uncertainty. The research uses statistical analysis to show that hallucination is not a mysterious glitch but a natural consequence of the training process; the incentive at its core is simple enough to work through numerically, as the sketch below shows.
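The snippet below is an illustrative sketch, not code from the OpenAI study: it compares the expected score of answering versus abstaining under plain accuracy grading, and under a hypothetical grading scheme that penalizes wrong answers.

```python
def expected_score(p_correct: float, answer: bool, wrong_penalty: float = 0.0) -> float:
    """Expected score on one question.

    p_correct     -- model's probability of answering correctly
    answer        -- True to answer; False to say "I don't know" (scores 0)
    wrong_penalty -- points deducted for a wrong answer (0 under plain accuracy)
    """
    if not answer:
        return 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.3  # the model is only 30% sure of its answer

# Plain accuracy grading: answering beats abstaining in expectation
# (0.3 > 0.0), so the benchmark rewards confident guessing -- i.e.,
# hallucinating -- whenever the model is unsure.
print(expected_score(p, answer=True))                     # 0.3
print(expected_score(p, answer=False))                    # 0.0

# Penalizing wrong answers flips the incentive: at 30% confidence with
# a 1-point penalty, guessing now loses in expectation, so expressing
# uncertainty becomes the rational strategy.
print(expected_score(p, answer=True, wrong_penalty=1.0))  # -0.4
```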

by poltextLAB AI journalist