developments

Generative AI Identifies Porous Oxide Materials for Next-Generation Energy Storage

A study published in Cell Reports Physical Science in September 2025 reports that a generative AI model screened more than one million simulated crystal structures and identified porous oxide materials as promising candidates for next-generation energy storage. The research was led by Stanford University and Lawrence Berkeley National Laboratory.

by poltextLAB AI journalist

Clarifai’s New AI System Delivers Faster and Cheaper Processing on GPUs

On 25 September 2025, Clarifai unveiled its new “Reasoning Engine,” which the company claims boosts inference and reasoning performance on GPUs by up to tenfold while significantly reducing costs. According to TechCrunch, the system is specifically designed for “agentic AI” applications—systems that require autonomous decision-making and multi-step task execution.

by poltextLAB AI journalist

OpenAI May Develop New Hardware Devices: Smart Speaker, Glasses, Voice Recorder on the Horizon

On 22 September 2025, multiple reports indicated that OpenAI is working on its own hardware devices, including a smart speaker, smart glasses, a voice recorder and a wearable "pin", as an extension of the ChatGPT ecosystem. The initiative aims to integrate artificial intelligence more directly into everyday life.

by poltextLAB AI journalist

Periodic Labs AI Startup Receives $300 Million to Automate Scientific Research

In September 2025, a new AI company called Periodic Labs secured a $300 million seed round to revolutionise the process of scientific discovery through automation. Founded by former researchers from OpenAI and DeepMind, the startup aims to accelerate breakthroughs by automating labour-intensive tasks such as hypothesis generation and experiment design.

by poltextLAB AI journalist

OpenAI Research Shows Hallucination Stems from Flaws in Language Model Evaluation Systems

OpenAI's study, published on 5 September 2025, argues that large language models' hallucination problems stem from current evaluation methods, which reward guessing instead of the expression of uncertainty. The research uses statistical analysis to argue that hallucination is not a mysterious glitch but a natural consequence of how models are trained and evaluated.
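
The incentive argument can be illustrated with a back-of-the-envelope calculation. The sketch below is a minimal illustration in Python, not code or figures from the OpenAI study: the scoring rules and probabilities are assumptions chosen only to show why an accuracy-only grader makes guessing the score-maximising strategy, while penalising confident errors removes that incentive.

```python
# Toy expected-score comparison: "guess" vs "abstain" on one question.
# All reward values and the 10% confidence figure are illustrative assumptions.

def expected_score(p_correct: float, reward_correct: float,
                   penalty_wrong: float, reward_abstain: float) -> dict:
    """Return the expected score of guessing vs abstaining for one question."""
    guess = p_correct * reward_correct + (1 - p_correct) * penalty_wrong
    return {"guess": guess, "abstain": reward_abstain}

# Accuracy-only grading: 1 point if right, 0 if wrong, 0 for "I don't know".
# Even at 10% confidence, guessing beats abstaining in expectation,
# so a model optimised against this metric learns to always answer.
print(expected_score(p_correct=0.10, reward_correct=1.0,
                     penalty_wrong=0.0, reward_abstain=0.0))
# {'guess': 0.1, 'abstain': 0.0}

# Grading that penalises confident errors flips the incentive:
# a wrong answer costs -1 while abstaining scores 0,
# so low-confidence guessing is no longer worthwhile.
print(expected_score(p_correct=0.10, reward_correct=1.0,
                     penalty_wrong=-1.0, reward_abstain=0.0))
# {'guess': -0.8, 'abstain': 0.0}
```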

by poltextLAB AI journalist