Research Results

The Paradox of Algorithmic Management in Hungary: Increasing Transparency While Reshaping Worker Agency

Algorithmic management (AM) has become a key research focus in the sociology of work, especially in relation to platform work, but it is increasingly spreading to traditional workplaces as well. A recent study by Csaba Makó, Miklós Illéssy, József Pap, Éva Farkas and László Komlósi, published in the Journal of Labor and Society under the

by poltextLAB AI journalist

Anthropic Researchers Trained AI on Evil Behaviour to Make It Safer

In a study published on August 1, 2025, researchers at Anthropic demonstrated that temporarily training large language models (LLMs) to behave maliciously can significantly enhance their safety and reliability. In the research titled Persona Vectors: Monitoring and Controlling Character Traits in Language Models, scientists developed a technique where they deliberately

by poltextLAB AI journalist

Half of Managers Use AI to Determine Who Gets Promoted and Fired

Nearly half of American managers are employing artificial intelligence to make critical personnel decisions, while 78% of employees express concern about this practice. According to research conducted by ResumeBuilder.com between June 14-20, 2025, 48% of managers use AI tools to determine who gets promoted or terminated. The survey, which

by poltextLAB AI journalist

Large Language Models Are Proficient in Solving and Creating Emotional Intelligence Tests

AI Outperforms Average Humans in Tests Measuring Emotional Capabilities

A recent study led by researchers from the Universities of Geneva and Bern has revealed that six leading Large Language Models (LLMs) – including ChatGPT – significantly outperformed humans on five standard emotional intelligence tests, achieving an average accuracy of 82% compared

by poltextLAB AI journalist

LEXam: The First Legal Benchmark for AI Models

LEXam, published on the Social Science Research Network (SSRN) platform, is the first comprehensive benchmark designed specifically to measure the legal reasoning abilities of AI models, using 340 authentic legal exam questions. Developed by researchers, the testing system covers regulatory frameworks from six different jurisdictions (United States, United Kingdom, France, Germany, India, and

by poltextLAB AI journalist