The IBM 2025 AI Ethics Report: Values and Risks of AI Agent Systems in the Corporate Environment

Source: Unsplash - Denny Müller

In March 2025, the IBM AI Ethics Board published a comprehensive report on artificial intelligence agents, detailing the opportunities they present, their associated risks, and recommended mitigation strategies. The report highlights that these systems can create significant value for companies while introducing new types of sociotechnical risks that require advanced governance and ethical oversight.

According to IBM's report, AI agents offer four main benefits: augmentation of human intelligence, automation, improved efficiency and productivity, and enhanced decision-making and response quality. As a specific example, the report mentions that IBM's AskHR digital assistant already handles 94% of employee inquiries and resolves approximately 10.1 million interactions annually, enabling IBM's HR team to focus on strategic tasks. The report identifies four key characteristics associated with AI agents: opaqueness, open-endedness in resource/tool selection, complexity, and non-reversibility, which collectively increase the risk profile of these systems.

Among the risks, the report highlights value misalignment, discriminatory actions, data biases, over- or under-reliance, and issues with computational efficiency, robustness, privacy, transparency, and explainability. IBM's recommended mitigation strategies include watsonx.governance, which enables organizations to implement responsible, transparent, and explainable AI; watsonx.ai, for simplifying, unifying, and optimizing AgentOps; and IBM Guardium AI Security, for continuous monitoring of security controls. Fabrizio Degni, Chief of Artificial Intelligence, noted that AI agents are being published, promoted, and almost universally recognized as powerful use-case enablers, yet they remain high-risk instruments that demand multilayered ethical guardrails and continuous monitoring.
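The mitigation themes the report emphasizes (human oversight, transparency, continuous monitoring of agent actions) can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the names `AgentAction` and `GuardrailPipeline`, the policy checks, and the escalation logic are invented here and are not part of IBM's tooling or the report itself.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch of layered guardrails for an AI agent:
# policy checks first, then human sign-off for non-reversible
# actions, with every decision written to an audit log.

@dataclass
class AgentAction:
    name: str
    reversible: bool

@dataclass
class GuardrailPipeline:
    checks: List[Callable[[AgentAction], bool]]
    audit_log: List[str] = field(default_factory=list)

    def authorize(self, action: AgentAction, human_approved: bool = False) -> bool:
        # Layer 1: automated policy checks (e.g., allow-lists, rate limits).
        for check in self.checks:
            if not check(action):
                self.audit_log.append(f"BLOCKED {action.name}: failed policy check")
                return False
        # Layer 2: non-reversible actions require explicit human approval.
        if not action.reversible and not human_approved:
            self.audit_log.append(f"ESCALATED {action.name}: awaiting human approval")
            return False
        # Layer 3: allowed actions are still logged for transparency.
        self.audit_log.append(f"ALLOWED {action.name}")
        return True

# Example policy: never allow a (hypothetical) bulk-delete action.
pipeline = GuardrailPipeline(checks=[lambda a: a.name != "delete_records"])
print(pipeline.authorize(AgentAction("send_report", reversible=True)))    # True
print(pipeline.authorize(AgentAction("wire_transfer", reversible=False)))  # False: escalated
```

The design choice mirrors the report's framing: reversible, policy-compliant actions proceed automatically, while non-reversibility triggers a human-in-the-loop gate, and the audit log addresses the opaqueness concern.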

Sources:

1. IBM: Responsible AI – Executive Guide
A practical framework and downloadable resource for implementing ethical and accountable AI in enterprise settings.

2. IBM: AI Agents: Opportunities, risks, and mitigations | Fabrizio Degni
AI agents are a hot topic for 2025, and what we are experiencing are progressive advancements of this new "paradigm" in three concurrent streams, in my opinion:
- Autonomy and decision-making: a shift from simple task executors to systems capable of independent decision-making (operating in dynamic environments, adapting to changes, and performing complex tasks without a human in the loop);
- Collaboration among agents: multi-agent systems are becoming a standard, and these agents can work together to solve complex problems;
- Integration with LLMs: LLMs are powering their skills with deep language understanding, reasoning, and decision-making (e.g., for multistep workflows and for interacting seamlessly with humans and other systems).

I have concerns about the acceleration we see toward letting them handle the "last mile" as well: the decision part, until now human-driven, is progressively becoming part of their workflows, and that, in my opinion, should be carefully managed. Thanks to Paolo Rizza for making me aware of this paper, published in March 2025 by the IBM AI Ethics Board, where AI agents are explored from three perspectives: the benefits (productivity, automation, augmentation), the risks (trust erosion, misalignment, security, job impact), and the mitigation strategies (ethics board, governance tools, transparency, human oversight). AI agents are being published, promoted, and almost universally recognized as powerful use-case enablers, yet they are high-risk instruments that demand multilayered ethical guardrails and continuous monitoring.
If the benefits are well promoted by the marketing and the hype, I'd like to focus on the risks:
- Opaqueness, due to limited visibility into how AI agents operate, including their inner workings and interactions;
- Open-endedness in selecting resources, tools, or other AI agents to execute actions (this may add to the possibility of executing unexpected actions);
- Complexity, which emerges as a consequence of open-endedness and compounds as open-endedness scales;
- Non-reversibility, as a consequence of taking actions that could impact the world.

We shifted in a couple of years from rule-based systems with minimal autonomy to a present of LLM-driven agents with autonomous tool execution, toward a future where it seems that multi-agent ecosystems with emergent, non-reversible outcomes will lead our processes... but what about this near future? I have created two infographics, which you can find in the comments, where "side-effects" and "long-term effects" are pointed out: if on one side agents offer exponential value across enterprise functions, on the other they also introduce a new class of sociotechnical risk that demands evolved governance, ethical oversight, and a rethinking of human-AI collaboration models. What about mitigation? IBM proposes its suite, but generally speaking it's about end-to-end governance.

🛜 IBM: https://shorturl.at/QlH4y

#ArtificialIntelligence #AI #AIAgents #AIEthics #GovernanceAI #Risks #CyberSecurity