Microsoft AI Chief Warns Against Studying AI Consciousness

Source: Unsplash - Andrej Lišakov

Mustafa Suleyman, CEO of Microsoft’s artificial intelligence division, has come out strongly against research into AI consciousness, which he considers premature and dangerous. He has set out this position in several international interviews, emphasising that there is currently no evidence that today’s AI models possess consciousness or subjective experience. Suleyman is especially concerned about what he calls “AI psychosis”, a phenomenon in which users come to believe that AI systems have human-like consciousness. This, he argues, can lead to unhealthy attachments, misunderstandings, and even mental health problems. He also fears that debates over AI rights and artificial consciousness could open yet another divisive front in a world already polarised by disputes over identity and rights.

While Suleyman urges caution, Silicon Valley is debating the issue of “AI welfare” ever more openly. Anthropic, for instance, has launched a dedicated research programme and in August 2025 introduced a feature that allows its Claude assistant to end abusive or harmful conversations. Researchers at OpenAI have likewise backed the study of AI welfare, while Google DeepMind recently advertised a research position focused specifically on machine consciousness and the societal impact of artificial agent systems.

According to Suleyman, however, it is unrealistic to expect current large language models to develop consciousness spontaneously. His worry is rather that some companies will deliberately design systems whose behaviour imitates human emotions and experiences. This, he argues, would be misleading and even dangerous, as people might attribute personality and intentions to systems that do not actually possess them. He concluded by stressing that AI should not be treated as a “person” but as a tool serving human purposes: the goal of development, in his view, is not to give AI an independent identity or consciousness, but to support human work and life.

Sources:

Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness | TechCrunch
As AI chatbots surge in popularity, Mustafa Suleyman argues that it’s dangerous to consider how these systems could be conscious.
Microsoft boss troubled by rise in reports of ‘AI psychosis’
Mustafa Suleyman said there was still “zero evidence of AI consciousness today”.
Microsoft’s AI Chief Says Machine Consciousness Is an ‘Illusion’
Mustafa Suleyman says that designing AI systems to exceed human intelligence—and to mimic behavior that suggests consciousness—would be “dangerous and misguided.”