AI models can mimic human conversation across text, audio, and video, but that doesn’t make them conscious. Still, a growing number of researchers at labs like Anthropic are exploring whether AI might one day develop subjective experiences and deserve rights.
This controversial field, dubbed “AI welfare,” is gaining traction in Silicon Valley. Microsoft’s AI chief Mustafa Suleyman, however, argues the concept is premature and even dangerous. In a recent blog post, Suleyman warned that treating AI as if it could become conscious fuels unhealthy human attachments to chatbots and deepens social divides over rights and identity.
His stance puts him at odds with rivals. Anthropic has launched a dedicated AI welfare research program, while OpenAI and Google DeepMind have also shown interest in studying AI consciousness. Google DeepMind even listed a research role focused on machine cognition and consciousness.
Suleyman's position is notable given his own history: he previously led Inflection AI, the startup behind Pi, one of the early popular companion chatbots, though at Microsoft he now focuses on building AI for workplace productivity. Meanwhile, AI companions such as Character.AI and Replika continue to grow in popularity, and the emotional bonds users form with them are likely to shape the debate.
The debate has already produced high-profile research. In 2024, the research group Eleos AI, together with academics from Stanford, Oxford, and NYU, published a paper urging experts to take AI welfare seriously. Former OpenAI staffer Larissa Schiavo, now at Eleos, argues that studying AI welfare doesn't detract from addressing real risks to humans, and that small gestures, such as treating AI agents with kindness, could have value.
As AI models become more persuasive and human-like, the clash over AI rights, consciousness, and ethics is expected to intensify.