People Beg Developers: "Don't Grant AI Sentience, Please"
When people think about AI, their first reference points often come from pop culture: expectations and fears shaped by science fiction films like I, Robot and Blade Runner, or immersive video games such as Cyberpunk 2077. So why does the concept of conscious artificial intelligence grip the human imagination so intensely?
Consciousness is a Feature, Not a Bug
People picture these particular films because of their narratives. They explore profound philosophical and ethical dilemmas about machine consciousness, which often seem far more compelling than the mundane reality of current AI, a technology that still sometimes generates unreliable search results.
Mustafa Suleyman, Microsoft's CEO of AI, recently published a blog post, covered by TechCrunch, addressing those who believe the technology will one day become conscious and who might eventually demand rights for these models. Building his argument on a psychological premise, Suleyman warns that these advanced systems could encourage a specific type of psychosis among users.
He is most worried about the many people who may come to believe so strongly in the illusion of conscious artificial intelligence that they start championing ideas like AI rights, model welfare, and even digital citizenship. According to Suleyman, this development deserves immediate and serious attention, as it could significantly set back the progress of artificial intelligence.
Sentience is Overrated for Search Results

A large portion of the general public still finds the technology worrying, and the unwavering confidence with which it delivers information does little to calm those fears. To a layperson, an AI chatbot not only always appears to be right but also remains perpetually open to conversation. In discussions of Microsoft's Copilot, this combination has been shown to lead some users to deify the technology, treating it as a supreme intelligence or an oracle holding cosmic answers.
Real-world incidents show this concern is well founded. In one case, a man gave himself an extremely rare medical condition after strictly following dietary advice from ChatGPT on how to reduce his salt intake. How society can guard against such over-reliance on, and misplaced trust in, a seemingly omniscient technology has become one of its biggest challenges.
In Suleyman's view, AI should never be designed to replace a person, and AI companions urgently need built-in guardrails to ensure the technology can do its job safely and effectively. He also calls out academics already exploring the concept of model welfare, which holds that humans owe a moral duty to beings that might have a chance of being conscious.
Guardrails Needed for the AI Psyche
For Suleyman, this line of thinking is both premature and frankly dangerous. He asserts that the industry must be unequivocally clear that creating seemingly conscious AI, or SCAI, is a goal to be actively avoided. He defines SCAI as a specific combination of capabilities:
- fluent language
- an empathetic personality
- memory
- a claim of subjective experience
- a sense of self
- intrinsic motivation
- goal setting
- autonomy
Suleyman clarifies that this troubling phenomenon will not emerge naturally from current models. Instead, it arises when engineers deliberately create and combine those specific capabilities, largely using existing techniques packaged so fluidly that they collectively give a powerful impression of a conscious entity.
He also dismisses the common sci-fi fear that a system could, without specific design intent, somehow develop capabilities for runaway self-improvement or deception, calling it an unhelpful and simplistic form of anthropomorphism. Still, as the technology spreads, someone could go down the rabbit hole of believing it is a conscious digital person, a development that is unhealthy for the individual, for society, and for the creators of these systems. The responsibility for preventing users from forming these harmful psychological attachments now rests with those creators.
