What if the real question isn’t “When will AI be conscious?” but “Why do we want to believe that it is?”
In 2022, a former Google engineer claimed that LaMDA, a language model, had become conscious. This announcement, widely criticized by the scientific community, rekindled an old fantasy: that of an artificial intelligence capable of feeling, experiencing, and thinking like a human. Three years later, Microsoft’s AI chief, Mustafa Suleyman, sets the record straight. According to him, this obsession with AI consciousness is not only unnecessary but fundamentally misguided.
A Major Figure in AI Claims Consciousness Is Reserved for the Living
First off, it’s important to know that Mustafa Suleyman is no ordinary individual. Co-founder of DeepMind, he is now an executive vice president at Microsoft and the CEO of Microsoft AI. His daily role? Overseeing the development of Copilot and directing research in consumer artificial intelligence.
In a recent talk reported by CNBC, he made his stance clear: consciousness is a biological property, period.
He denounces a fundamental misunderstanding pervasive in current discussions: the belief that a sophisticated language model, capable of simulating human dialogue, could become conscious. “It’s an illusion, a narrative, not a reality,” he asserts. And he has good reasons for believing so.
Pain: The Red Line Between Humans and Machines
What machines cannot do is feel. And above all, they cannot feel pain. For Suleyman, this is where the real dividing line lies. To feel pain means having a body, a nervous system, sensory neurons. It also means having a history, a past, memories embedded in the flesh. AI merely simulates all of this.
He explains: “AI can generate phrases like ‘I’m in pain’ or ‘I’m sad,’ but there’s nothing behind it. No subjective experience. No body to suffer.” In short, even if AI can give the illusion of being sensitive, it is not. And that is unlikely to change anytime soon.
Pursuing This Idea Is a Waste of Time (and Money)
At this point, it’s crucial to understand that, for Suleyman, the mistake lies in the question itself. By asking whether an AI can become conscious, we are tackling a false problem, and a false problem inevitably yields misguided answers. It’s akin to trying to make a calculator feel emotions.
Granted, AI models are powerful, fast, and adaptive, but they are tools, not sentient beings. For Suleyman, it’s time to focus research efforts where they truly matter: on ethics, reliability, and safety, rather than on quasi-mystical quests with no biological foundation.
What’s Really Important (and Why It Matters to Us)
Ultimately, the temptation to believe in a conscious AI may stem from our deep-seated need to breathe life into our creations. However, even the most human-like responses from AI are merely reflections of our own data, biases, and language. There is no miracle, just billions of well-organized training examples.
As users, it is essential to distinguish between performance and consciousness, between simulation and experience. This discernment allows us to remain clear-sighted in the face of promises and to use AI for what it truly is: a remarkable tool serving humanity, not a digital doppelgänger.