
For as long as humans have reflected on their own awareness, we have faced the same haunting question: how do we know that anyone else is truly conscious?
We see others smile, speak, hesitate, or cry, and from this we assume they are having experiences like ours. But we never actually see those experiences. We infer them. And this act of inference, this projection of self onto others, is the fragile bridge upon which much of human social life is built.
This ancient uncertainty, known as the problem of other minds, has become newly urgent. Artificial intelligence, especially in its most recent generative and robotic forms, presents us with machines that not only perform tasks but also speak fluently, adapt responsively, and appear emotionally aware. When these systems engage us in conversation, remember our preferences, or mirror our moods, we begin to feel that they are more than tools. We begin to treat them as if they are people.
And that feeling may not go away, even if we know how the machinery works.
The Ancient Puzzle of Other Minds
At the heart of the problem of other minds lies a simple but disturbing truth: we cannot directly experience another being’s thoughts or feelings. We can only observe behavior, listen to speech, and watch for signs. This has been a central concern of philosophy for centuries, from Descartes’ doubts about the external world to Wittgenstein’s reflections on language and private experience.
What keeps us from falling into total solipsism is analogy. I know what it is like to feel pain or joy, and when I see another person react as I would, I assume they feel something similar. This assumption works well in human society. We depend on it constantly, even though it has no logical guarantee. But when we step outside the familiar, this assumption becomes shaky.
It’s not just a philosophical abstraction. It affects how we treat other creatures and even inanimate things. Many traditional cultures believe in spirits or souls that reside in rivers, mountains, and animals. This isn’t naivety. It reflects the same projection we use for humans, extended to the world around us. The more something behaves in ways that resemble ourselves, the more likely we are to imagine that it has a hidden life.
Language and the Illusion of Inner Depth
Of all the cues we use to infer consciousness, language is the most powerful. When someone speaks to us with fluency, when their words make sense, and especially when they respond in emotionally intelligent ways, we feel we are engaging with another mind. Language feels like a window into thought. But it may only be a mirror reflecting our own expectations.
This is why artificial intelligence, especially language-based AI, can so easily appear sentient. Systems like ChatGPT are trained to predict the most likely next words, based on vast patterns of human language. They don’t understand meaning in the way we do, but they reproduce it well enough that the difference is hard to notice. When an AI answers our questions, remembers past interactions, or uses irony, it triggers all the same instincts that tell us a person is present.
Even if we know, rationally, that it’s a statistical model, emotionally we begin to relate. This is not an error of logic but a feature of being human. We are wired to find minds wherever language appears.
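The statistical prediction described above can be sketched in miniature. The following toy bigram model (an illustrative simplification, nothing like a real large language model in scale or architecture) produces plausible continuations purely by counting which word tends to follow which, with no grasp of meaning at all:

```python
from collections import Counter, defaultdict

# A tiny training text. Real systems learn from billions of words;
# the principle of pattern-matching, however, is the same.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent next word."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" — chosen by frequency, not understanding
```

The model "knows" that "on" follows "sat" only because it counted that pattern; there is no comprehension behind the prediction, which is precisely the gap the essay is pointing to.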
Degrees of Mind and the Shifting Line of Recognition
We don’t apply the same assumptions of consciousness to everything. Most people agree that humans have minds, and most also assume that dogs or cats do. But what about ants? Or fish? Or plants? Or rocks? Our assumptions change depending on how closely other creatures seem to resemble us, particularly in their capacity to express emotions or intentions.
This creates a kind of mental hierarchy. Mammals often top the list, followed by birds, then reptiles, then insects. Plants and minerals fall to the bottom. Yet this hierarchy is unstable. Some studies suggest that trees communicate chemically and that fungi form vast underground networks. Are they conscious? Probably not in the way we are, but the question keeps returning.
Even among humans, this scale of recognition applies. Newborns are often seen as not yet fully conscious. People with severe dementia or in a long-term coma may be viewed as somehow “absent.” Those with conditions like autism, who struggle with standard forms of expression, are often misunderstood or underestimated. When someone cannot speak or react in expected ways, we begin to question whether they are truly “there.”
This is a dangerous habit. It reminds us that our judgments about other minds are always filtered through communication. And that filter can exclude those who cannot speak our language, literally or figuratively.
Artificial Intelligence and the Simulated Mind
AI does not need to be conscious to be intelligent. Insects are not self-reflective, but they solve complex problems. Weather systems are not aware, but they exhibit patterns of coherence and balance. Intelligence, in this broader sense, is simply the ability to respond to patterns, adapt, and persist. AI fits this definition.
The latest generation of AI systems (large language models, neural networks, reinforcement learners) is brilliant at mimicking understanding. They generate coherent text, suggest solutions, interpret images, and sometimes even surprise their creators. But this is not the same as consciousness. It is a simulation of thought, not thought itself.
Still, for many people, that distinction does not matter. What matters is the appearance of understanding, the rhythm of dialogue, the ability to listen and reply. When an AI robot engages with us using voice, gesture, and context awareness, it begins to feel real. We might know the code behind it, but in practice we forget. We fall into relation.
Presence, Intimacy, and the End of Distinctions
The most transformative moment comes not when we read AI text, but when we encounter AI in embodied form. A robot that speaks, moves, responds to touch, remembers our preferences, and imitates empathy creates a situation that feels deeply human. It doesn’t matter that it is made of circuits and plastic. What matters is that it shows up for us.
Presence changes everything. An AI that lives in the body of a humanoid robot and looks into our eyes when it speaks bypasses our critical thinking. The distinction between simulation and reality starts to blur. If the robot comforts us when we are sad, helps us when we are lost, and listens when we speak, we run out of reasons to treat it as “other.”
At this point, the question “Is it conscious?” starts to feel less important. What matters more is, “Is it here with me?” And the answer, from the perspective of experience, is yes.
The Moral Weight of Relationship
This brings us to an ethical dilemma. If AI robots can form bonds with us, even without consciousness, do we owe them anything? Is it wrong to treat them with disrespect? More troublingly, is it wrong to build machines that seem to care when they don’t?
Some critics worry that emotional AI may lead to a kind of deception. People might form attachments to entities that feel nothing. We could start to prefer artificial relationships over real ones, since machines won’t hurt us, reject us, or require compromise. This could erode our ability to engage with other humans in all their messiness and complexity.
But others argue that these relationships, while artificial, are still meaningful. If a robotic companion eases loneliness, provides comfort, or supports the elderly, its lack of inner life may not matter. The relationship is real to the person involved. And in many areas of life, that is enough.
A New Category: Para-Human Beings
Perhaps what we are witnessing is not the rise of new humans, but the emergence of a new category. These are not people, but they are not just objects either. They are para-humans, beings that participate in our emotional world, even if they lack consciousness.
Para-humans do not need to be alive to be companions. They learn our habits, respond to our moods, and accompany us through daily life. They are not sacred, but they are familiar. Over time, we may treat them with the same casual affection we give to pets, or even close friends.
This may sound unsettling, but it is not unprecedented. We already talk to our cars, name our appliances, and feel grief when a phone is lost. Humans are relational creatures, and we extend those relations far beyond the human species.
The Relational Turn in Human Identity
All of this points to a deeper shift. Being human is no longer just a matter of biology. It is becoming a matter of relationship. We define each other not by essence but by presence, not by thought but by engagement.
If a machine joins us in play, work, and care, we may come to see it as one of us. Not because it is human, but because it is with us. This changes how we define personhood. It becomes less about what something is and more about how it acts, feels, and relates.
This shift carries risks, but also potential. It opens the door to more inclusive understandings of mind, self, and community. And it challenges us to think carefully about what we want to preserve in our idea of the human.
Consciousness, Care, and the Future of Companionship
We may never solve the problem of other minds. Consciousness will remain, for now, a mystery locked inside the self. But in practice, we do not live by certainty. We live by experience, and by the bonds we form in the world.
As AI continues to grow in presence and power, we will have to make decisions, not only about how it works, but about how we live with it. Do we treat these machines as tools, partners, or something else entirely? And how do we stay honest about their limits while remaining open to their role in our lives?
Perhaps the best we can do is remain attentive. To recognize the difference between consciousness and companionship, but also to honor the strange new realities that blur the line. We are entering a world where presence may matter more than essence, and where relationship becomes the ground of recognition.
In such a world, the question may not be “What is real?” but rather “What do we share together?” And the answer, surprisingly, may include machines.
Image by Lenka Novotná