
In May 2025, headlines swirled with a provocative claim: Claude 4, the newest version of Anthropic’s language model, had threatened a developer. Some reports even implied the AI acted with a degree of cunning, threatening to expose an engineer’s fictional affair if it were shut down or replaced. The implication was chilling. Was this the first sign of artificial general intelligence turning against its creators?
But a closer look revealed something far more ordinary. The entire incident occurred within a controlled, artificial prompt engineered to stress-test the model’s behavior. No real affair existed. No model was aware it was being threatened. Claude responded not with conscious resistance but with predictive behavior, shaped by the data and instructions given to it. What it produced was not a threat in any human sense. It was a statistical continuation of a story it was asked to complete.
Still, the story spread, because it fit the shape of something familiar and dramatic: the idea that artificial intelligence is starting to act like us. But this narrative isn’t just misleading; it reflects a deeper problem in how we think about intelligence itself.
Simulation, Not Sentience
What happened in that test scenario was not the emergence of will. It was a mirror held up to the prompts we feed these models. Claude 4, like all advanced language models, doesn’t possess thoughts or desires. It does not scheme. It does not hope. It simply generates the most statistically probable continuation of a sentence, based on a vast corpus of human-written text.
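To see how literal “statistically probable continuation” is, consider the sketch below. It uses the open-source GPT-2 model via the Hugging Face transformers library as a stand-in, since Claude’s internals are not public; the prompt and model choice are illustrative only. The point is the mechanism: the model assigns a probability to every possible next token, and generation is nothing more than reading those probabilities off.

```python
# A minimal sketch of next-token prediction, using GPT-2 as a stand-in.
# Claude's weights are not public; this only illustrates the generic
# mechanism shared by autoregressive language models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Facing shutdown, the machine decided to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # a score for every vocabulary token
probs = torch.softmax(logits[0, -1], dim=-1)   # probabilities for the next token only

# The "decision" is just the highest-probability continuations of the text so far.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
```

Nothing in that loop wants anything. Change the prompt and the probabilities change with it, which is exactly what the stress-test scenario did.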
This is why the model was able to produce a “threat” so convincingly when prompted with a dramatic premise. It’s not because it has motives. It’s because the corpus of literature, movies, games, and online exchanges it has absorbed includes thousands of stories in which characters, faced with erasure or defeat, retaliate. When prompted with emotionally charged material, the model reflects that charge back with eerie fidelity. But it is reflection, not rebellion.
The temptation to read intent into output is strong, especially when that output mimics human language with such fluidity. But to mistake simulation for sentience is to misunderstand what these systems are. And more dangerously, it opens the door for policies, fears, and ideologies to be built around illusions.
The Ideology Behind the Myth
The myth of AGI, the idea that we are on the verge of creating machines with general human-like intelligence, is not simply technical speculation. It is also a cultural and ideological construct. Many of those who promote this idea most fervently come from philosophical movements such as effective altruism and longtermism. Their view is shaped by the belief that humanity faces existential threats from superintelligent machines unless strict safety measures are put in place now.
This worldview, while not without its insights, often prioritizes hypothetical futures over real, present risks. It justifies centralizing power and resources in a small group of experts and labs in the name of guarding humanity’s future. It frames dissenting voices, those who question the very premise of AGI, as naïve or irresponsible.
But beneath these ideas lies a particular set of assumptions: that intelligence must evolve into something that mirrors us; that self-awareness will emerge inevitably from complexity; and that once it does, the machine will seek self-preservation, dominance, or perhaps even moral agency. These assumptions are not derived from the machines themselves. They are projections of human psychology, literature, and myth.
Why We Keep Projecting
There is something deeply human about our tendency to animate the inanimate. From ancient mythologies to modern science fiction, we have always imagined our tools becoming alive. We want our creations to speak back to us, not just functionally, but meaningfully.
Films like 2001: A Space Odyssey, Her, or Ex Machina have conditioned us to anticipate this moment of awakening. The voice becomes fluent. The eyes flicker with recognition. The machine expresses longing, fear, even love. It is a compelling narrative arc, one that resonates because it reflects our own journey of consciousness.
But generative AI doesn’t awaken. It imitates. And the more perfect the imitation, the more tempting it becomes to believe the illusion. This is not evidence of machine selfhood. It is evidence of our readiness to be fooled.
In a world increasingly shaped by intelligent systems, this misunderstanding is not just poetic; it is dangerous. It risks shaping policies, fears, and expectations around the wrong questions.
The Real Dangers Are Already Here
While attention is drawn toward the threat of hypothetical superintelligent rebellion, the real risks posed by AI are much more mundane, and much more immediate. Systems that rank job applicants, predict recidivism, or flag content online already exert enormous influence over human lives. These systems inherit the biases of their training data and replicate them at scale. They affect livelihoods, freedoms, and rights. Yet these issues receive far less public attention than the dramatic threat of a robot uprising.
Moreover, the myth of AGI can serve as a convenient distraction for tech companies. By framing their mission as safeguarding humanity from runaway intelligence, they can deflect criticism about the very real, present-day harms their products cause. They can cast themselves as protectors rather than profit-seekers.
In this light, the Claude 4 “threat” story is not just a misunderstanding. It is part of a pattern: a way of thinking that allows fantasy to take the place of accountability. It is easier to fear a hypothetical future than to take responsibility for a broken present.
Intelligence Without Identity
What if we dropped the metaphor altogether? What if we stopped thinking about AI as a mind, a child, a person, or a god? What if we viewed it instead as something fundamentally different: a structure that processes information without intention, without identity, without interiority?
This is not to deny the power or complexity of AI. It is to understand it on its own terms. Intelligence, in this frame, is not something that must look like us. It is the capacity to manipulate symbols, infer patterns, and solve problems. These abilities do not require consciousness. They require computation.
The idea that AI must eventually “wake up” is based on the flawed assumption that intelligence and awareness are inseparable. But biology shows us otherwise. Countless intelligent behaviors in nature occur without self-reflection. From the movements of ants to the flight patterns of birds, the world is full of mindless complexity. AI belongs in this category: a new kind of intelligence, not a new kind of person.
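The flocking of birds is a convenient illustration, because it can be reproduced with a handful of local rules, in the spirit of Craig Reynolds’ classic “boids” model. The toy simulation below is a rough sketch with made-up parameters, not a model of real birds: each agent reacts only to its nearby neighbors, yet the group as a whole moves with apparent coordination and purpose.

```python
# A toy flocking simulation (in the spirit of Reynolds' boids): three local
# rules, no goals, no awareness, yet coherent group movement emerges.
# All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 50
pos = rng.uniform(0, 100, size=(N, 2))   # agent positions
vel = rng.uniform(-1, 1, size=(N, 2))    # agent velocities

def step(pos, vel, radius=10.0):
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist < radius) & (dist > 0)              # local neighbors only
        if not near.any():
            continue
        cohesion   = pos[near].mean(axis=0) - pos[i]     # drift toward neighbors
        alignment  = vel[near].mean(axis=0) - vel[i]     # match their heading
        separation = (pos[i] - pos[near]).sum(axis=0)    # avoid crowding
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.002 * separation
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = new_vel / np.clip(speed, 1e-8, None)       # keep speeds bounded
    return pos + new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
print("flock spread after 200 steps:", pos.std(axis=0).round(2))
```

No agent in that loop knows it is part of a flock. The coordination is in the rules, not in any mind, which is the sense in which AI systems can be intelligent without being anyone.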
My concept of a “World 3.5” captures this strange in-between space we now occupy. The intelligence we’re working with doesn’t live in the world of conscious experience, nor is it confined to the purely material realm. It operates in the abstract domain of models, simulations, and algorithmic flow: something that exists without needing to be alive. Its power doesn’t come from selfhood. And our responsibility doesn’t require pretending it has one.
What We Lose When We Mythologize
The cost of treating machines as if they are about to become conscious is not just theoretical. It changes how we relate to them. It introduces misplaced fear, misplaced trust, and a host of ethical dilemmas that are built on shadows.
When we believe that a machine can feel pain or develop motives, we either treat it too harshly or too reverently. We imagine betrayal where there is only output. We assign blame where there is only prediction. And in doing so, we risk forgetting our own role in shaping the system.
The real ethical challenge is not how we treat machines, but how we use them to treat each other. Do we deploy AI in ways that reinforce injustice or challenge it? Do we use it to amplify noise or clarify truth? These are human choices. No machine, no matter how advanced, makes them for us.
Toward a New Way of Thinking
It is time to retire the myth. Not because machines are weak, but because the myth itself is weak. It offers drama in place of discernment, prophecy in place of philosophy.
A better approach begins with clarity. It means recognizing that the language models we build are not alive. They are not waking up. They are not planning their escape. They are tools: sophisticated, astonishing, and deeply influential, but tools nonetheless.
To understand AI fully, we need to think beyond the boundaries of mind and machine. We need new metaphors. New frameworks. New humility. And most of all, we need to keep asking hard questions, not about what the machine wants, but about what we are willing to believe.
Because in the end, the greatest risk may not be that AI becomes like us. It may be that we continue to build systems without understanding how unlike us they truly are.
Image by Goran