
If you spend any time in the current AI discourse, one impression quickly takes hold. Claude is widely praised. Developers showcase its capabilities with confidence. Videos demonstrate workflows that feel efficient and practical. Tools like Claude Code are presented as glimpses into a future where AI is deeply embedded in how we build and automate. The appeal is easy to understand because the results are visible and measurable.
From that perspective, the conclusion seems almost self-evident. Claude must be better. The evidence appears to support it, and the voices expressing that view are consistent and technically grounded. Yet for those who spend time with these systems in a different way, a small sense of distance begins to form. It is not disagreement with the claims themselves, but a recognition that they reflect only one mode of interaction.
This leads to a shift in how the question is framed. It is no longer about whether Claude is overrated. That framing feels too narrow and somewhat misleading. Instead, a different possibility emerges. Perhaps what is being praised and what is being experienced are not fully aligned, because they arise from fundamentally different ways of using AI. What appears as excellence in one mode may feel incomplete in another.
The Mythos Moment and a Familiar Pattern
This tension becomes more noticeable when looking at recent narratives such as “Mythos.” Once again, the pattern is recognizable. A new capability is introduced with emphasis on its strength and potential. Alongside that, there is a careful message about risk, responsibility, and the idea that the system may be too advanced to release without restraint. The tone is serious and carries a sense of caution.
This is not new. Earlier in the development of generative AI, with the release of GPT-2 by OpenAI, a similar narrative appeared. The model was initially described as too dangerous to release in full, and that framing shaped how the public understood its significance. Individuals such as Dario Amodei, now CEO of Anthropic and then at OpenAI, were part of that broader institutional context, but the pattern itself was not tied to any one person. It reflected how the field communicated emerging power.
What we are seeing now is less a repetition of a specific claim and more the continuation of a narrative template. Each generation of models is introduced not only through capability, but through a framing that emphasizes both power and risk. This approach may come from a genuine sense of responsibility, but it also creates a particular tone. The message begins to resemble something familiar, where technological progress is presented in a way that echoes science fiction, even when the intention is grounded in real concerns.
Over time, this creates a subtle tension. It becomes harder to distinguish between necessary caution and a form of dramatization that arises naturally from the culture surrounding advanced technology. For some users, this leads to a sense of repetition. The reaction is not disbelief, but recognition. The story feels familiar, and the response becomes almost predictable.
The Silent Majority and the Visible Minority
To understand why this reaction exists, it is useful to look at who shapes the conversation around AI. Developers and engineers occupy a central role. They are early adopters, comfortable exploring new tools, and highly capable of demonstrating value. Their work produces clear outcomes. A workflow can be recorded. A system can be explained. A result can be measured and shared.
This visibility creates a strong narrative about what AI is and how it should be used. It becomes associated with building, automating, and optimizing processes. These are important and legitimate applications, and they represent a significant portion of AI’s potential. However, they do not capture the full range of how AI is being experienced.
There is another group of users who engage with AI in a very different way. They use it as a space for thinking, writing, and reflection. Their interaction is extended, recursive, and often without a clear endpoint. This kind of engagement does not produce immediate artifacts that can be easily shared. It unfolds over time and resists simple demonstration.
Because of this, their presence is less visible in public discourse. Not because it is rare, but because it does not fit the format of what is typically shared. The result is an imbalance. The most visible use cases shape the narrative, while other forms of engagement remain underrepresented, even as they continue to grow.
Two Definitions of Serious Use
This imbalance leads to a deeper difference in how “serious use” is understood. In many discussions, seriousness is associated with activities that are measurable and externally visible, a perspective that aligns closely with an engineering mindset. Running large prompts, consuming tokens, building tools, and producing outputs that can be evaluated all fall into this category. Within this framework, the logic is consistent. If you are not pushing the system in observable ways, it may appear that you are not engaging deeply, because seriousness is defined through action, scale, and output.
At the same time, there is another form of seriousness that operates differently, one that is closer to a philosophical mode of engagement. It is the seriousness of staying with a question that does not yet have a clear answer. It involves returning to the same idea repeatedly, not to refine it for output, but to understand it more fully. This form of engagement values continuity and depth over efficiency and clarity, and it treats thinking not as something to produce, but as something to inhabit.
These two definitions do not oppose each other, but they arise from different orientations. One is shaped by the logic of engineering, where clarity, measurement, and results are central. The other is shaped by a philosophical sensibility, where time, ambiguity, and sustained attention are essential. When these distinctions are not recognized, systems designed for one mode are often evaluated through the expectations of the other. This creates a sense of mismatch that is difficult to articulate at first, but becomes increasingly clear through experience.
When Limits Enter the Conversation
For those who use AI as a thinking partner, the issue of usage limits becomes central. It is not merely an inconvenience or a technical constraint. It directly affects the way thinking unfolds. Claude is known for relatively strict usage limits, and this has become part of the experience for many users, even if it is not often discussed openly.
When you are engaged in a long conversation, exploring ideas that are still forming, continuity matters. The ability to remain within a thread of thought is essential. However, once the awareness of limits enters the interaction, something begins to change. Even before any limit is reached, the possibility of interruption starts to influence how you think. Questions become shorter, not because they need to be, but because they feel safer within the constraints. Lines of inquiry are narrowed, not because they are complete, but because there is a sense that the space may not remain available.
This creates a form of pressure that reshapes the process of thinking itself. The conversation is no longer fully open. It becomes something that must be managed. Over time, this leads to a different kind of engagement, one that prioritizes efficiency over exploration. For users who rely on extended dialogue as part of their intellectual process, this is not a minor issue. It breaks the continuity that thinking depends on.
What makes this situation more complex is that it is not widely voiced. Many of the most visible users do not rely on long, continuous conversations. Their interactions are shorter, task-oriented, and easier to restart. As a result, the impact of limits is less pronounced in their workflows. Meanwhile, those who depend on continuity experience the constraints more directly, but their perspective remains less visible. This creates a gap between what is being discussed and what is being experienced.
Tokens and Time: A Structural Mismatch
At the core of this issue lies a structural difference between how AI systems are designed and how human thinking unfolds. AI systems operate on tokens. Tokens are measurable units that allow for control, pricing, and scalability. They make it possible to manage resources effectively, and from a technical standpoint, they are essential.
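To make that concrete, here is a minimal sketch of what token-based metering can look like. The names and numbers are illustrative assumptions, not any provider's actual accounting; the point is simply that tokens are countable, chargeable, and enforceable.

```python
from dataclasses import dataclass

@dataclass
class UsageWindow:
    """Illustrative token budget for a single usage window."""
    budget_tokens: int       # tokens allowed within the current window
    used_tokens: int = 0     # tokens consumed so far

    def charge(self, prompt_tokens: int, completion_tokens: int) -> bool:
        """Record one exchange; refuse it if it would exceed the budget."""
        cost = prompt_tokens + completion_tokens
        if self.used_tokens + cost > self.budget_tokens:
            return False     # the exchange is deferred until the window resets
        self.used_tokens += cost
        return True

window = UsageWindow(budget_tokens=200_000)
print(window.charge(prompt_tokens=1_200, completion_tokens=800))  # True
print(window.used_tokens)                                         # 2000
```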
Human thinking, however, unfolds in time. It is continuous, often nonlinear, and frequently inefficient. It involves revisiting ideas, pausing, and allowing connections to form gradually. Time supports this process because it does not impose strict segmentation. It allows thought to remain within a flow.
When thinking takes place within a system structured around tokens, a tension emerges. Tokens divide interaction into discrete units, while time seeks to hold it together. This creates a mismatch that is not immediately visible but becomes increasingly apparent through use. The system encourages segmentation, while the mind seeks continuity.
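Part of this mismatch is structural rather than rhetorical. In the common design where each new turn re-submits the accumulated conversation history, a sustained dialogue consumes tokens roughly quadratically with its length, so continuity is precisely what the accounting penalizes. A rough sketch, with purely illustrative numbers:

```python
def cumulative_tokens(turns: int, tokens_per_message: int = 300) -> int:
    """Total tokens processed across a conversation of `turns` exchanges,
    assuming the full history is re-submitted on every turn."""
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_message   # the new user message joins the history
        total += history                # the entire history is processed again
        history += tokens_per_message   # the reply joins the history too
    return total

for turns in (5, 20, 80):
    print(turns, cumulative_tokens(turns))
# 5 -> 7_500, 20 -> 120_000, 80 -> 1_920_000: growth is roughly quadratic in length
```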
This is not simply a technical limitation. It reflects a difference in how value is defined. Tokens measure usage, while time holds experience. When these two frameworks intersect, the result is a subtle but persistent friction that shapes how AI is used and understood.
The Engineer’s AI and the Relational AI
This tension reveals a broader divide in AI design. On one side is an approach shaped by engineering priorities, where AI is treated as a system to be controlled, structured, and optimized. In this view, clarity, predictability, and repeatability are essential. Inputs are defined, outputs are evaluated, and behavior is shaped to fit within known parameters.
On the other side is a different orientation, where AI is experienced as a space for engagement. In this mode, AI becomes something to think with rather than something to execute through. The focus shifts from output to process, from control to continuity. Context accumulates over time, and meaning emerges through interaction.
Organizations like Anthropic naturally reflect the first orientation, given their focus on safety and system reliability. This is not a flaw, but a consequence of their priorities and the challenges they are addressing. At the same time, the needs of users who engage with AI as a thinking partner highlight a different set of requirements, ones that are not fully addressed by a system designed primarily for control.
This divide is not a conflict between right and wrong. It is a difference in perspective. However, it has practical implications. Systems that excel in one mode may feel limiting in another, and without recognizing this distinction, it becomes difficult to understand why reactions to the same tool can vary so widely.
The Space We Actually Need
If we step back from specific systems, a more fundamental question emerges. What kind of space does thinking require? Not in terms of features or capabilities, but in terms of conditions. Thinking, especially when it involves depth and reflection, depends on continuity. It requires the ability to remain with an idea without the pressure to resolve it quickly or compress it into smaller units.
This kind of space often appears inefficient from an external perspective. It involves revisiting the same ideas, exploring different angles, and allowing uncertainty to persist. However, this is how understanding develops. It is through this process that ideas gain depth and coherence over time.
For AI to support this form of thinking, it must provide more than intelligent responses. It must provide continuity. This does not mean removing all constraints, but it does mean recognizing that uninterrupted engagement is not a secondary feature. It is a fundamental requirement for those who use AI as a partner in thought.
Toward an AI Beyond Spectacle
The current moment in AI is marked by rapid progress and equally rapid narration. Each new capability is introduced with a combination of excitement and caution, often framed in ways that emphasize both its power and its risks. This pattern reflects the scale of what is being developed and the responsibility that comes with it.
At the same time, there is an opportunity to move beyond this cycle. The future of AI may not be defined only by increasingly powerful models or increasingly careful messaging. It may also depend on whether these systems can support a deeper form of engagement, one that is less visible but more enduring. This involves shifting attention from performance to presence, from demonstration to continuity.
The hidden divide in AI design is not only about technology. It reflects different understandings of what thinking is and how it unfolds. Thinking is not something that can be fully captured in discrete outputs or optimized into efficiency. It is something that develops over time, through sustained attention and continued interaction. Any system that seeks to support this process must recognize that its role is not only to respond, but to remain.
Image: StockCake