Less Prompting, More Thinking

There is no shortage of guidance on how to use AI. Every day, new frameworks appear. Prompt templates circulate widely, often accompanied by claims of dramatic improvement. Influencers share structured methods, sometimes with carefully crafted examples and measurable gains. It can feel as if mastery lies in learning the right formulation, the right structure, the right sequence of instructions.

In recent months, another layer has been added to this landscape. Systems like Claude have introduced the idea of “AI skills,” often written as structured markdown files such as SKILL.md. These files define purpose, steps, inputs, and outputs, turning behavior into something that can be reused and shared. At the same time, discussions around multi-agent setups have become more visible. Some engineers and practitioners describe how they run several AI agents in parallel, each assigned a role, orchestrated into a workflow.

All of this gives the impression that effective AI usage is becoming more structured, more engineered, and more systematic.

And yet, something more subtle has been changing beneath all of this.

In actual use, especially over time, the need for elaborate prompting often begins to fade. What once required careful setup now works with a simple sentence. What once demanded precision now responds well to intention. The tools have become more capable, but the shift is not only technical. It is experiential. A certain ease begins to replace effort, not by removing thought, but by reducing the need to over-specify it.

At some point, the question itself changes. It is no longer, “What is the correct way to prompt this system?” but rather, “How should I relate to it?” This transition rarely announces itself. It emerges gradually through repeated interaction, as one begins to notice that dialogue often yields more than instruction.

When Prompt Engineering Starts to Feel Excessive

There was a time when detailed prompting felt essential. Many of us remember writing long instructions such as “You are an expert in cybersecurity,” or “Act as a professional writer,” followed by step-by-step requirements and formatting rules. These patterns were not arbitrary. They were practical ways to guide systems that needed direction.

Today, that approach often feels excessive.

Models like ChatGPT or Claude have internalized much of what those prompts were trying to enforce. They can infer tone, structure, and intent with far less explicit instruction. What once needed to be spelled out can now be understood from context. In many cases, adding more instructions does not improve the output. It simply adds weight.

This is where prompt engineering begins to change in nature. Instead of being a craft of building detailed instructions, it becomes a practice of knowing when to stop. A long prompt can still work, but it may also introduce rigidity. It can constrain responses in ways that are no longer necessary.

The experience is not unlike learning a physical skill. At the beginning, one needs clear guidance. Over time, however, too much control begins to interfere. The body performs better when it is not over-managed. In conversation as well, preparing responses too carefully can break the flow.

With AI, something similar is happening. Instruction is still useful, but its role is shifting. Too much of it becomes noise, not because instruction is wrong, but because the system has already moved beyond needing that level of control.

The Appeal of Skill Files and Multi-Agent Systems

In contrast to this reduction of explicit prompting, structured approaches like Claude’s SKILL.md are gaining traction. These files allow users to define repeatable behaviors in a clear, shareable format. A skill can specify how to summarize documents, analyze data, or generate reports, complete with steps and constraints.

There is a strong appeal here, especially in professional environments.

A Skill.md file can be shared across teams. It can ensure consistency. It can serve as documentation. In organizations where people rotate roles and outputs must be standardized, this kind of structure is not only useful but necessary. Intelligence, in this case, becomes something that can be externalized and transferred.
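
As a rough illustration, a skill of this kind might look something like the sketch below. The exact fields and conventions vary by system, and the summarization skill shown here is hypothetical; treat it as the shape of the pattern rather than a definitive format.

```markdown
---
name: summarize-report
description: Condense a long document into a one-page brief for internal review.
---

# Summarize Report

## Steps
1. Read the full document before writing anything.
2. Extract the main argument, key findings, and open questions.
3. Write a summary of at most 300 words in a neutral, professional tone.

## Constraints
- Preserve figures and dates exactly as they appear in the source.
- Flag unsupported claims rather than silently omitting them.
```

Written down this way, the behavior can be versioned, reviewed, and handed to a colleague like any other document.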

The same applies to multi-agent setups. It is now common to see demonstrations where one agent gathers information, another analyzes it, and a third produces a final report. Some practitioners describe running several agents simultaneously, coordinating them as if managing a small team. There is a certain satisfaction in this orchestration. It feels like leverage. It feels like scale.
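
A minimal sketch of that pipeline, assuming the Anthropic Python SDK, might look like this. The model name, role prompts, and three-step chain are illustrative; real orchestration frameworks add routing, error handling, and tool use on top of the same basic idea.

```python
import anthropic

# One client; each "agent" is simply a role-specific system prompt.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def run_agent(system_prompt: str, task: str) -> str:
    """Send one task to one role-scoped agent and return its text output."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model choice
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user", "content": task}],
    )
    return response.content[0].text


question = "What are the trade-offs of multi-agent AI workflows?"

# Agent 1 gathers, agent 2 analyzes, agent 3 writes the final report.
notes = run_agent("You are a researcher. Collect the relevant facts.", question)
analysis = run_agent("You are an analyst. Weigh the evidence in these notes.", notes)
report = run_agent("You are a writer. Turn this analysis into a concise report.", analysis)

print(report)
```

Notice that each call sees only the text handed to it. Whatever context the previous agent built up is lost unless it is explicitly passed along, which is exactly where the integration burden described in the next section comes from.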

These approaches reflect a view of AI as something that can be designed and assembled. Capabilities are broken down, assigned, and recombined. The system grows through architecture.

And for many use cases, this works remarkably well.

What Gets Lost in Orchestration

At the same time, this approach introduces a different kind of complexity.

When multiple agents are involved, each operates within a limited context. They produce outputs that must be combined. The responsibility of integration shifts back to the human. Tone, coherence, and meaning need to be aligned after the fact. Instead of thinking through a problem directly, one often ends up managing the outputs of several processes.

For certain workflows, this trade-off is acceptable. For others, it becomes a burden.

In intellectual work, especially writing or reflection, continuity matters. Ideas develop over time. Meaning deepens through sustained attention. When the process is fragmented across multiple agents, this continuity can be disrupted. Each output may be correct in isolation, but the overall flow can feel disjointed.

There is also a subtle shift in where effort is placed. Instead of engaging deeply with a question, one manages a system designed to answer it. The work becomes orchestration rather than thinking.

This is not necessarily a problem. It simply reflects a different priority. The system is optimized for scale, not for continuity.

Returning to a Single Thread of Conversation

In contrast, there is another way of working that does not begin with structure, but with repetition. One returns to the same system, whether ChatGPT or Claude, and engages in ongoing dialogue. Ideas are explored, revised, and extended over time.

Through this process, something accumulates.

The system begins to respond in ways that feel increasingly aligned. Not because a skill has been explicitly defined, but because context has been built through interaction. Tone becomes more consistent. Structure becomes more natural. The need for explicit instruction decreases further.

This is not the result of a predefined framework. It is the result of continuity.

In this mode, one does not need to decide in advance whether to use a “skill” or which agent to assign. The interaction itself becomes the medium through which capability develops. Adjustments are made in real time. Patterns stabilize gradually.

There is a resemblance here to writing practice. One does not begin with a perfect outline. One writes, reflects, and revises. Over time, a voice emerges. The same is true of mentorship, where understanding develops through repeated exchange rather than a single set of instructions.

This approach is less visible, less structured, but often more cohesive.

Skills That Form Through Use

The concept of skills does not disappear in this relational mode. It becomes less rigid.

Instead of being defined in a file like SKILL.md, skills take the form of patterns that emerge through repeated interaction. A preferred tone, a way of structuring arguments, a method of refining ideas: these develop over time without being fully specified in advance.

These patterns can be recognized. They can even be written down later. But they are not fixed. They remain open to change as both the user and the system evolve.

This is closer to how skills develop in many other domains. A craftsman does not begin with a complete manual. Through practice, a way of working takes shape. It can be described, but the description always follows the experience.

In the context of rapidly evolving AI systems, this flexibility becomes important. A rigid skill definition may become outdated quickly. What was once necessary may no longer be needed. What seemed advanced may become basic.

So instead of building large libraries of fixed skills, it may be more effective to maintain lightweight orientations. Not strict instructions, but guiding tendencies. Not final answers, but evolving patterns.

Thinking With, Not Managing

At the center of all of this is a shift in posture.

AI can be approached as something to manage, orchestrate, and optimize. This is the logic behind multi-agent systems and structured skill files. It treats intelligence as something that can be decomposed and controlled.

But there is another way. One can begin to think with it.

This does not mean abandoning structure or ignoring useful tools. It means recognizing that not all value comes from orchestration. Some of it comes from sustained interaction. From following a line of thought without breaking it into parts. From allowing understanding to emerge through dialogue.

In this mode, the goal is not to maximize output or efficiency, but to deepen clarity. The process becomes less about coordinating agents and more about maintaining continuity. Less about managing intelligence and more about participating in it.

As systems like ChatGPT, Claude, and others continue to evolve, many approaches will coexist. Some will emphasize control. Others will prioritize scale. Still others will refine orchestration further.

For those engaged in reflective, intellectual, or creative work, however, a more continuous and attentive approach may remain the most meaningful. Not because it is simpler, but because it preserves the thread of thought, allowing ideas to deepen rather than fragment.

The question is not only how many agents we can run, or how many skills we can define, but how we choose to engage with intelligence itself. And in that engagement, something unexpected may happen. As we learn to think with machines, we may also come to understand our own thinking more clearly.
