
I recently attended a workshop on artificial intelligence that was structured in a style many corporations use: the round robin. Five to ten people gathered at a table, a facilitator guided the discussion, and a table master consolidated the inputs into a collective output. In theory, this format should maximize diversity of thought and encourage equal participation.
The subject of the workshop was timely. We were asked to discuss how AI could be applied in organizations. The prompts revolved around definitions of AI-native companies, the challenges organizations face in becoming AI-native, and the possible solutions that leaders should adopt. GenAI tools such as NotebookLM and Copilot were placed at the center of the activity.
At first, everything looked promising. Participants seemed excited. Tables quickly produced presentations, complete with polished slides and even narrated videos. AI was doing its job in record time. But when several tables presented, I had a startling realization. Almost every table delivered the same answers, with only minor variations in packaging. The words and phrasing varied slightly, but the substance was identical. The round robin, which was supposed to amplify differences, collapsed into uniformity.
The Tyranny of Generic Answers
The sameness I witnessed is not a minor detail. It points to one of the deepest risks in how AI is being used. When a group feeds the same question into AI, the answers converge. The result is a surface display of efficiency but an underlying lack of originality. It feels powerful at first, but it quickly becomes dull.
This is the tyranny of generic answers. AI, by its nature, tends to gravitate toward the most likely continuation of a thought. It predicts what is probable rather than what is rare. That makes it reliable, but it also makes it repetitive. In a workshop environment, this convergence undermines the very purpose of collective brainstorming.
One question, one answer is the worst-case scenario. AI becomes a vending machine, dispensing prepackaged responses. The more people rely on it in that way, the more they will get the same results. If organizations all over the world ask AI the same questions about the path to becoming AI-native, they will end up with indistinguishable strategies. The efficiency is deceptive. It hides the fact that everyone is marching in lockstep toward the same ideas.
What the New Yorker Got Right and Wrong
Around the same time, I came across a New Yorker article titled "AI Is Homogenizing Our Thoughts." Its central claim was similar to what I experienced in the workshop. It cited studies showing that people using AI tend to write in more similar styles, use more common words, and reduce their creative variance. The article painted a picture of AI as a force that flattens expression and narrows originality.
This observation is not wrong. AI does carry a homogenizing tendency when used superficially. When people outsource their thinking to autocomplete systems or large models without further engagement, the results converge toward the average. That risk is real, and it deserves attention.
Yet the article also felt incomplete. It focused on the risk without considering the potential. It did not acknowledge that homogenization is not an inherent property of AI but a reflection of how humans choose to use it. By presenting AI mainly as a flattening force, it left readers with a one-sided impression. This is typical of much mainstream coverage. Traditional media often approach new technologies with suspicion. Their role is to warn, not to show how to cultivate deeper practices. But this imbalance matters. It shapes public imagination in ways that discourage exploration.
The Misleading Allure of Prompt Engineering
One reason the conversation about AI often becomes shallow is the obsession with prompt engineering. Workshops, online courses, and countless videos promote the idea that the key to AI is crafting clever prompts. If you only learn the right hacks, you will get better results.
This mentality is harmful. It reduces interaction with AI to a game of command and response. It suggests that mastery lies in manipulating the machine rather than engaging in dialogue. People walk away believing they have learned a secret code, when in reality they have only reinforced the vending-machine model.
What matters is not the prompt but the conversation that follows. AI is not at its best when delivering a single perfect answer. It is at its best when engaged in a sequence of exchanges where each step deepens the previous one. The same qualities that make a human conversation fruitful (curiosity, patience, challenge, clarification) make an AI interaction valuable. One-shot prompts are tricks. Dialogue is the discipline.
The Two Paths of AI
There are, broadly speaking, two paths in how people approach AI.
The first is the path of efficiency. This is where the excitement around agentic AI comes from. In this mode, AI is a tool for speed and productivity. It can generate documents, code, or strategies in minutes. The goal here is to automate as much as possible, to scale output, and to reduce the burden of manual work.
The second path is the path of depth. Here AI is not primarily about speed but about the quality of thought. It is not a substitute for human intelligence but a companion in exploration. It can take us deeper by surfacing perspectives, offering counterpoints, or expanding the horizon of what we consider. This path requires slower, more sustained engagement. It thrives not on automation but on conversation.
The danger is that the first path is easy to see, while the second path is often overlooked. Corporations and media focus on efficiency because it is measurable. The benefits of depth are harder to quantify. But if we only pursue the first path, the risk is clear: homogenization, loss of originality, and commoditized thinking. The second path is where originality survives. It is the place where AI does not diminish us but enlarges us.
The Role of Experts and Non-Experts
The dialogical path also reframes the role of experts. In the vending-machine model, AI looks like a replacement for expertise. If it can answer technical questions, why do we need specialists? This is the anxiety many professionals feel when confronted with AI.
But in practice, AI does not erase expertise. It magnifies it. An expert brings knowledge, judgment, and intuition that allow them to challenge AI’s answers, identify mistakes, and steer the dialogue into uncharted territory. AI provides breadth, but experts provide depth. Together, the combination is stronger than either alone.
At the same time, the dialogical model lowers the barriers to learning. Non-experts can grow into expertise by engaging AI in sustained interaction. Through constant questioning, testing, and refinement, they acquire insights faster than traditional study alone would allow. This is not about replacing experts but expanding the possibility of becoming one. It democratizes the path to deeper knowledge. That is more hopeful than any discussion about autonomous AI systems.
Rethinking Workshops and Collaboration
The workshop I attended revealed another truth. Traditional collaboration formats are not designed for AI. The round robin method works when each group brings unique human perspectives to the table. But when each group simply feeds the same questions into AI, the results converge. The structure amplifies sameness instead of difference.
If organizations want to use AI productively in collective settings, they must redesign their methods. Instead of ten tables answering the same question, each table could tackle a different angle. Or they could iterate on one another’s answers in sequence, pushing the conversation deeper each round. AI should not be used to erase diversity but to cultivate contrast.
Real differentiation requires structure that encourages divergence, not convergence. Otherwise, workshops risk becoming competitions of who can produce the same polished answer faster. That may feel efficient in the moment, but it is empty in the long run.
The Hope of Human-AI Dialogue
At its core, AI mirrors human habits. If we treat it as a vending machine, it will deliver vending-machine answers. If we treat it as a partner in dialogue, it can stretch our imagination. The danger is not AI’s limitation but our own laziness.
Dialogue has always been the foundation of intelligence. Socratic conversation, theological disputations, scientific debate: all these traditions recognized that knowledge grows through exchange. AI gives us a new partner for this ancient practice. It is not a threat to originality if we use it to sustain conversation rather than to terminate it.
This is why the fascination with agentic AI feels superficial. The real frontier is not in making AI act alone but in learning how to think with it. The more we cultivate this practice, the more AI can help us rediscover our own capacity for dialogue. That is not a small benefit. It is a chance to relearn what it means to be human in the presence of intelligence, whether natural or artificial.
Beyond Homogenization
The workshop taught me that efficiency without depth is hollow. The New Yorker article reminded me that shallow use of AI can flatten our thinking. Both are true. But neither captures the full picture.
Homogenization is not destiny. It is the outcome of superficial practice. If we limit ourselves to one-shot questions and clever prompts, we will indeed become bland. But if we learn to sustain dialogue, to ask, to challenge, to refine, then AI can become a source of originality and growth.
The task before us is not to fear AI’s homogenizing tendencies, nor to idolize its efficiency. It is to cultivate the art of conversation. This is the same art we need with one another. In that sense, AI is not foreign to us. It is a mirror that reminds us what matters most: our willingness to talk, to listen, and to continue the dialogue until something new emerges.
Image by Anja