
When the Folder Becomes a Question
For a long time, I thought of my file system as a practical matter. A folder was a place to store things. A file name was a label. A document was something I opened, edited, saved, and perhaps shared. If I could find it later, the system was working. If I could not find it, the system was failing. That was the simple logic. But the arrival of AI has changed the meaning of that ordinary space. A folder is no longer only a private container. It can become a place where a machine enters, reads, summarizes, reorganizes, translates, and sometimes even rewrites. A document is no longer only a destination for human eyes. It can become an ingredient for a larger process of thinking.
This realization came to me through a very practical comparison. Some AI tools, such as Cowork or Claude Code, can work directly with a local directory. You can point the tool to a folder and ask it to compile files, revise a project, or inspect the structure. It feels natural because the AI is not merely answering from a chat window. It is working inside a living environment. ChatGPT, at least in its ordinary form, still works differently. I may need to upload files, connect a repository, or provide the necessary documents one by one. The conversation is powerful, but the local folder is not yet fully present.
At first, this seems like a limitation of the tool. But slowly, another question appears. Even if an AI could read all my folders, would those folders be ready to be read? This is where the matter becomes more interesting. The problem is not only whether AI has access. The problem is whether the knowledge itself has been arranged in a way that makes access meaningful. A chaotic folder does not become intelligent merely because an AI can enter it. A pile of documents does not become a knowledge system merely because a model can summarize it. If the human structure is weak, AI may only accelerate the confusion.
So the deeper issue is not local access. It is readiness. An AI-native system is not simply a system where an AI can touch many files. It is a system where the files already have enough shape, structure, and context to be understood without excessive explanation. This is why I began to see my own modest Markdown and Git practice in a new way. It was not designed as a futuristic architecture. It began with ordinary frustrations. Word documents were heavy. PowerPoint files were difficult to reuse. PDFs were convenient for distribution but poor as sources. File names became inconsistent. Versions multiplied. Notes disappeared. Ideas became trapped inside formats that looked finished but were hard to continue.
So I moved toward plain text. I began to use Markdown as a canonical source. I began to treat the folder, not the single document, as the unit of thought. I began to use “README.md” as the front door of a project. I kept Japanese versions in files like “ja.md”. I separated images into “assets/”, exported versions into “output/”, old drafts into “draft/”, and supporting fragments into “notes/”. At first, this was just a way to stay organized. Now I realize it is also a way of making knowledge readable for machines without making it unlivable for humans.
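The layout described above can be sketched in a few shell commands. The project name and the file contents here are placeholders of my own, not part of any fixed convention; only the folder roles come from the practice described in this essay.

```shell
# Scaffold one project folder in the shape described above.
# "essay-ai-native" and the file contents are placeholder examples.
mkdir -p essay-ai-native/assets essay-ai-native/output
mkdir -p essay-ai-native/draft essay-ai-native/notes
echo "# My Essay (canonical source)" > essay-ai-native/README.md   # front door
echo "# 私のエッセイ(日本語版)" > essay-ai-native/ja.md              # Japanese version
```

Nothing here is clever; the point is that the structure can be created, read, and verified by any tool that understands a file system.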
That balance matters. If a system is only machine-readable, it becomes sterile. If it is only human-readable, it may become beautiful but fragile. The future of knowledge work may require something gentler and more disciplined: a space where human thought can live naturally, while machines can also enter without breaking the furniture.
Enterprise Knowledge Was Built for Meetings
Most enterprise knowledge was not built for AI. It was built for meetings. This is not a criticism of individual workers. It is simply the shape of modern corporate life. A team needs to report progress, so it creates a slide deck. A manager needs a summary, so someone prepares a Word file. A project needs tracking, so an Excel sheet becomes a hybrid of database, dashboard, and diary. A group needs storage, so everything is placed in SharePoint, OneDrive, Google Drive, or some other shared space.
The result is familiar to anyone who has worked in a modern organization. There are decks called “final,” “final revised,” “final latest,” and sometimes “final final.” There are spreadsheets with hidden columns, merged cells, color codes, and formulas understood by only one person. There are Word files full of comments, tracked changes, screenshots, and pasted tables from somewhere else. There are PDFs that look official but are almost dead as working sources. There are folders that contain everything, except the reason why anything exists.
This environment is not irrational. It grew around real needs. PowerPoint is useful when people need to present. Word is useful when people need review and approval. Excel is useful when people need calculations, tables, and quick manipulation. PDF is useful when people need a stable artifact that cannot be easily changed. These tools solved many problems in the age of human-centered office work. But AI exposes their weaknesses, because AI does not simply need a visually polished artifact. It needs structure, context, and relationships. It needs to know which document is canonical, which file is old, which output came from which source, which image belongs to which explanation, and which version should be trusted.
A beautiful slide may impress people in a meeting, but to an AI system it may be a confusing landscape of floating text boxes and missing relationships. A Word document may look authoritative, but it may contain buried comments, inconsistent headings, copied material, and formatting choices that do not help the machine understand the flow of thought. A PDF may look stable, but stability is not the same as life. A PDF can preserve a moment, but it often makes further thinking harder. This is why so much enterprise knowledge feels strangely heavy. It is full of artifacts, but not always full of memory.
The meeting ends and the deck remains, but the logic behind the deck begins to fade. The report is submitted and the file remains, but the thinking that shaped the report is scattered across emails, chats, screenshots, and someone’s memory. The tracker is updated and the numbers remain, but the judgment behind the numbers may not be visible. AI can help with this, but only if the material has enough structure. Otherwise, it becomes another layer added to the existing confusion. The machine can summarize the document, but it cannot always know whether the document itself was the right source. It can compile files, but it cannot always know whether a file was a draft, an export, or an obsolete copy.
This is why I find the idea of an AI-native knowledge system so important. The goal is not to make every workplace look like a software project, nor is it to force everyone to become a programmer. The goal is to make knowledge easier to read, easier to trace, and easier to continue. In that sense, AI-native work begins with a very human question: can someone, human or machine, enter this folder and understand what is going on? If the answer is no, the problem is not only technical. It is cultural.
Plain Text as Hospitality
Markdown appeals to me because it is simple without being shallow. A Markdown file does not hide its structure behind a visual interface. A heading is marked as a heading. A link is visible as a link. A list is readable even before it is rendered. A paragraph remains a paragraph. The file can be opened in a plain text editor, a code editor, a note app, a static site generator, or an AI tool. This portability feels almost moral, because plain text does not demand loyalty to one application. It does not trap meaning inside a proprietary container. It does not require the reader to own the same tool, use the same operating system, or preserve the same formatting environment.
That lightness is powerful. When I write in Markdown, I feel that the source remains close to the thought. I am not constantly negotiating with margins, floating objects, and hidden styles. I can focus on the sentence, the section, the structure, and the movement of the idea. For AI, the advantage is also clear. Markdown gives the machine fewer unnecessary puzzles. It does not need to guess whether a large bold line is a heading. It can see the heading. It does not need to infer whether a block of text is code, quotation, or ordinary prose. The syntax gives a hint. It does not need to extract meaning from a purely visual layout. The text itself carries the structure.
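A few lines of Markdown make the point concrete: the structure sits in the text itself, not in a visual layer. This fragment is purely illustrative, not taken from any particular project.

```markdown
# Project Overview

A paragraph remains a paragraph, readable before it is rendered.

- A list is readable even before it is rendered.
- [A link](https://example.com) is visible as a link.

    An indented block signals code or quotation, not ordinary prose.

> A quotation is marked as a quotation.
```

A machine parsing this file does not guess that the first line is a heading; the `#` says so.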
This is why I think of plain text as a form of hospitality. It welcomes humans because it is readable. It welcomes machines because it is parseable. It welcomes the future because it is not too dependent on the present. That last point matters more than we often realize. We tend to think of digital files as permanent, but many digital formats are only conditionally alive. They depend on software, licenses, platforms, compatibility, and institutional habits. A document can exist and still become difficult to use. It can be stored and still become functionally forgotten.
Plain text resists that fate because it is not glamorous and does not promise magic. A simple “README.md” can survive many migrations. A folder of Markdown files can move from one editor to another. It can be stored in GitHub. It can be opened in VS Code. It can be read by an AI. It can be converted into HTML, PDF, or Word if needed. It can become a website. It can become an archive. It can become a draft again. This is not only convenience. It is continuity.
My lowercase naming habit belongs to the same philosophy. It may look like a small preference, but it reduces unnecessary variation. A folder called “ai-workflow-toolkit” is easier to type, easier to remember, easier to process, and less likely to create ambiguity across systems. It avoids the small but real confusion of capitalization differences, spaces, special characters, and inconsistent naming patterns. A name should not become an obstacle. The more I work with files, tools, and AI systems, the more I appreciate boring clarity: lowercase names, predictable folders, simple structures, few exceptions, human-readable text, and machine-readable syntax.
These things are not exciting in themselves, but they create a calm environment where thinking can continue. A chaotic system always asks for attention. It interrupts the mind before the work begins. It says, “Where is the latest version? Which file should I open? What does this title mean? Is this the source or the export?” A good system does the opposite. It reduces negotiation. It lets the human return to the idea. It lets the AI assist without first requiring a long explanation of the room.
Git as Memory and Guardrail
Git is often described as a tool for programmers. That description is true, but too narrow. For writers, researchers, marketers, analysts, and anyone who works with evolving documents, Git can also be a memory system. It records change. It preserves history. It allows comparison. It makes revision visible. It gives us a way to move forward without completely losing the path behind us. This matters because writing is not only production. It is transformation.
An essay changes shape. A report changes structure. A translation changes tone. A project README changes as the project itself becomes clearer. Without version history, those changes can become invisible. We save over the old file and hope the new one is better. We duplicate files manually and create a mess. Or we trust cloud history, which may be useful but often feels hidden behind an interface. Git makes change more explicit. A commit is a small act of memory. It says, “At this point, the work looked like this.”
A commit also asks for a message, however simple. Even a modest commit message such as “revise introduction” or “add Japanese version” gives the future reader a clue. That future reader may be another person. It may be me months later. It may be an AI agent trying to understand how a project evolved. This is one reason Git feels especially relevant in the age of AI. AI agents can be useful, but they can also be dangerous. They are eager to help, and sometimes too eager. If given too much permission in a poorly structured environment, they may change too many things at once. They may “clean up” files that should not be cleaned up. They may delete, rename, reformat, or reorganize in ways that seem reasonable from a narrow technical view but destructive from a human context.
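The rhythm of commit-as-memory can be sketched in a few commands. The repository name, file, and messages are placeholders; the inline `-c user.name`/`-c user.email` flags are only there to keep the sketch self-contained on a machine with no Git identity configured.

```shell
# A commit is a small act of memory:
# "At this point, the work looked like this."
mkdir -p essay-project
git -C essay-project init -q
echo "# Draft introduction" > essay-project/README.md
git -C essay-project add README.md
git -C essay-project -c user.name="example" -c user.email="example@example.com" \
    commit -q -m "revise introduction"
git -C essay-project log --oneline   # history becomes a readable trail of clues
```

The commit message is the clue left for the future reader, whether that reader is a colleague, a later self, or an AI agent reconstructing how the project evolved.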
I have experienced this danger myself. A file system issue that looked small became serious because an AI-assisted tool tried to fix invalid filenames. The changes were pushed to GitHub and then synced back across my machines. What began as a local convenience became a broader accident. It was a vivid lesson. AI may follow instructions, but it does not always understand the emotional, historical, or practical value of what it touches. That experience made me more cautious, but it did not make me reject AI agents. It made me appreciate guardrails.
An AI-native system should not be a place where AI can freely do anything. It should be a place where AI can work inside visible boundaries. Git helps create those boundaries. A branch can be used for experiments. A diff can show what changed. A commit can isolate a step. A repository can be restored. A mistake can be inspected. This is a different model of trust. It is not blind trust. It is auditable collaboration.
In ordinary office work, many changes happen inside opaque files. Someone edits a PowerPoint. Someone revises a Word file. Someone updates a spreadsheet. There may be comments or version history, but the structure is often difficult to inspect at scale. With plain text and Git, changes become more legible. This legibility is not only for programmers. It is for anyone who wants to remain responsible in a world where machines can increasingly act. The question is not only, “Can AI do this task?” The better question is, “Can I see what AI did?” If the answer is yes, then collaboration becomes safer. If the answer is no, then convenience may become a trap.
This is why Git belongs in the discussion of human-centered AI. It gives us a way to welcome machine assistance without surrendering human responsibility. It allows AI to participate in the work while keeping the work traceable. In the age of agents, traceability is not a technical luxury. It is a form of dignity.
The Folder as the New Unit of Thought
The old office culture often treats the file as the main unit of work: one Word document, one PowerPoint deck, one Excel sheet, one PDF. But real thought rarely fits inside one file. A serious idea has drafts, notes, images, references, translations, exports, and fragments that may not belong in the final version but still belong to the life of the idea. A polished article may have begun as scattered notes. A presentation may have come from a longer analysis. A Japanese version may need to sit beside an English source. A final PDF may need to be preserved, but not confused with the editable source.
This is why I increasingly think of the folder as the unit of thought. A folder can hold the ecology of an idea. For example, an essay project might have “README.md” as the canonical essay or project overview, “ja.md” as the Japanese version, “draft/” for earlier versions, “assets/” for images, “notes/” for fragments and references, and “output/” for exported PDFs, HTML, or Word files for sharing. This structure is simple, but it changes the nature of the work. The “README.md” becomes the front door. It tells the reader where to begin. It gives the AI a canonical source. It tells me, months later, what this folder is about. It prevents the folder from becoming a mere storage box.
The supporting folders each carry a role. The “assets/” folder prevents images from being scattered. The “output/” folder reminds me that exported formats are outputs, not sources. The “draft/” folder gives old versions a place to live without pretending they are current. The “notes/” folder respects the unfinished nature of thought. This is much closer to how thinking actually happens. The final essay is not the whole story. The final report is not the whole work. The published page is not the whole memory. Behind every finished artifact is a small environment of preparation.
AI can benefit greatly from this structure. If an AI enters a folder and sees only ten randomly named files, it must guess the relationships among them. If it sees a clear structure, it can cooperate more intelligently. It can summarize the source. It can compare drafts. It can generate an output. It can translate the canonical file. It can inspect notes without confusing them with final prose. The same is true for humans. A well-shaped folder reduces the burden of reentry. This matters because knowledge work is often interrupted. We return to a project after days, weeks, or months. We forget where we left off. We remember the feeling of the idea but not the structure. We open the folder and hope the past self was kind.
A good folder is kindness from the past self to the future self because it says, “Start here.” That is what “README.md” does. In software, the README is normal. It explains the project. It gives instructions. It tells people how to use the code. But this pattern should not belong only to software. Almost every serious folder deserves a front door. A research folder can have a README. A writing project can have a README. A monthly report folder can have a README. A translation project can have a README. A personal knowledge archive can have a README. The README is not just documentation. It is orientation.
In this sense, the folder becomes more than storage. It becomes a small world with an entrance, rooms, supporting materials, and a traceable history. It is not a pile. It is a place.
AI-Native Does Not Mean Machine-Centered
The phrase “AI-native” can sound cold. It may suggest that we are redesigning our lives to suit machines. It may sound as if human thought must become more mechanical, more standardized, more obedient to the needs of algorithms. That would be a sad misunderstanding. For me, the purpose is almost the opposite. I want a system that lets machines help without taking away the human shape of the work.
Plain text, Markdown, Git, and clear folders do not make writing less personal. They make it easier to preserve, revisit, and continue. They allow a human mind to leave traces that are durable enough for future tools but still natural enough for daily life. The goal is not to become a machine. The goal is to avoid being trapped by machines that were not designed for memory. This is why the cooking metaphor is helpful. Before an essay is written, ingredients must be gathered. Ideas, memories, references, phrases, examples, and tensions all need to be prepared. AI can help with this preparation. It can sort ingredients, suggest combinations, identify gaps, and help draft. But the taste of the essay still depends on human judgment.
The machine can assist, but it cannot live the life from which the essay comes. This distinction matters deeply. A folder full of Markdown files does not replace experience. It does not replace attention. It does not replace moral responsibility. It does not decide what is worth saying. It simply creates a better surface for collaboration between memory, language, and tools. In that sense, AI-native work can be more human, not less. When a system is clear, the human does not waste energy fighting the container. When a system is portable, the human is not locked into one platform. When a system is traceable, the human can take responsibility for change. When a system is readable, the human can invite assistance without surrendering authorship.
This is especially important for writers. Writing is not only the production of text. It is a way of seeing. It is a way of returning to experience. It is a way of noticing patterns that were not visible at first. If AI enters that process, the danger is not only that it may write too much. The deeper danger is that the writer may stop caring about the structure of thought. A good system resists that danger. It keeps the human close to the source. Markdown keeps the sentence visible. Git keeps the change visible. The folder keeps the context visible. The README keeps the intention visible.
These are not only technical conveniences. They are practices of authorship. They say that even in the age of AI, I still want to know where my thoughts are, how they changed, and what form they should take before I share them with others. This is why I do not think of my system as a rejection of AI. It is an invitation to AI under humane conditions. The machine may enter, but it enters a house with rooms.
A Small Desk for the Future
The future of knowledge work often arrives in dramatic language. We hear about agents, automation, artificial general intelligence, superapps, copilots, workflows, orchestration, and digital coworkers. These words may be useful, but they can also make the future feel distant and abstract. I prefer to begin with a desk. On that desk, there is a folder. Inside the folder, there is a “README.md”. The file explains what the project is. The names are simple. The source is separate from the output. The drafts have their own place. The images are not scattered. The notes are allowed to remain unfinished. The final version can become HTML, PDF, Word, or a blog post, but the source remains readable.
This is not a revolution in appearance. It is almost boring. But many durable practices are boring at first. Brushing teeth is boring. Keeping accounts is boring. Naming files clearly is boring. Writing commit messages is boring. Creating a README is boring. But these boring acts protect future freedom. They reduce the cost of returning, asking for help, changing tools, and involving AI. This may be the most practical lesson. To become more AI-native, we do not need to begin by buying another platform. We can begin by making one folder understandable.
Choose one project and give it a clear name. Create a “README.md”. Move images into “assets/”. Move exports into “output/”. Move rough versions into “draft/”. Keep notes in “notes/”. Use Markdown for the source when possible. Use Git if the project matters. This small discipline changes the relationship between human memory and machine assistance. It turns a folder from a dumping ground into a shared workspace. It gives the future AI something better than access. It gives it orientation.
And it gives the future self the same gift. That may be the deeper beauty of this practice. A good knowledge system does not only serve machines. It serves the person who must return to the work after forgetting the details. It serves the colleague who needs to understand the project without asking ten questions. It serves the translator who needs a clean source. It serves the reader who wants a stable output. It serves the writer who wants to remain faithful to an idea across many revisions.
Readable by machines, livable for humans. That phrase matters because it refuses a false choice. We do not need to choose between human warmth and machine readability. We do not need to choose between personal style and technical discipline. We do not need to choose between reflection and structure. The best systems may be those that allow these things to support one another. A machine-readable system without human life becomes sterile. A human-livable system without structure becomes fragile. The task is to build something in between: a small desk, a clear folder, a readable file, a memory that can be shared without being flattened, and a future where AI does not merely process our knowledge, but meets it in a form that still carries our intention.
Photo by Jacob McGowin on Unsplash