## The Wrong Default Answer

Every time someone asks how to share AI skills across an organization, the first answer that surfaces is GitHub. Version control, pull requests, diff history. It sounds clean on paper. And for engineers, it is.

But here is the uncomfortable truth: ***most of the people who need to contribute to your AI skills are not engineers***. They are product managers, operations leads, customer success teams, and domain experts who know what the agent should do far better than any developer in the room.

The moment you route skill contribution through a GitHub workflow, you have already lost 80% of your potential contributors. You have turned a knowledge-sharing problem into a gatekeeping problem. That is not a tooling issue. That is a cultural mistake dressed up as technical rigor.

## Why I Use Obsidian Vaults Instead

I use [Obsidian](https://obsidian.md/) as the backbone for sharing AI skills across my organization. Not because it is fancy, but because it removes every friction point that GitHub introduces for non-technical contributors.

The setup is simple but powerful:

- **Local sync for editing**: Anyone on the team can open a vault, edit a skill in plain Markdown, and save it. No branches, no pull requests, no terminal.
- **Shared vaults for co-authoring**: Skills that require input from multiple stakeholders get their own shared vault. The domain expert writes the intent, the engineer adds the constraints, and the agent gets a better skill than either could write alone.
- **Headless sync to VMs and servers**: When skills are ready to be deployed, [headless Obsidian Sync](https://obsidian.md/help/sync/headless) pushes them to the infrastructure automatically. No manual copy-pasting, no deployment ceremonies.

This maps directly to the virtual filesystem principles I wrote about in [Virtual Filesystem is all you need](Virtual%20Filesystem%20is%20all%20you%20need.md). Skills are files. Files are the universal interface.
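To make the files-are-the-interface point concrete, here is a minimal sketch. Every path, vault name, and file name below is illustrative, not an Obsidian default, and the plain `cp` stands in for whatever sync mechanism you actually run:

```shell
# All paths below are hypothetical; "deployment" is modeled as a copy.
WORK=$(mktemp -d)

# A vault is just a folder of Markdown files.
mkdir -p "$WORK/vault/skills"
cat > "$WORK/vault/skills/refund-policy.md" <<'EOF'
# Skill: Refund policy
Escalate refund requests older than 30 days to a human.
EOF

# Humans and agents consume the same plain file; no API in between.
cat "$WORK/vault/skills/refund-policy.md"

# "Deployment" can be an ordinary copy. Swap in whatever sync
# mechanism (e.g. headless Obsidian Sync) pushes files to your servers.
mkdir -p "$WORK/server/skills"
cp -r "$WORK/vault/skills/." "$WORK/server/skills/"
ls "$WORK/server/skills"
```

The whole pipeline is directory operations, which is exactly why a non-technical contributor can participate: editing a skill is editing a note.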
The agent reads them the same way a human does.

## One Vault Per Trust Level

This is where most teams get it wrong, even after they abandon GitHub. They dump everything in one place and wonder why their agent starts blending experimental behaviors with production logic. I run three separate vaults, each with a distinct governance posture:

**Internal agent skills**: These are the prompts, tools, and behavioral guides used by agents that never interact with customers. High iteration speed, low approval overhead. Engineers and power users contribute here freely.

**Controlled customer skills**: Tested, approved, deployed. Every skill in this vault has gone through review. Nothing lands here that has not been validated against real scenarios. This is the vault that feeds production.

**Self-learning skills from [Hermes Agent](The%20complete%20guide%20to%20Hermes%20Agent.md)**: This is the one that requires the most discipline. Hermes autonomously develops new skills as it encounters novel situations. Those skills are placed in a monitored vault, where I can inspect them before they graduate to the controlled tier. Autonomous generation is powerful. Autonomous deployment without oversight is how you end up with an agent that has learned something you really did not want it to learn.

💡 Tip: keep your self-learning skills in the default Hermes skill directory, and register your controlled customer skills as [External Skill Directories](https://hermes-agent.nousresearch.com/docs/user-guide/features/skills?_highlight=extern#external-skill-directories). With this setup, promoting a skill from "work-in-progress" to "controlled" becomes one simple `mv` command.

## Watching Your LLM-Wiki Grow

There is a fourth use case that deserves its own vault, and it comes directly from [Andrej Karpathy](https://x.com/karpathy)'s [llm-wiki concept](https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f).
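Before unpacking the llm-wiki idea, it is worth making that promotion step concrete. A minimal sketch, where both directory names are hypothetical stand-ins for wherever your default Hermes skill directory and your controlled External Skill Directory actually live:

```shell
# Hypothetical paths; real locations depend on your Hermes setup.
WORK=$(mktemp -d)
mkdir -p "$WORK/hermes-skills" "$WORK/controlled-skills"

# A skill Hermes taught itself, waiting in the monitored tier.
printf '# Skill: triage-inbox\nLabel unread mail by urgency.\n' \
  > "$WORK/hermes-skills/triage-inbox.md"

# Review it like any other Markdown note...
cat "$WORK/hermes-skills/triage-inbox.md"

# ...then promote it: one mv, no deployment ceremony.
mv "$WORK/hermes-skills/triage-inbox.md" "$WORK/controlled-skills/"
ls "$WORK/controlled-skills"
```

Because both tiers are plain directories, the review gate is whatever process you wrap around that `mv`: a checklist, a second pair of eyes, or nothing at all for the internal tier.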
The premise is straightforward: as your agents operate in the real world, they accumulate structured knowledge. Facts they have verified. Patterns they have observed. Corrections they have made. That knowledge wants to live somewhere, and if you let it pile up inside a model context or a flat log file, you will never catch the moment it starts drifting in a direction you did not intend.

I dedicate a separate Obsidian vault exclusively to monitoring that growing wiki. Every entry the agent appends is readable as a plain Markdown note. I can browse it like a second brain, search it with the full power of Obsidian's graph view and query plugins, and spot emerging patterns or contradictions before they calcify into agent behavior.

⚠️ Warning: The critical discipline here is treating this vault as read-only from a human perspective. You are not editing it. You are auditing it. The moment you start manually patching entries, you have broken the signal. What you want is a clean, unfiltered window into what your agent actually believes it knows, growing in real time, visible without any tooling ceremony. When content does need to change, use the [/llm-wiki](https://github.com/NousResearch/hermes-agent/blob/main/skills/research/llm-wiki/SKILL.md) skill in Hermes to ask your agent to update or create the entries itself.

When an entry looks wrong, that is not a vault problem. That is a training signal. It tells you exactly where your agent's world model diverges from reality before that divergence becomes a production incident.

## The Deeper Principle

What this architecture really does is encode your organization's trust model into your skill infrastructure. Not every contribution deserves the same level of scrutiny. Not every skill carries the same level of risk. Vault separation makes those distinctions explicit and operational rather than implicit and forgotten.

GitHub is not wrong because it is a bad tool.
It is wrong because it applies a software-engineering trust model to a knowledge-management problem. Those are not the same thing.

The organizations that will win the agentic era are not the ones with the most sophisticated CI/CD pipelines for their prompts. They are the ones that figured out how to let their domain experts contribute directly and at speed, without losing control over what reaches production.

So the question I am sitting with: as your agents get smarter and start generating their own skills and writing their own knowledge base, what does your governance model look like when you can no longer read every entry before it shapes the next decision?