Build a living wiki, not another RAG dump.
LLM Wiki is a markdown-native way to turn notes, articles, transcripts, papers, and internal documents into a persistent knowledge base that keeps getting smarter with every source and every question.
Persistent synthesis
The expensive reasoning happens once, then stays written down.
Human-curated sources
You own what enters the system; the model handles the filing.
Markdown as substrate
Everything stays legible, local-first, and easy to version in git.
Architecture
Three layers, one compounding artifact.
The pattern is simple enough to run on markdown files and opinionated enough to keep the LLM disciplined as the vault grows.
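As a sketch of how little machinery the pattern needs, here is one plausible way the three layers could sit on disk. The layer names (curated sources, synthesized wiki pages, a schema file) and the paths are assumptions for illustration, not a prescribed layout:

```python
import tempfile
from pathlib import Path

# One hypothetical reading of the three layers as a directory layout.
layers = {
    "sources/": "human-curated inputs (notes, transcripts, papers)",
    "wiki/": "persistent synthesis: the pages the model maintains",
    "SCHEMA.md": "the instructions file: organization and quality bars",
}

def scaffold(root: Path) -> list[str]:
    """Create the layer layout under `root` and return what was made."""
    created = []
    for name in layers:
        path = root / name
        if name.endswith("/"):
            path.mkdir(parents=True, exist_ok=True)  # a layer directory
        else:
            path.touch()  # a single control file
        created.append(name)
    return created

created = scaffold(Path(tempfile.mkdtemp()))
print(created)  # ['sources/', 'wiki/', 'SCHEMA.md']
```

Everything is plain files and folders, which is what keeps the vault legible and trivially versionable in git.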
Why It Feels Different
This is not just retrieval with better branding.
LLM Wiki changes where the work happens: synthesis moves upstream into the maintained knowledge base, so answers start from structure instead of reconstruction.
Workflow
Ingest, query, lint. Repeat until the vault thinks clearly.
The pattern stays useful because maintenance is treated as a first-class operation, not as cleanup people promise to do later.
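The loop above can be sketched in a few lines. This is a toy model, assuming the vault is a mapping of page names to markdown text; the function names and the single lint rule are illustrative, not part of LLM Wiki itself:

```python
# Minimal sketch of the ingest -> query -> lint loop over an in-memory vault.

def ingest(vault, name, source_text):
    """File a new source into the vault as a markdown page."""
    vault[name] = f"# {name}\n\n{source_text}\n"

def query(vault, term):
    """Answer from the maintained pages, not by re-reading raw sources."""
    return [name for name, body in vault.items() if term.lower() in body.lower()]

def lint(vault):
    """Flag pages that break a quality bar (here: missing a title line)."""
    return [name for name, body in vault.items() if not body.startswith("# ")]

vault = {}
ingest(vault, "Transformers", "Attention lets every token see every other token.")
ingest(vault, "RAG", "Retrieval augments prompts with fetched snippets.")

print(query(vault, "attention"))  # ['Transformers']
print(lint(vault))                # [] -- nothing to clean up yet
```

The point of the sketch is the shape of the loop: every ingest updates the maintained pages, every lint pass is a routine check rather than deferred cleanup.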
Use Cases
Anywhere knowledge piles up, this pattern starts paying rent.
It works for personal memory, deep research, internal operations, or any domain where cross-references matter more than isolated snippets.
Schema
The instructions file is what turns a chatbot into a maintainer.
The schema is the quiet control plane. It tells the model how the wiki is organized, what quality bars to uphold, and which routines to follow every time it touches the vault.
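As a hedged illustration, such an instructions file might look like the fragment below. The section names, file paths, and rules are assumptions chosen to match the ideas above, not a prescribed format:

```markdown
# Wiki Schema

## Organization
- One page per concept, filed under `wiki/` with a Title-Case name.
- Cross-link related pages with [[wiki-links]] instead of repeating content.

## Quality bars
- Every page opens with a one-paragraph summary.
- Claims carry a pointer back to the source note they came from.

## Routines
- After each ingest: update every affected page, then re-run the link lint.
```

Because the schema is itself a markdown file in the vault, it is versioned, reviewed, and refined the same way as every other page.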