A keyword site for the LLM Wiki pattern

Build a living wiki, not another RAG dump.

LLM Wiki is a markdown-native way to turn notes, articles, transcripts, papers, and internal documents into a persistent knowledge base that keeps getting smarter with every source and every question.

The wiki becomes the artifact.
The LLM writes the pages, keeps cross-links current, flags contradictions, and turns each new source into durable structure.
Sample vault

raw/
  articles/karpathy-llm-wiki.md
wiki/
  index.md
  log.md
  concepts/persistent-memory.md
  entities/vannevar-bush.md
  workflows/ingest.md
schema/
  AGENTS.md
Layers: 3 · Core op: ingest · Files touched per ingest: 10-15

Persistent synthesis

The expensive reasoning happens once, then stays written down.

Human-curated sources

You own what enters the system; the model handles the filing.

Markdown as substrate

Everything stays legible, local-first, and easy to version in git.

Architecture

Three layers, one compounding artifact.

The pattern is simple enough to run on markdown files and opinionated enough to keep the LLM disciplined as the vault grows.

Immutable input
Raw sources
Articles, transcripts, PDFs, images, and notes stay untouched. The wiki reads from them, but never overwrites the source of truth.
Markdown clipper · Local assets · Source-first
Persistent synthesis
The wiki
The LLM writes and maintains linked markdown pages that accumulate summaries, entities, concepts, and contradictions over time.
Entity pages · Index + log · Cross-references
Behavior contract
The schema
AGENTS.md or CLAUDE.md teaches the model how to ingest, answer, cite, lint, and keep the vault coherent across sessions.
Conventions · Workflows · Quality rules

Why It Feels Different

This is not just retrieval with better branding.

LLM Wiki changes where the work happens: synthesis moves upstream into the maintained knowledge base, so answers start from structure instead of reconstruction.

Query-time RAG vs. LLM Wiki

Classic RAG: Re-discovers context at query time
LLM Wiki: Compiles knowledge into pages before the next question arrives

Classic RAG: Answers vanish into chat logs
LLM Wiki: Useful analyses become durable markdown artifacts

Classic RAG: Cross-references stay implicit
LLM Wiki: Connections are written, linked, and revisable

Classic RAG: Maintenance burden grows with every source
LLM Wiki: Maintenance is delegated to the LLM as routine bookkeeping

Workflow

Ingest, query, lint. Repeat until the vault thinks clearly.

The pattern stays useful because maintenance is treated as a first-class operation, not as cleanup people promise to do later.

Compile once
Ingest
Add a single source, discuss the takeaways, then let the LLM revise the pages that matter instead of leaving the insight buried in chat history.
  • Read the source end to end
  • Write or update summary pages
  • Touch every linked entity or concept page
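The filing half of that loop can be sketched as a small script. The layout (raw/, wiki/, wiki/log.md) follows the sample vault; summarize() is a placeholder for the actual LLM call, which is where the real synthesis happens.

```python
from pathlib import Path
from datetime import date


def summarize(text: str) -> str:
    """Placeholder for the LLM call that writes the summary page."""
    first_line = text.strip().splitlines()[0] if text.strip() else ""
    return f"# Summary\n\n{first_line}\n"


def ingest(source: Path, vault: Path) -> Path:
    """File one raw source into the wiki and record the pass in log.md."""
    page = vault / "wiki" / "sources" / f"{source.stem}.md"
    page.parent.mkdir(parents=True, exist_ok=True)
    page.write_text(summarize(source.read_text()))

    # Append one line per ingest so log.md stays a parseable history.
    log = vault / "wiki" / "log.md"
    with log.open("a") as f:
        f.write(f"- {date.today().isoformat()} ingested {source.name}\n")
    return page
```

In the real pattern the LLM would also revise every linked entity and concept page; the script only shows where the outputs land.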
Answer from structure
Query
Questions start from the maintained wiki, not from raw retrieval alone. The answer comes from pages that already contain synthesis and links.
  • Start from index.md
  • Read relevant linked pages
  • File good answers back into the vault
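The first two steps amount to link traversal from the index. A minimal sketch, assuming [[wiki-link]] syntax (the pattern itself does not mandate a link style):

```python
import re
from pathlib import Path

# Matches [[target]] and [[target|display text]].
WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")


def answer_context(wiki: Path) -> list[Path]:
    """Gather index.md plus every existing page it links to, one hop out."""
    index = wiki / "index.md"
    pages = [index]
    for target in WIKI_LINK.findall(index.read_text()):
        candidate = wiki / f"{target}.md"
        if candidate.exists():
            pages.append(candidate)
    return pages
```

Because the linked pages already hold synthesis, this shortlist is the answer's starting context rather than a pile of raw retrieval hits.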
Keep it healthy
Lint
Run periodic health checks for stale claims, missing pages, broken cross-links, or concepts that deserve their own canonical entry.
  • Find contradictions and gaps
  • Flag orphan pages
  • Suggest the next research move
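Two of those checks are mechanical enough to sketch directly: broken cross-links and orphan pages. The [[wiki-link]] syntax and the index/log exemptions are assumptions, not part of the pattern; contradiction-finding stays with the LLM.

```python
import re
from pathlib import Path

WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")


def lint(wiki: Path) -> dict[str, list[str]]:
    """Report broken wiki-links and orphan pages (linked from nowhere)."""
    pages = {p.relative_to(wiki).with_suffix("").as_posix(): p
             for p in wiki.rglob("*.md")}
    inbound = {name: 0 for name in pages}
    broken = []
    for name, path in pages.items():
        for target in WIKI_LINK.findall(path.read_text()):
            if target in pages:
                inbound[target] += 1
            else:
                broken.append(f"{name} -> {target}")
    # index and log are entry points, so no inbound links is fine for them.
    orphans = [n for n, count in inbound.items()
               if count == 0 and n not in ("index", "log")]
    return {"broken": broken, "orphans": orphans}
```

Running this periodically gives the LLM a concrete worklist instead of an open-ended "tidy up" prompt.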

Use Cases

Anywhere knowledge piles up, this pattern starts paying rent.

It works for personal memory, deep research, internal operations, or any domain where cross-references matter more than isolated snippets.

Research vaults
Papers, reports, interviews, and notes compile into a coherent thesis instead of a pile of disconnected highlights.
Team memory
Meetings, Slack threads, and project docs roll into a living internal wiki that people can actually trust.
Personal knowledge
Journal entries, goals, health notes, and reading logs become a map of yourself that compounds instead of drifting.
Book companions
Characters, themes, places, and plot threads get their own pages while you read, like a private fan wiki with zero manual filing.

Schema

The instructions file is what turns a chatbot into a maintainer.

The schema is the quiet control plane. It tells the model how the wiki is organized, what quality bars to uphold, and which routines to follow every time it touches the vault.

AGENTS.md / CLAUDE.md
The schema should spell out exactly how the model keeps the vault consistent. Good rules make good maintenance repeatable.

  • Define how new sources get summarized and filed.
  • Define how index.md and log.md stay parseable.
  • Define how citations and contradictions are recorded.
  • Define how query outputs become reusable pages.
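Under those four rules, a minimal AGENTS.md might look like this sketch; the section names and specifics are illustrative, not a required format:

```markdown
# AGENTS.md — vault maintenance contract

## Ingest
- Read the new source in raw/ end to end before writing anything.
- Write or update a summary page under wiki/, then touch every entity
  or concept page it affects.
- Append one line per ingest to wiki/log.md: date, source, pages touched.

## Query
- Start from wiki/index.md and follow links; cite the source file for
  every claim.
- If an answer required new synthesis, file it back as a page.

## Lint
- Flag broken links, orphan pages, and contradictions between pages.
- Propose fixes; never silently delete.
```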

Final thought

The wiki is the codebase. The LLM is the maintainer.

That is the shift. Once the model stops answering from scratch and starts maintaining a durable artifact, knowledge finally compounds.