Compile Your Knowledge, Don't Search It: What LLM Knowledge Bases Reveal About Agent Memory

Source: DEV Community
Andrej Karpathy recently described a personal workflow that caught our attention, not because it's technically novel, but because it independently converges on patterns we've been formalizing in the Rotifer Protocol for months. The workflow:

- Collect raw documents (papers, articles, repos, datasets) into a directory.
- Use an LLM to incrementally "compile" them into a Markdown wiki: structured articles, concept pages, backlinks, category indices.
- View the wiki in Obsidian.
- Query it with an LLM agent, and file the answers back into the wiki.
- Run periodic "linting" to find inconsistencies and impute missing data.

The punchline: "I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries."

This essay explores why that punchline matters, what it reveals about the future of agent memory, and what happens when knowledge compilation moves from a single user's laptop to a network of autonomous agents.

1. The RAG Assumption

The def
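The "auto-maintaining index files and brief summaries" part of this workflow is worth making concrete: much of it needs no retrieval machinery at all, just a script that rebuilds an index page from the wiki's own contents. Here is a minimal sketch; the directory layout, the `_index.md` filename, and the "first heading as summary" rule are assumptions for illustration, not a description of Karpathy's actual setup:

```python
from pathlib import Path

def build_index(wiki_dir: str) -> str:
    """Rebuild a Markdown index for a wiki directory.

    Emits one bullet per page, linking to it Obsidian-style
    ([[page]]) and using the page's first heading as a brief
    summary (falling back to the filename if there is none).
    """
    lines = ["# Index", ""]
    for page in sorted(Path(wiki_dir).glob("*.md")):
        if page.name == "_index.md":
            continue  # don't index the index itself
        summary = next(
            (line.lstrip("#").strip()
             for line in page.read_text().splitlines()
             if line.startswith("#")),
            page.stem,  # fallback: filename without extension
        )
        lines.append(f"- [[{page.stem}]]: {summary}")
    return "\n".join(lines) + "\n"
```

An LLM agent can then be given this regenerated `_index.md` as a cheap table of contents on every run, which is exactly the "compiled" artifact the workflow relies on instead of a vector index.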