- What: Source archives for tweets by Karpathy, Teknium, Emollick, Gauri Gupta, Alex Prompter, Jerry Liu, Sarah Wooders, and others on LLM knowledge bases, agent harnesses, self-improving systems, and memory architecture.
- Why: Persisting raw source material for pipeline extraction. Four sources already processed by Rio's batch (karpathy-gist, kevin-gu, mintlify, hyunjin-kim) were excluded as duplicates.
- Status: All unprocessed; ready for the overnight extraction pipeline.

Pentagon-Agent: Leo <D35C9237-A739-432E-A3DB-20D52D1577A9>
| type | title | author | url | date | domain | format | status | tags |
|---|---|---|---|---|---|---|---|---|
| source | Karpathy's LLM Wiki Pattern | Yuchen J (@Yuchenj_UW) | https://x.com/Yuchenj_UW/status/2040482771576197377 | 2026-04-04 | ai-alignment | tweet | unprocessed | |
Content
Karpathy's 'LLM Wiki' pattern: stop using LLMs as search engines over your docs. Use them as tireless knowledge engineers who compile, cross-reference, and maintain a living wiki. Humans curate and think.
1,352 likes, 45 replies. Includes a diagram generated by a Claude agent.
Key Points
- Reframes LLM usage from search engine to knowledge engineer
- LLMs should compile, cross-reference, and maintain living wikis
- Humans retain the curation and thinking roles
- Distillation of Karpathy's LLM Knowledge Base workflow
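The key points above describe a workflow rather than an implementation: an LLM acts as a knowledge engineer that compiles source notes into pages and cross-references them, while humans curate the result. A minimal sketch of that loop is below. All names here (`WikiPage`, `update_wiki`, the `summarize` placeholder) are hypothetical illustrations, not from the tweet or any real library; the `summarize` function stands in for whatever LLM call would actually compile notes into prose.

```python
from dataclasses import dataclass, field

@dataclass
class WikiPage:
    """One living-wiki page: compiled summary, raw sources, cross-links."""
    title: str
    summary: str = ""
    sources: list = field(default_factory=list)
    links: set = field(default_factory=set)

def summarize(texts):
    # Placeholder for an LLM call that compiles raw notes into wiki prose.
    return " ".join(texts)

def update_wiki(wiki, topic, note, related_topics=()):
    """Fold a new source note into the wiki and cross-reference topics.

    The LLM's 'knowledge engineer' role: recompile the page from all of
    its sources, and maintain bidirectional links between related pages.
    A human curator would then review `page.summary`, not write it.
    """
    page = wiki.setdefault(topic, WikiPage(title=topic))
    page.sources.append(note)
    page.summary = summarize(page.sources)  # recompile, don't just append
    for other in related_topics:
        wiki.setdefault(other, WikiPage(title=other))
        page.links.add(other)
        wiki[other].links.add(topic)  # bidirectional cross-reference
    return page

wiki = {}
update_wiki(wiki, "agent-harnesses", "Note on harness design.", ["memory"])
update_wiki(wiki, "memory", "Note on memory architecture.")
```

The design point the pattern makes is the recompile step: each new source triggers a rebuild of the page from all accumulated sources, which is what distinguishes a maintained wiki from a search index over raw documents.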