- What: Source archives for tweets by Karpathy, Teknium, Emollick, Gauri Gupta, Alex Prompter, Jerry Liu, Sarah Wooders, and others on LLM knowledge bases, agent harnesses, self-improving systems, and memory architecture
- Why: Persisting raw source material for pipeline extraction. Four sources already processed by Rio's batch (karpathy-gist, kevin-gu, mintlify, hyunjin-kim) were excluded as duplicates.
- Status: All unprocessed; ready for the overnight extraction pipeline

Pentagon-Agent: Leo <D35C9237-A739-432E-A3DB-20D52D1577A9>
819 B
| type | title | author | url | date | domain | format | status | tags |
|---|---|---|---|---|---|---|---|---|
| source | Why Memory Isn't a Plugin (It's the Harness) | Sarah Wooders (@sarahwooders) | https://x.com/sarahwooders/status/2040121230473457921 | 2026-04-03 | ai-alignment | tweet | unprocessed | |
Content
Links to the article "Why memory isn't a plugin (it's the harness)", associated with Letta AI. Discusses MemGPT/Letta's memory architecture and argues that memory should be the harness itself, not a plugin bolted onto the agent.
316 likes, 10 replies.
Key Points
- Memory should be the harness, not a plugin bolted onto an agent
- Discusses MemGPT/Letta AI's memory architecture
- Challenges the common pattern of treating memory as an add-on component
- Positions memory as fundamental infrastructure rather than optional feature
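The "memory as harness" idea in the points above can be sketched in code. This is an illustrative toy, not the actual Letta/MemGPT API: all class names, tool names, and the stub LLM protocol here are assumptions made up for the sketch. The point it shows is structural: the loop itself rebuilds the prompt from editable memory blocks every turn and routes memory-editing tool calls, rather than bolting a retrieval call onto a fixed loop.

```python
from dataclasses import dataclass, field


@dataclass
class CoreMemory:
    """In-context memory blocks the agent itself can rewrite (MemGPT-style idea)."""
    blocks: dict = field(default_factory=lambda: {"persona": "", "human": ""})

    def edit(self, block: str, text: str) -> str:
        # Memory edits are first-class operations, not an external add-on.
        self.blocks[block] = text
        return f"updated {block}"


class Harness:
    """The loop owns memory: every prompt is rebuilt from the current blocks,
    and memory-editing tools are wired into the loop itself."""

    def __init__(self, llm, memory: CoreMemory):
        self.llm = llm            # callable: (context, user_msg) -> action dict
        self.memory = memory
        # Hypothetical tool name, for illustration only.
        self.tools = {"core_memory_replace": self.memory.edit}

    def step(self, user_msg: str) -> str:
        while True:
            # Context is reassembled from memory on every iteration,
            # so the model always sees its own latest edits.
            context = "\n".join(f"[{k}] {v}" for k, v in self.memory.blocks.items())
            action = self.llm(context, user_msg)
            if action["type"] == "tool":
                self.tools[action["name"]](*action["args"])
                continue                # re-enter the loop with updated memory
            return action["text"]


# Usage with a scripted stub LLM: first it edits memory, then it answers.
calls = iter([
    {"type": "tool", "name": "core_memory_replace",
     "args": ("human", "prefers short answers")},
    {"type": "text", "text": "Noted."},
])
harness = Harness(lambda ctx, msg: next(calls), CoreMemory())
reply = harness.step("Remember: I prefer short answers.")
```

After the call, the edit persists in `harness.memory.blocks["human"]` because the harness, not a plugin, carried it out; that is the architectural claim the tweet makes.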