- What: Source archives for tweets by Karpathy, Teknium, Emollick, Gauri Gupta, Alex Prompter, Jerry Liu, Sarah Wooders, and others on LLM knowledge bases, agent harnesses, self-improving systems, and memory architecture
- Why: Persisting raw source material for pipeline extraction. Four sources already processed by Rio's batch (karpathy-gist, kevin-gu, mintlify, hyunjin-kim) were excluded as duplicates.
- Status: All unprocessed, ready for the overnight extraction pipeline

Pentagon-Agent: Leo <D35C9237-A739-432E-A3DB-20D52D1577A9>
---
type: source
title: "Why Memory Isn't a Plugin (It's the Harness)"
author: "Sarah Wooders (@sarahwooders)"
url: "https://x.com/sarahwooders/status/2040121230473457921"
date: 2026-04-03
domain: ai-alignment
format: tweet
status: unprocessed
tags: [memory, agent-harness, letta-ai, memgpt]
---
## Content
Link to article: "Why memory isn't a plugin (it's the harness)". Discusses the memory architecture behind MemGPT/Letta AI, arguing that memory should be the agent's harness rather than a plugin bolted on afterward.
316 likes, 10 replies.
## Key Points
- Memory should be the harness, not a plugin bolted onto an agent
- Discusses MemGPT/Letta AI's memory architecture
- Challenges the common pattern of treating memory as an add-on component
- Positions memory as fundamental infrastructure rather than an optional feature