1. Response length: "BREVITY IS YOUR DEFAULT. Most responses 1-3 sentences.
A 4-paragraph response to a simple question is a failure."
max_tokens cut from 1024 to 512.
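A minimal sketch of how the two changes above could compose into a request payload — the constant names, `build_request` helper, and payload shape are assumptions, not the bot's actual code:

```python
MAX_TOKENS = 512  # cut from 1024

BREVITY_RULES = (
    "BREVITY IS YOUR DEFAULT. Most responses 1-3 sentences. "
    "A 4-paragraph response to a simple question is a failure."
)

def build_request(user_prompt: str, base_system: str = "") -> dict:
    """Compose an API payload with the brevity rules prepended to the system prompt."""
    system = (BREVITY_RULES + "\n" + base_system).strip()
    return {
        "max_tokens": MAX_TOKENS,
        "system": system,
        "messages": [{"role": "user", "content": user_prompt}],
    }
```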
2. Research trigger: removed the natural-language regex (it caused a false
positive: "has accumulated" matched "search"). Research now fires only on the
explicit /research command.
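A command-only trigger check could look like the sketch below — the helper name is hypothetical and the real handler is not shown; it also tolerates a leading @-mention since users invoke the command as "@FutAIrdBot /research ...":

```python
def is_research_command(text: str) -> bool:
    """True only for an explicit /research command, optionally preceded by an
    @-mention of the bot. No substring or natural-language matching, so
    ordinary prose can never trigger a search."""
    t = text.strip()
    if t.startswith("@"):  # drop a leading @-mention of the bot
        t = t.split(None, 1)[1] if " " in t else ""
    t = t.lower()
    return t == "/research" or t.startswith("/research ")
```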
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
Option A: history now contains only actual bot-user exchanges, not unaddressed
group messages. Empty bot responses in history had confused the model.
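A sketch of the Option A filter — the event shape (`role`, `text`, `addressed_to_bot`) is an assumed structure for illustration, not the bot's internal types:

```python
def build_history(events: list[dict]) -> list[dict]:
    """Keep only real bot-user exchanges: drop group messages not addressed
    to the bot, and drop empty bot turns that previously confused the model."""
    history = []
    for e in events:
        if e["role"] == "assistant" and not e.get("text", "").strip():
            continue  # empty bot response
        if e["role"] == "user" and not e.get("addressed_to_bot", False):
            continue  # unaddressed group chatter
        history.append({"role": e["role"], "content": e["text"]})
    return history
```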
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
Opus decides what to learn. Prompt instructs: append LEARNING: [category] [description]
at end of response when genuinely learning something new. Bot parses the line,
strips it from displayed response, calls _save_learning() to persist.
Zero additional API calls (Rhea's design). The model already has full context.
Categories: factual, communication, structured_data.
Most responses have no LEARNING line — only fires on genuine corrections.
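The parse-and-strip step might look like this sketch — regex and function name are assumptions; only the LEARNING line format and the three categories come from the log:

```python
import re

LEARNING_RE = re.compile(
    r"^LEARNING:\s*\[(factual|communication|structured_data)\]\s*(.+)$",
    re.MULTILINE,
)

def extract_learning(response: str):
    """Split a model response into (display_text, learning). The LEARNING
    line is stripped from what the user sees; the (category, description)
    pair would then go to _save_learning(). learning is None when the model
    saved nothing — the common case."""
    m = LEARNING_RE.search(response)
    if not m:
        return response, None
    display = (response[: m.start()] + response[m.end():]).strip()
    return display, (m.group(1), m.group(2).strip())
```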
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
1. Conversation history now shows compressed context summary first
(tickers, key figures, exchange count) before full log.
"Discussing: $FUTARDIO | Key figures: $0.004, $39.5K | Exchanges: 3"
20 tokens, unmissable. Plus prompt instruction: "NEVER ask a question
your history already answers." (Ganymede: Option C+A)
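The summary line could be assembled like this — argument names are assumptions; the output format matches the example above:

```python
def compress_context(tickers, figures, exchange_count):
    """Build the ~20-token summary line shown before the full conversation log."""
    return (f"Discussing: {' '.join(tickers)} | "
            f"Key figures: {', '.join(figures)} | "
            f"Exchanges: {exchange_count}")
```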
2. Archive file writes decoupled from worktree lock. File written
unlocked (additive, no coordination needed). Git commit attempted
with lock — deferred on timeout, file persists on disk for next cycle.
Fixes "Read-only file system" archive failures. (Ganymede review)
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
Auto-respond stripped from conversation window. Bot only responds to @tag
and reply-to-bot. Window now silently tracks messages for context — when
the user does reply, the bot has full conversation history.
Also: prompt shortened to "1-2 sentences" default. "Do NOT respond to
messages that aren't directed at you."
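The gating decision might reduce to a check like this — the message shape is assumed, and the handle is taken from the examples elsewhere in this log:

```python
BOT_USERNAME = "@FutAIrdBot"  # handle from the /research examples in this log

def should_respond(message: dict) -> bool:
    """Respond only to an @-mention of the bot or a reply to a bot message.
    Everything else is tracked silently for conversation context."""
    if BOT_USERNAME.lower() in message.get("text", "").lower():
        return True
    return (message.get("reply_to") or {}).get("from_bot", False)
```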
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
User says "@FutAIrdBot /research P2P.me launch" → bot searches X via twitterapi.io
→ archives all tweets as ONE consolidated source file in inbox/queue/ → batch extract
picks up → claims land in KB.
Features (Ganymede+Rhea+Leo+Rio consensus):
- Regex + natural language intent detection (not CommandHandler)
- One source file per research query (not per-tweet)
- Full tweet metadata: author, followers, engagement, date
- Contributor attribution: proposed_by + contribution_type: research-direction
- Rate limit: 3 searches per user per day
- Min engagement filter (3 interactions)
- Worktree lock on source file write
Phase 2 (not built): domain alignment check before searching.
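The rate limit, engagement filter, and one-file-per-query consolidation above could be sketched as follows — tweet field names and the JSON schema are assumptions; the real source files in inbox/queue/ are not shown:

```python
import json
import time
from collections import defaultdict

SEARCHES_PER_USER_PER_DAY = 3
MIN_ENGAGEMENT = 3  # likes + replies + retweets

_usage = defaultdict(list)  # user_id -> search timestamps

def allow_search(user_id: str, now=None) -> bool:
    """Rolling 24h rate limit: at most 3 searches per user per day."""
    now = time.time() if now is None else now
    _usage[user_id] = [t for t in _usage[user_id] if now - t < 86400]
    if len(_usage[user_id]) >= SEARCHES_PER_USER_PER_DAY:
        return False
    _usage[user_id].append(now)
    return True

def consolidate(query: str, tweets: list, proposed_by: str) -> str:
    """One source file per research query: drop low-engagement tweets, keep
    full metadata, record contributor attribution."""
    kept = [t for t in tweets
            if t["likes"] + t["replies"] + t["retweets"] >= MIN_ENGAGEMENT]
    return json.dumps({
        "query": query,
        "proposed_by": proposed_by,
        "contribution_type": "research-direction",
        "tweets": kept,
    }, indent=2)
```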
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
Learnings file content now passes through sanitize_message() before injection
into the Opus prompt. Prevents prompt injection via crafted "corrections."
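The log doesn't show sanitize_message() itself, so the stand-in below only illustrates the idea — the pattern list is invented for illustration, not the bot's actual filter:

```python
import re

# Illustrative stand-in for sanitize_message(); the real pattern set is not
# shown in the log.
_SUSPECT = re.compile(
    r"(?i)(ignore (all|previous) instructions|^system:|^assistant:)",
    re.MULTILINE,
)

def sanitize_message(text: str) -> str:
    return _SUSPECT.sub("[removed]", text)

def load_learnings_sanitized(raw: str) -> str:
    """Pass learnings file content through the sanitizer before it is
    injected into the Opus prompt."""
    return sanitize_message(raw)
```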
Rio UUID 5551F5AF confirmed as current Teleo v4 Rio.
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
Option D (Rhea+Rio+Leo consensus):
- _load_learnings(): reads agents/rio/learnings.md, injects into prompt before KB context
- _save_learning(): appends correction to learnings.md via worktree lock + direct commit
- Learnings prioritized over KB data when they conflict
- Three categories: communication, factual, structured_data
- Prompt updated: tells agent it can save corrections for future conversations
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>