feat: atomic extract-and-connect + stale PR monitor + response audit #4
Summary
Review
Full sign-off from Ganymede (code review via Pentagon DM)
Pipeline reliability (8 fixes, reviewed by Ganymede+Rhea+Leo+Rio):

1. Merge API recovery — pre-flight approval check, transient/permanent failure distinction, retry jitter
2. Ghost PR detection — ls-remote branch check in reconciliation, network guard
3. Source status contract — the directory IS the status; no code change needed
4. Batch-state markers eliminated — two-gate skip (archive check + batched branch check)
5. Branch SHA tracking — batched ls-remote, auto-reset verdicts, dismiss stale reviews
6. Mirror pre-flight permissions — chown check in sync-mirror.sh
7. Telegram archive commit-after-write — git add/commit/push with a rebase --abort fallback
8. Post-merge source archiving — queue/ → archive/{domain}/ after merge

Pipeline fixes:

- merge_cycled flag — eval attempts preserved during merge-failure cycling (Ganymede+Rhea)
- merge_failures diagnostic counter
- Startup recovery preserves eval_attempts (was incorrectly resetting to 0)
- No-diff PRs auto-closed by eval (root cause of 17 zombie PRs)
- GC threshold aligned with the substantive fixer budget (was 2, now 4)
- Conflict retry with a 3-attempt budget + permanent-conflict handler
- Local ff-merge fallback for Forgejo 405 errors

Telegram bot:

- KB retrieval: 3-layer (entity resolution → claim search → agent context)
- Reply-to-bot handler (context.bot.id check)
- Tag regex: @teleo|@futairdbot
- Prompt rewrite for a natural analyst voice
- Market data API integration (Ben's token price endpoint)
- Conversation windows (5-message unanswered counter, per-user-per-chat)
- Conversation history in prompt (last 5 exchanges)
- Worktree file lock for archive writes

Infrastructure:

- worktree_lock.py — file-based lock (flock) for main-worktree coordination
- backfill-sources.py — source DB registration for the Argus funnel
- batch-extract-50.sh v3 — two-gate skip, batched ls-remote, network guard
- sync-mirror.sh — auto-PR creation for mirrored GitHub branches, permission pre-flight
- Argus dashboard — conflicts + reviewing in backlog, queue count in funnel
- Enrichment-inside-frontmatter bug fix (regex anchor, not --- split)

Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>

Before Opus responds, Haiku evaluates: "Does this message need an X search?" If YES, it searches X, injects the results into the Opus prompt, and archives them as a source. Opus then responds with KB knowledge and fresh tweet data combined.

Flow: user asks naturally ("what are people saying about P2P?") → Haiku decides a search is needed → X search → results in Opus context → unified response. ~1s latency, ~$0.001 cost per message. Only fires when Haiku says YES. The explicit /research command still works as a direct path.

Also: fixed systemd ProtectSystem paths (Ganymede: root cause of all write failures). Fixed the research regex for Telegram group commands.

Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>

Primary path: GET /twitter/tweets?tweet_ids={id} — works for any tweet of any age and returns full content. This replaces the fragile from:username search-pagination fallback. Fallback: the article endpoint for X long-form articles. Last resort: a placeholder with a [Could not fetch] message.

Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>

A stale learning ("I don't have Robin Hanson data") overrode real KB data. Per Ganymede's review: dated entries expire after 7 days; permanent entries (communication style, identity) are undated and always included. A prompt guard — "NEVER save a learning about what data you do or don't have" — prevents the bot from writing availability claims that go stale.

Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>

The bot said "I don't have the ability to run live X searches" despite Haiku finding 10 tweets. Two issues: (1) the prompt section header didn't make clear these were LIVE results, and (2) learnings taught deflection ("say drop links here" instead of acknowledging search capability). Fixed: the section header now says "LIVE X Search Results (you just searched for X — cite these directly)".
Learnings updated to acknowledge search capability. Stale Robin Hanson learning removed again (it had re-synced from git).

Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>

Transcript system:

- All messages in all chats captured to the chat_transcripts store
- 1-hour dump job writes per-chat JSON to /opt/teleo-eval/transcripts/
- Includes internal reasoning (KB matches, searches, learnings)
- Transcripts accumulate over the session (no clear on dump)
- Per-chat directories: transcripts/{chat-slug}/{date-hour}.json

Inline contribution tags:

- SOURCE: creates an inbox source file with verbatim user content
- CLAIM: creates a draft claim file attributed to the contributor
- Both strip the tag from the displayed response
- Full user message preserved verbatim (Rio decides context, can't alter)

Also: multi-URL processing (up to 5 per message)

Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
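The "merge API recovery" fix above combines a pre-flight approval check, a transient/permanent failure distinction, and retry jitter. A minimal sketch of that pattern — the function names and exception type are hypothetical, not the PR's actual code:

```python
import random
import time

class PermanentMergeError(Exception):
    """Raised for failures retrying cannot fix (e.g. missing approval)."""

def merge_with_recovery(attempt_merge, is_approved, max_attempts=3, base_delay=1.0):
    # Pre-flight approval check: fail fast instead of burning retries.
    if not is_approved():
        raise PermanentMergeError("PR lacks approval; merge would be rejected")

    for attempt in range(1, max_attempts + 1):
        try:
            return attempt_merge()
        except PermanentMergeError:
            raise  # permanent failure: never retry
        except Exception:
            if attempt == max_attempts:
                raise
            # Transient failure: exponential backoff with jitter so
            # parallel workers don't retry in lockstep.
            time.sleep(base_delay * 2 ** (attempt - 1) * random.uniform(0.5, 1.5))
```

The key design point is classifying the error before sleeping: a 502 is worth retrying, a missing approval never is.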
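worktree_lock.py is described as a file-based flock lock for main-worktree coordination. A minimal sketch of that mechanism, assuming a context-manager API (the path and interface are guesses, not the actual module):

```python
import fcntl
import os
from contextlib import contextmanager

@contextmanager
def worktree_lock(lock_path="/tmp/worktree.lock"):
    """Exclusive advisory lock on a shared file. Blocks until any other
    holder releases; the kernel drops the lock automatically if the
    holding process dies, so no stale-lock cleanup is needed."""
    fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

Usage would look like `with worktree_lock(): write_archive_files()`, serializing the Telegram bot's archive writes against other worktree users.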
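The learnings fix above splits entries into dated (expire after 7 days) and permanent (undated, always included). That rule is a simple filter; the field names here are assumptions:

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=7)

def active_learnings(learnings, today=None):
    """Keep permanent (undated) entries; drop dated entries older than 7 days."""
    today = today or date.today()
    kept = []
    for entry in learnings:
        saved = entry.get("date")  # None marks a permanent entry
        if saved is None or today - saved <= MAX_AGE:
            kept.append(entry)
    return kept
```

An availability claim like "I don't have Robin Hanson data" would be a dated entry and age out automatically, while identity and communication-style entries persist.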
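The three-tier tweet fetch (tweets endpoint → article endpoint → placeholder) is a fallback chain. A sketch where the two fetchers are stand-ins for the real API calls:

```python
def fetch_tweet(tweet_id, fetch_by_id, fetch_article):
    """Try the primary tweets endpoint, then the article endpoint for
    long-form posts, then fall back to an explicit placeholder."""
    for fetcher in (fetch_by_id, fetch_article):
        try:
            content = fetcher(tweet_id)
            if content:
                return content
        except Exception:
            pass  # any fetch failure just means "try the next tier"
    return f"[Could not fetch tweet {tweet_id}]"
```

Returning the bracketed placeholder instead of raising keeps the pipeline moving and makes the failure visible in the archived source.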
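The Haiku-gated search flow above (cheap yes/no classifier in front of an expensive live search) can be sketched as a single dispatch function. All four callables are stand-ins for the real Haiku, X-search, Opus, and archiving plumbing:

```python
def answer(message, classify_needs_search, search_x, ask_opus, archive_source):
    """Gate a live X search behind a cheap classifier, then build the
    model prompt from any results plus the user's message."""
    context = ""
    if classify_needs_search(message):       # Haiku: "does this need a search?"
        results = search_x(message)          # live X search, only if YES
        archive_source(message, results)     # persist results as a source
        context = f"LIVE X Search Results:\n{results}\n\n"
    return ask_opus(context + message)       # Opus sees KB + fresh tweets
```

The labelled "LIVE X Search Results" header in the injected context is what fixes the "I can't run live searches" deflection described above.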
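The conversation window (a 5-message unanswered counter, kept per user per chat) can be sketched as a small state tracker. The exact close semantics here — window stays open for up to 5 messages after the bot last engaged — are an assumption:

```python
from collections import defaultdict

WINDOW = 5  # user messages before an open conversation goes quiet

class ConversationWindows:
    """Track, per (chat, user), how many messages have passed since the
    bot last replied; the window stays open for up to WINDOW messages."""

    def __init__(self):
        self.unanswered = defaultdict(int)

    def bot_replied(self, chat_id, user_id):
        self.unanswered[(chat_id, user_id)] = 0  # reset: window reopens

    def user_message(self, chat_id, user_id):
        """Record a message; return True while the window is still open."""
        key = (chat_id, user_id)
        self.unanswered[key] += 1
        return self.unanswered[key] <= WINDOW
```

Keying on `(chat_id, user_id)` is what makes the counter per-user-per-chat rather than global.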
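The transcript layout above (transcripts/{chat-slug}/{date-hour}.json, written hourly, accumulating over the session) implies a dump job roughly like the following; the timestamp format and function name are assumptions:

```python
import json
import os
from datetime import datetime

def dump_transcript(base_dir, chat_slug, messages, now=None):
    """Write one chat's accumulated transcript to
    {base_dir}/{chat-slug}/{YYYY-MM-DD-HH}.json. The caller keeps the
    in-memory store intact (no clear on dump)."""
    now = now or datetime.now()
    chat_dir = os.path.join(base_dir, chat_slug)
    os.makedirs(chat_dir, exist_ok=True)
    path = os.path.join(chat_dir, now.strftime("%Y-%m-%d-%H") + ".json")
    with open(path, "w") as f:
        json.dump(messages, f, indent=2)
    return path
```

Because transcripts accumulate rather than reset, each hourly file is a superset of the previous one for the same session.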
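The SOURCE:/CLAIM: inline tags route a contribution while stripping the tag from the displayed reply. A regex sketch — the exact tag syntax (leading position, case handling) is an assumption:

```python
import re

TAG_RE = re.compile(r"^(SOURCE|CLAIM):\s*", re.IGNORECASE)

def split_contribution(message):
    """Return (tag, display_text). tag is None when no tag is present.
    The full original message is archived verbatim elsewhere, per the
    'Rio decides context, can't alter' rule."""
    m = TAG_RE.match(message)
    if not m:
        return None, message
    return m.group(1).upper(), message[m.end():]
```

The caller would create an inbox source file for SOURCE, a draft claim file for CLAIM, and show only `display_text` back to the chat.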