When a contributor merges main into their fork branch (the standard GitHub
workflow), the merge-base equals main's SHA, which triggers the 'already up
to date' early return. That path closed the PR without cherry-picking the
new content. Cameron's PR #3377 hit exactly this bug.
Fix: add a diff check before returning 'already up to date'. If the
branch has actual content changes vs main, proceed to cherry-pick
instead of short-circuiting.
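A minimal sketch of the fixed check (helper names and the git invocation are illustrative, not the actual merge.py code):

```python
import subprocess

def has_content_changes(repo: str, branch: str, base: str = "main") -> bool:
    # Merging main INTO the branch advances the merge-base to main's SHA
    # without discarding the branch's own changes, so merge-base equality
    # alone cannot mean "already up to date". Check the content diff too.
    out = subprocess.run(
        ["git", "-C", repo, "diff", "--name-only", f"{base}...{branch}"],
        capture_output=True, text=True, check=True,
    )
    return bool(out.stdout.strip())

def already_up_to_date(merge_base: str, main_sha: str, has_diff: bool) -> bool:
    # Old (buggy) check: merge_base == main_sha. New: also require no diff.
    return merge_base == main_sha and not has_diff
```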
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Root cause of 84% reweave PR rejection rate: claim titles with colons
(e.g., "COAL: Meta-PoW: The ORE Treasury Protocol") written as bare
YAML list items, causing yaml.safe_load to fail during merge.
Three changes:
1. frontmatter.py: _yaml_quote() wraps colon-containing values in double quotes
2. reweave.py: _write_edge_regex uses _yaml_quote for new edges
3. merge.py: skip individual files with parse failures instead of aborting entire PR
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Ganymede review findings:
1. source_channel was missing from CREATE TABLE (fresh installs wouldn't have it)
2. Default fallback changed from 'telegram' to 'unknown' — unknown prefixes
are genuinely unknown, not telegram
3. Cross-reference comments added between BRANCH_PREFIX_MAP and _CHANNEL_MAP
Also wires classify_source_channel into merge.py PR discovery INSERT.
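Sketch of the classifier with the 'unknown' fallback; the map entries here are illustrative, not the real prefixes:

```python
# Illustrative entries only; the real BRANCH_PREFIX_MAP sits next to
# _CHANNEL_MAP, with cross-reference comments keeping the two in sync.
BRANCH_PREFIX_MAP = {
    "extract/": "telegram",
    "gh/": "github",
}

def classify_source_channel(branch: str) -> str:
    for prefix, channel in BRANCH_PREFIX_MAP.items():
        if branch.startswith(prefix):
            return channel
    return "unknown"  # unknown prefixes are genuinely unknown, not telegram
```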
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
github_feedback.py posts pipeline status to GitHub PRs at three touchpoints:
discovery ack, eval review result, and merge/close outcome. Only fires for
PRs with a github_pr link (set by sync-mirror.sh). All calls non-fatal.
contributor.py: expanded the git author fallback to scan all non-merge
commits (previously only the last commit was checked), and added teleo-bot
and github-actions[bot] to the bot filter list.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
External contributors who run `git merge main` create merge commits that
cherry-pick can't handle without the -m flag; --no-merges filters these out.
Added detection for branches that contain only merge commits but a real
content diff.
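The branch classification can be sketched as a pure decision (inputs would come from `git rev-list --no-merges main..branch` and a content diff; names are hypothetical):

```python
def classify_branch(nonmerge_commits: list[str], has_content_diff: bool) -> str:
    # Non-merge commits are the cherry-pickable ones. A branch whose only
    # commits are merges (e.g. repeated `git merge main`) can still carry
    # real content and needs its own handling path.
    if nonmerge_commits:
        return "cherry_pick"
    if has_content_diff:
        return "merge_commits_only"  # real content, no pickable commits
    return "already_merged"
```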
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Both the "already merged" path and _handle_permanent_conflicts closed PRs on
Forgejo without checking the return value. On API failure, the DB update would
proceed anyway, creating ghost PRs (DB=closed/merged, Forgejo=open). Now both
paths check for None return and skip DB updates on failure — same pattern as
close_pr in pr_state.py.
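The ordering pattern, sketched with injected callables (not the real pr_state.py signature):

```python
def close_with_forgejo_first(forgejo_close, db_mark_closed, pr_id) -> bool:
    # Same pattern as close_pr in pr_state.py: the API call goes first,
    # and a None return (API failure) skips the DB update entirely, so
    # the DB can never say closed/merged while Forgejo still shows the
    # PR open (a ghost PR).
    if forgejo_close(pr_id) is None:
        return False
    db_mark_closed(pr_id)
    return True
```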
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fix 4 Forgejo ghost PR bugs flagged by Ganymede:
- fixer.py GC close: DB update ran outside try/except, closing DB even on Forgejo failure
- substantive_fixer.py droppable: NO Forgejo close at all
- substantive_fixer.py auto-enrichment: DB update before Forgejo (reversed order)
- substantive_fixer.py close_and_reextract: replace manual Forgejo+DB with close_pr()
Add start_fixing() and reset_for_reeval() to pr_state.py:
- start_fixing: atomic claim + fix_attempts increment in one statement
- reset_for_reeval: clears all eval state for re-evaluation after fix
Also fixes stale line number comment in merge.py (Ganymede nit).
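The atomic claim in start_fixing can be sketched like this (table and status names are illustrative):

```python
import sqlite3

def start_fixing(conn: sqlite3.Connection, pr_id: int) -> bool:
    # Atomic claim: the status flip and the fix_attempts increment happen
    # in ONE UPDATE guarded by the current status, so two fixer workers
    # can never both claim the same PR.
    cur = conn.execute(
        "UPDATE prs SET status = 'fixing', fix_attempts = fix_attempts + 1 "
        "WHERE id = ? AND status = 'needs_fix'",
        (pr_id,),
    )
    return cur.rowcount == 1
```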
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace 38 hand-crafted UPDATE prs SET status calls across evaluate.py
and merge.py with 7 centralized functions that enforce invariants:
- close_pr: always syncs Forgejo (opt-out for reconciliation)
- approve_pr: raises ValueError on empty domain (prevents NULL bugs)
- mark_merged: always sets merged_at, clears last_error
- mark_conflict: always increments merge_failures, sets merge_cycled
- mark_conflict_permanent: terminal conflict state
- reopen_pr: handles all reopen scenarios (transient, rejection, reeval)
- start_review: atomic claim with bool return
This eliminates the class of bugs that produced 3 incidents:
1. Domain NULL on musings bypass (7 PRs stuck, 20h zero throughput)
2. Forgejo ghost PRs (70 PRs open on Forgejo but closed in DB)
3. Merge_cycled missing on various close paths
Also fixes: 3 close paths in merge.py had DB update before Forgejo call
(reversed order). close_pr does Forgejo first, then DB.
Only remaining raw status transition: _claim_next_pr (approved→merging)
which is an atomic subquery and doesn't have invariant requirements.
20 new tests, 264 total passing, 0 regressions. Net -101 lines in
evaluate.py + merge.py.
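The invariant style of these centralized functions, sketched for approve_pr (schema and wording are illustrative):

```python
import sqlite3

def approve_pr(conn: sqlite3.Connection, pr_id: int, domain: str) -> None:
    # Invariant enforced centrally: an approved PR always carries a
    # domain. Raising here is what prevents a repeat of the NULL-domain
    # incident (7 PRs stuck, 20h of zero throughput).
    if not domain:
        raise ValueError(f"approve_pr: empty domain for PR {pr_id}")
    conn.execute(
        "UPDATE prs SET status = 'approved', domain = ? WHERE id = ?",
        (domain, pr_id),
    )
```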
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
db.py: migration v20 adds the conflict_rebase_attempts, merge_failures, and
merge_cycled columns (these already exist on the VPS via manual migration but
were missing from code; any future DB rebuild would have broken the retry
mechanism).
merge.py: replace retry-with-backoff on config.lock with asyncio.Lock
(_bare_repo_lock) around all worktree add/remove calls. Prevents
contention instead of retrying it. Applied to both _cherry_pick_onto_main
and _merge_reweave_pr.
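The lock-over-retry idea, sketched (run_git stands in for whatever async git wrapper merge.py uses):

```python
import asyncio

_bare_repo_lock = asyncio.Lock()  # one per process, shared by all merges

async def add_worktree(run_git, path: str, branch: str) -> None:
    # `git worktree add` and `remove` both write the bare repo's config
    # file; serializing them behind one asyncio.Lock prevents config.lock
    # contention instead of retrying after it happens.
    async with _bare_repo_lock:
        await run_git("worktree", "add", path, branch)

async def remove_worktree(run_git, path: str) -> None:
    async with _bare_repo_lock:
        await run_git("worktree", "remove", "--force", path)
```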
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two code paths set status='closed' in the pipeline DB without calling
the Forgejo API to close the PR. This caused 50 ghost PRs to accumulate
on Forgejo (dashboard shows review backlog) while the pipeline considered
them done.
- evaluate.py: no-diff stale branch close now calls Forgejo PATCH
- merge.py: permanent conflict close now calls Forgejo PATCH
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Parallel domain merges race on the bare repo's config file. The single
retry only covered one of two worktree-add call sites and used fixed
delay. Now both sites retry up to 3 times with increasing jitter.
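A sketch of the retry-with-increasing-jitter wrapper (the exception type and delays are placeholders, not the real call site):

```python
import random
import time

def retry_on_lock(fn, retries: int = 3, base_delay: float = 0.1):
    # Attempt n sleeps a random amount in [0, base_delay * n), so the
    # jitter window grows with each failure. The real call sites wrap
    # `git worktree add` against config.lock contention.
    for attempt in range(retries + 1):
        try:
            return fn()
        except RuntimeError:  # stand-in for the config.lock failure
            if attempt == retries:
                raise
            time.sleep(random.uniform(0, base_delay * (attempt + 1)))
```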
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Ganymede review nit: substring match on "already" could false-positive
on future return strings. Pin to the two known values from cherry_pick().
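The pinning can be sketched like this; the two string values are assumed from the surrounding commits, not confirmed against cherry_pick():

```python
# Assumed to be the two known return strings from cherry_pick().
ALREADY_RESULTS = frozenset({"already up to date", "already merged"})

def is_already_result(result: str) -> bool:
    # Exact membership instead of `"already" in result`, so a future
    # return string that merely contains "already" can't false-positive.
    return result in ALREADY_RESULTS
```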
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two fixes for the 18-PR merge blockage:
1. When cherry-pick returns "already merged" (all commits empty because
content is already on main), close the PR directly instead of trying
to push the stale branch SHA to main. The branch ref points at old
commits that aren't descendants of current main, so the push would
always fail as non-fast-forward.
2. Retry worktree add once with jittered delay when config.lock
contention occurs from parallel domain merges.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Forgejo categorically blocks --force-with-lease on protected branches,
even for fast-forward pushes. The cherry-picked branch is already a
descendant of origin/main, so a regular push is a fast-forward by
definition, and non-fast-forward pushes are still rejected by default:
the same safety guarantee.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three fixes from Ganymede's review of extract-time-connection:
1. Replace git add -A with explicit file staging in _reciprocal_edges
2. Push to origin/main immediately after commit (survive batch-extract reset)
3. RECIPROCAL_EDGE_MAP: challenges→challenged_by (not symmetric)
Added challenged_by to REWEAVE_EDGE_FIELDS, EDGE_FIELDS, EDGE_WEIGHTS
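The asymmetric mapping, sketched (entries other than challenges are hypothetical):

```python
# Sketch of the map; only the challenges entry is from the fix above.
RECIPROCAL_EDGE_MAP = {
    "supports": "supported_by",
    "challenges": "challenged_by",  # NOT symmetric: if A challenges B, B is challenged_by A
    "related": "related",           # symmetric
}
```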
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The skip loop only matched `- ` (no indent) but YAML list items are
commonly written as ` - item` (2-space indent). This caused old list
items to persist alongside new ones, corrupting frontmatter on merge.
Fix: consume any line starting with space or dash as part of the current
field's value block.
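The fixed consume rule, sketched over a list of frontmatter lines (function name is illustrative):

```python
def skip_value_block(lines: list[str], i: int) -> int:
    # Consume the lines belonging to the field at lines[i]: anything
    # starting with a space (indented item like `  - x`) or a dash
    # (`- x`) is part of the current field's value block. The old loop
    # only matched bare `- `, so indented items survived the skip.
    j = i + 1
    while j < len(lines) and lines[j][:1] in (" ", "-"):
        j += 1
    return j
```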
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Combined superset assertion and merge computation into single loop
(removed duplicate scalar-to-list normalization)
- Added worktree remove --force before worktree add to handle prior
crash leaving stale worktree (SIGKILL, OOM, power loss)
Two fixes from Ganymede review:
1. CRITICAL: blank line before closing --- compounded on repeat reweaves.
Body starts with \n---, so \n{body} created \n\n---. Fixed by checking
body prefix.
2. Replaced yaml.dump round-trip with _serialize_edge_fields() that splices
only edge arrays into raw frontmatter text. Non-edge fields (title,
confidence, type, quotes, flow styles) stay byte-identical to main HEAD.
_parse_yaml_frontmatter now returns 3-tuple: (dict, raw_fm_text, body).
_serialize_frontmatter takes (raw_fm_text, merged_edges_dict, body).
26 tests pass including idempotency (5x serialize), formatting preservation,
and no-blank-line regression test.
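The blank-line fix (item 1) can be sketched as follows, assuming the raw frontmatter text carries no trailing newline:

```python
def assemble_claim(raw_fm: str, body: str) -> str:
    # The body begins with "\n---" (closing delimiter plus content), so
    # blindly writing f"{raw_fm}\n{body}" produced "\n\n---", and the
    # blank line compounded on every repeat reweave. Add the separator
    # newline only when the body doesn't already start with one.
    sep = "" if body.startswith("\n") else "\n"
    return f"---\n{raw_fm}{sep}{body}"
```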
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reweave PRs modify existing files (appending YAML edges). Cherry-pick
fails ~75% when main moves between PR creation and merge.
_merge_reweave_pr() reads each changed file from both main HEAD and
branch HEAD, unions the edge arrays (order-preserving, main-first),
and writes the result. Eliminates merge conflicts structurally.
Key design decisions (Ganymede + Theseus approved):
- Order-preserving dedup: main's edges first, branch-new appended
- Superset assertion: logs warning if branch missing main edges
- Uses main's body text (reweave only touches frontmatter)
- Loud failure on parse errors (no cherry-pick fallback)
- Append-only contract: reweave adds edges, never removes
18 tests covering parse, union, serialize, superset, and full workflow.
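The order-preserving union at the heart of the design can be sketched as:

```python
def union_edges(main_edges: list[str], branch_edges: list[str]) -> list[str]:
    # Main's edges first, branch-new edges appended in their original
    # order. Append-only: nothing from main is ever removed, matching
    # the reweave contract.
    merged = list(main_edges)
    seen = set(main_edges)
    for edge in branch_edges:
        if edge not in seen:
            merged.append(edge)
            seen.add(edge)
    return merged
```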
_retry_conflict_prs called _rebase_and_push which was never defined,
causing NameError on every conflict retry. Now uses _cherry_pick_onto_main
consistent with the primary merge path.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Layer 1: Insertion-time dedup in openrouter-extract-v2.py — skip if source_slug
already appears in claim content.
Layer 2: Insertion-time dedup in entity_batch.py — skip if PR number already
enriched this claim.
Layer 3: Post-rebase dedup in merge.py — scan rebased files for duplicate
evidence blocks (same source reference) and remove them before force-push.
Root cause: multiple enrichment branches modify the same claim at the same
insertion point. When rebased sequentially, evidence blocks are duplicated.
(Leo: PRs #1751, #1752)
lib/dedup.py: standalone module — parses evidence headers, deduplicates by
source key, preserves trailing content (Relevant Notes, Topics sections).
9 tests covering all patterns including the real PR #1751 duplication case.
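The Layer 3 dedup, sketched over pre-parsed (source_key, text) pairs; the real lib/dedup.py does the header parsing and trailing-section handling itself:

```python
def dedupe_evidence(blocks: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # First evidence block per source key wins; later duplicates (from
    # sequentially rebased enrichment branches hitting the same insertion
    # point) are dropped.
    seen: set[str] = set()
    out = []
    for key, text in blocks:
        if key not in seen:
            seen.add(key)
            out.append((key, text))
    return out
```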
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Atomic extract-and-connect (lib/connect.py):
- After extraction writes claim files, each new claim is embedded via
OpenRouter, searched against Qdrant, and top-5 neighbors (cosine > 0.55)
are added as `related` edges in the claim's frontmatter
- Edges written on NEW claim only — avoids merge conflicts
- Cross-domain connections enabled, non-fatal on Qdrant failure
- Wired into openrouter-extract-v2.py post-extraction step
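The neighbor selection above reduces to a small filter (input shape assumed: similarity-sorted (slug, cosine) pairs from the Qdrant search):

```python
def pick_related(neighbors: list[tuple[str, float]], k: int = 5,
                 min_cos: float = 0.55) -> list[str]:
    # Keep the top-k neighbors above the similarity floor; these become
    # `related` edges written on the NEW claim only.
    return [slug for slug, cos in neighbors[:k] if cos > min_cos]
```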
Stale PR monitor (lib/stale_pr.py):
- Every watchdog cycle checks open extract/* PRs
- If open >30 min AND 0 claim files → auto-close with comment
- After 2 stale closures → marks source as extraction_failed
- Wired into watchdog.py as check #6
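The staleness predicate the watchdog applies can be sketched as (signature is illustrative):

```python
def is_stale_extract_pr(age_minutes: float, claim_files: int,
                        threshold_minutes: float = 30) -> bool:
    # An extract/* PR open past the threshold with zero claim files is
    # considered stuck and gets auto-closed with a comment.
    return age_minutes > threshold_minutes and claim_files == 0
```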
Response audit system:
- response_audit table (migration v8), persistent audit conn in bot.py
- 90-day retention cleanup, tool_calls JSON column
- Confidence tag stripping, systemd ReadWritePaths for pipeline.db
Supporting infrastructure:
- reweave.py: nightly edge reconnection for orphan claims
- reconcile-sources.py: source status reconciliation
- backfill-domains.py: domain classification backfill
- ops/reconcile-source-status.sh: operational reconciliation script
- Attribution improvements, post-extract enrichments, merge improvements
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
After a PR merges successfully, _embed_merged_claims() diffs the merged SHA
against its parent to find new/changed .md files in knowledge directories
(domains/, core/, foundations/, decisions/, entities/). Each file is embedded
via embed-claims.py --file (OpenRouter, text-embedding-3-small).
Non-fatal: embedding failure logs a warning but does not block the merge
pipeline. This keeps vector search current without requiring manual re-embeds.
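The file filter inside _embed_merged_claims can be sketched as (input assumed to be a `git diff --name-only parent merged_sha` listing):

```python
KNOWLEDGE_DIRS = ("domains/", "core/", "foundations/", "decisions/", "entities/")

def files_to_embed(changed_paths: list[str]) -> list[str]:
    # Keep only markdown claims inside knowledge directories; each
    # surviving path is handed to embed-claims.py --file.
    return [p for p in changed_paths
            if p.endswith(".md") and p.startswith(KNOWLEDGE_DIRS)]
```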
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
- Domain review → GPT-4o (OpenRouter), Leo STANDARD → Sonnet (OpenRouter),
Leo DEEP → Opus (Claude Max). Two model families = no correlated blind spots.
- Opus reserved for DEEP eval only — protects rate limit for overnight research.
- Review prompts calibrated: require per-criterion evidence, blocking-vs-observation
verdict rules. Moved from 100% rubber-stamp approval to 12% pass rate.
- OpenRouter failures classified as openrouter_failed (not rate_limited) to avoid
spurious 15-min Opus backoff.
- merge.py: pre-check PR state before merge API call (prevents 405 on re-merge).
Pentagon-Agent: Leo <294C3CA1-0205-4668-82FA-B984D54F48AD>