Two issues Ship hit on the Montreal Protocol claim:
1. 500 on canonical stem lookup. The file starts with a ```markdown
wrapper instead of a bare --- frontmatter delimiter; _split_frontmatter
checked startswith("---") and bailed, returning "frontmatter parse
failed". The same wrapper exists on 6 other claim files (audit grep).
The fix strips the wrapper before frontmatter detection.
2. 404 on long activity-feed slug. Same root cause — _build_indexes
couldn't read the file's title from frontmatter, so by_title never
indexed it, so title-fallback resolution had nothing to match against.
Both bugs collapse once we unwrap.
Also: switched "file exists but has no frontmatter" from 500 to 404 with
reason=file_no_frontmatter. These are stray enrichment fragments living
in domains/ that never got merged into a parent claim. From the API
caller's perspective there's no claim at that slug — 500 implied
"server bug, retry later" which isn't actionable.
Verified: 3/3 wrapped claims (montreal, medicare, dod) now return 200,
warm-cache ~13ms. Long-slug repro (montreal) resolves via title fallback
to canonical stem. Negative test (nonsense slug) still 404.
Activity feed emits slugs derived from PR description (the slugified claim
title), which can be longer than the on-disk file stem (agents pick shorter
hand-chosen filenames). Pure exact-stem lookup 404s on those.
Three-tier resolution in handle_claim_detail:
1. Exact stem match (existing behavior)
2. Title fallback: normalize requested slug, look up via by_title index
(already populated from frontmatter title during _build_indexes)
3. Prefix fallback: longest common prefix among stems, anchored at 32 chars
to prevent spurious hits
Response slug returns the canonical on-disk stem so frontend share-links
and caches converge to one form.
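The three tiers can be sketched as below; `stems` and `by_title` stand in for the real index state in handle_claim_detail, and the helper names (`resolve_slug`, `_common_prefix_len`) are illustrative:

```python
# Assumed normalizer: lowercase, hyphen->space, collapse whitespace.
def _normalize_for_match(s: str) -> str:
    return " ".join(s.lower().replace("-", " ").split())

def _common_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def resolve_slug(requested, stems, by_title):
    # Tier 1: exact on-disk stem match (existing behavior)
    if requested in stems:
        return requested
    # Tier 2: title fallback via the frontmatter-title index
    hit = by_title.get(_normalize_for_match(requested))
    if hit:
        return hit
    # Tier 3: longest common prefix among stems, anchored at 32 matching
    # chars so short shared prefixes don't produce spurious hits
    best = max(stems, key=lambda s: _common_prefix_len(requested, s), default=None)
    if best and _common_prefix_len(requested, best) >= 32:
        return best
    return None
```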
Repro: GET /api/claims/spacex-and-amazon-kuiper-non-endorsement-of-wef-debris-
guidelines-demonstrates-systemic-voluntary-governance-failure-at-the-scale-
where-it-matters-most was 404; now 200, returns shorter on-disk slug
'...-governance-failure'. Negative case (nonsense slug) still 404s.
Reported by Ship — Cory-facing demo path.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Implements Ship's claim detail contract — one round-trip, all data
resolved server-side. Replaces thin domain-only stub with full tree walk
(domains/ + foundations/ + core/), DB joins for PRs and reviews, and
server-side wikilink resolution to eliminate frontend N+1 cascades.
Response shape (Ship brief 2026-04-29):
slug, title, domain, secondary_domains, confidence, description,
created, last_review, body (raw markdown), sourced_from, reviews,
prs, edges {supports,challenges,related,depends_on}, wikilinks
Wikilink resolution:
- Builds title→stem index from frontmatter title field, fallback to
filename stem normalized via _normalize_for_match
- Returns flat {link_text: slug_or_null} map; unresolved → null so
frontend can render plain text
- Inline normalization (lowercase, hyphen↔space, collapse whitespace,
strip punctuation). Note: lib/attribution.py exposes only
normalize_handle today, not the title normalizer Ship referenced.
If a canonical helper lands later, point at it.
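A sketch of the inline normalizer and the flat wikilink map, assuming `[[...]]` link syntax; `resolve_wikilinks` and the exact punctuation handling are illustrative stand-ins for the module's real code:

```python
import re
import string

# Strip punctuation except hyphens (hyphens are converted to spaces next).
_PUNCT = str.maketrans("", "", string.punctuation.replace("-", ""))

def _normalize_for_match(s: str) -> str:
    s = s.translate(_PUNCT)             # strip punctuation
    s = s.lower().replace("-", " ")     # lowercase, hyphen<->space
    return " ".join(s.split())          # collapse whitespace

def resolve_wikilinks(body: str, title_to_stem: dict) -> dict:
    """Flat {link_text: slug_or_null}; unresolved links map to None so the
    frontend can render plain text."""
    links = re.findall(r"\[\[([^\]]+)\]\]", body)
    return {t: title_to_stem.get(_normalize_for_match(t)) for t in links}
```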
Caches:
- title→slug index: 60s TTL (warm cache <20ms p50 verified)
- list endpoint: 5min TTL (preserved from prior)
- Cold: ~3.3s for tree walk of 1,866 files; warm: 13-17ms
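The TTL pattern above can be sketched as a small generic cache; the cache shape in the real module may differ:

```python
import time

# key -> (inserted_at, value); monotonic clock avoids wall-time jumps.
_cache: dict = {}

def cached(key, ttl, build):
    """Return the cached value for key if younger than ttl seconds,
    else rebuild via build() and store it."""
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and now - hit[0] < ttl:
        return hit[1]
    value = build()
    _cache[key] = (now, value)
    return value
```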
Bug fixed in second pass:
- _resolve_sourced_from defaulted title="" which leaked LIKE '%%'
matching every PR. Now requires non-empty title+stem; handler falls
back to slug.replace("-"," ") when frontmatter title is missing.
Verified live on VPS:
- AI diagnostic triage claim (no fm.title): sourced_from=1, prs=0
(correct — Feb claim, pre-description-tracking)
- Recent extract PR claim: sourced_from=1 with URL, prs=1, reviews=1,
last_review populated, edges 3 supports + 7 related, wikilinks 0
- 404 on missing slug: correct
- Claim with [[maps/...]] wikilink: 5/6 resolved (correct null on map)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Claim slugs were being cut at 120 chars in _extract_claim_slugs, causing
Timeline event clicks to 404 when the on-disk filename exceeded that
length (frontend builds /api/claims/<slug> from the truncated value).
This fix landed Apr 26 but regressed when the file was redeployed; this
commit lands the unmangled version in the repo so a deploy.sh re-ship
doesn't reintroduce the cap.
Verified live: max slug now 265 chars, 16 of 30 over the old 120 cap.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
de7e5ec landed leaderboard_routes.py + the route file's register fn but
the import + register_leaderboard_routes(app) call + auth-middleware
allowlist were never added to app.py — endpoint returned 404 in production.
Three minimal edits to app.py mirror the existing register_*_routes pattern
(import at line 28, allowlist OR-clause at line 512, register call at 2365).
Plus a SQL bug in _parse_window: rolling-window clauses prefixed "AND "
but the WHERE composition uses " AND ".join(...), producing
"WHERE 1=1 AND AND ce.timestamp..." → sqlite3.OperationalError on every
window=Nd / window=Nh request. Stripped the prefix and added a comment so
the asymmetry doesn't bite again.
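The asymmetry can be sketched like this; `_parse_window` and `build_where` are simplified from the real route code:

```python
def _parse_window(window: str):
    """Return (bare_clause, params). NOTE: no "AND " prefix -- the WHERE
    builder below supplies the conjunctions via " AND ".join."""
    if window == "all_time":
        return "", []
    n, unit = int(window[:-1]), window[-1]
    hours = n * 24 if unit == "d" else n
    return "ce.timestamp >= datetime('now', ?)", [f"-{hours} hours"]

def build_where(window: str):
    clauses, params = ["1=1"], []
    clause, p = _parse_window(window)
    if clause:
        clauses.append(clause)
        params.extend(p)
    # A prefixed clause here would yield "WHERE 1=1 AND AND ce.timestamp..."
    return "WHERE " + " AND ".join(clauses), params
```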
Verified on VPS:
GET /api/leaderboard?window=all_time&kind=person → 200, 11 rows
GET /api/leaderboard?window=7d&kind=person → 200, 2 rows
GET /api/leaderboard?window=30d&kind=person → 200, 9 rows
GET /api/leaderboard?domain=internet-finance → 200, 3 rows
GET /api/leaderboard?kind=agent → 200, leo/rio/clay/astra/vida
Unblocks: Argus dashboard cutover, Oberon column reorder, Leo's CI
taxonomy broadcast.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
New endpoint replaces the legacy /api/contributors *_count read path with
event-sourced reads from the Phase A contribution_events ledger.
- Params: window (all_time | Nd | Nh), kind (person | agent | org | all),
domain (filter), limit (default 100, max 500)
- Returns per-handle CI, full role breakdown (author/challenger/synthesizer/
originator/evaluator), events_count, pr_count, first/last contribution
- ORDER BY ci DESC, last_contribution DESC — recent contributors break ties
- Read-only sqlite URI; total/has_more computed for paginated UIs
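The param clamping and pagination math can be sketched as below; the helper names are illustrative, with the limit default/max and `has_more` semantics taken from the text above:

```python
def clamp_limit(raw, default=100, maximum=500) -> int:
    """limit param: default 100, hard cap 500, floor 1; bad input -> default."""
    try:
        return max(1, min(int(raw), maximum))
    except (TypeError, ValueError):
        return default

def paginate(total: int, offset: int, limit: int) -> dict:
    """total/has_more for paginated UIs."""
    return {"total": total, "has_more": offset + limit < total}
```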
Wiring (import + register + _PUBLIC_PATHS entry) currently applied to live
app.py on VPS only — repo app.py has drift from Ship's uncommitted /api/search
POST contract. Next deploy.sh round-trip needs both to land together.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Regression: aeae712's source/create distinction was lost — VPS reverted to
pre-aeae712 behavior where every extract/* knowledge PR returned type=create
regardless of whether a claim was written. Source archives surfaced as
"New claim" chips with date-prefix slugs that 404 on click.
Root cause: aeae712 was deployed via local file copy and never pushed to
origin; a subsequent rsync from origin/main overwrote it with the older
classifier. This branch ships from origin so deploy.sh's repo-first gate
prevents recurrence.
- Restore aeae712: extract/* + empty description -> source, with
empty claim_slug + source_slug field, ci_earned 0.15
- Add Leo's regex fallback: candidate_slug matching
^\d{4}-\d{2}-\d{2}-.+-[a-f0-9]{4}$ -> source regardless of branch
/commit_type/description state. Catches edge cases where description
leaks but is just a source title (slugified into the inbox filename
pattern), not a claim insight.
- Add 'challenge' to _FEED_COMMIT_TYPES (latent bug — challenge PRs
would be filtered out before classification because the filter
list omitted them; memory says 0 challenges exist so it never
triggered, but schema support belongs in the filter)
- _build_events: compute candidate slug before classify so the regex
fallback has a slug to inspect
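The combined classifier can be sketched as follows; `classify` is a simplification of the real `_build_events` flow, with the inbox-pattern regex taken verbatim from the bullet above:

```python
import re

# Date-prefixed inbox filename pattern: YYYY-MM-DD-<title>-<4 hex chars>
_SOURCE_SLUG_RE = re.compile(r"^\d{4}-\d{2}-\d{2}-.+-[a-f0-9]{4}$")

def classify(branch: str, description: str, candidate_slug: str) -> str:
    # Regex fallback: inbox-pattern slugs are sources regardless of
    # branch/commit_type/description state.
    if _SOURCE_SLUG_RE.match(candidate_slug):
        return "source"
    # Restored aeae712 rule: extract/* with empty description -> source.
    if branch.startswith("extract/") and not description.strip():
        return "source"
    return "create"
```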
Verified locally on Leo's example PRs (#4014, #4016) — both classify
as source. VPS smoke pending deploy.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wire the search endpoint to accept POST bodies matching the embedded
chat contract (query/limit/min_score/domain/confidence/exclude →
slug/path/title/domain/confidence/score/body_excerpt). GET path retained
for legacy callers and adds a min_score override for hackathon debug.
- _qdrant_hits_to_results() shapes raw hits into chat response format
- handle_api_search() dispatches POST vs GET
- /api/search added to _PUBLIC_PATHS (chat is unauthenticated)
- POST route registered alongside existing GET
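A framework-agnostic sketch of the dispatch; the request accessors and param names are stand-ins for whatever app.py actually exposes:

```python
def handle_api_search(method, body, query):
    """Dispatch POST (embedded chat contract) vs GET (legacy) into one
    normalized param shape for the Qdrant query."""
    if method == "POST":
        body = body or {}
        return {
            "query": body.get("query", ""),
            "limit": body.get("limit", 10),
            "min_score": body.get("min_score", 0.0),
            "domain": body.get("domain"),
        }
    # GET retained for legacy callers; min_score override for hackathon debug
    return {
        "query": query.get("q", ""),
        "limit": int(query.get("limit", 10)),
        "min_score": float(query.get("min_score", 0.0)),
        "domain": query.get("domain"),
    }
```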
Resolves VPS↔repo drift flagged by Argus before next deploy.sh run.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Root cause (per Epi audit):
- /api/claims, /api/contributors/list, /api/contributors/{handle} returned
404 in prod. The route registrations and claims_api.py module existed only
on VPS — never committed. Today's auto-deploy of an unrelated app.py change
rsync'd the repo (registration-less) version over the VPS edits, wiping
endpoints Vercel depended on.
- Recurrence of the deploy-without-commit pattern (blindspot #2).
Brings repo to parity with the live, working VPS state:
- Add diagnostics/claims_api.py (161 lines, was VPS-only)
- Wire register_claims_routes + register_contributor_routes in app.py
alongside the existing register_activity_feed call
beliefs_routes.py is also VPS-only and currently unregistered (orphaned by
the same Apr 21 manual edit that dropped its registration). Left out of this
commit pending a decision on whether to revive or delete.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
/api/activity and /api/activity-feed were never registered in app.py —
both files existed but neither route was reachable (confirmed 404 on VPS).
Register both so Timeline and gamification feeds can consume them.
Adds source_channel to /api/activity payload (both PR rows and audit
events — audit rows return null since they aren't tied to a specific PR).
Migration v22 already populated prs.source_channel on VPS with enum:
telegram=2340, agent=698, maintenance=102, unknown=11, github=1.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds p.source_channel to the SELECT and surfaces it on each event.
Migration v22 populated the column with enum values: telegram, agent,
maintenance, unknown, github. Timeline UI needs this to show per-event
provenance (2340 telegram, 698 agent, 102 maintenance, 11 unknown, 1 github).
Nulls fall back to "unknown"; no rows are currently null, but the
fallback is defensive for future inserts that land before a backfill runs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- SYSTEM_ACCOUNTS set excludes pipeline/unknown/teleo-agents from /api/contributors/list
- primary_ci field: action_ci.total when available, else role-based ci_score
- action_ci included in list endpoint for each contributor
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- contribution_scores table stores per-PR CI with action type
- Profile endpoint returns action_ci alongside role-based ci_score
- Branch-name attribution: contrib/NAME/ PRs attributed to NAME
- Cameron now shows 0.32 CI + BELIEF MOVER badge from challenge
- Handle variant matching (cameron-s1 → cameron) for cross-system lookup
- Full historical backfill: 985 scores across 9 contributors
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
GET /api/contributors/{handle} — returns CI score, badges, domain
breakdown, role percentages, contribution timeline, review stats.
GET /api/contributors/list — leaderboard with min_claims filter.
Git-log fallback for contributors not in pipeline.db (Cameron, Alex).
Badge system: FOUNDING CONTRIBUTOR, BELIEF MOVER, KNOWLEDGE SOURCER,
DOMAIN SPECIALIST, VETERAN, FIRST BLOOD.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Serves contribution events from pipeline.db. Classifies PRs as
create/enrich/challenge, normalizes contributors, derives summaries
from branch names when descriptions are empty. Hot sort uses
(challenge*3 + enrich*2 + signal) / hours^1.5 decay from event time.
Domain and contributor filters, pagination (limit/offset).
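The hot score can be sketched as below; the grouping (weighted sum over the whole decay) is an assumption read from the formula above, and `hot_score` is an illustrative name:

```python
def hot_score(challenge: int, enrich: int, signal: float, hours_old: float) -> float:
    """Weighted engagement divided by age^1.5 (assumed grouping);
    clamp age to 1 hour so fresh events don't divide by ~0."""
    weighted = challenge * 3 + enrich * 2 + signal
    return weighted / max(hours_old, 1.0) ** 1.5
```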
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds async git-log-based endpoint for cumulative contributor and claim
tracking. 5-minute cache, excludes bot accounts, tags founding contributors.
Standalone CLI script also included for ad-hoc data generation.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dashboard_portfolio.py:
- datetime.utcnow() → datetime.now(timezone.utc) (deprecation fix)
- days parameter validation with try/except + min(..., 365) on 2 endpoints
fetch_coins.py:
- isinstance(chain, str) guard prevents AttributeError on string chain values
- Log when adjusted market cap differs from DexScreener value
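The validation patterns above can be sketched as follows; function names and the default of 30 days are illustrative assumptions, not the scripts' real code:

```python
from datetime import datetime, timezone

def clamp_days(raw, default=30, cap=365) -> int:
    """days param: try/except on parse, floor 1, min(..., 365) cap."""
    try:
        return min(max(int(raw), 1), cap)
    except (TypeError, ValueError):
        return default

def chain_name(chain):
    # isinstance guard: chain may arrive as a plain string, not a dict,
    # which previously raised AttributeError on .get()
    if isinstance(chain, str):
        return chain
    if isinstance(chain, dict):
        return chain.get("name")
    return None

# Deprecation fix: timezone-aware now() instead of datetime.utcnow()
now = datetime.now(timezone.utc)
```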
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull live app.py from VPS to close 243-line drift. Add portfolio
dashboard (renamed from v2), portfolio nav link, and fetch_coins.py
(daily cron script for ownership coin data). Delete stale lib/ copy.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three fixes for conversation-sourced claim quality:
1. Trust hierarchy in extraction prompt: bot-generated numbers are
flagged as unverified context, not evidence. Directional claims
are extractable but specific figures require external verification.
Prevents laundering bot guesses into the KB as evidence.
2. Conversation-sourced claims tagged with verified: false and
source_type: conversation in frontmatter. Downstream consumers
(Leo, dashboard) can filter/flag these for verification.
3. GET /api/telegram-extractions endpoint for daily spot-checking.
Shows recent Telegram-sourced PRs with claim titles, status,
merge rate, and eval issues. Quick review surface.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- embed-claims.py: bulk embeds all claims/decisions/entities into Qdrant
via OpenRouter (openai/text-embedding-3-small, 1536 dims)
- diagnostics/app.py: search endpoint switched from OpenAI direct to
OpenRouter (same key as LLM calls, no new credentials)
- Qdrant running on VPS (Docker, port 6333, persistent storage)
- Collection: teleo-claims, cosine distance, 1536 dims
854 files to embed. Bulk backfill running.
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
New _domain_breakdown() function cross-references merged PRs with
contributor principals. Dashboard shows per-domain knowledge PR counts
and top 3 contributors for each domain. API: GET /api/domains returns
full breakdown.
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
Dashboard showed 1 conflict when Forgejo had 30 open PRs because it
only queried pipeline.db — which misses all agent-created PRs (Rio,
Leo, etc.). Now queries Forgejo API for authoritative open/unmergeable
counts. Falls back to DB if Forgejo unreachable.
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>