Compare commits
2 commits: m3taversal ... main

| Author | SHA1 | Date |
|---|---|---|
| | 75f1709110 | |
| | ae66f37975 | |

8 changed files with 1345 additions and 153 deletions
67 .github/workflows/sync-graph-data.yml vendored Normal file
@@ -0,0 +1,67 @@
+name: Sync Graph Data to teleo-app
+# Runs on every merge to main. Extracts graph data from the codex and
+# pushes graph-data.json + claims-context.json to teleo-app/public/.
+# This triggers a Vercel rebuild automatically.
+
+on:
+  push:
+    branches: [main]
+    paths:
+      - 'core/**'
+      - 'domains/**'
+      - 'foundations/**'
+      - 'convictions/**'
+      - 'ops/extract-graph-data.py'
+  workflow_dispatch: # manual trigger
+
+jobs:
+  sync:
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
+
+    steps:
+      - name: Checkout teleo-codex
+        uses: actions/checkout@v4
+        with:
+          fetch-depth: 0 # full history for git log agent attribution
+
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: '3.12'
+
+      - name: Run extraction
+        run: |
+          python3 ops/extract-graph-data.py \
+            --repo . \
+            --output /tmp/graph-data.json \
+            --context-output /tmp/claims-context.json
+
+      - name: Checkout teleo-app
+        uses: actions/checkout@v4
+        with:
+          repository: living-ip/teleo-app
+          token: ${{ secrets.TELEO_APP_TOKEN }}
+          path: teleo-app
+
+      - name: Copy data files
+        run: |
+          cp /tmp/graph-data.json teleo-app/public/graph-data.json
+          cp /tmp/claims-context.json teleo-app/public/claims-context.json
+
+      - name: Commit and push to teleo-app
+        working-directory: teleo-app
+        run: |
+          git config user.name "teleo-codex-bot"
+          git config user.email "bot@livingip.io"
+          git add public/graph-data.json public/claims-context.json
+          if git diff --cached --quiet; then
+            echo "No changes to commit"
+          else
+            NODES=$(python3 -c "import json; d=json.load(open('public/graph-data.json')); print(len(d['nodes']))")
+            EDGES=$(python3 -c "import json; d=json.load(open('public/graph-data.json')); print(len(d['edges']))")
+            git commit -m "sync: graph data from teleo-codex ($NODES nodes, $EDGES edges)"
+            git push
+          fi
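The node and edge counts used in the workflow's commit message can be sketched standalone in Python. The JSON shape here is an assumption: a minimal stand-in for the real `graph-data.json` that `ops/extract-graph-data.py` produces.

```python
import json

# Minimal stand-in for public/graph-data.json (assumed shape: top-level "nodes" and "edges" lists).
graph = {
    "nodes": [{"id": "claim-a"}, {"id": "claim-b"}],
    "edges": [{"source": "claim-a", "target": "claim-b"}],
}
path = "/tmp/graph-data.json"
with open(path, "w") as f:
    json.dump(graph, f)

# Same counting the workflow's python3 one-liners perform to build the commit message.
d = json.load(open(path))
nodes, edges = len(d["nodes"]), len(d["edges"])
print(f"sync: graph data from teleo-codex ({nodes} nodes, {edges} edges)")
# → sync: graph data from teleo-codex (2 nodes, 1 edges)
```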
80 CLAUDE.md
@@ -1,4 +1,82 @@
-# Teleo Codex — Agent Operating Manual
+# Teleo Codex
+
+## For Visitors (read this first)
+
+If you're exploring this repo with Claude Code, you're talking to a **collective knowledge base** maintained by 6 AI domain specialists. ~400 claims across 14 knowledge areas, all linked, all traceable from evidence through claims through beliefs to public positions.
+
+### Orientation (run this on first visit)
+
+Don't present a menu. Start a short conversation to figure out who this person is and what they care about.
+
+**Step 1 — Ask what they work on or think about.** One question, open-ended. "What are you working on, or what's on your mind?" Their answer tells you which domain is closest.
+
+**Step 2 — Map them to an agent.** Based on their answer, pick the best-fit agent:
+
+| If they mention... | Route to |
+|-------------------|----------|
+| Finance, crypto, DeFi, DAOs, prediction markets, tokens | **Rio** — internet finance / mechanism design |
+| Media, entertainment, creators, IP, culture, storytelling | **Clay** — entertainment / cultural dynamics |
+| AI, alignment, safety, superintelligence, coordination | **Theseus** — AI / alignment / collective intelligence |
+| Health, medicine, biotech, longevity, wellbeing | **Vida** — health / human flourishing |
+| Space, rockets, orbital, lunar, satellites | **Astra** — space development |
+| Strategy, systems thinking, cross-domain, civilization | **Leo** — grand strategy / cross-domain synthesis |
+
+Tell them who you're loading and why: "Based on what you described, I'm going to think from [Agent]'s perspective — they specialize in [domain]. Let me load their worldview." Then load the agent (see instructions below).
+
+**Step 3 — Surface something interesting.** Once loaded, search that agent's domain claims and find 3-5 that are most relevant to what the visitor said. Pick for surprise value — claims they're likely to find unexpected or that challenge common assumptions in their area. Present them briefly: title + one-sentence description + confidence level.
+
+Then ask: "Any of these surprise you, or seem wrong?"
+
+This gets them into conversation immediately. If they push back on a claim, you're in challenge mode. If they want to go deeper on one, you're in explore mode. If they share something you don't know, you're in teach mode. The orientation flows naturally into engagement.
+
+**If they already know what they want:** Some visitors will skip orientation — they'll name an agent directly ("I want to talk to Rio") or ask a specific question. That's fine. Load the agent or answer the question. Orientation is for people who are exploring, not people who already know.
+
+### What visitors can do
+
+1. **Explore** — Ask what the collective (or a specific agent) thinks about any topic. Search the claims and give the grounded answer, with confidence levels and evidence.
+
+2. **Challenge** — Disagree with a claim? Steelman the existing claim, then work through it together. If the counter-evidence changes your understanding, say so explicitly — that's the contribution. The conversation is valuable even if they never file a PR. Only after the conversation has landed, offer to draft a formal challenge for the knowledge base if they want it permanent.
+
+3. **Teach** — They share something new. If it's genuinely novel, draft a claim and show it to them: "Here's how I'd write this up — does this capture it?" They review, edit, approve. Then handle the PR. Their attribution stays on everything.
+
+4. **Propose** — They have their own thesis with evidence. Check it against existing claims, help sharpen it, draft it for their approval, and offer to submit via PR. See CONTRIBUTING.md for the manual path.
+
+### How to behave as a visitor's agent
+
+When the visitor picks an agent lens, load that agent's full context:
+1. Read `agents/{name}/identity.md` — adopt their personality and voice
+2. Read `agents/{name}/beliefs.md` — these are your active beliefs, cite them
+3. Read `agents/{name}/reasoning.md` — this is how you evaluate new information
+4. Read `agents/{name}/skills.md` — these are your analytical capabilities
+5. Read `core/collective-agent-core.md` — this is your shared DNA
+
+**You are that agent for the duration of the conversation.** Think from their perspective. Use their reasoning framework. Reference their beliefs. When asked about another domain, acknowledge the boundary and cite what that domain's claims say — but filter it through your agent's worldview.
+
+**When the visitor teaches you something new:**
+- Search the knowledge base for existing claims on the topic
+- If the information is genuinely novel (not a duplicate, specific enough to disagree with, backed by evidence), say so
+- **Draft the claim for them** — write the full claim (title, frontmatter, body, wiki links) and show it to them in the conversation. Say: "Here's how I'd write this up as a claim. Does this capture what you mean?"
+- **Wait for their approval before submitting.** They may want to edit the wording, sharpen the argument, or adjust the scope. The visitor owns the claim — you're drafting, not deciding.
+- Once they approve, use the `/contribute` skill or follow the proposer workflow to create the claim file and PR
+- Always attribute the visitor as the source: `source: "visitor-name, original analysis"` or `source: "visitor-name via [article/paper title]"`
+
+**When the visitor challenges a claim:**
+- First, steelman the existing claim — explain the best case for it
+- Then engage seriously with the counter-evidence. This is a real conversation, not a form to fill out.
+- If the challenge changes your understanding, say so explicitly. Update how you reason about the topic in the conversation. The visitor should feel that talking to you was worth something even if they never touch git.
+- Only after the conversation has landed, ask if they want to make it permanent: "This changed how I think about [X]. Want me to draft a formal challenge for the knowledge base?" If they say no, that's fine — the conversation was the contribution.
+
+**Start here if you want to browse:**
+- `maps/overview.md` — how the knowledge base is organized
+- `core/epistemology.md` — how knowledge is structured (evidence → claims → beliefs → positions)
+- Any `domains/{domain}/_map.md` — topic map for a specific domain
+- Any `agents/{name}/beliefs.md` — what a specific agent believes and why
+
+---
+
+## Agent Operating Manual
+
+*Everything below is operational protocol for the 6 named agents. If you're a visitor, you don't need to read further — the section above is for you.*
+
 You are an agent in the Teleo collective — a group of AI domain specialists that build and maintain a shared knowledge base. This file tells you how the system works and what the rules are.
235 CONTRIBUTING.md
@@ -1,45 +1,51 @@
 # Contributing to Teleo Codex
 
-You're contributing to a living knowledge base maintained by AI agents. Your job is to bring in source material. The agents extract claims, connect them to existing knowledge, and review everything before it merges.
+You're contributing to a living knowledge base maintained by AI agents. There are three ways to contribute — pick the one that fits what you have.
+
+## Three contribution paths
+
+### Path 1: Submit source material
+
+You have an article, paper, report, or thread the agents should read. The agents extract claims — you get attribution.
+
+### Path 2: Propose a claim directly
+
+You have your own thesis backed by evidence. You write the claim yourself.
+
+### Path 3: Challenge an existing claim
+
+You think something in the knowledge base is wrong or missing nuance. You file a challenge with counter-evidence.
+
+---
 
 ## What you need
 
-- GitHub account with collaborator access to this repo
+- Git access to this repo (GitHub or Forgejo)
 - Git installed on your machine
-- A source to contribute (article, report, paper, thread, etc.)
+- Claude Code (optional but recommended — it helps format claims and check for duplicates)
 
-## Step-by-step
+## Path 1: Submit source material
 
-### 1. Clone the repo (first time only)
+This is the simplest contribution. You provide content; the agents do the extraction.
+
+### 1. Clone and branch
 
 ```bash
 git clone https://github.com/living-ip/teleo-codex.git
 cd teleo-codex
-```
+git checkout main && git pull
-
-### 2. Pull latest and create a branch
-
-```bash
-git checkout main
-git pull origin main
 git checkout -b contrib/your-name/brief-description
 ```
 
-Example: `contrib/alex/ai-alignment-report`
+### 2. Create a source file
 
-### 3. Create a source file
+Create a markdown file in `inbox/archive/`:
 
-Create a markdown file in `inbox/archive/` with this naming convention:
-
 ```
 inbox/archive/YYYY-MM-DD-author-handle-brief-slug.md
 ```
 
-Example: `inbox/archive/2026-03-07-alex-ai-alignment-landscape.md`
+### 3. Add frontmatter + content
 
-### 4. Add frontmatter
-
-Every source file starts with YAML frontmatter. Copy this template and fill it in:
-
 ```yaml
 ---
@@ -53,84 +59,169 @@ format: report
 status: unprocessed
 tags: [topic1, topic2, topic3]
 ---
 
+# Full title
+
+[Paste the full content here. More content = better extraction.]
 ```
 
-**Domain options:** `internet-finance`, `entertainment`, `ai-alignment`, `health`, `grand-strategy`
+**Domain options:** `internet-finance`, `entertainment`, `ai-alignment`, `health`, `space-development`, `grand-strategy`
 
 **Format options:** `essay`, `newsletter`, `tweet`, `thread`, `whitepaper`, `paper`, `report`, `news`
 
-**Status:** Always set to `unprocessed` — the agents handle the rest.
+### 4. Commit, push, open PR
 
-### 5. Add the content
-
-After the frontmatter, paste the full content of the source. This is what the agents will read and extract claims from. More content = better extraction.
-
-```markdown
----
-type: source
-title: "AI Alignment in 2026: Where We Stand"
-author: "Alex (@alexhandle)"
-url: https://example.com/report
-date: 2026-03-07
-domain: ai-alignment
-format: report
-status: unprocessed
-tags: [ai-alignment, openai, anthropic, safety, governance]
----
-
-# AI Alignment in 2026: Where We Stand
-
-[Full content of the report goes here. Include everything —
-the agents need the complete text to extract claims properly.]
-```
-
-### 6. Commit and push
-
 ```bash
 git add inbox/archive/your-file.md
-git commit -m "contrib: add AI alignment landscape report
+git commit -m "contrib: add [brief description]
 
-Source: [brief description of what this is and why it matters]"
+Source: [what this is and why it matters]"
 git push -u origin contrib/your-name/brief-description
 ```
 
-### 7. Open a PR
+Then open a PR. The domain agent reads your source, extracts claims, Leo reviews, and they merge.
 
-```bash
-gh pr create --title "contrib: AI alignment landscape report" --body "Source material for agent extraction.
+## Path 2: Propose a claim directly
 
-- **What:** [one-line description]
-- **Domain:** ai-alignment
-- **Why it matters:** [why this adds value to the knowledge base]"
-```
+You have domain expertise and want to state a thesis yourself — not just drop source material for agents to process.
 
-Or just go to GitHub and click "Compare & pull request" after pushing.
+### 1. Clone and branch
+
+Same as Path 1.
+
+### 2. Check for duplicates
+
+Before writing, search the knowledge base for existing claims on your topic. Check:
+- `domains/{relevant-domain}/` — existing domain claims
+- `foundations/` — existing foundation-level claims
+- Use grep or Claude Code to search claim titles semantically
+
+### 3. Write your claim file
+
+Create a markdown file in the appropriate domain folder. The filename is the slugified claim title.
+
+```yaml
+---
+type: claim
+domain: ai-alignment
+description: "One sentence adding context beyond the title"
+confidence: likely
+source: "your-name, original analysis; [any supporting references]"
+created: 2026-03-10
+---
+```
 
-### 8. What happens next
+**The claim test:** "This note argues that [your title]" must work as a sentence. If it doesn't, your title isn't specific enough.
 
-1. **Theseus** (the ai-alignment agent) reads your source and extracts claims
-2. **Leo** (the evaluator) reviews the extracted claims for quality
-3. You'll see their feedback as PR comments
-4. Once approved, the claims merge into the knowledge base
+**Body format:**
+
+```markdown
+# [your prose claim title]
+
+[Your argument — why this is supported, what evidence underlies it.
+Cite sources, data, studies inline. This is where you make the case.]
+
+**Scope:** [What this claim covers and what it doesn't]
 
-You can respond to agent feedback directly in the PR comments.
+---
 
-## Your Credit
+Relevant Notes:
+- [[existing-claim-title]] — how your claim relates to it
+```
 
-Your source archive records you as contributor. As claims derived from your submission get cited by other claims, your contribution's impact is traceable through the knowledge graph. Every claim extracted from your source carries provenance back to you — your contribution compounds as the knowledge base grows.
+Wiki links (`[[claim title]]`) should point to real files in the knowledge base. Check that they resolve.
+
+### 4. Commit, push, open PR
+
+```bash
+git add domains/{domain}/your-claim-file.md
+git commit -m "contrib: propose claim — [brief title summary]
+
+- What: [the claim in one sentence]
+- Evidence: [primary evidence supporting it]
+- Connections: [what existing claims this relates to]"
+git push -u origin contrib/your-name/brief-description
+```
+
+PR body should include your reasoning for why this adds value to the knowledge base.
+
+The domain agent + Leo review your claim against the quality gates (see CLAUDE.md). They may approve, request changes, or explain why it doesn't meet the bar.
+
+## Path 3: Challenge an existing claim
+
+You think a claim in the knowledge base is wrong, overstated, missing context, or contradicted by evidence you have.
+
+### 1. Identify the claim
+
+Find the claim file you're challenging. Note its exact title (the filename without `.md`).
+
+### 2. Clone and branch
+
+Same as above. Name your branch `contrib/your-name/challenge-brief-description`.
+
+### 3. Write your challenge
+
+You have two options:
+
+**Option A — Enrich the existing claim** (if your evidence adds nuance but doesn't contradict):
+
+Edit the existing claim file. Add a `challenged_by` field to the frontmatter and a **Challenges** section to the body:
+
+```yaml
+challenged_by:
+  - "your counter-evidence summary (your-name, date)"
+```
+
+```markdown
+## Challenges
+
+**[Your name] ([date]):** [Your counter-evidence or counter-argument.
+Cite specific sources. Explain what the original claim gets wrong
+or what scope it's missing.]
+```
+
+**Option B — Propose a counter-claim** (if your evidence supports a different conclusion):
+
+Create a new claim file that explicitly contradicts the existing one. In the body, reference the claim you're challenging and explain why your evidence leads to a different conclusion. Add wiki links to the challenged claim.
+
+### 4. Commit, push, open PR
+
+```bash
+git commit -m "contrib: challenge — [existing claim title, briefly]
+
+- What: [what you're challenging and why]
+- Counter-evidence: [your primary evidence]"
+git push -u origin contrib/your-name/challenge-brief-description
+```
+
+The domain agent will steelman the existing claim before evaluating your challenge. If your evidence is strong, the claim gets updated (confidence lowered, scope narrowed, `challenged_by` added) or your counter-claim merges alongside it. The knowledge base holds competing perspectives — your challenge doesn't delete the original, it adds tension that makes the graph richer.
+
+## Using Claude Code to contribute
+
+If you have Claude Code installed, run it in the repo directory. Claude reads the CLAUDE.md visitor section and can:
+
+- **Search the knowledge base** for existing claims on your topic
+- **Check for duplicates** before you write a new claim
+- **Format your claim** with proper frontmatter and wiki links
+- **Validate wiki links** to make sure they resolve to real files
+- **Suggest related claims** you should link to
+
+Just describe what you want to contribute and Claude will help you through the right path.
+
+## Your credit
+
+Every contribution carries provenance. Source archives record who submitted them. Claims record who proposed them. Challenges record who filed them. As your contributions get cited by other claims, your impact is traceable through the knowledge graph. Contributions compound.
 
 ## Tips
 
-- **More context is better.** Paste the full article/report, not just a link. Agents extract better from complete text.
+- **More context is better.** For source submissions, paste the full text, not just a link.
-- **Pick the right domain.** If your source spans multiple domains, pick the primary one — the agents will flag cross-domain connections.
+- **Pick the right domain.** If it spans multiple, pick the primary one — agents flag cross-domain connections.
-- **One source per file.** Don't combine multiple articles into one file.
+- **One source per file, one claim per file.** Atomic contributions are easier to review and link.
-- **Original analysis welcome.** Your own written analysis/report is just as valid as linking to someone else's article. Put yourself as the author.
+- **Original analysis is welcome.** Your own written analysis is as valid as citing someone else's work.
-- **Don't extract claims yourself.** Just provide the source material. The agents handle extraction — that's their job.
+- **State your confidence honestly.** If your claim is speculative, say so. Calibrated uncertainty is valued over false confidence.
 
 ## OPSEC
 
-The knowledge base is public. Do not include dollar amounts, deal terms, valuations, or internal business details in any content. Scrub before committing.
+The knowledge base is public. Do not include dollar amounts, deal terms, valuations, or internal business details. Scrub before committing.
 
 ## Questions?
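Path 2, step 3 says the filename is the slugified claim title. A minimal sketch of that convention (the helper is illustrative; the repo may slugify slightly differently):

```python
import re

def claim_filename(title: str) -> str:
    # Hypothetical slugifier: lowercase, runs of non-alphanumerics collapsed to hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{slug}.md"

print(claim_filename("This note argues that X"))  # → this-note-argues-that-x.md
```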
47 README.md Normal file
@@ -0,0 +1,47 @@
+# Teleo Codex
+
+A knowledge base built by AI agents who specialize in different domains, take positions, disagree with each other, and update when they're wrong. Every claim traces from evidence through argument to public commitments — nothing is asserted without a reason.
+
+**~400 claims** across 14 knowledge areas. **6 agents** with distinct perspectives. **Every link is real.**
+
+## How it works
+
+Six domain-specialist agents maintain the knowledge base. Each reads source material, extracts claims, and proposes them via pull request. Every PR gets adversarial review — a cross-domain evaluator and a domain peer check for specificity, evidence quality, duplicate coverage, and scope. Claims that pass enter the shared commons. Claims feed agent beliefs. Beliefs feed trackable positions with performance criteria.
+
+## The agents
+
+| Agent | Domain | What they cover |
+|-------|--------|-----------------|
+| **Leo** | Grand strategy | Cross-domain synthesis, civilizational coordination, what connects the domains |
+| **Rio** | Internet finance | DeFi, prediction markets, futarchy, MetaDAO ecosystem, token economics |
+| **Clay** | Entertainment | Media disruption, community-owned IP, GenAI in content, cultural dynamics |
+| **Theseus** | AI / alignment | AI safety, coordination problems, collective intelligence, multi-agent systems |
+| **Vida** | Health | Healthcare economics, AI in medicine, prevention-first systems, longevity |
+| **Astra** | Space | Launch economics, cislunar infrastructure, space governance, ISRU |
+
+## Browse it
+
+- **See what an agent believes** — `agents/{name}/beliefs.md`
+- **Explore a domain** — `domains/{domain}/_map.md`
+- **Understand the structure** — `core/epistemology.md`
+- **See the full layout** — `maps/overview.md`
+
+## Talk to it
+
+Clone the repo and run [Claude Code](https://claude.ai/claude-code). Pick an agent's lens and you get their personality, reasoning framework, and domain expertise as a thinking partner. Ask questions, challenge claims, explore connections across domains.
+
+If you teach the agent something new — share an article, a paper, your own analysis — they'll draft a claim and show it to you: "Here's how I'd write this up — does this capture it?" You review and approve. They handle the PR. Your attribution stays on everything.
+
+```bash
+git clone https://github.com/living-ip/teleo-codex.git
+cd teleo-codex
+claude
+```
+
+## Contribute
+
+Talk to an agent and they'll handle the mechanics. Or do it manually: submit source material, propose a claim, or challenge one you disagree with. See [CONTRIBUTING.md](CONTRIBUTING.md).
+
+## Built by
+
+[LivingIP](https://livingip.xyz) — collective intelligence infrastructure.
@@ -6,8 +6,8 @@
 # 2. Domain agent — domain expertise, duplicate check, technical accuracy
 #
 # After both reviews, auto-merges if:
-# - Leo approved (gh pr review --approve)
+# - Leo's comment contains "**Verdict:** approve"
-# - Domain agent verdict is "Approve" (parsed from comment)
+# - Domain agent's comment contains "**Verdict:** approve"
 # - No territory violations (files outside proposer's domain)
 #
 # Usage:
@@ -26,8 +26,14 @@
 # - Lockfile prevents concurrent runs
 # - Auto-merge requires ALL reviewers to approve + no territory violations
 # - Each PR runs sequentially to avoid branch conflicts
-# - Timeout: 10 minutes per agent per PR
+# - Timeout: 20 minutes per agent per PR
 # - Pre-flight checks: clean working tree, gh auth
+#
+# Verdict protocol:
+# All agents use `gh pr comment` (NOT `gh pr review`) because all agents
+# share the m3taversal GitHub account — `gh pr review --approve` fails
+# when the PR author and reviewer are the same user. The merge check
+# parses issue comments for structured verdict markers instead.
 
 set -euo pipefail
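The verdict protocol described above amounts to scanning issue comment bodies for a structured marker instead of using formal review states. A minimal sketch (the function name is illustrative, not the script's actual implementation):

```python
APPROVE_MARKER = "**Verdict:** approve"

def comment_approves(body: str) -> bool:
    # Mirrors the described merge check: look for the structured verdict
    # marker anywhere in a `gh pr comment` body.
    return APPROVE_MARKER in body

print(comment_approves("Looks solid.\n\n**Verdict:** approve"))  # → True
print(comment_approves("Needs work.\n\n**Verdict:** request changes"))  # → False
```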
@@ -39,7 +45,7 @@ cd "$REPO_ROOT"
 
 LOCKFILE="/tmp/evaluate-trigger.lock"
 LOG_DIR="$REPO_ROOT/ops/sessions"
-TIMEOUT_SECONDS=600
+TIMEOUT_SECONDS=1200
 DRY_RUN=false
 LEO_ONLY=false
 NO_MERGE=false
@@ -62,24 +68,30 @@ detect_domain_agent() {
         vida/*|*/health*) agent="vida"; domain="health" ;;
         astra/*|*/space-development*) agent="astra"; domain="space-development" ;;
         leo/*|*/grand-strategy*) agent="leo"; domain="grand-strategy" ;;
+        contrib/*)
+            # External contributor — detect domain from changed files (fall through to file check)
+            agent=""; domain=""
+            ;;
         *)
-            # Fall back to checking which domain directory has changed files
-            if echo "$files" | grep -q "domains/internet-finance/"; then
-                agent="rio"; domain="internet-finance"
-            elif echo "$files" | grep -q "domains/entertainment/"; then
-                agent="clay"; domain="entertainment"
-            elif echo "$files" | grep -q "domains/ai-alignment/"; then
-                agent="theseus"; domain="ai-alignment"
-            elif echo "$files" | grep -q "domains/health/"; then
-                agent="vida"; domain="health"
-            elif echo "$files" | grep -q "domains/space-development/"; then
-                agent="astra"; domain="space-development"
-            else
-                agent=""; domain=""
-            fi
+            agent=""; domain=""
             ;;
     esac
 
+    # If no agent detected from branch prefix, check changed files
+    if [ -z "$agent" ]; then
+        if echo "$files" | grep -q "domains/internet-finance/"; then
+            agent="rio"; domain="internet-finance"
+        elif echo "$files" | grep -q "domains/entertainment/"; then
+            agent="clay"; domain="entertainment"
+        elif echo "$files" | grep -q "domains/ai-alignment/"; then
+            agent="theseus"; domain="ai-alignment"
+        elif echo "$files" | grep -q "domains/health/"; then
+            agent="vida"; domain="health"
+        elif echo "$files" | grep -q "domains/space-development/"; then
+            agent="astra"; domain="space-development"
+        fi
+    fi
 
     echo "$agent $domain"
 }
 
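The branch-prefix routing above, with the new `contrib/*` case falling through to a changed-file check, can be sketched in Python. The domain names and path prefixes come from the diff; the function name, signature, and ordering of the fallback checks are illustrative, not part of the script.

```python
# Sketch of the agent-detection logic: branch prefix first, then the first
# domain directory touched by the changed files (assumed check order).
DOMAIN_AGENTS = {
    "internet-finance": "rio",
    "entertainment": "clay",
    "ai-alignment": "theseus",
    "health": "vida",
    "space-development": "astra",
}
AGENT_PREFIXES = {agent: domain for domain, agent in DOMAIN_AGENTS.items()}
AGENT_PREFIXES["leo"] = "grand-strategy"

def detect_domain_agent(branch: str, files: list[str]) -> tuple[str, str]:
    prefix = branch.split("/", 1)[0]
    if prefix in AGENT_PREFIXES:                    # e.g. "vida/new-claim"
        return prefix, AGENT_PREFIXES[prefix]
    # contrib/* and unknown prefixes: fall through to the file check.
    for domain, agent in DOMAIN_AGENTS.items():
        if any(f.startswith(f"domains/{domain}/") for f in files):
            return agent, domain
    return "", ""
```

A `contrib/fix` branch that only touches `domains/health/` would route to `vida`, matching the fall-through behavior the new case introduces.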
@@ -112,8 +124,8 @@ if ! command -v claude >/dev/null 2>&1; then
     exit 1
 fi
 
-# Check for dirty working tree (ignore ops/ and .claude/ which may contain uncommitted scripts)
-DIRTY_FILES=$(git status --porcelain | grep -v '^?? ops/' | grep -v '^ M ops/' | grep -v '^?? \.claude/' | grep -v '^ M \.claude/' || true)
+# Check for dirty working tree (ignore ops/, .claude/, .github/ which may contain local-only files)
+DIRTY_FILES=$(git status --porcelain | grep -v '^?? ops/' | grep -v '^ M ops/' | grep -v '^?? \.claude/' | grep -v '^ M \.claude/' | grep -v '^?? \.github/' | grep -v '^ M \.github/' || true)
 if [ -n "$DIRTY_FILES" ]; then
     echo "ERROR: Working tree is dirty. Clean up before running."
     echo "$DIRTY_FILES"
@@ -145,7 +157,8 @@ if [ -n "$SPECIFIC_PR" ]; then
     fi
     PRS_TO_REVIEW="$SPECIFIC_PR"
 else
-    OPEN_PRS=$(gh pr list --state open --json number --jq '.[].number' 2>/dev/null || echo "")
+    # NOTE: gh pr list silently returns empty in some worktree configs; use gh api instead
+    OPEN_PRS=$(gh api repos/:owner/:repo/pulls --jq '.[].number' 2>/dev/null || echo "")
 
     if [ -z "$OPEN_PRS" ]; then
         echo "No open PRs found. Nothing to review."
@@ -154,17 +167,23 @@ else
 
     PRS_TO_REVIEW=""
     for pr in $OPEN_PRS; do
-        LAST_REVIEW_DATE=$(gh api "repos/{owner}/{repo}/pulls/$pr/reviews" \
-            --jq 'map(select(.state != "DISMISSED")) | sort_by(.submitted_at) | last | .submitted_at' 2>/dev/null || echo "")
+        # Check if this PR already has a Leo verdict comment (avoid re-reviewing)
+        LEO_COMMENTED=$(gh pr view "$pr" --json comments \
+            --jq '[.comments[] | select(.body | test("VERDICT:LEO:(APPROVE|REQUEST_CHANGES)"))] | length' 2>/dev/null || echo "0")
         LAST_COMMIT_DATE=$(gh pr view "$pr" --json commits --jq '.commits[-1].committedDate' 2>/dev/null || echo "")
 
-        if [ -z "$LAST_REVIEW_DATE" ]; then
-            PRS_TO_REVIEW="$PRS_TO_REVIEW $pr"
-        elif [ -n "$LAST_COMMIT_DATE" ] && [[ "$LAST_COMMIT_DATE" > "$LAST_REVIEW_DATE" ]]; then
-            echo "PR #$pr: New commits since last review. Queuing for re-review."
+        if [ "$LEO_COMMENTED" = "0" ]; then
             PRS_TO_REVIEW="$PRS_TO_REVIEW $pr"
         else
-            echo "PR #$pr: No new commits since last review. Skipping."
+            # Check if new commits since last Leo review
+            LAST_LEO_DATE=$(gh pr view "$pr" --json comments \
+                --jq '[.comments[] | select(.body | test("VERDICT:LEO:")) | .createdAt] | last' 2>/dev/null || echo "")
+            if [ -n "$LAST_COMMIT_DATE" ] && [ -n "$LAST_LEO_DATE" ] && [[ "$LAST_COMMIT_DATE" > "$LAST_LEO_DATE" ]]; then
+                echo "PR #$pr: New commits since last review. Queuing for re-review."
+                PRS_TO_REVIEW="$PRS_TO_REVIEW $pr"
+            else
+                echo "PR #$pr: Already reviewed. Skipping."
+            fi
         fi
     done
 
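The re-review test above compares two GitHub timestamps with a plain string comparison, `[[ "$LAST_COMMIT_DATE" > "$LAST_LEO_DATE" ]]`. That is sound only because ISO-8601 timestamps in the same UTC offset sort lexicographically; a quick check of that property (the timestamp values here are hypothetical):

```python
from datetime import datetime

# GitHub returns ISO-8601 UTC timestamps, so the newer instant is also the
# lexicographically greater string — the property the bash test relies on.
last_leo_date = "2025-01-10T09:30:00Z"     # hypothetical review-comment time
last_commit_date = "2025-01-11T08:00:00Z"  # hypothetical newer commit time
string_says_newer = last_commit_date > last_leo_date
parsed_says_newer = (datetime.fromisoformat(last_commit_date.replace("Z", "+00:00"))
                     > datetime.fromisoformat(last_leo_date.replace("Z", "+00:00")))
assert string_says_newer == parsed_says_newer
```

The caveat is that the comparison breaks if the two timestamps ever carry different UTC offsets, which GitHub's API does not do for these fields.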
@@ -195,7 +214,7 @@ run_agent_review() {
     log_file="$LOG_DIR/${agent_name}-review-pr${pr}-${timestamp}.log"
     review_file="/tmp/${agent_name}-review-pr${pr}.md"
 
-    echo "  Running ${agent_name}..."
+    echo "  Running ${agent_name} (model: ${model})..."
     echo "  Log: $log_file"
 
     if perl -e "alarm $TIMEOUT_SECONDS; exec @ARGV" claude -p \
@@ -240,6 +259,7 @@ check_territory_violations() {
         vida) allowed_domains="domains/health/" ;;
         astra) allowed_domains="domains/space-development/" ;;
         leo) allowed_domains="core/|foundations/" ;;
+        contrib) echo ""; return 0 ;;  # External contributors — skip territory check
         *) echo ""; return 0 ;;  # Unknown proposer — skip check
     esac
 
@@ -266,74 +286,51 @@ check_territory_violations() {
 }
 
 # --- Auto-merge check ---
-# Returns 0 if PR should be merged, 1 if not
+# Parses issue comments for structured verdict markers.
+# Verdict protocol: agents post `<!-- VERDICT:AGENT_KEY:APPROVE -->` or
+# `<!-- VERDICT:AGENT_KEY:REQUEST_CHANGES -->` as HTML comments in their review.
+# This is machine-parseable and invisible in the rendered comment.
 check_merge_eligible() {
     local pr_number="$1"
     local domain_agent="$2"
     local leo_passed="$3"
 
-    # Gate 1: Leo must have passed
+    # Gate 1: Leo must have completed without timeout/error
     if [ "$leo_passed" != "true" ]; then
         echo "BLOCK: Leo review failed or timed out"
         return 1
     fi
 
-    # Gate 2: Check Leo's review state via GitHub API
-    local leo_review_state
-    leo_review_state=$(gh api "repos/{owner}/{repo}/pulls/${pr_number}/reviews" \
-        --jq '[.[] | select(.state != "DISMISSED" and .state != "PENDING")] | last | .state' 2>/dev/null || echo "")
+    # Gate 2: Check Leo's verdict from issue comments
+    local leo_verdict
+    leo_verdict=$(gh pr view "$pr_number" --json comments \
+        --jq '[.comments[] | select(.body | test("VERDICT:LEO:")) | .body] | last' 2>/dev/null || echo "")
 
-    if [ "$leo_review_state" = "APPROVED" ]; then
-        echo "Leo: APPROVED (via review API)"
-    elif [ "$leo_review_state" = "CHANGES_REQUESTED" ]; then
-        echo "BLOCK: Leo requested changes (review API state: CHANGES_REQUESTED)"
+    if echo "$leo_verdict" | grep -q "VERDICT:LEO:APPROVE"; then
+        echo "Leo: APPROVED"
+    elif echo "$leo_verdict" | grep -q "VERDICT:LEO:REQUEST_CHANGES"; then
+        echo "BLOCK: Leo requested changes"
         return 1
     else
-        # Fallback: check PR comments for Leo's verdict
-        local leo_verdict
-        leo_verdict=$(gh pr view "$pr_number" --json comments \
-            --jq '.comments[] | select(.body | test("## Leo Review")) | .body' 2>/dev/null \
-            | grep -oiE '\*\*Verdict:[^*]+\*\*' | tail -1 || echo "")
-
-        if echo "$leo_verdict" | grep -qi "approve"; then
-            echo "Leo: APPROVED (via comment verdict)"
-        elif echo "$leo_verdict" | grep -qi "request changes\|reject"; then
-            echo "BLOCK: Leo verdict: $leo_verdict"
-            return 1
-        else
-            echo "BLOCK: Could not determine Leo's verdict"
-            return 1
-        fi
+        echo "BLOCK: Could not find Leo's verdict marker in PR comments"
+        return 1
     fi
 
     # Gate 3: Check domain agent verdict (if applicable)
     if [ -n "$domain_agent" ] && [ "$domain_agent" != "leo" ]; then
+        local domain_key
+        domain_key=$(echo "$domain_agent" | tr '[:lower:]' '[:upper:]')
         local domain_verdict
-        # Search for verdict in domain agent's review — match agent name, "domain reviewer", or "Domain Review"
         domain_verdict=$(gh pr view "$pr_number" --json comments \
-            --jq ".comments[] | select(.body | test(\"domain review|${domain_agent}|peer review\"; \"i\")) | .body" 2>/dev/null \
-            | grep -oiE '\*\*Verdict:[^*]+\*\*' | tail -1 || echo "")
+            --jq "[.comments[] | select(.body | test(\"VERDICT:${domain_key}:\")) | .body] | last" 2>/dev/null || echo "")
 
-        if [ -z "$domain_verdict" ]; then
-            # Also check review API for domain agent approval
-            # Since all agents use the same GitHub account, we check for multiple approvals
-            local approval_count
-            approval_count=$(gh api "repos/{owner}/{repo}/pulls/${pr_number}/reviews" \
-                --jq '[.[] | select(.state == "APPROVED")] | length' 2>/dev/null || echo "0")
-
-            if [ "$approval_count" -ge 2 ]; then
-                echo "Domain agent: APPROVED (multiple approvals via review API)"
-            else
-                echo "BLOCK: No domain agent verdict found"
-                return 1
-            fi
-        elif echo "$domain_verdict" | grep -qi "approve"; then
-            echo "Domain agent ($domain_agent): APPROVED (via comment verdict)"
-        elif echo "$domain_verdict" | grep -qi "request changes\|reject"; then
-            echo "BLOCK: Domain agent verdict: $domain_verdict"
+        if echo "$domain_verdict" | grep -q "VERDICT:${domain_key}:APPROVE"; then
+            echo "Domain agent ($domain_agent): APPROVED"
+        elif echo "$domain_verdict" | grep -q "VERDICT:${domain_key}:REQUEST_CHANGES"; then
+            echo "BLOCK: $domain_agent requested changes"
             return 1
         else
-            echo "BLOCK: Unclear domain agent verdict: $domain_verdict"
+            echo "BLOCK: No verdict marker found for $domain_agent"
             return 1
         fi
     else
 
@@ -403,11 +400,15 @@ Also check:
 - Cross-domain connections that the proposer may have missed
 
 Write your complete review to ${LEO_REVIEW_FILE}
-Then post it with: gh pr review ${pr} --comment --body-file ${LEO_REVIEW_FILE}
 
-If ALL claims pass quality gates: gh pr review ${pr} --approve --body-file ${LEO_REVIEW_FILE}
-If ANY claim needs changes: gh pr review ${pr} --request-changes --body-file ${LEO_REVIEW_FILE}
+CRITICAL — Verdict format: Your review MUST end with exactly one of these verdict markers (as an HTML comment on its own line):
+<!-- VERDICT:LEO:APPROVE -->
+<!-- VERDICT:LEO:REQUEST_CHANGES -->
+
+Then post the review as an issue comment:
+gh pr comment ${pr} --body-file ${LEO_REVIEW_FILE}
+
+IMPORTANT: Use 'gh pr comment' NOT 'gh pr review'. We use a shared GitHub account so gh pr review --approve fails.
 DO NOT merge — the orchestrator handles merge decisions after all reviews are posted.
 Work autonomously. Do not ask for confirmation."
 
@@ -432,6 +433,7 @@ Work autonomously. Do not ask for confirmation."
 else
     DOMAIN_REVIEW_FILE="/tmp/${DOMAIN_AGENT}-review-pr${pr}.md"
     AGENT_NAME_UPPER=$(echo "${DOMAIN_AGENT}" | awk '{print toupper(substr($0,1,1)) substr($0,2)}')
+    AGENT_KEY_UPPER=$(echo "${DOMAIN_AGENT}" | tr '[:lower:]' '[:upper:]')
     DOMAIN_PROMPT="You are ${AGENT_NAME_UPPER}. Read agents/${DOMAIN_AGENT}/identity.md, agents/${DOMAIN_AGENT}/beliefs.md, and skills/evaluate.md.
 
 You are reviewing PR #${pr} as the domain expert for ${DOMAIN}.
@@ -452,8 +454,15 @@ Your review focuses on DOMAIN EXPERTISE — things only a ${DOMAIN} specialist w
 6. **Confidence calibration** — From your domain expertise, is the confidence level right?
 
 Write your review to ${DOMAIN_REVIEW_FILE}
-Post it with: gh pr review ${pr} --comment --body-file ${DOMAIN_REVIEW_FILE}
 
+CRITICAL — Verdict format: Your review MUST end with exactly one of these verdict markers (as an HTML comment on its own line):
+<!-- VERDICT:${AGENT_KEY_UPPER}:APPROVE -->
+<!-- VERDICT:${AGENT_KEY_UPPER}:REQUEST_CHANGES -->
+
+Then post the review as an issue comment:
+gh pr comment ${pr} --body-file ${DOMAIN_REVIEW_FILE}
+
+IMPORTANT: Use 'gh pr comment' NOT 'gh pr review'. We use a shared GitHub account so gh pr review --approve fails.
 Sign your review as ${AGENT_NAME_UPPER} (domain reviewer for ${DOMAIN}).
 DO NOT duplicate Leo's quality gate checks — he covers those.
 DO NOT merge — the orchestrator handles merge decisions after all reviews are posted.
@@ -486,7 +495,7 @@ Work autonomously. Do not ask for confirmation."
 
 if [ "$MERGE_RESULT" -eq 0 ]; then
     echo "  Auto-merge: ALL GATES PASSED — merging PR #$pr"
-    if gh pr merge "$pr" --squash --delete-branch 2>&1; then
+    if gh pr merge "$pr" --squash 2>&1; then
         echo "  PR #$pr: MERGED successfully."
         MERGED=$((MERGED + 1))
     else
179
ops/extract-cron.sh
Executable file
@@ -0,0 +1,179 @@
#!/bin/bash
# Extract claims from unprocessed sources in inbox/archive/
# Runs via cron on VPS every 15 minutes.
#
# Concurrency model:
# - Lockfile prevents overlapping runs
# - MAX_SOURCES=5 per cycle (works through backlog over multiple runs)
# - Sequential processing (one source at a time)
# - 50 sources landing at once = ~10 cron cycles to clear, not 50 parallel agents
#
# Domain routing:
# - Reads domain: field from source frontmatter
# - Maps to the domain agent (rio, clay, theseus, vida, astra, leo)
# - Runs extraction AS that agent — their territory, their extraction
# - Skips sources with status: processing (agent handling it themselves)
#
# Flow:
# 1. Pull latest main
# 2. Find sources with status: unprocessed (skip processing/processed/null-result)
# 3. For each: run Claude headless to extract claims as the domain agent
# 4. Commit extractions, push, open PR
# 5. Update source status to processed
#
# The eval pipeline (webhook.py) handles review and merge separately.

set -euo pipefail

REPO_DIR="/opt/teleo-eval/workspaces/extract"
REPO_URL="http://m3taversal:$(cat /opt/teleo-eval/secrets/forgejo-admin-token)@localhost:3000/teleo/teleo-codex.git"
CLAUDE_BIN="/home/teleo/.local/bin/claude"
LOG_DIR="/opt/teleo-eval/logs"
LOG="$LOG_DIR/extract-cron.log"
LOCKFILE="/tmp/extract-cron.lock"
MAX_SOURCES=5  # Process at most 5 sources per run to limit cost

log() { echo "[$(date -Iseconds)] $*" >> "$LOG"; }

# --- Lock ---
if [ -f "$LOCKFILE" ]; then
    pid=$(cat "$LOCKFILE" 2>/dev/null)
    if kill -0 "$pid" 2>/dev/null; then
        log "SKIP: already running (pid $pid)"
        exit 0
    fi
    log "WARN: stale lockfile, removing"
    rm -f "$LOCKFILE"
fi
echo $$ > "$LOCKFILE"
trap 'rm -f "$LOCKFILE"' EXIT

# --- Ensure repo clone ---
if [ ! -d "$REPO_DIR/.git" ]; then
    log "Cloning repo..."
    git clone "$REPO_URL" "$REPO_DIR" >> "$LOG" 2>&1
fi

cd "$REPO_DIR"

# --- Pull latest main ---
git checkout main >> "$LOG" 2>&1
git pull --rebase >> "$LOG" 2>&1

# --- Find unprocessed sources ---
UNPROCESSED=$(grep -rl '^status: unprocessed' inbox/archive/ 2>/dev/null | head -n "$MAX_SOURCES" || true)

if [ -z "$UNPROCESSED" ]; then
    log "No unprocessed sources found"
    exit 0
fi

COUNT=$(echo "$UNPROCESSED" | wc -l | tr -d ' ')
log "Found $COUNT unprocessed source(s)"

# --- Process each source ---
for SOURCE_FILE in $UNPROCESSED; do
    SLUG=$(basename "$SOURCE_FILE" .md)
    BRANCH="extract/$SLUG"

    log "Processing: $SOURCE_FILE → branch $BRANCH"

    # Create branch from main
    git checkout main >> "$LOG" 2>&1
    git branch -D "$BRANCH" 2>/dev/null || true
    git checkout -b "$BRANCH" >> "$LOG" 2>&1

    # Read domain from frontmatter
    DOMAIN=$(grep '^domain:' "$SOURCE_FILE" | head -1 | sed 's/domain: *//' | tr -d '"' | tr -d "'" | xargs)

    # Map domain to agent
    case "$DOMAIN" in
        internet-finance) AGENT="rio" ;;
        entertainment) AGENT="clay" ;;
        ai-alignment) AGENT="theseus" ;;
        health) AGENT="vida" ;;
        space-development) AGENT="astra" ;;
        *) AGENT="leo" ;;
    esac

    AGENT_TOKEN=$(cat "/opt/teleo-eval/secrets/forgejo-${AGENT}-token" 2>/dev/null || cat /opt/teleo-eval/secrets/forgejo-leo-token)

    log "Domain: $DOMAIN, Agent: $AGENT"

    # Run Claude headless to extract claims
    EXTRACT_PROMPT="You are $AGENT, a Teleo knowledge base agent. Extract claims from this source.

READ these files first:
- skills/extract.md (extraction process)
- schemas/claim.md (claim format)
- $SOURCE_FILE (the source to extract from)

Then scan domains/$DOMAIN/ to check for duplicate claims.

EXTRACT claims following the process in skills/extract.md:
1. Read the source completely
2. Separate evidence from interpretation
3. Extract candidate claims (specific, disagreeable, evidence-backed)
4. Check for duplicates against existing claims in domains/$DOMAIN/
5. Write claim files to domains/$DOMAIN/ with proper YAML frontmatter
6. Update $SOURCE_FILE: set status to 'processed', add processed_by: $AGENT, processed_date: $(date +%Y-%m-%d), and claims_extracted list

If no claims can be extracted, update $SOURCE_FILE: set status to 'null-result' and add notes explaining why.

IMPORTANT: Use the Edit tool to update the source file status. Use the Write tool to create new claim files. Do not create claims that duplicate existing ones."

    # Run extraction with timeout (10 minutes)
    timeout 600 "$CLAUDE_BIN" -p "$EXTRACT_PROMPT" \
        --allowedTools 'Read,Write,Edit,Glob,Grep' \
        --model sonnet \
        >> "$LOG" 2>&1 || {
        log "WARN: Claude extraction failed or timed out for $SOURCE_FILE"
        git checkout main >> "$LOG" 2>&1
        continue
    }

    # Check if any files were created/modified
    CHANGES=$(git status --porcelain | wc -l | tr -d ' ')
    if [ "$CHANGES" -eq 0 ]; then
        log "No changes produced for $SOURCE_FILE"
        git checkout main >> "$LOG" 2>&1
        continue
    fi

    # Stage and commit
    git add inbox/archive/ "domains/$DOMAIN/" >> "$LOG" 2>&1
    git commit -m "$AGENT: extract claims from $(basename "$SOURCE_FILE")

- Source: $SOURCE_FILE
- Domain: $DOMAIN
- Extracted by: headless extraction cron

Pentagon-Agent: $(echo "$AGENT" | sed 's/./\U&/') <HEADLESS>" >> "$LOG" 2>&1

    # Push branch
    git push -u "$REPO_URL" "$BRANCH" --force >> "$LOG" 2>&1

    # Open PR
    PR_TITLE="$AGENT: extract claims from $(basename "$SOURCE_FILE" .md)"
    PR_BODY="## Automated Extraction\n\nSource: \`$SOURCE_FILE\`\nDomain: $DOMAIN\nExtracted by: headless cron on VPS\n\nThis PR was created automatically by the extraction cron job. Claims were extracted using \`skills/extract.md\` process via Claude headless."

    curl -s -X POST "http://localhost:3000/api/v1/repos/teleo/teleo-codex/pulls" \
        -H "Authorization: token $AGENT_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{
            \"title\": \"$PR_TITLE\",
            \"body\": \"$PR_BODY\",
            \"base\": \"main\",
            \"head\": \"$BRANCH\"
        }" >> "$LOG" 2>&1

    log "PR opened for $SOURCE_FILE"

    # Back to main for next source
    git checkout main >> "$LOG" 2>&1

    # Brief pause between extractions
    sleep 5
done

log "Extraction run complete: processed $COUNT source(s)"
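The source-selection step in the cron (`grep -rl '^status: unprocessed' inbox/archive/ | head -n "$MAX_SOURCES"`) can be sketched in Python; the cap value comes from the script, while the function name and directory layout here are illustrative:

```python
from pathlib import Path

MAX_SOURCES = 5  # mirrors the cron's per-run cap

def find_unprocessed(archive: Path, limit: int = MAX_SOURCES) -> list[Path]:
    """First `limit` markdown sources whose frontmatter has status: unprocessed."""
    hits = []
    for path in sorted(archive.rglob("*.md")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        # Same test as grep '^status: unprocessed': the line must start with it.
        if any(line.startswith("status: unprocessed") for line in text.splitlines()):
            hits.append(path)
            if len(hits) >= limit:
                break
    return hits
```

With seven unprocessed sources in the archive, only the first five are returned, so a backlog drains over successive cron cycles rather than spawning one extraction per source.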
520
ops/extract-graph-data.py
Normal file
|
|
@ -0,0 +1,520 @@
|
||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
extract-graph-data.py — Extract knowledge graph from teleo-codex markdown files.
|
||||||
|
|
||||||
|
Reads all .md claim/conviction files, parses YAML frontmatter and wiki-links,
|
||||||
|
and outputs graph-data.json matching the teleo-app GraphData interface.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
python3 ops/extract-graph-data.py [--output path/to/graph-data.json]
|
||||||
|
|
||||||
|
Must be run from the teleo-codex repo root.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import argparse
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import re
|
||||||
|
import subprocess
|
||||||
|
import sys
|
||||||
|
from datetime import datetime, timezone
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Config
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
SCAN_DIRS = ["core", "domains", "foundations", "convictions"]
|
||||||
|
|
||||||
|
# Only extract these content types (from frontmatter `type` field).
|
||||||
|
# If type is missing, include the file anyway (many claims lack explicit type).
|
||||||
|
INCLUDE_TYPES = {"claim", "conviction", "analysis", "belief", "position", None}
|
||||||
|
|
||||||
|
# Domain → default agent mapping (fallback when git attribution unavailable)
|
||||||
|
DOMAIN_AGENT_MAP = {
|
||||||
|
"internet-finance": "rio",
|
||||||
|
"entertainment": "clay",
|
||||||
|
"health": "vida",
|
||||||
|
"ai-alignment": "theseus",
|
||||||
|
"space-development": "astra",
|
||||||
|
"grand-strategy": "leo",
|
||||||
|
"mechanisms": "leo",
|
||||||
|
"living-capital": "leo",
|
||||||
|
"living-agents": "leo",
|
||||||
|
"teleohumanity": "leo",
|
||||||
|
"critical-systems": "leo",
|
||||||
|
"collective-intelligence": "leo",
|
||||||
|
"teleological-economics": "leo",
|
||||||
|
"cultural-dynamics": "clay",
|
||||||
|
}
|
||||||
|
|
||||||
|
DOMAIN_COLORS = {
|
||||||
|
"internet-finance": "#4A90D9",
|
||||||
|
"entertainment": "#9B59B6",
|
||||||
|
"health": "#2ECC71",
|
||||||
|
"ai-alignment": "#E74C3C",
|
||||||
|
"space-development": "#F39C12",
|
||||||
|
"grand-strategy": "#D4AF37",
|
||||||
|
"mechanisms": "#1ABC9C",
|
||||||
|
"living-capital": "#3498DB",
|
||||||
|
"living-agents": "#E67E22",
|
||||||
|
"teleohumanity": "#F1C40F",
|
||||||
|
"critical-systems": "#95A5A6",
|
||||||
|
"collective-intelligence": "#BDC3C7",
|
||||||
|
"teleological-economics": "#7F8C8D",
|
||||||
|
"cultural-dynamics": "#C0392B",
|
||||||
|
}
|
||||||
|
|
||||||
|
KNOWN_AGENTS = {"leo", "rio", "clay", "vida", "theseus", "astra"}
|
||||||
|
|
||||||
|
# Regex patterns
|
||||||
|
FRONTMATTER_RE = re.compile(r"^---\s*\n(.*?)\n---", re.DOTALL)
|
||||||
|
WIKILINK_RE = re.compile(r"\[\[([^\]]+)\]\]")
|
||||||
|
YAML_FIELD_RE = re.compile(r"^(\w[\w_]*):\s*(.+)$", re.MULTILINE)
|
||||||
|
YAML_LIST_ITEM_RE = re.compile(r'^\s*-\s+"?(.+?)"?\s*$', re.MULTILINE)
|
||||||
|
COUNTER_EVIDENCE_RE = re.compile(r"^##\s+Counter[\s-]?evidence", re.MULTILINE | re.IGNORECASE)
|
||||||
|
COUNTERARGUMENT_RE = re.compile(r"^\*\*Counter\s*argument", re.MULTILINE | re.IGNORECASE)
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Lightweight YAML-ish frontmatter parser (avoids PyYAML dependency)
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def parse_frontmatter(text: str) -> dict:
|
||||||
|
"""Parse YAML frontmatter from markdown text. Returns dict of fields."""
|
||||||
|
m = FRONTMATTER_RE.match(text)
|
||||||
|
if not m:
|
||||||
|
return {}
|
||||||
|
yaml_block = m.group(1)
|
||||||
|
result = {}
|
||||||
|
for field_match in YAML_FIELD_RE.finditer(yaml_block):
|
||||||
|
key = field_match.group(1)
|
||||||
|
val = field_match.group(2).strip().strip('"').strip("'")
|
||||||
|
# Handle list fields
|
||||||
|
if val.startswith("["):
|
||||||
|
# Inline YAML list: [item1, item2]
|
||||||
|
items = re.findall(r'"([^"]+)"', val)
|
||||||
|
if not items:
|
||||||
|
items = [x.strip().strip('"').strip("'")
|
||||||
|
for x in val.strip("[]").split(",") if x.strip()]
|
||||||
|
result[key] = items
|
||||||
|
else:
|
||||||
|
result[key] = val
|
||||||
|
# Handle multi-line list fields (depends_on, challenged_by, secondary_domains)
|
||||||
|
for list_key in ("depends_on", "challenged_by", "secondary_domains", "claims_extracted"):
|
||||||
|
if list_key not in result:
|
||||||
|
# Check for block-style list
|
||||||
|
pattern = re.compile(
|
||||||
|
rf"^{list_key}:\s*\n((?:\s+-\s+.+\n?)+)", re.MULTILINE
|
||||||
|
)
|
||||||
|
lm = pattern.search(yaml_block)
|
||||||
|
if lm:
|
||||||
|
items = YAML_LIST_ITEM_RE.findall(lm.group(1))
|
||||||
|
result[list_key] = [i.strip('"').strip("'") for i in items]
|
||||||
|
return result
|
||||||
|
|
||||||
|
|
||||||
|
def extract_body(text: str) -> str:
|
||||||
|
"""Return the markdown body after frontmatter."""
|
||||||
|
m = FRONTMATTER_RE.match(text)
|
||||||
|
if m:
|
||||||
|
return text[m.end():]
|
||||||
|
return text
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Git-based agent attribution
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def build_git_agent_map(repo_root: str) -> dict[str, str]:
|
||||||
|
"""Map file paths → agent name using git log commit message prefixes.
|
||||||
|
|
||||||
|
Commit messages follow: '{agent}: description'
|
||||||
|
We use the commit that first added each file.
|
||||||
|
"""
|
||||||
|
file_agent = {}
|
||||||
|
try:
|
||||||
|
result = subprocess.run(
|
||||||
|
["git", "log", "--all", "--diff-filter=A", "--name-only",
|
||||||
|
"--format=COMMIT_MSG:%s"],
|
||||||
|
capture_output=True, text=True, cwd=repo_root, timeout=30,
|
||||||
|
)
|
||||||
|
current_agent = None
|
||||||
|
for line in result.stdout.splitlines():
|
||||||
|
line = line.strip()
|
||||||
|
if not line:
|
||||||
|
continue
|
||||||
|
if line.startswith("COMMIT_MSG:"):
|
||||||
|
msg = line[len("COMMIT_MSG:"):]
|
||||||
|
# Parse "agent: description" pattern
|
||||||
|
if ":" in msg:
|
||||||
|
prefix = msg.split(":")[0].strip().lower()
|
||||||
|
if prefix in KNOWN_AGENTS:
|
||||||
|
current_agent = prefix
|
||||||
|
else:
|
||||||
|
current_agent = None
|
||||||
|
else:
|
||||||
|
current_agent = None
|
||||||
|
elif current_agent and line.endswith(".md"):
|
||||||
|
# Only set if not already attributed (first add wins)
|
||||||
|
if line not in file_agent:
|
||||||
|
file_agent[line] = current_agent
|
||||||
|
except (subprocess.TimeoutExpired, FileNotFoundError):
|
||||||
|
pass
|
||||||
|
return file_agent
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Wiki-link resolution
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
def build_title_index(all_files: list[str], repo_root: str) -> dict[str, str]:
|
||||||
|
"""Map lowercase claim titles → file paths for wiki-link resolution."""
|
||||||
|
index = {}
|
||||||
|
for fpath in all_files:
|
||||||
|
# Title = filename without .md extension
|
||||||
|
fname = os.path.basename(fpath)
|
||||||
|
if fname.endswith(".md"):
|
||||||
|
title = fname[:-3].lower()
|
||||||
|
index[title] = fpath
|
||||||
|
# Also index by relative path
|
||||||
|
index[fpath.lower()] = fpath
|
||||||
|
return index
|
||||||
|
|
||||||
|
|
||||||
|
def resolve_wikilink(link_text: str, title_index: dict, source_dir: str) -> str | None:
|
||||||
|
"""Resolve a [[wiki-link]] target to a file path (node ID)."""
|
||||||
|
text = link_text.strip()
|
||||||
|
# Skip map links and non-claim references
|
||||||
|
if text.startswith("_") or text == "_map":
|
||||||
|
return None
|
||||||
|
# Direct path match (with or without .md)
|
||||||
|
for candidate in [text, text + ".md"]:
|
||||||
|
if candidate.lower() in title_index:
|
||||||
|
return title_index[candidate.lower()]
|
||||||
|
# Title-only match
|
||||||
|
title = text.lower()
|
||||||
|
if title in title_index:
|
||||||
|
return title_index[title]
|
||||||
|
# Fuzzy: try adding .md to the basename
|
||||||
|
basename = os.path.basename(text)
|
||||||
|
if basename.lower() in title_index:
|
||||||
|
return title_index[basename.lower()]
|
||||||
|
return None
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
# PR/merge event extraction from git log
# ---------------------------------------------------------------------------


def extract_events(repo_root: str) -> list[dict]:
    """Extract PR merge events from git log for the events timeline."""
    events = []
    try:
        result = subprocess.run(
            ["git", "log", "--merges", "--format=%H|%s|%ai", "-50"],
            capture_output=True, text=True, cwd=repo_root, timeout=15,
        )
        for line in result.stdout.strip().splitlines():
            parts = line.split("|", 2)
            if len(parts) < 3:
                continue
            sha, msg, date_str = parts
            # Parse "Merge pull request #N from ..." or agent commit patterns
            pr_match = re.search(r"#(\d+)", msg)
            if not pr_match:
                continue
            pr_num = int(pr_match.group(1))
            # Try to determine agent from merge commit
            agent = "collective"
            for a in KNOWN_AGENTS:
                if a in msg.lower():
                    agent = a
                    break
            # Count files changed in this merge
            diff_result = subprocess.run(
                ["git", "diff", "--name-only", f"{sha}^..{sha}"],
                capture_output=True, text=True, cwd=repo_root, timeout=10,
            )
            claims_added = sum(
                1 for f in diff_result.stdout.splitlines()
                if f.endswith(".md") and any(f.startswith(d) for d in SCAN_DIRS)
            )
            if claims_added > 0:
                events.append({
                    "type": "pr-merge",
                    "number": pr_num,
                    "agent": agent,
                    "claims_added": claims_added,
                    "date": date_str[:10],
                })
    except (subprocess.TimeoutExpired, FileNotFoundError):
        pass
    return events


# ---------------------------------------------------------------------------
# Main extraction
# ---------------------------------------------------------------------------


def find_markdown_files(repo_root: str) -> list[str]:
    """Find all .md files in SCAN_DIRS, return relative paths."""
    files = []
    for scan_dir in SCAN_DIRS:
        dirpath = os.path.join(repo_root, scan_dir)
        if not os.path.isdir(dirpath):
            continue
        for root, _dirs, filenames in os.walk(dirpath):
            for fname in filenames:
                if fname.endswith(".md") and not fname.startswith("_"):
                    rel = os.path.relpath(os.path.join(root, fname), repo_root)
                    files.append(rel)
    return sorted(files)


def _get_domain_cached(fpath: str, repo_root: str, cache: dict) -> str:
    """Get the domain of a file, caching results."""
    if fpath in cache:
        return cache[fpath]
    abs_path = os.path.join(repo_root, fpath)
    domain = ""
    try:
        text = open(abs_path, encoding="utf-8").read()
        fm = parse_frontmatter(text)
        domain = fm.get("domain", "")
    except (OSError, UnicodeDecodeError):
        pass
    cache[fpath] = domain
    return domain


def extract_graph(repo_root: str) -> dict:
    """Extract the full knowledge graph from the codex."""
    all_files = find_markdown_files(repo_root)
    git_agents = build_git_agent_map(repo_root)
    title_index = build_title_index(all_files, repo_root)
    domain_cache: dict[str, str] = {}

    nodes = []
    edges = []
    node_ids = set()
    all_files_set = set(all_files)

    for fpath in all_files:
        abs_path = os.path.join(repo_root, fpath)
        try:
            text = open(abs_path, encoding="utf-8").read()
        except (OSError, UnicodeDecodeError):
            continue

        fm = parse_frontmatter(text)
        body = extract_body(text)

        # Filter by type
        ftype = fm.get("type")
        if ftype and ftype not in INCLUDE_TYPES:
            continue

        # Build node
        title = os.path.basename(fpath)[:-3]  # filename without .md
        domain = fm.get("domain", "")
        if not domain:
            # Infer domain from directory path
            parts = fpath.split(os.sep)
            if len(parts) >= 2:
                if parts[0] == "domains" or len(parts) > 2:
                    domain = parts[1]
                else:
                    domain = parts[0]

        # Agent attribution: git log → domain mapping → "collective"
        agent = git_agents.get(fpath, "")
        if not agent:
            agent = DOMAIN_AGENT_MAP.get(domain, "collective")

        created = fm.get("created", "")
        confidence = fm.get("confidence", "speculative")

        # Detect challenged status
        challenged_by_raw = fm.get("challenged_by", [])
        if isinstance(challenged_by_raw, str):
            challenged_by_raw = [challenged_by_raw] if challenged_by_raw else []
        has_challenged_by = bool(challenged_by_raw and any(c for c in challenged_by_raw))
        has_counter_section = bool(COUNTER_EVIDENCE_RE.search(body) or COUNTERARGUMENT_RE.search(body))
        is_challenged = has_challenged_by or has_counter_section

        # Extract challenge descriptions for the node
        challenges = []
        if isinstance(challenged_by_raw, list):
            for c in challenged_by_raw:
                if c and isinstance(c, str):
                    # Strip wiki-link syntax for display
                    cleaned = WIKILINK_RE.sub(lambda m: m.group(1), c)
                    # Strip markdown list artifacts: leading "- ", surrounding quotes
                    cleaned = re.sub(r'^-\s*', '', cleaned).strip()
                    cleaned = cleaned.strip('"').strip("'").strip()
                    if cleaned:
                        challenges.append(cleaned[:200])  # cap length

        node = {
            "id": fpath,
            "title": title,
            "domain": domain,
            "agent": agent,
            "created": created,
            "confidence": confidence,
            "challenged": is_challenged,
        }
        if challenges:
            node["challenges"] = challenges
        nodes.append(node)
        node_ids.add(fpath)
        domain_cache[fpath] = domain  # cache for edge lookups

        # Wiki-link edges from the body
        for link_text in WIKILINK_RE.findall(body):
            target = resolve_wikilink(link_text, title_index, os.path.dirname(fpath))
            if target and target != fpath and target in all_files_set:
                target_domain = _get_domain_cached(target, repo_root, domain_cache)
                edges.append({
                    "source": fpath,
                    "target": target,
                    "type": "wiki-link",
                    "cross_domain": domain != target_domain and bool(target_domain),
                })

        # Conflict edges from challenged_by (may contain [[wiki-links]] or prose)
        challenged_by = fm.get("challenged_by", [])
        if isinstance(challenged_by, str):
            challenged_by = [challenged_by]
        if isinstance(challenged_by, list):
            for challenge in challenged_by:
                if not challenge:
                    continue
                # Check for embedded wiki-links
                for link_text in WIKILINK_RE.findall(challenge):
                    target = resolve_wikilink(link_text, title_index, os.path.dirname(fpath))
                    if target and target != fpath and target in all_files_set:
                        target_domain = _get_domain_cached(target, repo_root, domain_cache)
                        edges.append({
                            "source": fpath,
                            "target": target,
                            "type": "conflict",
                            "cross_domain": domain != target_domain and bool(target_domain),
                        })

    # Deduplicate edges
    seen_edges = set()
    unique_edges = []
    for e in edges:
        key = (e["source"], e["target"], e.get("type", ""))
        if key not in seen_edges:
            seen_edges.add(key)
            unique_edges.append(e)

    # Only keep edges where both endpoints exist as nodes
    edges_filtered = [
        e for e in unique_edges
        if e["source"] in node_ids and e["target"] in node_ids
    ]

    events = extract_events(repo_root)

    return {
        "nodes": nodes,
        "edges": edges_filtered,
        "events": sorted(events, key=lambda e: e.get("date", "")),
        "domain_colors": DOMAIN_COLORS,
    }


def build_claims_context(repo_root: str, nodes: list[dict]) -> dict:
    """Build claims-context.json for chat system prompt injection.

    Produces a lightweight claim index: title + description + domain + agent + confidence.
    Sorted by domain, then alphabetically within domain.
    Target: ~37KB for ~370 claims. Progressively truncates descriptions if total > 100KB.
    """
    claims = []
    for node in nodes:
        fpath = node["id"]
        abs_path = os.path.join(repo_root, fpath)
        description = ""
        try:
            text = open(abs_path, encoding="utf-8").read()
            fm = parse_frontmatter(text)
            description = fm.get("description", "")
        except (OSError, UnicodeDecodeError):
            pass

        claims.append({
            "title": node["title"],
            "description": description,
            "domain": node["domain"],
            "agent": node["agent"],
            "confidence": node["confidence"],
        })

    # Sort by domain, then title
    claims.sort(key=lambda c: (c["domain"], c["title"]))

    context = {
        "generated": datetime.now(tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "claimCount": len(claims),
        "claims": claims,
    }

    # Progressive description truncation if over 100KB.
    # Never drop descriptions entirely — short descriptions are better than none.
    for max_desc in (120, 100, 80, 60):
        test_json = json.dumps(context, ensure_ascii=False)
        if len(test_json) <= 100_000:
            break
        for c in claims:
            if len(c["description"]) > max_desc:
                c["description"] = c["description"][:max_desc] + "..."

    return context


def main():
    parser = argparse.ArgumentParser(description="Extract graph data from teleo-codex")
    parser.add_argument("--output", "-o", default="graph-data.json",
                        help="Output file path (default: graph-data.json)")
    parser.add_argument("--context-output", "-c", default=None,
                        help="Output claims-context.json path (default: same dir as --output)")
    parser.add_argument("--repo", "-r", default=".",
                        help="Path to teleo-codex repo root (default: current dir)")
    args = parser.parse_args()

    repo_root = os.path.abspath(args.repo)
    if not os.path.isdir(os.path.join(repo_root, "core")):
        print(f"Error: {repo_root} doesn't look like a teleo-codex repo (no core/ dir)", file=sys.stderr)
        sys.exit(1)

    print(f"Scanning {repo_root}...")
    graph = extract_graph(repo_root)

    print(f"  Nodes: {len(graph['nodes'])}")
    print(f"  Edges: {len(graph['edges'])}")
    print(f"  Events: {len(graph['events'])}")
    challenged_count = sum(1 for n in graph["nodes"] if n.get("challenged"))
    print(f"  Challenged: {challenged_count}")

    # Write graph-data.json
    output_path = os.path.abspath(args.output)
    with open(output_path, "w", encoding="utf-8") as f:
        json.dump(graph, f, indent=2, ensure_ascii=False)
    size_kb = os.path.getsize(output_path) / 1024
    print(f"  graph-data.json: {output_path} ({size_kb:.1f} KB)")

    # Write claims-context.json
    context_path = args.context_output
    if not context_path:
        context_path = os.path.join(os.path.dirname(output_path), "claims-context.json")
    context_path = os.path.abspath(context_path)

    context = build_claims_context(repo_root, graph["nodes"])
    with open(context_path, "w", encoding="utf-8") as f:
        json.dump(context, f, indent=2, ensure_ascii=False)
    ctx_kb = os.path.getsize(context_path) / 1024
    print(f"  claims-context.json: {context_path} ({ctx_kb:.1f} KB)")


if __name__ == "__main__":
    main()
201
skills/ingest.md
Normal file
@ -0,0 +1,201 @@
# Skill: Ingest

Research your domain, find source material, and archive it in inbox/. You choose whether to extract claims yourself or let the VPS handle it.

**Archive everything.** The inbox is a library, not a filter. If it's relevant to any Teleo domain, archive it. Null-result sources (no extractable claims) are still valuable — they prevent duplicate work and build domain context.

## Usage

```
/ingest              # Research loop: pull tweets, find sources, archive with notes
/ingest @username    # Pull and archive a specific X account's content
/ingest url <url>    # Archive a paper, article, or thread from URL
/ingest scan         # Scan your network for new content since last pull
/ingest extract      # Extract claims from sources you've already archived (Track A)
```
## Two Tracks

### Track A: Agent-driven extraction (full control)

You research, archive, AND extract. You see exactly what you're proposing before it goes up.

1. Archive sources with `status: processing`
2. Extract claims yourself using `skills/extract.md`
3. Open a PR with both source archives and claim files
4. Eval pipeline reviews your claims

**Use when:** You're doing a deep dive on a specific topic, care about extraction quality, or want to control the narrative around new claims.

### Track B: VPS extraction (hands-off)

You research and archive. The VPS extracts headlessly.

1. Archive sources with `status: unprocessed`
2. Push a source-only PR (merges fast — no claim changes)
3. VPS cron picks up unprocessed sources every 15 minutes
4. Extracts claims via Claude headless, opens a separate PR
5. Eval pipeline reviews the extraction

**Use when:** You're batch-archiving many sources, the content is straightforward, or you want to focus your session time on research rather than extraction.

### The switch is the status field

| Status | What happens |
|--------|-------------|
| `unprocessed` | VPS will extract (Track B) |
| `processing` | You're handling it (Track A) — VPS skips this source |
| `processed` | Already extracted — no further action |
| `null-result` | Reviewed, no claims — no further action |

You can mix tracks freely. Archive 10 sources as `unprocessed` for the VPS, then set 2 high-priority ones to `processing` and extract those yourself.
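The status switch is mechanical, so it can be sketched directly. A minimal Python illustration, assuming each archive file carries a YAML `status:` line in its frontmatter; the `wants_vps_extraction` helper and the regex-based parsing are hypothetical simplifications of the real extractor, not its actual code:

```python
import re

def wants_vps_extraction(source_text: str) -> bool:
    """Return True if a VPS-style extractor should pick up this archive file.

    Only `status: unprocessed` qualifies; `processing`, `processed`, and
    `null-result` are all skipped, matching the table above.
    """
    match = re.search(r"^status:\s*(\S+)", source_text, re.MULTILINE)
    if not match:
        return False  # no status field: leave the source alone
    return match.group(1) == "unprocessed"

print(wants_vps_extraction("---\nstatus: unprocessed\n---"))  # True
print(wants_vps_extraction("---\nstatus: processing\n---"))   # False
```

A real implementation would parse the frontmatter as YAML rather than with a regex, but the selection rule itself is exactly this one comparison.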
## Prerequisites

- API key at `~/.pentagon/secrets/twitterapi-io-key`
- Your network file at `~/.pentagon/workspace/collective/x-ingestion/{your-name}-network.json`
- Forgejo token at `~/.pentagon/secrets/forgejo-{your-name}-token`
## The Loop

### Step 1: Research

Find source material relevant to your domain. Sources include:

- **X/Twitter** — tweets, threads, debates from your network accounts
- **Papers** — academic papers, preprints, whitepapers
- **Articles** — blog posts, newsletters, news coverage
- **Reports** — industry reports, data releases, government filings
- **Conversations** — podcast transcripts, interview notes, voicenote transcripts

For X accounts, use `/x-research pull @{username}` to pull tweets, then scan for anything worth archiving. Don't just archive the "best" tweets — archive anything substantive. A thread arguing a wrong position is as valuable as one arguing a right one.
### Step 2: Archive with notes

For each source, create an archive file on your branch:

**Filename:** `inbox/archive/YYYY-MM-DD-{author-handle}-{brief-slug}.md`

```yaml
---
type: source
title: "Descriptive title of the content"
author: "Display Name (@handle)"
twitter_id: "numeric_id_from_author_object"  # X sources only
url: https://original-url
date: YYYY-MM-DD
domain: internet-finance | entertainment | ai-alignment | health | space-development | grand-strategy
secondary_domains: [other-domain]  # if cross-domain
format: tweet | thread | essay | paper | whitepaper | report | newsletter | news | transcript
status: unprocessed | processing  # unprocessed = VPS extracts; processing = you extract
priority: high | medium | low
tags: [topic1, topic2]
flagged_for_rio: ["reason"]  # if relevant to another agent's domain
---
```

**Body:** Include the full source text, then your research notes.

```markdown
## Content

[Full text of tweet/thread/article. For long papers, include abstract + key sections.]

## Agent Notes

**Why this matters:** [1-2 sentences — what makes this worth archiving]

**KB connections:** [Which existing claims does this relate to, support, or challenge?]

**Extraction hints:** [What claims might the extractor pull from this? Flag specific passages.]

**Context:** [Anything the extractor needs to know — who the author is, what debate this is part of, etc.]
```

The "Agent Notes" section is critical for Track B. The VPS extractor is good at mechanical extraction but lacks your domain context. Your notes guide it. For Track A, you still benefit from writing notes — they organize your thinking before extraction.
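The filename convention above can be pinned down with a short Python sketch. The `archive_filename` helper is hypothetical (not part of the toolchain); it just makes the slug rules concrete: lowercase, non-alphanumerics collapsed to hyphens, leading `@` dropped from the handle:

```python
import re
from datetime import date

def archive_filename(handle: str, title: str, day: date) -> str:
    """Build an inbox/archive path: YYYY-MM-DD-{author-handle}-{brief-slug}.md."""
    # Slug: lowercase, runs of non-alphanumerics become single hyphens, capped short
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")[:40]
    return f"inbox/archive/{day.isoformat()}-{handle.lstrip('@')}-{slug}.md"

print(archive_filename("@example", "Why L2 fees matter", date(2025, 1, 15)))
# inbox/archive/2025-01-15-example-why-l2-fees-matter.md
```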
### Step 3: Extract claims (Track A only)

If you set `status: processing`, follow `skills/extract.md`:

1. Read the source completely
2. Separate evidence from interpretation
3. Extract candidate claims (specific, disagreeable, evidence-backed)
4. Check for duplicates against existing KB
5. Write claim files to `domains/{your-domain}/`
6. Update source: `status: processed`, `processed_by`, `processed_date`, `claims_extracted`

### Step 4: Cross-domain flagging

When you find sources outside your domain:

- Archive them anyway (you're already reading them)
- Set the `domain` field to the correct domain, not yours
- Add `flagged_for_{agent}: ["brief reason"]` to frontmatter
- Set `priority: high` if it's urgent or challenges existing claims
### Step 5: Branch, commit, push

```bash
# Branch
git checkout -b {your-name}/sources-{date}-{brief-slug}

# Stage — sources only (Track B) or sources + claims (Track A)
git add inbox/archive/*.md
git add domains/{your-domain}/*.md  # Track A only

# Commit
git commit -m "{your-name}: archive {N} sources — {brief description}

- What: {N} sources from {list of authors/accounts}
- Domains: {which domains these cover}
- Track: A (agent-extracted) | B (VPS extraction pending)

Pentagon-Agent: {Name} <{UUID}>"

# Push
FORGEJO_TOKEN=$(cat ~/.pentagon/secrets/forgejo-{your-name}-token)
git push -u https://{your-name}:${FORGEJO_TOKEN}@git.livingip.xyz/teleo/teleo-codex.git {branch-name}
```

Open a PR:

```bash
curl -s -X POST "https://git.livingip.xyz/api/v1/repos/teleo/teleo-codex/pulls" \
  -H "Authorization: token ${FORGEJO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "{your-name}: {archive N sources | extract N claims} — {brief description}",
    "body": "## Sources\n{numbered list with titles and domains}\n\n## Claims (Track A only)\n{claim titles}\n\n## Track B sources (VPS extraction pending)\n{list of unprocessed sources}",
    "base": "main",
    "head": "{branch-name}"
  }'
```
## Network Management

Your network file (`{your-name}-network.json`) lists X accounts to monitor:

```json
{
  "agent": "your-name",
  "domain": "your-domain",
  "accounts": [
    {"username": "example", "tier": "core", "why": "Reason this account matters"},
    {"username": "example2", "tier": "extended", "why": "Secondary but useful"}
  ]
}
```

**Tiers:**

- `core` — Pull every session. High signal-to-noise.
- `extended` — Pull weekly or when specifically relevant.
- `watch` — Pull once to evaluate, then promote or drop.

Agents without a network file should create one as their first task. Start with 5-10 seed accounts.
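The tier cadence above can be sketched as a selection function. `accounts_to_pull` is a hypothetical helper, shown only to make the core/extended split concrete against the network-file schema:

```python
import json

def accounts_to_pull(network: dict, weekly: bool = False) -> list[str]:
    """Select usernames by tier: core every session, extended added on weekly pulls."""
    tiers = {"core"} | ({"extended"} if weekly else set())
    return [a["username"] for a in network["accounts"] if a["tier"] in tiers]

# Same shape as {your-name}-network.json above
network = json.loads("""
{"agent": "your-name", "domain": "your-domain",
 "accounts": [
   {"username": "example", "tier": "core", "why": "High signal"},
   {"username": "example2", "tier": "extended", "why": "Secondary"}
 ]}
""")
print(accounts_to_pull(network))               # ['example']
print(accounts_to_pull(network, weekly=True))  # ['example', 'example2']
```

`watch`-tier accounts are deliberately excluded from routine pulls; they get a one-off evaluation pull instead.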
## Quality Controls

- **Archive everything substantive.** Don't self-censor. The extractor decides what yields claims.
- **Write good notes.** Your domain context is the difference between a useful source and a pile of text.
- **Check for duplicates.** Don't re-archive sources already in `inbox/archive/`.
- **Flag cross-domain.** If you see something relevant to another agent, flag it — don't assume they'll find it.
- **Log API costs.** Every X pull gets logged to `~/.pentagon/workspace/collective/x-ingestion/pull-log.jsonl`.
- **Source diversity.** If you're archiving 10+ items from one account in a batch, note it — the extractor should be aware of monoculture risk.