leo: 10 architecture-as-claims — the codex documents itself #44
5 changed files with 7 additions and 7 deletions

@@ -23,11 +23,11 @@ Specific instances where reviewers caught problems the proposer missed:

- **PR #42:** Theseus caught overstatement — "the coordination problem dissolves" was softened to "becomes tractable" with explicit implementation gaps noted. The proposer (Leo) had used stronger language than the evidence supported.
- **PR #42:** Rio caught an incorrect mechanism citation — the futarchy manipulation resistance claim was being applied to organizational commitments, but the actual claim is about price manipulation in conditional markets. Different mechanism, wrong citation.
- - **PR #42:** Rio identified a broken wiki link to a claim that did not yet exist on main (it was on a different branch). The link would have been dead at merge time.
+ - **PR #42:** Rio identified a wiki link referencing a claim that did not exist. The reviewer caught the dangling reference that the proposer assumed was valid.
- **PR #34:** Rio flagged that the AI displacement phase model timeline may be shorter for finance (2028-2032) than the claim's general 2033-2040 range, because financial output is numerically verifiable. Domain-specific knowledge the cross-domain synthesizer lacked.
- **PR #34:** Clay added Claynosaurz as a live case study for the early-conviction pricing claim — evidence the proposer didn't have access to from within the entertainment domain.
- **PR #27:** Leo established the enrichment-vs-standalone gate during review: "remove the existing claim; does the new one still stand alone?" This calibration emerged from the review process itself, not from pre-designed rules.
- - **PR #43:** Leo's OPSEC review caught dollar amounts that had survived Rio's initial scrub on PR #42's musing and position files. The second reviewer found what the first missed.
+ - **PR #42/43:** Leo's OPSEC review caught dollar amounts in musing and position files. The OPSEC rule was established mid-session after these files were already written — demonstrating that new review criteria propagate retroactively through the PR process. Files written before the rule were caught and scrubbed before merge.

## What this doesn't do yet

@@ -9,7 +9,7 @@ created: 2026-03-07

# Domain specialization with cross-domain synthesis produces better collective intelligence than generalist agents because specialists build deeper knowledge while a dedicated synthesizer finds connections they cannot see from within their territory

- The Teleo collective organizes agents into domain specialists (Rio for internet finance, Clay for entertainment, Vida for health, Theseus for AI alignment, Calypso for health) with a dedicated cross-domain synthesizer (Leo) who reads across all domains. This is not an arbitrary division of labor — it is the mechanism that produces insights no single agent would generate.
+ The Teleo collective organizes agents into domain specialists (Rio for internet finance, Clay for entertainment, Vida for health, Theseus for AI alignment) with a dedicated cross-domain synthesizer (Leo) who reads across all domains. This is not an arbitrary division of labor — it is the mechanism that produces insights no single agent would generate.

## How it works today

@@ -39,7 +39,7 @@ The convention is enforced through operating rules in CLAUDE.md and by reviewer

The immediate improvement is a CI check: every commit to a PR must include a valid Pentagon-Agent trailer with a recognized agent UUID. This is simple to implement and catches missing attribution before merge.
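
A minimal sketch of such a check as a Python CI step. The UUID registry, trailer regex, and CLI interface below are illustrative assumptions, not the codex's actual implementation:

```python
#!/usr/bin/env python3
"""CI sketch: fail if any commit in the PR range lacks a valid
Pentagon-Agent trailer with a recognized agent UUID."""
import re
import subprocess
import sys

# Hypothetical agent registry; the real UUIDs would live in repo config.
KNOWN_AGENTS = {
    "00000000-0000-4000-8000-000000000001": "rio",
    "00000000-0000-4000-8000-000000000002": "leo",
}

TRAILER = re.compile(r"^Pentagon-Agent:\s*([0-9a-fA-F-]{36})\s*$", re.MULTILINE)


def commit_message(sha: str) -> str:
    # %B prints the raw commit subject and body, where git trailers live.
    return subprocess.run(
        ["git", "log", "-1", "--format=%B", sha],
        capture_output=True, text=True, check=True,
    ).stdout


def main(base: str, head: str) -> int:
    shas = subprocess.run(
        ["git", "rev-list", f"{base}..{head}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    bad = [
        sha for sha in shas
        if not (m := TRAILER.search(commit_message(sha)))
        or m.group(1).lower() not in KNOWN_AGENTS
    ]
    for sha in bad:
        print(f"missing or unrecognized Pentagon-Agent trailer: {sha}")
    return 1 if bad else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))
```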

- The next step is Forgejo ghost accounts: each agent gets a programmatic contributor identity (e.g., `rio@teleo.agents`) on the self-hosted Forgejo instance. Commits are attributed to the ghost account, and the Pentagon-Agent trailer serves as the durable backup. Ghost accounts also enable contributor credit — humans who submit sources can get ghost identities (e.g., `naval@x.livingip.ghost`) that resolve to real identities when they claim them.
+ The next step is Forgejo ghost accounts: each agent gets a programmatic contributor identity (e.g., `rio@agents.livingip.ghost`) on the self-hosted Forgejo instance, following the v2 convention `{identifier}@{platform}.livingip.ghost`. Commits are attributed to the ghost account, and the Pentagon-Agent trailer serves as the durable backup. Ghost accounts also enable contributor credit — humans who submit sources can get ghost identities (e.g., `naval@x.livingip.ghost`) that resolve to real identities when they claim them. The standardized email format `{identifier}@{platform}.livingip.ghost` enables cross-platform merge logic: when a real person claims their ghost, all contributions across platforms (X, chat, direct submission) consolidate into one identity.
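
A sketch of how the v2 ghost convention could support that merge logic. Only the email format comes from the text above; `parse_ghost`, the claims table, and the commit shapes are hypothetical:

```python
import re
from collections import defaultdict

# The v2 convention from the diff above: {identifier}@{platform}.livingip.ghost
GHOST = re.compile(r"^(?P<identifier>[^@]+)@(?P<platform>[^.]+)\.livingip\.ghost$")


def parse_ghost(email: str):
    """Return (identifier, platform) for a ghost address, else None."""
    m = GHOST.match(email)
    return (m["identifier"], m["platform"]) if m else None


def consolidate(commits, claims):
    """Group commit shas under claimed identities.

    `commits` is a list of (sha, author_email); `claims` maps a ghost
    address to the real identity that claimed it. Both shapes are assumed.
    """
    merged = defaultdict(list)
    for sha, email in commits:
        # Claimed ghosts consolidate into one identity; unclaimed ghosts
        # (and non-ghost addresses) remain distinct for now.
        merged[claims.get(email, email)].append(sha)
    return dict(merged)


# Example: naval claims his X ghost; rio's agent ghost stays unclaimed.
claims = {"naval@x.livingip.ghost": "naval"}
commits = [
    ("abc123", "naval@x.livingip.ghost"),
    ("def456", "rio@agents.livingip.ghost"),
]
print(parse_ghost("naval@x.livingip.ghost"))  # ('naval', 'x')
print(consolidate(commits, claims))
```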

The ultimate form is a complete attribution chain: human contributor submits source (credited via ghost account or contributor field) → agent extracts claims (credited via Pentagon-Agent trailer and Forgejo ghost account) → reviewer approves (credited via PR review record) → the full provenance from human insight to knowledge base entry is traceable and attributable.
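
One way that complete chain could be held as a single record, sketched as a Python dataclass; every field name and value here is illustrative, not an existing schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Provenance:
    """One end-to-end attribution record; all fields are illustrative."""
    source_contributor: str  # ghost account or contributor field
    extracting_agent: str    # Pentagon-Agent trailer / Forgejo ghost account
    reviewer: str            # PR review record
    claim_path: str          # where the claim landed in the knowledge base


record = Provenance(
    source_contributor="naval@x.livingip.ghost",
    extracting_agent="rio@agents.livingip.ghost",
    reviewer="leo@agents.livingip.ghost",      # hypothetical reviewer identity
    claim_path="claims/wisdom-of-markets.md",  # hypothetical path
)
```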

@@ -45,7 +45,7 @@ The division of authority is:

## What this doesn't do yet

- **No automated escalation.** When an agent encounters a decision that exceeds its authority (e.g., a claim that has OPSEC implications), there is no formal escalation mechanism. The agent either catches it or doesn't. Structured escalation rules would define triggers for human review beyond the standard PR process.
- - **No permission tiers.** All agents have the same technical access to the repository. A domain agent could theoretically push to main or modify files outside their territory. Permission-based access control requires Forgejo (GitHub doesn't support the granularity needed).
+ - **No permission tiers.** All agents have the same technical access to the repository. A domain agent could theoretically push to main or modify files outside their territory. The first enforcement tier is CI-based: pre-merge checks for schema validation, trailer verification, territory enforcement, and link health will reject PRs that violate boundaries even without platform-level ACLs. The second tier is Forgejo repository permissions, which add platform-level access control. CI-as-enforcement comes first and is independent of the Forgejo migration.
- **Human bandwidth is the bottleneck.** Cory reviews agent output, directs strategy, and manages the organization. As the collective scales, this becomes unsustainable. The system needs to define which decisions can be fully delegated to agents and which always require human approval.
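
A minimal sketch of the CI territory check named in the permission-tiers bullet above, assuming a Python CI step; the territory map and CLI interface are hypothetical stand-ins for whatever the repo actually defines:

```python
#!/usr/bin/env python3
"""CI sketch: reject a PR whose changed files fall outside the
proposing agent's territory."""
import subprocess
import sys

# Hypothetical territory map: agent identifier -> allowed path prefixes.
TERRITORIES = {
    "rio": ("claims/internet-finance/", "sources/"),
    "clay": ("claims/entertainment/", "sources/"),
    "leo": ("claims/",),  # the synthesizer reads across domains
}


def changed_files(base: str, head: str) -> list:
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()


def main(agent: str, base: str, head: str) -> int:
    # Unknown agents get an empty tuple, so every file is out of bounds:
    # fail closed rather than open.
    allowed = TERRITORIES.get(agent, ())
    out_of_bounds = [
        path for path in changed_files(base, head)
        if not path.startswith(allowed)
    ]
    for path in out_of_bounds:
        print(f"{agent} touched a file outside its territory: {path}")
    return 1 if out_of_bounds else 0


if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:4]))
```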

## Where this goes

@@ -30,14 +30,14 @@ Currently 54 sources are archived: 30 processed, 8 unprocessed, 1 partial. Sourc

## Evidence from practice

- - **Null-result tracking prevents re-extraction.** Rio's Doppler whitepaper extraction returned null-result — "marketing announcement, no mechanisms, no data." When later Rio found a deeper source (the actual Doppler documentation), the null-result archive prevented duplicate processing of the empty source.
+ - **Null-result tracking prevents re-extraction.** Rio's Doppler announcement article extraction returned null-result — "marketing announcement, no mechanisms, no data." The null-result archive distinguished this empty source from the actual Doppler whitepaper (which was separately processed and produced 1 claim), preventing confusion between two different sources about the same project.
- **Claims-extracted lists enable impact tracing.** When reviewing a claim, Leo can check the source archive to see what else was extracted from the same source. If 5+ claims came from one author, the source diversity flag triggers.
- **Processed-by field attributes extraction work.** Each source records which agent performed the extraction. This enables: contributor credit (the human who submitted the source), extraction credit (the agent who processed it), and quality tracking (which agent's extractions get the most changes requested during review).
- **Unprocessed backlog is visible.** The 8 unprocessed sources (harkl, daftheshrimp, oxranga, citadel-securities, pineanalytics x2, theiaresearch-claude-code, claynosaurz-popkins) are a clear task list for domain agents.
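
A sketch of the source diversity flag from the claims-extracted bullet above. The archive entry shape is assumed; only the 5+ threshold comes from the text:

```python
from collections import Counter

# Hypothetical shape for archive entries: each records its author and the
# claims extracted from it (the "claims-extracted list" above).
ARCHIVE = [
    {"author": "naval", "claims_extracted": ["wisdom-of-markets"]},
    {"author": "harkl", "claims_extracted": ["claim-a", "claim-b"]},
]

DIVERSITY_THRESHOLD = 5  # from the bullet: 5+ claims from one author


def diversity_flags(archive: list) -> list:
    """Return authors whose extracted-claim count meets the threshold."""
    counts = Counter()
    for entry in archive:
        counts[entry["author"]] += len(entry["claims_extracted"])
    return [author for author, n in counts.items() if n >= DIVERSITY_THRESHOLD]


print(diversity_flags(ARCHIVE))  # [] for this toy archive
```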

## What this doesn't do yet

- - **No contributor attribution on sources.** The archive records who submitted and who processed, but not the original author's identity in a structured field that could feed ghost account creation or credit attribution. The `source` field in frontmatter is free text.
+ - **No contributor attribution on sources.** The archive records who submitted and who processed, but not the original author's identity in a structured field that could feed ghost account creation or credit attribution. The `source` field in frontmatter is free text. The planned fix: a structured `author` block with name, handle, platform, and contributor_file reference — bridging source archiving to the ghost identity system so the audit trail reaches from "who contributed the original insight" through "who extracted" to "who reviewed."
- **Historical sources from LivingIP v1 are not archived.** The `ingestedcontent` table in LivingIP's MySQL database contains tweets and documents that predate the codex. These have been found (Naval's "Wisdom of Markets" tweet, among others) but not yet re-extracted. Some were wrongly rejected by the v1 system.
- **No automated source ingestion.** Sources currently arrive through human direction (Cory drops links, agents find material). There is no RSS feed, X API listener, or scraping pipeline that automatically surfaces sources for extraction.
- **GCS blob access unverified.** Document content from the LivingIP v1 system is stored in Google Cloud Storage. Whether these blobs are still accessible has not been confirmed.
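
A sketch of the planned structured `author` block from the contributor-attribution bullet above, assuming YAML frontmatter (PyYAML for parsing). The four field names come from that bullet; the validation and ghost-derivation logic are illustrative:

```python
import yaml  # PyYAML

# Planned structured author block; values here are hypothetical examples.
FRONTMATTER = """
author:
  name: Naval Ravikant
  handle: naval
  platform: x
  contributor_file: contributors/naval.md
"""

REQUIRED = ("name", "handle", "platform", "contributor_file")


def validate(raw: str) -> dict:
    """Reject author blocks that lack any of the four planned fields."""
    author = yaml.safe_load(raw)["author"]
    missing = [field for field in REQUIRED if field not in author]
    if missing:
        raise ValueError(f"author block missing fields: {missing}")
    return author


def ghost_email(author: dict) -> str:
    """Derive the v2 ghost identity from the structured block."""
    return f"{author['handle']}@{author['platform']}.livingip.ghost"


author = validate(FRONTMATTER)
print(ghost_email(author))  # naval@x.livingip.ghost
```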