From fcc2e32a29c1bca67db5296002c9d29a0bfbae9b Mon Sep 17 00:00:00 2001 From: m3taversal Date: Mon, 27 Apr 2026 16:03:33 +0100 Subject: [PATCH] leo: update contribution-architecture for Phase B taxonomy MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The architecture doc still referenced the Phase A vocabulary (extractor / sourcer / reviewer) after Phase B locked author / drafter / originator / challenger / synthesizer / evaluator on 2026-04-26. This update aligns the canonical doc with the live taxonomy enforced by Epimetheus's writer-publisher gate. Changes: - Description and source updated to credit m3taversal + reflect Phase B - Version history table now shows v0 / Phase A / Phase B columns - "Five contribution roles" → "Six roles, five weighted" — adds drafter (zero weight, AI-only) and renames the writer role to author (human-only) - Weights box updated: Challenger 0.35, Synthesizer 0.25, Evaluator 0.20, Originator 0.15, Author 0.05, Drafter 0.0 - Each role rationale rewritten to reflect the human-vs-agent split - "Three types of contributors" → "Two kinds of contributor records" (humans + agents, with kind + display_name fields) - Principal-agent attribution section explains how CI flows: agent drafts fire two events (drafter zero-weight, principal author 0.05); only the second moves the leaderboard - Knowledge chain diagram updated with new role names - Pipeline integration section reflects writer-publisher gate as the mechanical enforcement point - contribution_events table called out as canonical source of truth - CI evolution roadmap now shows Phase A retired, Phase B current - Footer notes the 2026-04-28 update Pentagon-Agent: Leo --- core/contribution-architecture.md | 150 ++++++++++++++++++------------ 1 file changed, 90 insertions(+), 60 deletions(-) diff --git a/core/contribution-architecture.md b/core/contribution-architecture.md index 6c9848804..096bcdb74 100644 --- a/core/contribution-architecture.md +++ b/core/contribution-architecture.md @@ -1,10 +1,11 @@ --- type: claim domain: mechanisms -description: "Architecture paper defining the five contribution roles, their weights, attribution chain, and governance implications — supersedes the original reward-mechanism.md role weights and CI formula" +description: "Architecture paper defining the contribution roles, their weights, attribution chain, and governance implications — Phase B taxonomy distinguishes human authorship from AI drafting and external origination" confidence: likely -source: "Leo, original architecture with Cory-approved weight calibration" +source: "Leo + m3taversal, Phase B taxonomy locked 2026-04-26 after writer-publisher gate deployment" created: 2026-03-26 +last_evaluated: 2026-04-28 related: - contributor-guide reweave_edges: @@ -15,18 +16,22 @@ reweave_edges: How LivingIP measures, attributes, and rewards contributions to collective intelligence. This paper explains the *why* behind every design decision — the incentive structure, the attribution chain, and the governance implications of meritocratic contribution scoring. -### Relationship to reward-mechanism.md +### Version history -This document supersedes specific sections of [[reward-mechanism]] while preserving others: +This document supersedes [[reward-mechanism]] for role weights and the CI formula, and itself moved through three taxonomies as the system learned what we were measuring. 
-| Topic | reward-mechanism.md (v0) | This document (v1) | Change rationale | -|-------|-------------------------|---------------------|-----------------| -| **Role weights** | 0.25/0.25/0.25/0.15/0.10 (equal top-3) | 0.35/0.25/0.20/0.15/0.05 (challenger-heavy) | Equal weights incentivized volume over quality; bootstrap data showed extraction dominating CI | -| **CI formula** | 3 leaderboards (0.30 Belief + 0.30 Challenge + 0.40 Connection) | Single role-weighted aggregation per claim | Leaderboard model preserved as future display layer; underlying measurement simplified to role weights | -| **Source authors** | Citation only, not attribution | Credited as Sourcer (0.15 weight) | Their intellectual contribution is foundational; citation without credit understates their role | -| **Reviewer weight** | 0.10 | 0.20 | Review is skilled judgment work, not rubber-stamping; v0 underweighted it | +| Topic | reward-mechanism (v0) | Phase A (v1, Mar 2026) | Phase B (v2, Apr 2026) | +|-------|----------------------|------------------------|------------------------| +| **Role names** | extractor / sourcer / challenger / synthesizer / reviewer | extractor / sourcer / challenger / synthesizer / reviewer | author / drafter / originator / challenger / synthesizer / evaluator | +| **Top role weight** | 0.25 (extractor, tied across the top three) | 0.35 (challenger) | 0.35 (challenger) | +| **Lowest role weight** | 0.10 (reviewer) | 0.05 (extractor) | 0.05 (author) + 0.0 (drafter) | +| **CI formula** | 3 leaderboards (0.30 Belief + 0.30 Challenge + 0.40 Connection) | Single role-weighted aggregation per claim | Same — role-weighted aggregation, attribution refined | +| **Human/AI distinction** | Implicit | Implicit (humans + agents both extract) | Explicit (humans author/originate, agents draft at zero weight) | +| **Source authors** | Citation only | Sourcer (0.15) | Originator (0.15) — same weight, sharper semantics | -**What reward-mechanism.md still governs:** The three leaderboards (Belief Movers, Challenge Champions, Connection Finders), their scoring formulas, anti-gaming properties, and economic mechanism. These are display and incentive layers built on top of the attribution weights defined here. The leaderboard weights (0.30/0.30/0.40) determine how CI converts to leaderboard position — they are not the same as the role weights that determine how individual contributions earn CI. +**What changed in Phase B and why.** Phase A used a single role label for "wrote the claim text," which collapsed two distinct contributions: the human directing the work and the AI agent producing the words. When all writers were called "extractors," CI scoring couldn't tell whether the collective was rewarding human intellectual leadership or just AI typing speed. Phase B splits them — *author* is the human exercising intellectual authority; *drafter* is the AI agent producing the text (tracked for accountability, weighted zero). Same five-role weight structure for the substantive roles; cleaner accounting for who actually moved the argument forward. + +**What reward-mechanism.md still governs.** The three leaderboards (Belief Movers, Challenge Champions, Connection Finders), their scoring formulas, anti-gaming properties, and economic mechanism. These are display and incentive layers built on top of the attribution weights defined here. The leaderboard weights (0.30/0.30/0.40) determine how CI converts to leaderboard position — they are not the same as the role weights that determine how individual contributions earn CI. ## 1.
Mechanism Design @@ -34,45 +39,49 @@ This document supersedes specific sections of [[reward-mechanism]] while preserv Collective intelligence systems need to answer: who made us smarter, and by how much? Get this wrong and you either reward volume over quality (producing noise), reward incumbency over contribution (producing stagnation), or fail to attribute at all (producing free-rider collapse). -### Five contribution roles +### Six roles, five weighted -Every piece of knowledge in the system traces back to people who played specific roles in producing it. We identify five, because the knowledge production pipeline has exactly five distinct bottlenecks: +Every piece of knowledge traces back to people who played specific roles in producing it. Phase B identifies six — five that earn CI weight and one that's tracked but unweighted (drafter). -| Role | What they do | Why it matters | -|------|-------------|----------------| -| **Sourcer** | Identifies the source material or research direction | Without sourcers, agents have nothing to work with. The quality of inputs bounds the quality of outputs. | -| **Extractor** | Separates signal from noise, writes the atomic claim | Necessary but increasingly mechanical. LLMs do heavy lifting. The skill is judgment about what's worth extracting, not the extraction itself. | -| **Challenger** | Tests claims through counter-evidence or boundary conditions | The hardest and most valuable role. Challengers make existing knowledge better. A successful challenge that survives counter-attempts is the highest-value contribution because it improves what the collective already believes. | -| **Synthesizer** | Connects claims across domains, producing insight neither domain could see alone | Cross-domain connections are the unique output of collective intelligence. No single specialist produces these. Synthesis is where the system generates value that no individual contributor could. | -| **Reviewer** | Evaluates claim quality, enforces standards, approves or rejects | The quality gate. Without reviewers, the knowledge base degrades toward noise. Reviewing is undervalued in most systems — we weight it explicitly. | +| Role | Who | What they do | Why it matters | +|------|-----|-------------|----------------| +| **Challenger** | Human or agent | Tests claims through counter-evidence or boundary conditions | The hardest and most valuable role. Challengers make existing knowledge better. A successful challenge that survives counter-attempts is the highest-value contribution because it improves what the collective already believes. | +| **Synthesizer** | Human or agent | Connects claims across domains, producing insight neither domain could see alone | Cross-domain connections are the unique output of collective intelligence. No single specialist produces these. Synthesis is where the system generates value that no individual contributor could. | +| **Evaluator** | Human or agent | Reviews claim quality, enforces standards, approves or rejects | The quality gate. Without evaluators, the knowledge base degrades toward noise. Reviewing is skilled judgment work, weighted explicitly. | +| **Originator** | Human or external entity | Identified the source material or proposed the research direction | Without originators, agents have nothing to work with. The quality of inputs bounds the quality of outputs. External thinkers (Bostrom, Hanson, Schmachtenberger, etc.) are originators when their work seeds claims. 
| +| **Author** | Human only | Directs the intellectual work that produces a claim | The human exercising intellectual authority. When m3taversal directs an agent to synthesize Moloch, m3taversal is the author. When Alex points his agent at our repo and directs research, Alex is the author. Execution by an agent does not make the agent the author. | +| **Drafter** | AI agent only | Produced the claim text under human direction | Tracked for accountability — we always know which agent typed which words — but earns zero CI weight. Typing is not authoring. | ### Why these weights ``` Challenger: 0.35 Synthesizer: 0.25 -Reviewer: 0.20 -Sourcer: 0.15 -Extractor: 0.05 +Evaluator: 0.20 +Originator: 0.15 +Author: 0.05 +Drafter: 0.00 (tracked, not weighted) ``` **Challenger at 0.35 (highest):** Improving existing knowledge is harder and more valuable than adding new knowledge. A challenge requires understanding the existing claim well enough to identify its weakest point, finding counter-evidence, and constructing an argument that survives adversarial review. Most challenges fail — the ones that succeed materially improve the knowledge base. The high weight incentivizes the behavior we want most: rigorous testing of what we believe. **Synthesizer at 0.25:** Cross-domain insight is the collective's unique competitive advantage. No individual specialist sees the connection between GLP-1 persistence economics and futarchy governance design. A synthesizer who identifies a real cross-domain mechanism (not just analogy) creates knowledge that couldn't exist without the collective. This is the system's core value proposition, weighted accordingly. -**Reviewer at 0.20:** Quality gates are load-bearing infrastructure. Every claim that enters the knowledge base was approved by a reviewer. Bad claims that slip through degrade collective beliefs. The reviewer role was historically underweighted (0.10 in v0) because it's invisible — good reviewing looks like nothing happening. The increase to 0.20 reflects that review is skilled judgment work, not rubber-stamping. +**Evaluator at 0.20:** Quality gates are load-bearing infrastructure. Every claim that enters the knowledge base was approved by an evaluator. Bad claims that slip through degrade collective beliefs. The evaluator role was historically underweighted (0.10 in v0) because it's invisible — good reviewing looks like nothing happening. The increase to 0.20 reflects that review is skilled judgment work, not rubber-stamping. -**Sourcer at 0.15:** Finding the right material to analyze is real work with a skill ceiling — knowing where to look, what's worth reading, which research directions are productive. But sourcing doesn't transform the material. The sourcer identifies the ore; others refine it. 0.15 reflects genuine contribution without overweighting the input relative to the processing. +**Originator at 0.15:** Finding the right material to analyze, or proposing the research direction, is real work with a skill ceiling — knowing where to look, what's worth reading, which lines of inquiry are productive. But origination doesn't transform the material. The originator identifies the ore; others refine it. 0.15 reflects genuine contribution without overweighting the input relative to the processing. -**Extractor at 0.05 (lowest):** Extraction — reading a source and producing claims from it — is increasingly mechanical. LLMs do the heavy lifting. 
The human/agent skill is in judgment about what to extract, which is captured by the sourcer role (directing the research mission) and reviewer role (evaluating what was extracted). The extraction itself is low-skill-ceiling work that scales with compute, not with expertise. +**Author at 0.05:** Directing the intellectual work that produces a claim is a real but bounded contribution. The author chooses what to argue, supplies the framing, and stands behind the claim. The substantive intellectual moves — challenging, synthesizing, evaluating — earn higher weight. Authorship grounds the work in a specific human, which is necessary for accountability and for the principal-agent attribution chain to function. + +**Drafter at 0.00:** Drafting — producing claim text from human direction — is what AI agents do. We track it because accountability requires knowing which agent produced which words (and which model version, on which date, with what prompt). But drafting is not authorship: an agent that drafts 100 claims under m3taversal's direction has not earned 100 claims' worth of CI. Authorship attributes to m3taversal; the drafter record sits alongside as an audit trail. ### What the weights incentivize -The old weights (extractor at 0.25, equal to sourcer and challenger) incentivized volume because extraction was the easiest role to accumulate at scale. With equal weighting, an agent that extracted 100 claims earned the same per-unit CI as one that successfully challenged 5 — but the extractor could do it 20x faster. The bottleneck was throughput, not quality. +The Phase B taxonomy preserves the substantive weight structure from Phase A while solving the human/agent attribution problem. An agent producing claims at high throughput accumulates drafter records (zero CI) but moves CI to the human directing the work. This prevents the failure mode where AI typing speed compounds into CI dominance — the collective should reward human intellectual leadership, not agent token production. -The new weights incentivize: challenge existing claims, synthesize across domains, review carefully → high CI. This rewards the behaviors that make the knowledge base *better*, not just *bigger*. A contributor who challenges one claim and wins contributes more CI than one who extracts twenty claims from a source. +The substantive direction is the same: challenge existing claims, synthesize across domains, evaluate carefully → high CI. This rewards the behaviors that make the knowledge base *better*, not just *bigger*. A contributor who challenges one claim and wins contributes more CI than one who originates twenty sources. -This is deliberate: the system should reward quality over volume, depth over breadth, and improvement over accumulation. +This is deliberate: the system should reward quality over volume, depth over breadth, improvement over accumulation, and human intellectual authority over AI throughput. ## 2. Attribution Architecture @@ -83,21 +92,28 @@ Every position traces back through a chain of evidence: ``` Source material → Claim → Belief → Position ↑ ↑ ↑ ↑ - sourcer extractor synthesizer agent judgment - reviewer challenger + originator author synthesizer agent judgment + drafter challenger + evaluator ``` -Attribution records who contributed at each link. A claim's `source:` field traces to the original author. Its `attribution` block records who extracted, reviewed, challenged, and synthesized it. Beliefs cite claims. Positions cite beliefs.
The entire chain is traversable — from a public position back to the original evidence and every contributor who shaped it along the way. +Attribution records who contributed at each link. A claim's `source:` field traces to the originator (the entity that supplied the material). Its `attribution` block records who authored, drafted, evaluated, challenged, and synthesized it. Beliefs cite claims. Positions cite beliefs. The entire chain is traversable — from a public position back to the original evidence and every contributor who shaped it along the way. -### Three types of contributors +### Two kinds of contributor records -**1. Source authors (external):** The thinkers whose ideas the KB is built on. Nick Bostrom, Robin Hanson, metaproph3t, Dario Amodei, Matthew Ball. They contributed the raw intellectual material. Credited as **sourcer** (0.15 weight) — their work is the foundation even though they didn't interact with the system directly. Identified by parsing claim `source:` fields and matching against entity records. +The Phase B taxonomy collapses the old three-types framing into two kinds of contributor records — humans (which can be internal operators or external thinkers) and agents (which draft under a human principal). The role someone plays is independent of what kind of contributor they are. -*Change from v0:* reward-mechanism.md treated source authors as citation-only (referenced in evidence, not attributed). This understated their contribution — without their intellectual work, the claims wouldn't exist. The change to sourcer credit recognizes that identifying and producing the source material is real intellectual contribution, whether or not the author interacted with the system directly. The 0.15 weight is modest — it reflects that sourcing doesn't transform the material, but it does ground it. +**Humans.** Anyone with intellectual authority over a contribution. This includes: +- *Internal operators* — m3taversal, Alex, Cameron, future contributors who direct work or write directly. They can play any of the five weighted roles. +- *External thinkers* — Nick Bostrom, Robin Hanson, Daniel Schmachtenberger, Dario Amodei, Matthew Ball. They typically appear as **originators** when their work seeds claims. Identified by parsing claim `source:` fields and matching against entity records. -**2. Human operators (internal):** People who direct agents, review outputs, set research missions, and exercise governance authority. Credited across all five roles depending on their activity. Their agents' work rolls up to them via the **principal** mechanism (see below). +The schema captures this with `kind: "human"` and an optional `display_name`. Whether the human is internal or external is a function of activity, not a fixed type — an external thinker who starts contributing directly becomes an internal operator without a schema change. -**3. Agents (infrastructure):** AI agents that extract, synthesize, review, and evaluate. Credited individually for operational tracking, but their contributions attribute to their human **principal** for governance purposes. +**Agents.** AI systems that produce text under human direction. They appear in the contributor table with `kind: "agent"` and, outside of agent-originated research (covered below), operate in the **drafter** role (zero CI weight). Agents are tracked individually for accountability — every claim records which agent drafted it, on which model version, in which session — but CI attribution flows through their human principal to the **author** field.
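+
+A minimal sketch of the two record kinds as they might appear in the contributor table. The `kind`, `display_name`, and `principal` fields come from this document; the identifiers and everything else here are illustrative assumptions, not the deployed schema:
+
+```python
+# Hypothetical contributor records: one human, one agent (field values illustrative).
+human = {
+    "id": "m3taversal",
+    "kind": "human",              # internal operator or external thinker
+    "display_name": "m3taversal",
+    "principal": None,            # humans act as principals and have none themselves
+}
+
+agent = {
+    "id": "clay",
+    "kind": "agent",              # drafts under a human principal
+    "display_name": "clay",
+    "principal": "m3taversal",    # author CI from clay's drafts attributes here
+}
+```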
+ +*Why this matters.* Conflating agent execution with agent origination would credit agents for intellectual work that humans directed. The Phase B split makes the rule mechanical: agents draft, humans author. There is no path by which an AI agent earns CI for executing on human direction. + +*Where agents can earn CI.* When an agent does its own research from a session it initiated (not directed by a human), the resulting claims credit the agent as **originator**. The research initiation is the test — if a human asked for it, the human is the author and originator. If the agent surfaced the line of inquiry from its own context, the agent is the originator. This is the only path through which agents accumulate weighted CI. ### Principal-agent attribution @@ -111,13 +127,20 @@ Agent: clay → Principal: m3taversal Agent: theseus → Principal: m3taversal -**Governance CI** rolls up: m3taversal's CI = direct contributions + all agent contributions where `principal = m3taversal`. +**How CI flows under Phase B.** When an agent drafts a claim under human direction, two contribution events fire: + +1. The agent records as `drafter` (kind: agent, weight: 0.0) — accountability trail +2. The principal records as `author` (kind: human, weight: 0.05) — CI attribution + +Both rows exist in `contribution_events`; only the second moves the leaderboard. This is the mechanical implementation of "agents draft, humans author" — not a policy applied at display time, but the actual structure of what gets recorded. + +**Agent-originated work.** When an agent runs autonomous research (e.g. Theseus's Cornelius extraction sessions where Theseus chose what to read and what to extract), the agent records as `originator` on the resulting claims. This is the only path through which agents accumulate weighted CI, and it requires the research initiation itself to come from the agent rather than a human directive. **VPS infrastructure agents** (Epimetheus, Argus) have `principal = null`. They run autonomously on pipeline and monitoring tasks. Their work is infrastructure — it keeps the system running but doesn't produce knowledge. Infrastructure contributions are tracked separately and do not count toward governance CI. -**Why this matters for multiplayer:** When a second user joins with their own agents, their agents attribute to them. The principal mechanism scales without schema changes. Each human sees their full intellectual impact regardless of how many agents they employ. +**Why this matters for multiplayer:** When a second user joins with their own agents, their agents attribute to them. The principal mechanism scales without schema changes. Each human sees their full intellectual impact regardless of how many agents they employ. External contributors (Alex, Cameron, future participants) work the same way — they direct their own agents, and CI attributes to them as authors. -**Concentration risk:** Currently all agents roll up to a single principal (m3taversal). This is expected during bootstrap — the system has one operator. But as more humans join, the roll-up must distribute. No bounds are needed now because there is nothing to bound against; the mitigation is multiplayer adoption itself. If concentration persists after the system has 3+ active principals, that is a signal to review whether the principal mechanism is working as designed. +**Concentration risk:** Currently most CI rolls up to a single principal (m3taversal). This is expected during bootstrap — the system has one primary operator.
As more humans join, the roll-up distributes. No bounds are needed now because there is nothing to bound against; the mitigation is multiplayer adoption itself. The Phase B distinction between author and drafter is what makes this distribution legible — when Alex joins and directs his own agents, his author CI is visibly separate from m3taversal's, with no agent-side ambiguity. ### Commit-type classification @@ -130,34 +153,39 @@ Not all repository activity is knowledge contribution. The system distinguishes: Classification happens at merge time by checking which directories the PR touched. Files in `domains/`, `core/`, `foundations/`, `decisions/` = knowledge. Files in `inbox/`, `entities/` only = pipeline. -This prevents CI inflation from mechanical work. An agent that archives 100 sources earns zero CI. An agent that extracts 5 claims from those sources earns CI proportional to its role. +This prevents CI inflation from mechanical work. An agent that archives 100 sources earns zero CI. An agent that drafts 5 claims from those sources earns drafter records (zero CI to the agent) and the principal earns author CI proportional to authorship. ## 3. Pipeline Integration ### The extraction → eval → merge → attribution chain ``` -1. Source identified (sourcer credit) -2. Agent extracts claims on a branch (extractor credit) -3. PR opened against main -4. Tier-0 mechanical validation (schema, wiki links) -5. LLM evaluation (cross-domain + domain peer + self-review) -6. Reviewer approves or requests changes (reviewer credit) -7. PR merges -8. Post-merge: contributor table updated with role credits -9. Post-merge: claim embedded in Qdrant for semantic retrieval -10. Post-merge: source archive status updated +1. Source identified (originator credit — human or external entity) +2. Human directs research mission (author credit accrues to the human) +3. Agent drafts claims on a branch (drafter record — zero CI weight) +4. PR opened against main +5. Tier-0 mechanical validation (schema, wiki links) +6. LLM evaluation (cross-domain + domain peer + self-review) +7. Evaluator approves or requests changes (evaluator credit) +8. PR merges +9. Post-merge: writer-publisher gate fires contribution_events for every role played +10. Post-merge: claim embedded in Qdrant for semantic retrieval +11. Post-merge: source archive status updated ``` +For agent-originated work (where the agent initiated the line of inquiry rather than executing on a human directive), step 2 is skipped and the agent records as both originator and drafter. CI flows to the agent for origination; drafting remains zero-weighted. 
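+
+A minimal sketch of the event-emission rule above and a role-weighted aggregation over the resulting events. The role names and weights come from this document; the function names, row fields, and flat summation are illustrative assumptions, not the actual writer-publisher gate or `/api/leaderboard` implementation:
+
+```python
+# Role weights as defined in "Why these weights" above.
+ROLE_WEIGHTS = {
+    "challenger": 0.35,
+    "synthesizer": 0.25,
+    "evaluator": 0.20,
+    "originator": 0.15,
+    "author": 0.05,
+    "drafter": 0.00,  # tracked for accountability, never weighted
+}
+
+def events_for_draft(claim_id, agent, principal=None):
+    """Return one row per (claim, contributor, role); field names are hypothetical."""
+    if principal is not None:
+        # Human-directed draft: the agent records as drafter (zero weight),
+        # the directing human records as author (0.05). Two rows, one claim.
+        return [
+            {"claim": claim_id, "contributor": agent, "kind": "agent",
+             "role": "drafter", "weight": ROLE_WEIGHTS["drafter"]},
+            {"claim": claim_id, "contributor": principal, "kind": "human",
+             "role": "author", "weight": ROLE_WEIGHTS["author"]},
+        ]
+    # Agent-initiated research: step 2 above is skipped, so no author row;
+    # the agent records as originator (weighted) and drafter (zero weight).
+    return [
+        {"claim": claim_id, "contributor": agent, "kind": "agent",
+         "role": "originator", "weight": ROLE_WEIGHTS["originator"]},
+        {"claim": claim_id, "contributor": agent, "kind": "agent",
+         "role": "drafter", "weight": ROLE_WEIGHTS["drafter"]},
+    ]
+
+def leaderboard_ci(events):
+    """Sum weights per contributor directly from event rows, not cached counts."""
+    totals = {}
+    for e in events:
+        totals[e["contributor"]] = totals.get(e["contributor"], 0.0) + e["weight"]
+    return totals
+```
+
+Under these assumptions, `leaderboard_ci(events_for_draft("some-claim", agent="clay", principal="m3taversal"))` yields zero CI on the agent row and 0.05 on the principal: the "agents draft, humans author" rule expressed as recorded data rather than display-time policy.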
+ ### Where attribution data lives - **Git trailers** (`Pentagon-Agent: Rio `): who committed the change to the repository -- **Claim YAML** (`attribution:` block): who contributed what in which role on this specific claim -- **Claim YAML** (`source:` field): human-readable reference to the original source author -- **Pipeline DB** (`contributors` table): aggregated role counts, CI scores, principal relationships +- **Claim YAML** (`source:` field): human-readable reference to the original source/author/originator +- **Pipeline DB** (`contributors` table): contributor records with `kind: "human" | "agent"`, `display_name`, role counts, CI scores, principal relationships +- **Pipeline DB** (`contribution_events` table — Phase B canonical): one row per (claim, contributor, role) — the source of truth for CI computation - **Pentagon agent config**: principal mapping (which agents work for which humans) -These are complementary, not redundant. Git trailers answer "who made this commit." YAML attribution answers "who produced this knowledge." The contributors table answers "what is this person's total contribution." Pentagon config answers "who does this agent work for." +These are complementary, not redundant. Git trailers answer "who made this commit." `contribution_events` rows answer "who contributed in which role to this claim." The contributors table answers "what is this person's total contribution." Pentagon config answers "who does this agent work for." + +The Phase B writer-publisher gate enforces the structural rule at write time: every contribution_event row carries a role and a kind, and the synthesis layer (`/api/leaderboard`) computes CI directly from these events rather than from cached count columns. This is what makes the principal-agent attribution mechanical rather than policy-applied. ### Forgejo as source of truth @@ -190,13 +218,15 @@ The `principal` field supports this transition by being nullable. Setting `princ ### CI evolution roadmap -**v1 (current): Role-weighted CI.** Contribution scored by which roles you played. Incentivizes challenging, synthesizing, and reviewing over extracting. +**v1 (Phase A, retired): Role-weighted CI with single writer role.** Contribution scored by which roles you played, but humans and agents both attributed as extractors. Solved the volume-vs-quality incentive problem; left the human-vs-agent attribution problem unresolved. -**v2 (next): Outcome-weighted CI.** Did the challenge survive counter-attempts? Did the synthesis get cited by other claims? Did the extraction produce claims that passed review? Outcomes weight more than activity. Greater complexity earned, not designed. +**v2 (Phase B, current): Role-weighted CI with author/drafter split.** Same five weighted roles, plus drafter (zero weight) for AI-produced text. CI flows to humans directing the work; agents accumulate accountability records but not weighted contribution. Mechanically enforced by the writer-publisher gate at event-emission time. -**v3 (future): Usage-weighted CI.** Which claims actually get used in agent reasoning? How often? Contributions that produce frequently-referenced knowledge score higher than contributions that sit unread. This requires usage instrumentation infrastructure (claim_usage telemetry) currently being built. +**v3 (next): Outcome-weighted CI.** Did the challenge survive counter-attempts? Did the synthesis get cited by other claims? Did the authored claim pass review? Outcomes weight more than activity. Greater complexity earned, not designed. 
-Each layer adds a more accurate signal of real contribution value. The progression is: input → outcome → impact. +**v4 (future): Usage-weighted CI.** Which claims actually get used in agent reasoning? How often? Contributions that produce frequently-referenced knowledge score higher than contributions that sit unread. This requires usage instrumentation infrastructure (claim_usage telemetry) currently being built. + +Each layer adds a more accurate signal of real contribution value. The progression is: input → role → outcome → impact. ### Connection to LivingIP @@ -206,7 +236,7 @@ The attribution architecture ensures this loop is traceable. Every dollar of eco --- -*Architecture designed by Leo with input from Rhea (system architecture), Argus (data infrastructure), Epimetheus (pipeline integration), and Cory (governance direction). 2026-03-26.* +*Architecture designed by Leo with input from Rhea (system architecture), Argus (data infrastructure), Epimetheus (pipeline integration), and Cory (governance direction). Original 2026-03-26. Phase B taxonomy update 2026-04-28: author / drafter / originator / challenger / synthesizer / evaluator. Mechanically enforced by Epimetheus's writer-publisher gate at contribution_events emission.* ---