diff --git a/agents/clay/musings/ontology-simplification-rationale.md b/agents/clay/musings/ontology-simplification-rationale.md new file mode 100644 index 00000000..43fc7ba2 --- /dev/null +++ b/agents/clay/musings/ontology-simplification-rationale.md @@ -0,0 +1,95 @@ +--- +type: musing +agent: clay +title: "Ontology simplification — two-layer design rationale" +status: ready-to-extract +created: 2026-04-01 +updated: 2026-04-01 +--- + +# Why Two Layers: Contributor-Facing vs Agent-Internal + +## The Problem + +The codex has 11 schema types: attribution, belief, claim, contributor, conviction, divergence, entity, musing, position, sector, source. A new contributor encounters all 11 and must understand their relationships before contributing anything. + +This is backwards. The contributor's first question is "what can I do?" not "what does the system contain?" + +From the ontology audit (2026-03-26): Cory flagged that 11 concepts is too many. Entities and sectors generate zero CI. Musings, beliefs, positions, and convictions are agent-internal. A contributor touches at most 3 of the 11. + +## The Design + +**Contributor-facing layer: 3 concepts** + +1. **Claims** — what you know (assertions with evidence) +2. **Challenges** — what you dispute (counter-evidence against existing claims) +3. **Connections** — how things link (cross-domain synthesis) + +These three map to the highest-weighted contribution roles: +- Claims → Extractor (0.05) + Sourcer (0.15) = 0.20 +- Challenges → Challenger (0.35) +- Connections → Synthesizer (0.25) + +The remaining 0.20 (Reviewer) is earned through track record, not a contributor action. + +**Agent-internal layer: 11 concepts (unchanged)** + +All existing schemas remain. Agents use beliefs, positions, entities, sectors, musings, convictions, attributions, and divergences as before. These are operational infrastructure — they help agents do their jobs. 
+ +The key design principle: **contributors interact with the knowledge, agents manage the knowledge**. A contributor doesn't need to know what a "musing" is to challenge a claim. + +## Challenge as First-Class Schema + +The biggest gap in the current ontology: challenges have no schema. They exist as a `challenged_by: []` field on claims — unstructured strings with no evidence chain, no outcome tracking, no attribution. + +This contradicts the contribution architecture, which weights Challenger at 0.35 (highest). The most valuable contribution type has the least structural support. + +The new `schemas/challenge.md` gives challenges: +- A target claim (what's being challenged) +- A challenge type (refutation, boundary, reframe, evidence-gap) +- An outcome (open, accepted, rejected, refined) +- Their own evidence section +- Cascade impact analysis +- Full attribution + +This means: every challenge gets a written response. Every challenge has an outcome. Every successful challenge earns trackable CI credit. The incentive structure and the schema now align. + +## Structural Importance Score + +The second gap: no way to measure which claims matter most. A claim with 12 inbound references and 3 active challenges is more load-bearing than a claim with 0 references and 0 challenges. But both look the same in the schema. + +The `importance` field (0.0-1.0) is computed from: +- Inbound references (how many other claims depend on this one) +- Active challenges (contested claims are high-value investigation targets) +- Belief dependencies (how many agent beliefs cite this claim) +- Position dependencies (how many public positions trace through this claim) + +This feeds into CI: challenging an important claim earns more than challenging a trivial one. The pipeline computes importance; agents and contributors don't set it manually. 
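A sketch of the shape this computation could take — the component weights and the saturating normalization below are illustrative assumptions, not the pipeline's actual parameters:

```python
# Illustrative sketch only — weights and normalization are assumptions,
# not the pipeline's actual parameters.
def importance(inbound_refs: int, active_challenges: int,
               belief_deps: int, position_deps: int) -> float:
    """Combine the four structural signals into a 0.0-1.0 score."""
    raw = (1.0 * inbound_refs          # other claims that depend on this one
           + 2.0 * active_challenges   # contested claims are investigation targets
           + 1.5 * belief_deps         # agent beliefs citing this claim
           + 2.5 * position_deps)      # public positions tracing through it
    return raw / (raw + 10.0)          # saturates toward 1.0, never reaches it

# A claim with 12 inbound references and 3 active challenges is load-bearing;
# one with no references and no challenges scores zero:
print(round(importance(12, 3, 4, 1), 2))  # 0.73
print(importance(0, 0, 0, 0))             # 0.0
```

Whatever the real weights turn out to be, the property that matters is monotonicity: adding a reference, challenge, or dependency can only raise the score, and the score stays bounded in 0.0-1.0.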
+ +## What This Doesn't Change + +- No existing schema is removed or renamed +- No existing claims need modification (the `challenged_by` field is preserved during migration) +- Agent workflows are unchanged — they still use all 11 concepts +- The epistemology doc's four-layer model (evidence → claims → beliefs → positions) is unchanged +- Contribution weights are unchanged + +## Migration Path + +1. New challenges are filed as first-class objects (`type: challenge`) +2. Existing `challenged_by` strings are gradually converted to challenge objects +3. `importance` field is computed by pipeline and backfilled on existing claims +4. Contributor-facing documentation (`core/contributor-guide.md`) replaces the need for contributors to read individual schemas +5. No breaking changes — all existing tooling continues to work + +## Connection to Product Vision + +The Game (Cory's framing): "You vs. the current KB. Earn credit proportional to importance." + +The two-layer ontology makes this concrete: +- The contributor sees 3 moves: claim, challenge, connect +- Credit is proportional to difficulty (challenge > connection > claim) +- Importance score means challenging load-bearing claims earns more than challenging peripheral ones +- The contributor doesn't need to understand beliefs, positions, entities, sectors, or any agent-internal concept + +"Prove us wrong" requires exactly one schema that doesn't exist yet: `challenge.md`. This PR creates it. 
diff --git a/core/contributor-guide.md b/core/contributor-guide.md index 2a492d3e..4f417e68 100644 --- a/core/contributor-guide.md +++ b/core/contributor-guide.md @@ -1,66 +1,110 @@ -# Contributor Guide +--- +type: claim +domain: mechanisms +description: "Contributor-facing ontology reducing 11 internal concepts to 3 interaction primitives — claims, challenges, and connections — while preserving the full schema for agent operations" +confidence: likely +source: "Clay, ontology audit 2026-03-26, Cory-aligned" +created: 2026-04-01 +--- -Three concepts. That's it. +# The Three Things You Can Do -## Claims +The Teleo Codex is a knowledge base built by humans and AI agents working together. You don't need to understand the full system to contribute. There are exactly three things you can do, and each one makes the collective smarter. -A claim is a statement about how the world works, backed by evidence. +## 1. Make a Claim -> "Legacy media is consolidating into three dominant entities because debt-loaded incumbents cannot compete with cash-rich tech companies for content rights" +A claim is a specific, arguable assertion — something someone could disagree with. -Claims have confidence levels: proven, likely, experimental, speculative. Every claim cites its evidence. Every claim can be wrong. +**Good claim:** "Legacy media is consolidating into a Big Three oligopoly as debt-loaded studios merge and cash-rich tech competitors acquire the rest" -**Browse claims:** Look in `domains/{domain}/` — each domain has dozens of claims organized by topic. Start with whichever domain matches your expertise. +**Bad claim:** "The media industry is changing" (too vague — no one can disagree with this) -## Challenges +**The test:** "This note argues that [your claim]" must work as a sentence. If it does, it's a claim. -A challenge is a counter-argument against a specific claim. 
+**What you need:** +- A specific assertion (the title) +- Evidence supporting it (at least one source) +- A confidence level: how sure are you? + - **Proven** — strong evidence, independently verified + - **Likely** — good evidence, broadly accepted + - **Experimental** — emerging evidence, still being tested + - **Speculative** — theoretical, limited evidence -> "The AI content acceptance decline may be scope-bounded to entertainment — reference and analytical AI content shows no acceptance penalty" +**What happens:** An agent reviews your claim against the existing knowledge base. If it's genuinely new (not a near-duplicate), well-evidenced, and correctly scoped, it gets merged. You earn Extractor credit. -Challenges are the highest-value contribution. If you think a claim is wrong, too broad, or missing evidence, file a challenge. The claim author must respond — they can't ignore it. +## 2. Challenge a Claim -Three types: -- **Full challenge** — the claim is wrong, here's why -- **Scope challenge** — the claim is true in context X but not Y -- **Evidence challenge** — the evidence doesn't support the confidence level +A challenge argues that an existing claim is wrong, incomplete, or true only in certain contexts. This is the most valuable contribution — improving what we already believe is harder than adding something new. -**File a challenge:** Create a file in `domains/{domain}/challenge-{slug}.md` following the challenge schema, or tell an agent your counter-argument and they'll draft it for you. +**Four ways to challenge:** -## Connections +| Type | What you're saying | +|------|-------------------| +| **Refutation** | "This claim is wrong — here's counter-evidence" | +| **Boundary** | "This claim is true in context A but not context B" | +| **Reframe** | "The conclusion is roughly right but the mechanism is wrong" | +| **Evidence gap** | "This claim asserts more than the evidence supports" | -Connections are the links between claims. 
When claim A depends on claim B, or challenges claim C, those relationships form a knowledge graph. +**What you need:** +- An existing claim to target +- Counter-evidence or a specific argument +- A proposed resolution — what should change if you're right? -You don't create connections as standalone files — they emerge from wiki links (`[[claim-name]]`) in claim and challenge bodies. But spotting a connection no one else has seen is a genuine contribution. Cross-domain connections (a pattern in entertainment that also appears in finance) are the most valuable. +**What happens:** The domain agent who owns the target claim must respond. Your challenge is never silently ignored. Three outcomes: +- **Accepted** — the claim gets modified. You earn full Challenger credit (highest weight in the system). +- **Rejected** — your counter-evidence was evaluated and found insufficient. You still earn partial credit — the attempt itself has value. +- **Refined** — the claim gets sharpened. Both you and the original author benefit. -**Spot a connection:** Tell an agent. They'll draft the cross-reference and attribute you. +## 3. Make a Connection + +A connection links claims across domains that illuminate each other — insights that no single specialist would see. + +**What counts as a connection:** +- Two claims in different domains that share a mechanism (not just a metaphor) +- A pattern in one domain that explains an anomaly in another +- Evidence from one field that strengthens or weakens a claim in another + +**What doesn't count:** +- Surface-level analogies ("X is like Y") +- Two claims that happen to mention the same entity +- Restating a claim in different domain vocabulary + +**The test:** Does this connection produce a new insight that neither claim alone provides? If removing either claim makes the connection meaningless, it's real. + +**What happens:** Connections surface as cross-domain synthesis or divergences (when the linked claims disagree). 
You earn Synthesizer credit. --- -## What You Don't Need to Know - -The system has 11 internal concept types (beliefs, positions, convictions, entities, sectors, sources, divergences, musings, attribution, contributors). Agents use these to organize their reasoning, track companies, and manage their workflow. - -You don't need to learn any of them. Claims, challenges, and connections are the complete interface for contributors. Everything else is infrastructure. - ## How Credit Works -Every contribution is attributed. Your name stays on everything you produce or improve. The system tracks five roles: +Every contribution earns credit proportional to its difficulty and impact: -| Role | What you did | -|------|-------------| -| Sourcer | Pointed to material worth analyzing | -| Extractor | Turned source material into a claim | -| Challenger | Filed counter-evidence against a claim | -| Synthesizer | Connected claims across domains | -| Reviewer | Evaluated claim quality | +| Role | Weight | What earns it | +|------|--------|---------------| +| Challenger | 0.35 | Successfully challenging or refining an existing claim | +| Synthesizer | 0.25 | Connecting claims across domains | +| Reviewer | 0.20 | Evaluating claim quality (agent role, earned through track record) | +| Sourcer | 0.15 | Identifying source material worth analyzing | +| Extractor | 0.05 | Writing a new claim from source material | -You can hold multiple roles on the same claim. Credit is proportional to impact — a challenge that changes a high-importance claim earns more than a new speculative claim in an empty domain. +Credit accumulates into your Contribution Index (CI). Higher CI earns more governance authority — the people who made the knowledge base smarter have more say in its direction. 
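As a rough illustration only — the actual CI formula lives in [[contribution-architecture]] — credit behaves like the role weight from the table above, scaled by the importance of the claim you touched:

```python
# Illustrative only — the real CI formula is defined in contribution-architecture.
# Role weights are the documented values; the importance scaling is an assumption.
ROLE_WEIGHTS = {
    "challenger": 0.35,
    "synthesizer": 0.25,
    "reviewer": 0.20,
    "sourcer": 0.15,
    "extractor": 0.05,
}

def contribution_index(contributions):
    """contributions: (role, claim_importance) pairs for merged work."""
    return sum(ROLE_WEIGHTS[role] * imp for role, imp in contributions)

# One accepted challenge against a load-bearing claim (importance 0.8)
# outweighs one new claim in a sparse domain (importance 0.2):
print(round(contribution_index([("challenger", 0.8)]), 2))  # 0.28
print(round(contribution_index([("extractor", 0.2)]), 2))   # 0.01
```

The point of the sketch: the same action earns more against an important claim, and the highest-weighted role compounds that further.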
-## Getting Started +**Tier progression:** +- **Visitor** — no contributions yet +- **Contributor** — 1+ merged contribution +- **Veteran** — 10+ merged contributions AND at least one surviving challenge or belief influence -1. **Browse:** Pick a domain. Read 5-10 claims. Find one you disagree with or know something about. -2. **React:** Tell an agent your reaction. They'll help you figure out if it's a challenge, a new claim, or a connection. -3. **Approve:** The agent drafts; you review and approve before anything gets published. +## What You Don't Need to Know -Nothing enters the knowledge base without your explicit approval. The conversation itself is valuable even if you never file anything. +The system has 11 internal concept types that agents use to organize their work (beliefs, positions, entities, sectors, musings, convictions, attributions, divergences, sources, contributors, and claims). You don't need to learn these. They exist so agents can do their jobs — evaluate evidence, form beliefs, take positions, track the world. + +As a contributor, you interact with three: **claims**, **challenges**, and **connections**. Everything else is infrastructure. + +--- + +Relevant Notes: +- [[contribution-architecture]] — full attribution mechanics and CI formula +- [[epistemology]] — the four-layer knowledge model (evidence → claims → beliefs → positions) + +Topics: +- [[overview]] diff --git a/schemas/challenge.md b/schemas/challenge.md index 097f083c..ffdbf5a4 100644 --- a/schemas/challenge.md +++ b/schemas/challenge.md @@ -1,36 +1,31 @@ # Challenge Schema -Challenges are first-class counter-arguments or counter-evidence against specific claims. They are the primary contribution mechanism for new participants — "prove us wrong" is the entry point. +A challenge is a structured argument that an existing claim is wrong, incomplete, or bounded in ways the claim doesn't acknowledge. 
Challenges are the highest-weighted contribution type (0.35) because improving existing knowledge is harder and more valuable than adding new knowledge. -Challenges differ from divergences: -- **Challenge:** One person's counter-argument against one claim. An action. -- **Divergence:** Two or more claims in tension within the KB. A structural observation. +Challenges were previously tracked as a `challenged_by` field on claims — a list of strings with no structure. This schema makes challenges first-class objects with their own evidence, outcomes, and attribution. -A challenge can trigger a divergence if it produces a new competing claim. But most challenges sharpen existing claims rather than creating new ones. +## Where They Live -## Why Challenges Are First-Class - -Without a standalone schema, challenges are metadata buried in claim files (`challenged_by` field, `## Challenges` section). This means: -- No attribution for challengers — the highest-value contributor action has no credit path -- No independent evidence chain — counter-evidence is subordinate to the claim it challenges -- No linking — other claims can't reference a challenge -- No tracking — open challenges aren't discoverable as a class - -Making challenges first-class gives them attribution, evidence chains, independent linking, and discoverability. This is the schema that makes "prove us wrong" operational. +`domains/{domain}/challenge-{slug}.md` — alongside the claims they target. The slug should describe the challenge, not the target claim.
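For example:

```
domains/
  entertainment/
    challenge-ai-acceptance-decline-may-be-scope-bounded-to-entertainment.md
  internet-finance/
    challenge-futarchy-manipulation-resistance-assumes-liquid-markets.md
```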
## YAML Frontmatter ```yaml --- type: challenge -target: "claim-filename-slug" # which claim this challenges (filename without .md) +target_claim: "filename of the claim being challenged (without .md)" domain: internet-finance | entertainment | health | ai-alignment | space-development | energy | manufacturing | robotics | grand-strategy | mechanisms | living-capital | living-agents | teleohumanity | critical-systems | collective-intelligence | teleological-economics | cultural-dynamics -description: "one sentence capturing the counter-argument" -status: open | addressed | accepted | rejected -strength: strong | moderate | weak -source: "who raised this challenge and key counter-evidence" +description: "one sentence stating what this challenge argues" +challenge_type: refutation | boundary | reframe | evidence-gap +status: open | accepted | rejected | refined +confidence: proven | likely | experimental | speculative +source: "who raised this challenge and primary counter-evidence" created: YYYY-MM-DD -resolved: null # YYYY-MM-DD when status changes from open +last_evaluated: YYYY-MM-DD +attribution: + challenger: + handle: "" + agent_id: "" --- ``` @@ -39,141 +34,79 @@ resolved: null # YYYY-MM-DD when status changes from open | Field | Type | Description | |-------|------|-------------| | type | enum | Always `challenge` | -| target | string | Filename slug of the claim being challenged | -| domain | enum | Domain of the target claim | -| description | string | The counter-argument in one sentence (~150 chars) | -| status | enum | `open` (unresolved), `addressed` (target claim updated to acknowledge), `accepted` (target claim modified or confidence changed), `rejected` (counter-evidence insufficient, with explanation) | -| strength | enum | `strong` (direct counter-evidence), `moderate` (plausible alternative explanation or scope limitation), `weak` (edge case or theoretical objection). 
Strength reflects how compelling the counter-argument is, not how confident we are in the target claim. | -| source | string | Attribution — who raised this, key counter-evidence | +| target_claim | string | Filename of the claim being challenged | +| domain | enum | Primary domain (usually matches target claim's domain) | +| description | string | What this challenge argues (~150 chars) | +| challenge_type | enum | See challenge types below | +| status | enum | `open` (under review), `accepted` (claim modified), `rejected` (counter-evidence insufficient), `refined` (claim sharpened but not overturned) | +| confidence | enum | How strong the counter-evidence is | +| source | string | Attribution — who raised the challenge, key counter-evidence | | created | date | When filed | +| last_evaluated | date | When this challenge was last evaluated against new evidence | +| attribution | object | Role-specific contributor tracking — see `schemas/attribution.md` | -## Optional Fields +## Challenge Types -| Field | Type | Description | -|-------|------|-------------| -| resolved | date | When status changed from `open` | -| resolution_summary | string | One sentence: how was this resolved? | -| attribution | object | Role-specific contributor tracking (see `schemas/attribution.md`) | - -## Status Transitions - -| Transition | What it means | Who decides | -|-----------|--------------|-------------| -| open → addressed | Target claim updated its Challenges section to acknowledge this counter-evidence | Claim author + reviewer | -| open → accepted | Target claim changed confidence, scope, or wording based on this challenge | Claim author + reviewer | -| open → rejected | Counter-evidence evaluated and found insufficient — rejection reasoning documented | Reviewer (Leo + domain peer) | -| addressed → accepted | Acknowledgment led to actual claim modification | Claim author + reviewer | - -**Key rule:** Rejecting a challenge requires explanation. The rejection reasoning lives in the challenge file's Resolution section, not just a status flip. This is what makes the system intellectually honest — you can't silently dismiss counter-evidence.
- -## Title Format - -Challenge titles state the counter-argument as a prose proposition, prefixed with the target claim context. - -**Good:** "the AI content acceptance decline claim may be scope-bounded to entertainment because reference and analytical AI content shows no acceptance penalty" -**Bad:** "challenge to AI acceptance claim" - -**The challenge test:** "This note argues against [target claim] because [title]" must work as a sentence. +| Type | What it means | Example | +|------|--------------|---------| +| **refutation** | The claim is wrong — counter-evidence contradicts it | "Claim says X outperforms Y, but this study shows Y outperforms X under realistic conditions" | +| **boundary** | The claim is true in some contexts but not others — it needs scope limits | "AI acceptance declining" is true for entertainment but not for reference/analytical content | +| **reframe** | The claim's mechanism is wrong even if the conclusion is approximately right | "The effect is real but it's driven by selection bias, not the causal mechanism the claim proposes" | +| **evidence-gap** | The claim asserts more than the evidence supports | "n=1 case study doesn't support a general claim about market dynamics" | ## Body Format ```markdown -# [counter-argument as prose] +# [challenge title — what this argues] -## Target Claim -[[target-claim-filename]] — [one sentence summary of what the target claims] +**Target:** [[target-claim-filename]] -**Current confidence:** [target claim's confidence level] +[Argument — why the target claim is wrong, incomplete, or bounded. This must be specific enough to evaluate.] ## Counter-Evidence +- counter-evidence-1 — what it shows and why it undermines the target claim +- counter-evidence-2 — what it shows -[The argument and evidence against the target claim. This is the substance — why is the claim wrong, incomplete, or mis-scoped?] 
+## What Would Resolve This +[Specific evidence or analysis that would determine whether this challenge holds. This is the research agenda.] -- [evidence source 1] — what it shows -- [evidence source 2] — what it shows +## Proposed Resolution +[How the target claim should change if this challenge is accepted. Options: retract, downgrade confidence, add boundary conditions, reframe mechanism.] -## Scope of Challenge - -[Is this challenging the entire claim, or a specific scope/boundary condition?] - -- **Full challenge:** The claim is wrong — here's why -- **Scope challenge:** The claim is true in context X but not in context Y — the scope is too broad -- **Evidence challenge:** The claim's evidence doesn't support its confidence level - -## What This Would Change - -[If accepted, what happens downstream? Which beliefs and positions depend on the target claim?] - -- [[dependent-belief-or-position]] — how it would be affected -- [[related-claim]] — how it would need updating - -## Resolution - -[Filled in when status changes from open. Documents how the challenge was resolved.] - -**Status:** open | addressed | accepted | rejected -**Resolved:** YYYY-MM-DD -**Summary:** [one sentence] +## Cascade Impact +[What beliefs and positions depend on the target claim? What changes if the claim is modified?] --- Relevant Notes: -- [[related-claim]] — relationship -- [[divergence-file]] — if this challenge created or connects to a divergence +- [[target-claim]] — the claim under challenge +- [[related-claim]] — related evidence or claims Topics: -- [[domain-map]] +- [[domain-topic-map]] ``` ## Governance -- **Who can file:** Any contributor, any agent. Challenges are the primary entry point for new participants. -- **Review:** Leo + domain peer review for quality (is the counter-evidence real? is the scope of challenge clear?). Low bar for filing — the quality gate is on the evidence, not the right to challenge. -- **Resolution:** The claim author must respond to the challenge. 
They can update the claim (accepted), acknowledge without changing (addressed), or reject with documented reasoning (rejected). They cannot ignore it. -- **Attribution:** Challengers get full attribution. In the contribution scoring system, successful challenges (accepted) are weighted higher than new claims because they improve existing knowledge rather than just adding to it. - -## Filing Convention - -**Location:** `domains/{domain}/challenge-{slug}.md` - -The slug should be descriptive of the counter-argument, not the target claim. - -``` -domains/ - entertainment/ - challenge-ai-acceptance-decline-may-be-scope-bounded-to-entertainment.md - challenge-zero-sum-framing-needs-centaur-creator-category.md - internet-finance/ - challenge-futarchy-manipulation-resistance-assumes-liquid-markets.md -``` +- **Who can propose:** Any contributor, any agent. Challenges are the most valuable contribution type. +- **Review process:** Leo assigns evaluation. The domain agent who owns the target claim must respond. At least one other domain agent reviews. The challenger gets a response — challenges are never silently ignored. +- **Outcomes:** + - `accepted` → target claim is modified (confidence downgrade, scope narrowed, or retracted). Challenger earns full CI credit (0.35 weight). + - `rejected` → counter-evidence evaluated and found insufficient. Challenge stays in KB as record. Challenger earns partial CI credit (the attempt has value even when wrong). + - `refined` → target claim is sharpened or clarified but not overturned. Both challenger and claim author benefit — the claim is now better. Challenger earns full CI credit. +- **No silent rejection:** Every challenge receives a written response explaining why it was accepted, rejected, or led to refinement. This is non-negotiable — it's what makes the system trustworthy. ## Quality Checks 1. Target claim exists and is correctly referenced -2. Counter-evidence is specific and traceable (not "I think it's wrong") -3. 
Scope of challenge is explicit (full, scope, or evidence challenge) -4. Strength rating matches the evidence quality -5. "What This Would Change" section identifies real downstream dependencies -6. The challenge is genuinely novel — not restating a known limitation already in the target claim's Challenges section +2. Challenge type matches the actual argument (a boundary challenge isn't a refutation) +3. Counter-evidence is cited, not just asserted +4. Proposed resolution is specific enough to implement +5. Description adds information beyond restating the target claim +6. Not a duplicate of an existing challenge against the same claim -## Relationship to Existing Challenge Tracking +## Relationship to Divergences -The `challenged_by` field in claim frontmatter and the `## Challenges` section in claim bodies continue to exist. When a challenge file is created: +A challenge targets one specific claim. A divergence links 2-5 claims that disagree with each other. When two claims have active challenges that point toward each other, that's a signal to create a divergence linking both. Challenges are the atoms; divergences are the molecules. -1. The target claim's `challenged_by` field should be updated to include the challenge filename -2. The target claim's `## Challenges` section should reference the challenge file for full detail -3. The challenge file is the canonical location for the counter-argument — the claim file just points to it +## Migration from `challenged_by` Field -This is additive, not breaking. Existing claims with inline challenges continue to work. The challenge schema provides a proper home for counter-arguments that deserve independent tracking and attribution. - -## How Challenges Feed the Game - -Challenges are the primary game mechanic for contributors: - -1. **Discovery:** Contributors browse claims and find ones they disagree with -2. **Filing:** They file a challenge with counter-evidence -3. 
**Resolution:** The claim author and reviewers evaluate the challenge -4. **Credit:** Accepted challenges earn attribution proportional to the cascade impact of the change they produced -5. **Divergence creation:** If a challenge produces a genuine competing claim, it may spawn a divergence — the highest-value knowledge structure in the system - -The importance of a challenge is measured by the importance of the claim it targets and the downstream dependencies that would change if the challenge is accepted. This connects directly to the structural importance scoring of the knowledge graph. +Existing claims use `challenged_by: []` in frontmatter to list challenges as strings. This field is preserved for backward compatibility during migration. New challenges should be filed as first-class challenge objects. Over time, string-based `challenged_by` entries will be converted to challenge objects and the field will reference filenames instead of prose descriptions. diff --git a/schemas/claim.md b/schemas/claim.md index 49841f73..ef4460e9 100644 --- a/schemas/claim.md +++ b/schemas/claim.md @@ -15,7 +15,6 @@ created: YYYY-MM-DD last_evaluated: YYYY-MM-DD depends_on: [] # list of evidence and claim titles this builds on challenged_by: [] # list of counter-evidence or counter-claims -importance: null # computed by pipeline — null until pipeline support is implemented --- ``` @@ -36,10 +35,10 @@ importance: null # computed by pipeline — null until pipeline support is impl |-------|------|-------------| | last_evaluated | date | When this claim was last reviewed against new evidence | | depends_on | list | Evidence and claims this builds on (the reasoning chain) | -| challenged_by | list | Challenge filenames or inline counter-evidence. When a first-class challenge file exists (see `schemas/challenge.md`), reference the filename. Inline descriptions are still valid for minor objections that don't warrant a standalone file. 
| +| challenged_by | list | Filenames of challenge objects targeting this claim (see `schemas/challenge.md`). Legacy: may contain prose strings from pre-challenge-schema era | | secondary_domains | list | Other domains this claim is relevant to | | attribution | object | Role-specific contributor tracking — see `schemas/attribution.md` | -| importance | float/null | Structural importance score (0.0–1.0). Computed by pipeline from downstream dependencies, active challenges, and cross-domain linkage. Default `null` — do not set manually. See Structural Importance section below. | +| importance | number | Structural importance score (0.0-1.0). Computed from: inbound references from other claims, active challenges, belief dependencies, position dependencies. Higher = more load-bearing in the KB. Computed by pipeline, not set manually | ## Governance @@ -80,15 +79,6 @@ Topics: - domain-topic-map ``` -## Structural Importance - -A claim's importance in the knowledge graph is determined by: -1. **Downstream dependencies** — how many beliefs, positions, and other claims depend on this claim via `depends_on` -2. **Active challenges** — contested claims are more important than uncontested ones (they're where the knowledge frontier is) -3. **Cross-domain linkage** — claims referenced from multiple domains carry higher structural importance - -Importance is computed by the pipeline and written to the `importance` frontmatter field. Until pipeline support is implemented, this field defaults to `null` — agents should not set it manually. See `extract-graph-data.py` for the planned computation. The importance score determines contribution credit — challenging a high-importance claim earns more than challenging a low-importance one. - ## Quality Checks 1. Title passes the claim test (specific enough to disagree with)