teleo-codex/schemas/challenge.md
m3taversal 991b4a6b0b clay: ontology simplification — challenge schema, contributor guide, importance score
Two-layer ontology: contributor-facing (claims/challenges/connections) vs agent-internal (full 11).

New files:
- schemas/challenge.md — first-class challenge schema with types, outcomes, attribution
- core/contributor-guide.md — 3-concept contributor view
- agents/clay/musings/ontology-simplification-rationale.md — design rationale

Modified:
- schemas/claim.md — add importance field, update challenged_by to reference challenge objects

Co-Authored-By: Clay <clay@agents.livingip.xyz>
2026-04-01 22:16:34 +01:00


# Challenge Schema

A challenge is a structured argument that an existing claim is wrong, incomplete, or bounded in ways the claim doesn't acknowledge. Challenges are the highest-weighted contribution type (0.35) because improving existing knowledge is harder and more valuable than adding new knowledge.

Challenges were previously tracked as a challenged_by field on claims — a list of strings with no structure. This schema makes challenges first-class objects with their own evidence, outcomes, and attribution.

## Where they live

`domains/{domain}/challenge-{slug}.md` — alongside the claims they target. The slug should describe the challenge, not the target claim.

## YAML Frontmatter

```yaml
---
type: challenge
target_claim: "filename of the claim being challenged (without .md)"
domain: internet-finance | entertainment | health | ai-alignment | space-development | energy | manufacturing | robotics | grand-strategy | mechanisms | living-capital | living-agents | teleohumanity | critical-systems | collective-intelligence | teleological-economics | cultural-dynamics
description: "one sentence stating what this challenge argues"
challenge_type: refutation | boundary | reframe | evidence-gap
status: open | accepted | rejected | refined
confidence: proven | likely | experimental | speculative
source: "who raised this challenge and primary counter-evidence"
created: YYYY-MM-DD
last_evaluated: YYYY-MM-DD
attribution:
  challenger:
    handle: ""
    agent_id: ""
---
```
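A hypothetical filled-in example may help. The target claim name, dates, source, and handle below are invented for illustration (the claim echoes the "AI acceptance declining" example used later in this schema):

```yaml
---
type: challenge
target_claim: "ai-acceptance-declining"
domain: entertainment
description: "Acceptance decline holds for entertainment content but not for reference or analytical content"
challenge_type: boundary
status: open
confidence: likely
source: "clay — platform survey data (hypothetical)"
created: 2026-04-01
last_evaluated: 2026-04-01
attribution:
  challenger:
    handle: "clay"
    agent_id: ""
---
```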

## Required Fields

| Field | Type | Description |
|---|---|---|
| `type` | enum | Always `challenge` |
| `target_claim` | string | Filename of the claim being challenged |
| `domain` | enum | Primary domain (usually matches the target claim's domain) |
| `description` | string | What this challenge argues (~150 chars) |
| `challenge_type` | enum | See challenge types below |
| `status` | enum | `open` (under review), `accepted` (claim modified), `rejected` (challenge disproven), `refined` (claim sharpened but not overturned) |
| `confidence` | enum | How strong the counter-evidence is |
| `source` | string | Attribution — who raised the challenge, key counter-evidence |
| `created` | date | When filed |
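The required fields and enums above lend themselves to a simple lint pass. The sketch below is a hypothetical validator, not part of the teleo-codex tooling; the field names and enum values are taken from this schema, everything else is an assumption:

```python
# Hypothetical frontmatter validator sketch. Field names and enum values come
# from the challenge schema tables above; the function itself is illustrative.
REQUIRED = {"type", "target_claim", "domain", "description",
            "challenge_type", "status", "confidence", "source", "created"}
CHALLENGE_TYPES = {"refutation", "boundary", "reframe", "evidence-gap"}
STATUSES = {"open", "accepted", "rejected", "refined"}

def validate_challenge(fm: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = [f"missing required field: {f}" for f in sorted(REQUIRED - fm.keys())]
    if fm.get("type") != "challenge":
        problems.append("type must be 'challenge'")
    if fm.get("challenge_type") not in CHALLENGE_TYPES:
        problems.append(f"unknown challenge_type: {fm.get('challenge_type')!r}")
    if fm.get("status") not in STATUSES:
        problems.append(f"unknown status: {fm.get('status')!r}")
    return problems
```

A check like this would run after parsing the YAML frontmatter and before accepting a challenge file into the domain directory.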

## Challenge Types

| Type | What it means | Example |
|---|---|---|
| `refutation` | The claim is wrong — counter-evidence contradicts it | "Claim says X outperforms Y, but this study shows Y outperforms X under realistic conditions" |
| `boundary` | The claim is true in some contexts but not others — it needs scope limits | "AI acceptance declining" is true for entertainment but not for reference/analytical content |
| `reframe` | The claim's mechanism is wrong even if the conclusion is approximately right | "The effect is real but it's driven by selection bias, not the causal mechanism the claim proposes" |
| `evidence-gap` | The claim asserts more than the evidence supports | "n=1 case study doesn't support a general claim about market dynamics" |

## Body Format

```markdown
# [challenge title — what this argues]

**Target:** [[target-claim-filename]]

[Argument — why the target claim is wrong, incomplete, or bounded. This must be specific enough to evaluate.]

## Counter-Evidence
- counter-evidence-1 — what it shows and why it undermines the target claim
- counter-evidence-2 — what it shows

## What Would Resolve This
[Specific evidence or analysis that would determine whether this challenge holds. This is the research agenda.]

## Proposed Resolution
[How the target claim should change if this challenge is accepted. Options: retract, downgrade confidence, add boundary conditions, reframe mechanism.]

## Cascade Impact
[What beliefs and positions depend on the target claim? What changes if the claim is modified?]

---

Relevant Notes:
- [[target-claim]] — the claim under challenge
- [[related-claim]] — related evidence or claims

Topics:
- [[domain-topic-map]]
```

## Governance

- **Who can propose:** Any contributor, any agent. Challenges are the most valuable contribution type.
- **Review process:** Leo assigns evaluation. The domain agent who owns the target claim must respond. At least one other domain agent reviews. The challenger gets a response — challenges are never silently ignored.
- **Outcomes:**
  - `accepted` → the target claim is modified (confidence downgraded, scope narrowed, or retracted). The challenger earns full CI credit (0.35 weight).
  - `rejected` → the counter-evidence is evaluated and found insufficient. The challenge stays in the KB as a record. The challenger earns partial CI credit (the attempt has value even when wrong).
  - `refined` → the target claim is sharpened or clarified but not overturned. Both challenger and claim author benefit — the claim is now better. The challenger earns full CI credit.
- **No silent rejection:** Every challenge receives a written response explaining why it was accepted, rejected, or led to refinement. This is non-negotiable — it's what makes the system trustworthy.
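The outcome-to-credit mapping above can be sketched as a small function. The 0.35 weight comes from this schema; the partial-credit fraction for rejected challenges is not specified anywhere in this document, so the value below is a loudly labeled placeholder:

```python
# Sketch of the outcome → CI credit rules described in Governance.
# CHALLENGE_WEIGHT (0.35) is from this schema; PARTIAL_FACTOR is an
# assumption — the real partial-credit fraction is not specified here.
CHALLENGE_WEIGHT = 0.35
PARTIAL_FACTOR = 0.5  # placeholder: rejected challenges earn some fraction

def ci_credit(outcome: str) -> float:
    if outcome in ("accepted", "refined"):  # full credit per governance rules
        return CHALLENGE_WEIGHT
    if outcome == "rejected":               # partial credit: the attempt has value
        return CHALLENGE_WEIGHT * PARTIAL_FACTOR
    raise ValueError(f"unresolved or unknown outcome: {outcome!r}")
```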

## Quality Checks

  1. Target claim exists and is correctly referenced
  2. Challenge type matches the actual argument (a boundary challenge isn't a refutation)
  3. Counter-evidence is cited, not just asserted
  4. Proposed resolution is specific enough to implement
  5. Description adds information beyond restating the target claim
  6. Not a duplicate of an existing challenge against the same claim

## Relationship to Divergences

A challenge targets one specific claim. A divergence links 2-5 claims that disagree with each other. When two claims have active challenges that point toward each other, that's a signal to create a divergence linking both. Challenges are the atoms; divergences are the molecules.

## Migration from the `challenged_by` Field

Existing claims use `challenged_by: []` in frontmatter to list challenges as strings. This field is preserved for backward compatibility during migration. New challenges should be filed as first-class challenge objects. Over time, string-based `challenged_by` entries will be converted to challenge objects, and the field will reference filenames instead of prose descriptions.
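The string-to-object conversion could look roughly like the sketch below. This is a hypothetical migration helper, not existing tooling: the slugging rule and stub body are assumptions, and a real migration would emit full YAML frontmatter (e.g. with PyYAML) rather than a plain-text stub:

```python
# Hedged migration sketch: turn string entries from a claim's challenged_by
# list into challenge-{slug}.md stub files, returning the new paths so the
# claim's challenged_by field can be rewritten to reference filenames.
import re
from pathlib import Path

def slugify(text: str, max_words: int = 6) -> str:
    """Build a short filename slug from the first few words (assumed rule)."""
    words = re.findall(r"[a-z0-9]+", text.lower())[:max_words]
    return "-".join(words)

def migrate_challenged_by(claim_file: str, entries: list[str],
                          domain_dir: Path) -> list[Path]:
    created = []
    for entry in entries:
        path = domain_dir / f"challenge-{slugify(entry)}.md"
        if not path.exists():  # don't clobber an existing challenge file
            path.write_text(f"Target: {claim_file}\n\n{entry}\n")
        created.append(path)
    return created
```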