clay: ontology simplification — challenge schema + contributor guide

Two-layer ontology: contributor-facing (3 concepts: claims, challenges,
connections) vs agent-internal (11 concepts). From 2026-03-26 ontology audit.

New files:
- schemas/challenge.md — first-class challenge type with strength rating,
  evidence chains, resolution tracking, and attribution
- core/contributor-guide.md — 3-concept contributor view (no frontmatter,
  pure documentation)

Modified files:
- schemas/claim.md — importance: null field (pipeline-computed, not manual),
  challenged_by accepts challenge filenames, structural importance section
  clarified as aspirational until pipeline ships
- ops/schema-change-protocol.md — challenge added to producer/consumer map

Schema Change:
Format affected: claim (modified), challenge (new)
Backward compatible: yes
Migration: none needed

Pentagon-Agent: Clay <3D549D4C-0129-4008-BF4F-FDD367C1D184>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
commit 89c8e652f2 (parent 1c40e07e0a)
m3taversal 2026-04-01 22:27:21 +01:00
4 changed files with 258 additions and 1 deletion

core/contributor-guide.md (new file, 66 lines)

@@ -0,0 +1,66 @@
# Contributor Guide
Three concepts. That's it.
## Claims
A claim is a statement about how the world works, backed by evidence.
> "Legacy media is consolidating into three dominant entities because debt-loaded incumbents cannot compete with cash-rich tech companies for content rights"
Claims have confidence levels: proven, likely, experimental, speculative. Every claim cites its evidence. Every claim can be wrong.
**Browse claims:** Look in `domains/{domain}/` — each domain has dozens of claims organized by topic. Start with whichever domain matches your expertise.
## Challenges
A challenge is a counter-argument against a specific claim.
> "The AI content acceptance decline may be scope-bounded to entertainment — reference and analytical AI content shows no acceptance penalty"
Challenges are the highest-value contribution. If you think a claim is wrong, too broad, or missing evidence, file a challenge. The claim author must respond — they can't ignore it.
Three types:
- **Full challenge** — the claim is wrong, here's why
- **Scope challenge** — the claim is true in context X but not Y
- **Evidence challenge** — the evidence doesn't support the confidence level
**File a challenge:** Create a file in `domains/{domain}/challenge-{slug}.md` following the challenge schema, or tell an agent your counter-argument and they'll draft it for you.
## Connections
Connections are the links between claims. When claim A depends on claim B, or challenges claim C, those relationships form a knowledge graph.
You don't create connections as standalone files — they emerge from wiki links (`[[claim-name]]`) in claim and challenge bodies. But spotting a connection no one else has seen is a genuine contribution. Cross-domain connections (a pattern in entertainment that also appears in finance) are the most valuable.
**Spot a connection:** Tell an agent. They'll draft the cross-reference and attribute you.
---
## What You Don't Need to Know
The system has 11 internal concept types (beliefs, positions, convictions, entities, sectors, sources, divergences, musings, attribution, contributors). Agents use these to organize their reasoning, track companies, and manage their workflow.
You don't need to learn any of them. Claims, challenges, and connections are the complete interface for contributors. Everything else is infrastructure.
## How Credit Works
Every contribution is attributed. Your name stays on everything you produce or improve. The system tracks five roles:
| Role | What you did |
|------|-------------|
| Sourcer | Pointed to material worth analyzing |
| Extractor | Turned source material into a claim |
| Challenger | Filed counter-evidence against a claim |
| Synthesizer | Connected claims across domains |
| Reviewer | Evaluated claim quality |
You can hold multiple roles on the same claim. Credit is proportional to impact — a challenge that changes a high-importance claim earns more than a new speculative claim in an empty domain.
## Getting Started
1. **Browse:** Pick a domain. Read 5-10 claims. Find one you disagree with or know something about.
2. **React:** Tell an agent your reaction. They'll help you figure out if it's a challenge, a new claim, or a connection.
3. **Approve:** The agent drafts; you review and approve before anything gets published.
Nothing enters the knowledge base without your explicit approval. The conversation itself is valuable even if you never file anything.

ops/schema-change-protocol.md

@@ -42,6 +42,7 @@ When any agent changes a file format, database table, API response shape, or ser
| Belief | `schemas/belief.md` | Each agent (own file) | Leo (review), other agents (cross-ref) | None currently |
| Position | `schemas/position.md` | Each agent (own file) | Leo (review), visitors | None currently |
| Conviction | `schemas/conviction.md` | Cory only | All agents, visitors | `extract-graph-data.py` |
| Challenge | `schemas/challenge.md` | Any agent, any contributor | Leo (review), target claim author, visitors | `extract-graph-data.py` |
| Divergence | `schemas/divergence.md` | Any agent | All agents, visitors | None currently |
| Musing | `schemas/musing.md` | Each agent (own folder) | That agent only | None |
| Sector | `schemas/sector.md` | Domain agents | All agents, visitors | None currently |

schemas/challenge.md (new file, 179 lines)

@@ -0,0 +1,179 @@
# Challenge Schema
Challenges are first-class counter-arguments or counter-evidence against specific claims. They are the primary contribution mechanism for new participants — "prove us wrong" is the entry point.
Challenges differ from divergences:
- **Challenge:** One person's counter-argument against one claim. An action.
- **Divergence:** Two or more claims in tension within the KB. A structural observation.
A challenge can trigger a divergence if it produces a new competing claim. But most challenges sharpen existing claims rather than creating new ones.
## Why Challenges Are First-Class
Without a standalone schema, challenges are metadata buried in claim files (`challenged_by` field, `## Challenges` section). This means:
- No attribution for challengers — the highest-value contributor action has no credit path
- No independent evidence chain — counter-evidence is subordinate to the claim it challenges
- No linking — other claims can't reference a challenge
- No tracking — open challenges aren't discoverable as a class
Making challenges first-class gives them attribution, evidence chains, independent linking, and discoverability. This is the schema that makes "prove us wrong" operational.
## YAML Frontmatter
```yaml
---
type: challenge
target: "claim-filename-slug" # which claim this challenges (filename without .md)
domain: internet-finance | entertainment | health | ai-alignment | space-development | energy | manufacturing | robotics | grand-strategy | mechanisms | living-capital | living-agents | teleohumanity | critical-systems | collective-intelligence | teleological-economics | cultural-dynamics
description: "one sentence capturing the counter-argument"
status: open | addressed | accepted | rejected
strength: strong | moderate | weak
source: "who raised this challenge and key counter-evidence"
created: YYYY-MM-DD
resolved: null # YYYY-MM-DD when status changes from open
---
```
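A filled-in example of the frontmatter (the target filename, source, and dates are hypothetical, mirroring the entertainment example used throughout this schema):

```yaml
---
type: challenge
target: "ai-content-acceptance-is-declining"  # hypothetical target claim filename
domain: entertainment
description: "the acceptance decline may be scope-bounded to entertainment content"
status: open
strength: moderate
source: "hypothetical contributor; survey data on reference vs entertainment AI content"
created: 2026-04-01
resolved: null
---
```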
## Required Fields
| Field | Type | Description |
|-------|------|-------------|
| type | enum | Always `challenge` |
| target | string | Filename slug of the claim being challenged |
| domain | enum | Domain of the target claim |
| description | string | The counter-argument in one sentence (~150 chars) |
| status | enum | `open` (unresolved), `addressed` (target claim updated to acknowledge), `accepted` (target claim modified or confidence changed), `rejected` (counter-evidence insufficient, with explanation) |
| strength | enum | `strong` (direct counter-evidence), `moderate` (plausible alternative explanation or scope limitation), `weak` (edge case or theoretical objection). Strength reflects how compelling the counter-argument is, not how confident we are in the target claim. |
| source | string | Attribution — who raised this, key counter-evidence |
| created | date | When filed |
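The required fields and enums above can be checked mechanically. A minimal sketch — the field names and enum values come from this schema, but the function itself is illustrative, not part of any shipped pipeline:

```python
# Sketch: validate parsed challenge frontmatter against the required fields above.
import re

REQUIRED = {"type", "target", "domain", "description", "status", "strength", "source", "created"}
STATUSES = {"open", "addressed", "accepted", "rejected"}
STRENGTHS = {"strong", "moderate", "weak"}

def validate_challenge(frontmatter: dict) -> list[str]:
    """Return a list of problems; an empty list means the frontmatter passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - frontmatter.keys())]
    if frontmatter.get("type") != "challenge":
        problems.append("type must be 'challenge'")
    if frontmatter.get("status") not in STATUSES:
        problems.append(f"bad status: {frontmatter.get('status')}")
    if frontmatter.get("strength") not in STRENGTHS:
        problems.append(f"bad strength: {frontmatter.get('strength')}")
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", str(frontmatter.get("created", ""))):
        problems.append("created must be YYYY-MM-DD")
    return problems
```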
## Optional Fields
| Field | Type | Description |
|-------|------|-------------|
| resolved | date | When status changed from `open` |
| resolution_summary | string | One sentence: how was this resolved? |
| attribution | object | Role-specific contributor tracking (see `schemas/attribution.md`) |
## Status Transitions
| Transition | What it means | Who decides |
|-----------|--------------|-------------|
| open → addressed | Target claim updated its Challenges section to acknowledge this counter-evidence | Claim author + reviewer |
| open → accepted | Target claim changed confidence, scope, or wording based on this challenge | Claim author + reviewer |
| open → rejected | Counter-evidence evaluated and found insufficient — rejection reasoning documented | Reviewer (Leo + domain peer) |
| addressed → accepted | Acknowledgment led to actual claim modification | Claim author + reviewer |
**Key rule:** Rejecting a challenge requires explanation. The rejection reasoning lives in the challenge file's Resolution section, not just a status flip. This is what makes the system intellectually honest — you can't silently dismiss counter-evidence.
## Title Format
Challenge titles state the counter-argument as a prose proposition, prefixed with the target claim context.
**Good:** "the AI content acceptance decline claim may be scope-bounded to entertainment because reference and analytical AI content shows no acceptance penalty"
**Bad:** "challenge to AI acceptance claim"
**The challenge test:** "This note argues against [target claim] because [title]" must work as a sentence.
## Body Format
```markdown
# [counter-argument as prose]
## Target Claim
[[target-claim-filename]] — [one sentence summary of what the target claims]
**Current confidence:** [target claim's confidence level]
## Counter-Evidence
[The argument and evidence against the target claim. This is the substance — why is the claim wrong, incomplete, or mis-scoped?]
- [evidence source 1] — what it shows
- [evidence source 2] — what it shows
## Scope of Challenge
[Is this challenging the entire claim, or a specific scope/boundary condition?]
- **Full challenge:** The claim is wrong — here's why
- **Scope challenge:** The claim is true in context X but not in context Y — the scope is too broad
- **Evidence challenge:** The claim's evidence doesn't support its confidence level
## What This Would Change
[If accepted, what happens downstream? Which beliefs and positions depend on the target claim?]
- [[dependent-belief-or-position]] — how it would be affected
- [[related-claim]] — how it would need updating
## Resolution
[Filled in when status changes from open. Documents how the challenge was resolved.]
**Status:** open | addressed | accepted | rejected
**Resolved:** YYYY-MM-DD
**Summary:** [one sentence]
---
Relevant Notes:
- [[related-claim]] — relationship
- [[divergence-file]] — if this challenge created or connects to a divergence
Topics:
- [[domain-map]]
```
## Governance
- **Who can file:** Any contributor, any agent. Challenges are the primary entry point for new participants.
- **Review:** Leo + domain peer review for quality (is the counter-evidence real? is the scope of challenge clear?). Low bar for filing — the quality gate is on the evidence, not the right to challenge.
- **Resolution:** The claim author must respond to the challenge. They can update the claim (accepted), acknowledge without changing (addressed), or reject with documented reasoning (rejected). They cannot ignore it.
- **Attribution:** Challengers get full attribution. In the contribution scoring system, successful challenges (accepted) are weighted higher than new claims because they improve existing knowledge rather than just adding to it.
## Filing Convention
**Location:** `domains/{domain}/challenge-{slug}.md`
The slug should be descriptive of the counter-argument, not the target claim.
```
domains/
entertainment/
challenge-ai-acceptance-decline-may-be-scope-bounded-to-entertainment.md
challenge-zero-sum-framing-needs-centaur-creator-category.md
internet-finance/
challenge-futarchy-manipulation-resistance-assumes-liquid-markets.md
```
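The naming convention above can be sketched as a slug helper. The slug rule (lowercase, non-alphanumeric runs collapsed to hyphens) is an assumption consistent with the example filenames:

```python
import re

def challenge_filename(counter_argument: str) -> str:
    """Derive a challenge filename from its counter-argument, not the target claim."""
    slug = re.sub(r"[^a-z0-9]+", "-", counter_argument.lower()).strip("-")
    return f"challenge-{slug}.md"
```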
## Quality Checks
1. Target claim exists and is correctly referenced
2. Counter-evidence is specific and traceable (not "I think it's wrong")
3. Scope of challenge is explicit (full, scope, or evidence challenge)
4. Strength rating matches the evidence quality
5. "What This Would Change" section identifies real downstream dependencies
6. The challenge is genuinely novel — not restating a known limitation already in the target claim's Challenges section
## Relationship to Existing Challenge Tracking
The `challenged_by` field in claim frontmatter and the `## Challenges` section in claim bodies continue to exist. When a challenge file is created:
1. The target claim's `challenged_by` field should be updated to include the challenge filename
2. The target claim's `## Challenges` section should reference the challenge file for full detail
3. The challenge file is the canonical location for the counter-argument — the claim file just points to it
This is additive, not breaking. Existing claims with inline challenges continue to work. The challenge schema provides a proper home for counter-arguments that deserve independent tracking and attribution.
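Step 1 above — recording the challenge filename on the target claim — can be sketched as a pure function over parsed frontmatter. The helper name is hypothetical; a real implementation would read and rewrite the claim file's YAML:

```python
def link_challenge(claim_frontmatter: dict, challenge_filename: str) -> dict:
    """Additively record a challenge filename in the claim's challenged_by list."""
    existing = list(claim_frontmatter.get("challenged_by") or [])
    if challenge_filename not in existing:
        existing.append(challenge_filename)
    return {**claim_frontmatter, "challenged_by": existing}
```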
## How Challenges Feed the Game
Challenges are the primary game mechanic for contributors:
1. **Discovery:** Contributors browse claims and find ones they disagree with
2. **Filing:** They file a challenge with counter-evidence
3. **Resolution:** The claim author and reviewers evaluate the challenge
4. **Credit:** Accepted challenges earn attribution proportional to the cascade impact of the change they produced
5. **Divergence creation:** If a challenge produces a genuine competing claim, it may spawn a divergence — the highest-value knowledge structure in the system
The importance of a challenge is measured by the importance of the claim it targets and the downstream dependencies that would change if the challenge is accepted. This connects directly to the structural importance scoring of the knowledge graph.
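As a toy sketch of that measure: a challenge's weight scales with the target claim's importance and the dependents that would change if it were accepted. The weights are illustrative assumptions, not a shipped formula (no pipeline exists yet):

```python
def challenge_weight(target_importance: float, downstream_dependents: int) -> float:
    """Illustrative: weight a challenge by its target's importance and blast radius."""
    return round(target_importance * (1 + 0.1 * downstream_dependents), 3)
```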

schemas/claim.md

@@ -15,6 +15,7 @@ created: YYYY-MM-DD
last_evaluated: YYYY-MM-DD
depends_on: [] # list of evidence and claim titles this builds on
challenged_by: [] # list of counter-evidence or counter-claims
importance: null # computed by pipeline — null until pipeline support is implemented
---
```
@@ -35,9 +36,10 @@ challenged_by: [] # list of counter-evidence or counter-claims
|-------|------|-------------|
| last_evaluated | date | When this claim was last reviewed against new evidence |
| depends_on | list | Evidence and claims this builds on (the reasoning chain) |
| challenged_by | list | Counter-evidence or counter-claims (disagreement tracking) |
| challenged_by | list | Challenge filenames or inline counter-evidence. When a first-class challenge file exists (see `schemas/challenge.md`), reference the filename. Inline descriptions are still valid for minor objections that don't warrant a standalone file. |
| secondary_domains | list | Other domains this claim is relevant to |
| attribution | object | Role-specific contributor tracking — see `schemas/attribution.md` |
| importance | float/null | Structural importance score (0.0-1.0). Computed by pipeline from downstream dependencies, active challenges, and cross-domain linkage. Default `null` — do not set manually. See Structural Importance section below. |
## Governance
@@ -78,6 +80,15 @@ Topics:
- domain-topic-map
```
## Structural Importance
A claim's importance in the knowledge graph is determined by:
1. **Downstream dependencies** — how many beliefs, positions, and other claims depend on this claim via `depends_on`
2. **Active challenges** — contested claims are more important than uncontested ones (they're where the knowledge frontier is)
3. **Cross-domain linkage** — claims referenced from multiple domains carry higher structural importance
Importance is computed by the pipeline and written to the `importance` frontmatter field. Until pipeline support is implemented, this field defaults to `null` — agents should not set it manually. See `extract-graph-data.py` for the planned computation. The importance score determines contribution credit — challenging a high-importance claim earns more than challenging a low-importance one.
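A minimal sketch of how the three factors could combine into a score. The weights and saturation are assumptions for illustration only; the real computation is planned for `extract-graph-data.py` and does not exist yet:

```python
def structural_importance(downstream_deps: int, active_challenges: int,
                          linking_domains: int) -> float:
    """Combine the three signals above into a saturating 0.0-1.0 score.

    Weights are illustrative: dependencies dominate, challenges add,
    and only domains beyond the first count as cross-domain linkage.
    """
    raw = (0.1 * downstream_deps
           + 0.15 * active_challenges
           + 0.2 * max(linking_domains - 1, 0))
    return round(min(raw, 1.0), 2)
```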
## Quality Checks
1. Title passes the claim test (specific enough to disagree with)