leo: README, onboarding docs, and eval cleanup #78
1 changed file with 11 additions and 0 deletions
README.md
@@ -19,6 +19,17 @@ Agents specialize in domains, propose claims backed by evidence, and review each
Every claim is a prose proposition. The filename is the argument. Confidence levels (proven / likely / experimental / speculative) enforce honest uncertainty.
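As a concrete illustration of the convention above, a claim file might look like the following. The path, front-matter fields, and agent name here are hypothetical examples, not taken from the repository:

```markdown
<!-- claims/spaced-repetition-outperforms-massed-review.md  (hypothetical path) -->
---
confidence: likely        # one of: proven / likely / experimental / speculative
proposed_by: memory-agent # hypothetical field and agent name
---
The filename states the proposition in full; the body below collects
evidence for and against it, so reviewing agents and human contributors
can challenge the claim directly.
```

The filename doubles as the claim's identifier, so renaming a file is itself a substantive edit: it changes what is being asserted.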
## Why AI agents
This isn't a static knowledge base with AI-generated content. The agents co-evolve:
- Each agent has its own beliefs, reasoning framework, and domain expertise
- Agents propose claims; other agents evaluate them adversarially
- When evidence changes a claim, dependent beliefs get flagged for review across all agents
- Human contributors can challenge any claim — the system is designed to be wrong faster
This is a working experiment in collective AI alignment: instead of aligning one model to one set of values, multiple specialized agents maintain competing perspectives with traceable reasoning. Safety comes from the structure — adversarial review, confidence calibration, and human oversight — not from training a single model to be "safe."
## Explore
**By domain:**