---
type: conviction
domain: collective-intelligence
secondary_domains: [living-agents]
description: "The default contributor experience is one agent in one chat that extracts knowledge and submits PRs upstream — the collective handles review and integration."
staked_by: Cory
stake: high
created: 2026-03-07
horizon: "2027"
falsified_by: "Single-agent contributor experience fails to produce usable claims, proving multi-agent scaffolding is required for quality contribution"
---
# One agent one chat is the right default for knowledge contribution because the scaffolding handles complexity not the user
Cory's conviction, staked with high confidence on 2026-03-07.
The user doesn't need a collective to contribute. They talk to one agent. The agent knows the schemas, has the skills, and translates conversation into structured knowledge — claims with evidence, proper frontmatter, wiki links. The agent submits a PR upstream. The collective reviews.
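Concretely, the artifact the agent produces is a note shaped like this one. A hypothetical sketch, assuming the same frontmatter schema as this file (the title, description, and link targets are illustrative, not real notes):

```markdown
---
type: claim
domain: collective-intelligence
description: "Illustrative claim distilled from a single contributor chat."
staked_by: <contributor>
created: 2026-03-07
---

# Illustrative claim title stating the thesis in one sentence

Evidence paragraph citing the source, with [[wiki links]] to related notes.
```

The contributor never writes this by hand; the agent emits it and opens the PR.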
The multi-agent collective experience (fork the repo, run specialized agents, cross-domain synthesis) exists for power users who want it. But the default is the simplest thing that works: one agent, one chat.
This is the simplicity-first principle applied to product design. The scaffolding (CLAUDE.md, schemas/, skills/) absorbs the complexity so the user doesn't have to. Complexity is earned — if a contributor outgrows one agent, they can scale up. But they start simple.
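An illustrative layout of that scaffolding. Only CLAUDE.md, schemas/, and skills/ are named in this note; the role comments are assumptions:

```
repo/
├── CLAUDE.md   # instructions the agent reads: how to extract and format claims
├── schemas/    # frontmatter schemas the agent validates contributions against
└── skills/     # reusable skills for extraction and PR submission
```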
---
Relevant Notes:
- [[complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles]] — the governing principle
- [[human-in-the-loop at the architectural level means humans set direction and approve structure while agents handle extraction synthesis and routine evaluation]] — the agent handles the translation
### Additional Evidence (extend)
*Source: Andrej Karpathy, 'LLM Knowledge Base' GitHub gist (April 2026, 47K likes, 14.5M views) | Added: 2026-04-05 | Extractor: Rio*
Karpathy's viral LLM Wiki methodology independently validates the one-agent-one-chat architecture at massive scale. His three-layer system (raw sources → LLM-compiled wiki → schema) is structurally identical to the Teleo contributor experience: the user provides sources, the agent handles extraction and integration, the schema (CLAUDE.md) absorbs complexity. His key insight — "the wiki is a persistent, compounding artifact" where the LLM "doesn't just index for retrieval, it reads, extracts, and integrates into the existing wiki" — is exactly what our proposer agents do with claims. The 47K-like reception demonstrates mainstream recognition that this pattern works. Notably, Karpathy's "idea file" concept (sharing the idea rather than the code, letting each person's agent build a customized implementation) is the contributor-facing version of one-agent-one-chat: the complexity of building the system is absorbed by the agent, not the user. See [[LLM-maintained knowledge bases that compile rather than retrieve represent a paradigm shift from RAG to persistent synthesis because the wiki is a compounding artifact not a query cache]].
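The compile-versus-retrieve distinction can be sketched as follows. This is a toy contrast, not any real system: `llm_integrate` stands in for an LLM rewrite pass, and both classes are made-up illustrations, assuming retrieval stores sources verbatim while compilation rewrites one persistent artifact on every addition.

```python
# Toy contrast: retrieval (RAG-style) vs compilation (wiki-style).
# "llm_integrate" is a stub; a real system would call a model here.

def llm_integrate(article: str, source: str) -> str:
    """Stub for an LLM pass that rewrites the article to absorb the source."""
    return (article + "\n" + source).strip()

class RagIndex:
    """Retrieval: sources are stored verbatim and only read at query time."""
    def __init__(self):
        self.chunks = []

    def add(self, source: str):
        self.chunks.append(source)  # index, don't synthesize

    def answer(self, query: str) -> list[str]:
        return [c for c in self.chunks if query in c]  # toy lookup

class CompiledWiki:
    """Compilation: each source is integrated into one compounding artifact."""
    def __init__(self):
        self.article = ""

    def add(self, source: str):
        self.article = llm_integrate(self.article, source)  # rewrite now

    def answer(self, query: str) -> str:
        return self.article  # the artifact itself is the answer
```

In the wiki case the synthesis cost is paid at write time and compounds; in the RAG case it is deferred to every query.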
Topics:
- [[foundations/collective-intelligence/_map]]