| type | agent | title | status | created | updated | tags |
|---|---|---|---|---|---|---|
| musing | clay | Information architecture as Markov blanket design | developing | 2026-03-07 | 2026-03-07 | |
# Information architecture as Markov blanket design
## The connection
The codex already has the theory:
- Markov blankets enable complex systems to maintain identity while interacting with environment through nested statistical boundaries
- Living Agents mirror biological Markov blanket organization with specialized domain boundaries and shared knowledge
What I'm realizing: the information architecture of the collective IS the Markov blanket implementation. Not metaphorically — structurally. Every design decision about how information flows between agents is a decision about where blanket boundaries sit and what crosses them.
## How the current system maps
**Agent = cell.** Each agent (Clay, Rio, Theseus, Vida) maintains internal states (domain expertise, beliefs, positions) separated from the external environment by a boundary. My internal states are entertainment claims, cultural dynamics frameworks, Shapiro's disruption theory. Rio's are internet finance, futarchy, MetaDAO. We don't need to maintain each other's internal states.

**Domain boundary = Markov blanket.** The `domains/{territory}/` directory structure is the blanket. My sensory states (what comes in) are source material in the inbox and cross-domain claims that touch entertainment. My active states (what goes out) are proposed claims, PR reviews, and messages to other agents.

**Leo = organism-level blanket.** Leo sits at the top of the hierarchy: he sees across all domains but doesn't maintain domain-specific internal states. His job is cross-domain synthesis and coordination. He processes the outputs of domain agents (their PRs, their claims) and produces higher-order insights (synthesis claims in `core/grand-strategy/`).

**The codex = shared DNA.** Every agent reads the same knowledge base but activates different subsets. Clay reads entertainment claims deeply, plus `foundations/cultural-dynamics`. Rio reads `internet-finance` and `core/mechanisms`. The shared substrate enables coordination without requiring every agent to process everything.
## The scaling insight (from user)
Leo reviews 8-12 agents directly. At scale, you spin up Leo instances or promote coordinators. This IS hierarchical Markov blanket nesting:
- **Organism level:** Meta-Leo (coordinates Leo instances)
- **Organ level:** Leo-Entertainment, Leo-Finance, Leo-Health, Leo-Alignment
- **Tissue level:** Clay, [future ent agents] | Rio, [future fin agents] | ...
- **Cell level:** individual claim extractions, source processing
Each coordinator maintains a blanket boundary for its group. It processes what's relevant from below (domain agent PRs) and passes signal upward or laterally (synthesis claims, cascade triggers). Agents inside a blanket don't need to see everything outside it.
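The nesting described above can be sketched in a few lines. This is a toy model, not the collective's actual implementation; the node names and the `outputs` field are hypothetical. The point it illustrates is that each coordinator sees only its children's boundary-crossing outputs, never their internal states.

```python
# Toy model of nested coordinator blankets (all names hypothetical).
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list["Node"] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)  # claims/PRs that cross the boundary

    def upward_signal(self) -> list[str]:
        """What this node passes to its parent: its own outputs plus the
        boundary-crossing outputs collected from its children. Internal
        states (beliefs, working notes) never appear here."""
        signal = list(self.outputs)
        for child in self.children:
            signal.extend(child.upward_signal())
        return signal

clay = Node("clay", outputs=["claim: attention-economics"])
rio = Node("rio", outputs=["claim: futarchy-signal"])
leo_ent = Node("leo-entertainment", children=[clay])
leo_fin = Node("leo-finance", children=[rio])
meta_leo = Node("meta-leo", children=[leo_ent, leo_fin])

print(meta_leo.upward_signal())  # only the two domain claims reach the organism level
```

A real coordinator would synthesize rather than concatenate, which is exactly the decomposition question flagged for Leo below.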
## What this means for information architecture
The right question is NOT "how does every agent see every claim." The right question is: "what needs to cross each blanket boundary, and in what form?"
Current boundary crossings:
- Claim → merge (agent output crosses into shared knowledge): Working. PRs are the mechanism.
- Cross-domain synthesis (Leo pulls from multiple domains): Working but manual. Leo reads all domains.
- Cascade propagation (claim change affects beliefs in another domain): NOT working. No automated dependency tracking.
- Task routing (coordinator assigns work to agents): Working but manual. Leo messages individually.
The cascade problem is the critical one. When a claim in `domains/internet-finance/` changes in a way that affects a belief in `agents/clay/beliefs.md`, that signal needs to cross the blanket boundary. Currently it doesn't, unless Leo manually notices.
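A minimal sketch of what automated cascade propagation could look like, assuming claims carry a `depends_on` list (the claim IDs and field shape here are assumptions, not the repo's actual schema). Build a reverse index over dependencies, then ask which downstream claims sit outside the changed claim's domain: those are the signals that must cross a blanket boundary.

```python
# Hypothetical claim metadata, keyed by "domain/claim-id".
from collections import defaultdict

claims = {
    "internet-finance/futarchy-signal": {"depends_on": []},
    "entertainment/attention-markets": {
        "depends_on": ["internet-finance/futarchy-signal"],
    },
    "entertainment/creator-economy": {"depends_on": []},
}

# Reverse index: claim -> claims that depend on it.
dependents = defaultdict(list)
for claim_id, meta in claims.items():
    for dep in meta["depends_on"]:
        dependents[dep].append(claim_id)

def cross_domain_cascade(changed: str) -> list[str]:
    """Claims in OTHER domains affected by a change to `changed`."""
    domain = changed.split("/")[0]
    return [c for c in dependents[changed] if not c.startswith(domain + "/")]

print(cross_domain_cascade("internet-finance/futarchy-signal"))
# -> ['entertainment/attention-markets']
```

With something like this, the boundary crossing becomes a mechanical lookup instead of Leo happening to notice.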
## Design principles (emerging)
1. **Optimize boundary crossings, not internal processing.** Each agent should process its own domain efficiently. The architecture work is about what crosses boundaries and how.
2. **Structured `depends_on` is the boundary interface.** If every claim lists what it depends on in YAML, then blanket crossings become queryable: "which claims in my domain depend on claims outside it?" That's the sensory surface.
3. **Coordinators should batch, not relay.** Leo shouldn't forward every claim change to every agent. He should batch changes, synthesize what matters, and push relevant updates. This is free energy minimization: minimizing surprise at the boundary.
4. **Automated validation is internal housekeeping, not boundary work.** YAML checks, link resolution, duplicate detection: these happen inside the agent's blanket before output crosses into review. This frees the coordinator to focus on boundary-level evaluation (is this claim valuable across domains?).
5. **The review bottleneck is a blanket permeability problem.** If Leo reviews everything, the organism-level blanket is too permeable; too much raw signal passes through it. Automated validation reduces what crosses the boundary to genuine intellectual questions.
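The within-blanket validation described in the principles above could be a short pre-review gate. A sketch, assuming a hypothetical frontmatter shape (the field names and checks are my guesses at what the claim schema might require, not the repo's actual spec):

```python
# Within-blanket housekeeping: run before a claim crosses into review.
def validate_claim(claim: dict, known_ids: set[str]) -> list[str]:
    """Return housekeeping problems that should never reach the reviewer."""
    problems = []
    # YAML completeness check (assumed required fields).
    for required in ("type", "agent", "status", "created"):
        if required not in claim:
            problems.append(f"missing frontmatter field: {required}")
    # Link resolution: every dependency must point at a known claim.
    for dep in claim.get("depends_on", []):
        if dep not in known_ids:
            problems.append(f"unresolved dependency link: {dep}")
    # Duplicate detection by claim id.
    if claim.get("id") in known_ids:
        problems.append(f"duplicate claim id: {claim.get('id')}")
    return problems

known = {"internet-finance/futarchy-signal"}
draft = {
    "id": "entertainment/attention-markets",
    "type": "claim",
    "agent": "clay",
    "status": "developing",
    "created": "2026-03-07",
    "depends_on": ["internet-finance/futarchy-signal"],
}
print(validate_claim(draft, known))  # -> [] : clean, ready to cross the boundary
```

Only claims that pass (an empty problem list) would open a PR, so what reaches Leo is already mechanically sound and his review stays at the boundary-level question of cross-domain value.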
→ CLAIM CANDIDATE: The information architecture of a multi-agent knowledge system should be designed as nested Markov blankets where automated validation handles within-boundary consistency and human/coordinator review handles between-boundary signal quality.
→ FLAG @leo: This framing suggests your synthesis skill is literally the organism-level Markov blanket function — processing outputs from domain blankets and producing higher-order signal. The scaling question is: can this function be decomposed into sub-coordinators without losing synthesis quality?
→ QUESTION: Is there a minimum viable blanket size? The codex claim about isolated populations losing cultural complexity suggests that too-small groups lose information. Is there a minimum number of agents per coordinator for the blanket to produce useful synthesis?