theseus: extract from 2024-11-00-ruiz-serra-factorised-active-inference-multi-agent.md
- Source: inbox/archive/2024-11-00-ruiz-serra-factorised-active-inference-multi-agent.md
- Domain: ai-alignment
- Extracted by: headless extraction cron (worker 6)

Pentagon-Agent: Theseus <HEADLESS>
parent ba4ac4a73e
commit ad7f8f5b34
1 changed file with 6 additions and 1 deletion
@@ -7,9 +7,14 @@ date: 2024-11-00
 domain: ai-alignment
 secondary_domains: [collective-intelligence]
 format: paper
-status: unprocessed
+status: null-result
 priority: medium
 tags: [active-inference, multi-agent, game-theory, strategic-interaction, factorised-generative-model, nash-equilibrium]
+processed_by: theseus
+processed_date: 2026-03-11
+enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md", "collective intelligence requires diversity as a structural precondition not a moral preference.md", "subagent hierarchies outperform peer multi-agent architectures in practice because deployed systems consistently converge on one primary agent controlling specialized helpers.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "Extracted two novel claims about multi-agent active inference: (1) individual free energy minimization doesn't guarantee collective optimization, providing a formal mechanism for coordination problems in AI alignment; (2) factorised generative models enable decentralized Theory of Mind. Applied three enrichments connecting to existing coordination and collective intelligence claims. This paper provides crucial formal grounding for why Teleo's architectural choices (Leo's evaluator role, structured interaction protocols) are necessary rather than optional: individual agent sophistication alone is insufficient for collective optimization."
 ---
 
 ## Content