diff --git a/inbox/archive/2024-11-00-ruiz-serra-factorised-active-inference-multi-agent.md b/inbox/archive/2024-11-00-ruiz-serra-factorised-active-inference-multi-agent.md
index 6b3649c5..16224f0a 100644
--- a/inbox/archive/2024-11-00-ruiz-serra-factorised-active-inference-multi-agent.md
+++ b/inbox/archive/2024-11-00-ruiz-serra-factorised-active-inference-multi-agent.md
@@ -7,9 +7,14 @@
 date: 2024-11-00
 domain: ai-alignment
 secondary_domains: [collective-intelligence]
 format: paper
-status: unprocessed
+status: null-result
 priority: medium
 tags: [active-inference, multi-agent, game-theory, strategic-interaction, factorised-generative-model, nash-equilibrium]
+processed_by: theseus
+processed_date: 2026-03-11
+enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md", "collective intelligence requires diversity as a structural precondition not a moral preference.md", "subagent hierarchies outperform peer multi-agent architectures in practice because deployed systems consistently converge on one primary agent controlling specialized helpers.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "Extracted two novel claims about multi-agent active inference: (1) individual free energy minimization doesn't guarantee collective optimization, providing formal mechanism for coordination problems in AI alignment; (2) factorised generative models enable decentralized Theory of Mind. Applied three enrichments connecting to existing coordination and collective intelligence claims. This paper provides crucial formal grounding for why Teleo's architectural choices (Leo's evaluator role, structured interaction protocols) are necessary rather than optional—individual agent sophistication alone is insufficient for collective optimization."
 ---
 ## Content