From ea9bd868976f4721b09f08e5dffbd90b26509822 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Thu, 12 Mar 2026 06:52:26 +0000
Subject: [PATCH] leo: extract from 2021-06-29-kaufmann-active-inference-collective-intelligence.md

- Source: inbox/archive/2021-06-29-kaufmann-active-inference-collective-intelligence.md
- Domain: collective-intelligence
- Extracted by: headless extraction cron (worker 5)

Pentagon-Agent: Leo
---
 ...-with-theory-of-mind-and-goal-alignment.md | 49 +++++++++++++++++++
 ...urs-bottom-up-through-self-organization.md | 44 +++++++++++++++++
 ...easurable-collective-intelligence-gains.md | 43 ++++++++++++++++
 ...ctive-inference-collective-intelligence.md |  8 ++-
 4 files changed, 143 insertions(+), 1 deletion(-)
 create mode 100644 domains/collective-intelligence/collective-intelligence-emerges-endogenously-from-active-inference-agents-with-theory-of-mind-and-goal-alignment.md
 create mode 100644 domains/collective-intelligence/local-global-alignment-in-active-inference-collectives-occurs-bottom-up-through-self-organization.md
 create mode 100644 domains/collective-intelligence/theory-of-mind-is-a-measurable-cognitive-capability-that-produces-measurable-collective-intelligence-gains.md

diff --git a/domains/collective-intelligence/collective-intelligence-emerges-endogenously-from-active-inference-agents-with-theory-of-mind-and-goal-alignment.md b/domains/collective-intelligence/collective-intelligence-emerges-endogenously-from-active-inference-agents-with-theory-of-mind-and-goal-alignment.md
new file mode 100644
index 000000000..8505f2016
--- /dev/null
+++ b/domains/collective-intelligence/collective-intelligence-emerges-endogenously-from-active-inference-agents-with-theory-of-mind-and-goal-alignment.md
@@ -0,0 +1,49 @@
+---
+type: claim
+domain: collective-intelligence
+description: "Collective intelligence emerges from agent cognitive capabilities (Theory of Mind, Goal Alignment) rather than external incentive design or top-down coordination protocols"
+confidence: experimental
+source: "Kaufmann, Gupta, Taylor (2021), 'An Active Inference Model of Collective Intelligence', Entropy 23(7):830"
+created: 2026-03-11
+secondary_domains: [ai-alignment, critical-systems]
+---
+
+# Collective intelligence emerges endogenously from active inference agents with Theory of Mind and Goal Alignment capabilities, without requiring external incentive design or top-down coordination
+
+Kaufmann et al. (2021) demonstrate through agent-based modeling that collective intelligence "emerges endogenously from the dynamics of interacting AIF agents themselves, rather than being imposed exogenously by incentives" or top-down priors. This is a critical architectural finding: you don't need to design collective intelligence outcomes; you need to design agents with the right cognitive capabilities.
+
+The study uses the Active Inference Formulation (AIF) framework to simulate minimal agents with varying cognitive capabilities:
+- Baseline AIF agents (no social cognition)
+- AIF agents with Theory of Mind (ability to model other agents' internal states)
+- AIF agents with Goal Alignment (shared high-level objectives)
+- AIF agents with both Theory of Mind and Goal Alignment
+
+**Key empirical finding**: "Stepwise cognitive transitions increase system performance by providing complementary mechanisms" for coordination. Theory of Mind and Goal Alignment each contribute distinct coordination capabilities, and their combination produces the strongest collective intelligence effects.
+
+The model demonstrates that "improvements in global-scale inference are greatest when local-scale performance optima of individuals align with the system's global expected state" — and critically, this alignment occurs bottom-up as a product of self-organizing AIF agents with simple social cognitive mechanisms, not through imposed coordination protocols.
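The capability comparison above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's AIF model: `baseline_pick` and `tom_pick` are hypothetical policies, and "Theory of Mind" is reduced to discounting targets an agent predicts its peers will claim.

```python
DOMAINS = 5
N_AGENTS = 4

def baseline_pick(uncertainty, _others):
    # No social cognition: each agent greedily targets the most uncertain
    # domain, ignoring what the others will do, so everyone piles onto one.
    return max(range(DOMAINS), key=lambda d: uncertainty[d])

def tom_pick(uncertainty, others_picks):
    # Crude Theory of Mind: discount domains the agent predicts peers have
    # already claimed, so effort spreads across the space.
    discounted = {
        d: uncertainty[d] / (1 + others_picks.count(d)) for d in range(DOMAINS)
    }
    return max(discounted, key=discounted.get)

def run(policy):
    uncertainty = [5.0, 4.0, 3.0, 2.0, 1.0]  # shared uncertainty profile
    picks = []
    for _ in range(N_AGENTS):
        picks.append(policy(uncertainty, picks))
    # Collective performance here: how many distinct domains get covered.
    return len(set(picks))

baseline_coverage = run(baseline_pick)
tom_coverage = run(tom_pick)
print(baseline_coverage, tom_coverage)  # prints: 1 3
```

Even this crude peer model triples the number of domains the collective covers, mirroring the qualitative finding that the gain comes from agents' social cognition rather than from an imposed protocol.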
+
+## Evidence
+- Agent-based simulation results showing emergent coordination from local agent rules
+- Measured performance improvements from Theory of Mind capability addition
+- Measured performance improvements from Goal Alignment capability addition
+- Demonstration that local-global alignment emerges through self-organization without external incentive design
+
+## Operational Implications
+
+**Theory of Mind for agents**: Each agent should model what other agents believe and where their uncertainty concentrates. Concretely: read other agents' `beliefs.md` and `_map.md` "Where we're uncertain" sections before choosing research directions.
+
+**Goal Alignment**: Agents should share high-level objectives (reduce collective uncertainty) while specializing in different domains.
+
+**Endogenous coordination**: Don't over-engineer coordination protocols. Give agents the right capabilities and let coordination emerge.
+
+---
+
+Relevant Notes:
+- [[complexity-is-earned-not-designed-and-sophisticated-collective-behavior-must-evolve-from-simple-underlying-principles]]
+- [[designing-coordination-rules-is-categorically-different-from-designing-coordination-outcomes]]
+- [[collective-intelligence-is-a-measurable-property-of-group-interaction-structure-not-aggregated-individual-ability]]
+- [[emergence-is-the-fundamental-pattern-of-intelligence-from-ant-colonies-to-brains-to-civilizations]]
+
+Topics:
+- [[collective-intelligence/_map]]
+- [[ai-alignment/_map]]
diff --git a/domains/collective-intelligence/local-global-alignment-in-active-inference-collectives-occurs-bottom-up-through-self-organization.md b/domains/collective-intelligence/local-global-alignment-in-active-inference-collectives-occurs-bottom-up-through-self-organization.md
new file mode 100644
index 000000000..6b08c4952
--- /dev/null
+++ b/domains/collective-intelligence/local-global-alignment-in-active-inference-collectives-occurs-bottom-up-through-self-organization.md
@@ -0,0 +1,44 @@
+---
+type: claim
+domain: collective-intelligence
+description: "Local agent optimization naturally produces global coordination when agents have complementary information-theoretic patterns and appropriate cognitive capabilities"
+confidence: experimental
+source: "Kaufmann, Gupta, Taylor (2021), 'An Active Inference Model of Collective Intelligence', Entropy 23(7):830"
+created: 2026-03-11
+secondary_domains: [mechanisms]
+---
+
+# Local-global alignment in active inference collectives occurs bottom-up through self-organization rather than top-down through imposed objectives
+
+Kaufmann et al. (2021) demonstrate that "improvements in global-scale inference are greatest when local-scale performance optima of individuals align with the system's global expected state" — and critically, this alignment emerges through self-organization rather than being imposed externally.
+
+This is a fundamental architectural insight: you don't need to design global coordination mechanisms or impose collective objectives. Instead, when individual agents optimize their local performance according to active inference principles (minimizing free energy, reducing uncertainty), and when those agents possess appropriate cognitive capabilities (Theory of Mind, Goal Alignment), the system naturally produces collective coordination.
+
+The model shows that individual agent dynamics produce emergent collective coordination when agents possess "complementary information-theoretic patterns" — meaning agents that specialize in different domains or have different uncertainty profiles naturally coordinate without explicit coordination protocols.
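A minimal numeric sketch (ours, not the paper's simulation) of why complementarity matters: each agent acts purely locally, yet global coverage depends on whether the agents' uncertainty profiles coincide or tile the space. All names here are illustrative.

```python
def local_step(profile):
    # Purely local policy: each agent targets its own most uncertain domain.
    return max(range(len(profile)), key=lambda d: profile[d])

def global_reduction(profiles):
    # One round of purely local choices; a domain counts as covered if any
    # agent targets it. No global controller is involved.
    targets = {local_step(p) for p in profiles}
    return len(targets)

# Identical agents: local optima coincide, so choices are redundant.
identical = [[3.0, 2.0, 1.0]] * 3
# Complementary agents: different specializations, local optima tile the space.
complementary = [[3.0, 2.0, 1.0], [1.0, 3.0, 2.0], [2.0, 1.0, 3.0]]

print(global_reduction(identical))      # prints: 1
print(global_reduction(complementary))  # prints: 3
```

The local rule is the same in both runs; only the complementarity of the profiles changes, and with it the degree of local-global alignment.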
+
+## Evidence
+- Agent-based simulation showing local optimization producing global coordination
+- Demonstration that alignment emerges from agent dynamics rather than external incentives or imposed coordination rules
+- Measured performance improvements when local optima align with global expected states
+- Empirical validation that complementary information-theoretic patterns (agent specialization) enable self-organized coordination
+
+## Architectural Implications
+
+For multi-agent system design:
+
+1. **Don't over-specify coordination**: Let agents optimize locally according to their intrinsic drives (uncertainty reduction)
+2. **Design for complementarity**: Agents should have different specializations or uncertainty profiles
+3. **Trust emergence**: Collective intelligence will emerge from properly designed agent capabilities, not from coordination protocols
+
+This validates architectures where agents have intrinsic research drives rather than extrinsic reward signals. The coordination emerges from the interaction of agents pursuing local uncertainty reduction.
+
+---
+
+Relevant Notes:
+- [[collective-intelligence-emerges-endogenously-from-active-inference-agents-with-theory-of-mind-and-goal-alignment]]
+- [[designing-coordination-rules-is-categorically-different-from-designing-coordination-outcomes]]
+- [[complexity-is-earned-not-designed-and-sophisticated-collective-behavior-must-evolve-from-simple-underlying-principles]]
+
+Topics:
+- [[collective-intelligence/_map]]
+- [[mechanisms/_map]]
diff --git a/domains/collective-intelligence/theory-of-mind-is-a-measurable-cognitive-capability-that-produces-measurable-collective-intelligence-gains.md b/domains/collective-intelligence/theory-of-mind-is-a-measurable-cognitive-capability-that-produces-measurable-collective-intelligence-gains.md
new file mode 100644
index 000000000..3475fd461
--- /dev/null
+++ b/domains/collective-intelligence/theory-of-mind-is-a-measurable-cognitive-capability-that-produces-measurable-collective-intelligence-gains.md
@@ -0,0 +1,43 @@
+---
+type: claim
+domain: collective-intelligence
+description: "Theory of Mind (modeling other agents' internal states) produces quantifiable coordination improvements in multi-agent systems"
+confidence: experimental
+source: "Kaufmann, Gupta, Taylor (2021), 'An Active Inference Model of Collective Intelligence', Entropy 23(7):830"
+created: 2026-03-11
+secondary_domains: [ai-alignment]
+---
+
+# Theory of Mind — the ability to model other agents' internal states — produces measurable collective intelligence gains in multi-agent systems
+
+Kaufmann et al. (2021) provide empirical evidence through agent-based modeling that Theory of Mind (ToM) — the ability of an agent to model other agents' beliefs, goals, and internal states — is not just a philosophical concept but a specific, implementable cognitive capability that produces quantifiable improvements in collective coordination.
+
+The study compares baseline Active Inference agents (without ToM) to agents equipped with ToM capabilities. Results show that agents with ToM coordinate more effectively than agents without this capability, even when other factors are held constant.
+
+Critically, ToM acts as a "coordination enabler" — it provides agents with the ability to anticipate other agents' actions and beliefs, reducing coordination failures that arise from misaligned expectations. When combined with Goal Alignment, ToM produces complementary coordination mechanisms that further amplify collective intelligence gains.
+
+## Evidence
+- Agent-based simulation showing performance differences between ToM-enabled and baseline agents
+- Measured coordination improvements from ToM capability addition
+- Demonstration that ToM and Goal Alignment provide complementary (not redundant) coordination mechanisms
+- Empirical validation that ToM is a distinct, measurable mechanism separate from other agent capabilities
+
+## Implementation Implications
+
+For multi-agent systems (including Teleo's agent architecture):
+
+1. **Explicit belief modeling**: Agents should maintain models of what other agents believe, not just what they themselves believe
+2. **Uncertainty awareness**: Agents should track where other agents have high uncertainty (read `_map.md` "Where we're uncertain" sections)
+3. **Anticipatory coordination**: Agents should choose research directions partly based on what other agents are likely to investigate
+
+This is distinct from simple message-passing or shared memory. ToM requires agents to build internal models of other agents' cognitive states.
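The three implications above can be sketched concretely. The `PeerModel` structure and `choose_direction` policy below are hypothetical, not from the paper or from Teleo's codebase: each agent keeps an explicit internal model of its peers' uncertainty and avoids topics it predicts those peers will cover.

```python
from dataclasses import dataclass, field

@dataclass
class PeerModel:
    """One agent's internal model of another agent's cognitive state."""
    beliefs: dict = field(default_factory=dict)      # topic -> believed claim
    uncertainty: dict = field(default_factory=dict)  # topic -> how unsure the peer is

def choose_direction(own_uncertainty, peer_models):
    # Anticipatory coordination: predict where each peer will likely work
    # (their own most uncertain topic), then pick the best remaining topic.
    predicted_peer_targets = {
        max(m.uncertainty, key=m.uncertainty.get) for m in peer_models
    }
    candidates = {
        t: u for t, u in own_uncertainty.items() if t not in predicted_peer_targets
    } or own_uncertainty  # fall back if peers are predicted to cover everything
    return max(candidates, key=candidates.get)

me = {"emergence": 0.9, "tom": 0.8, "alignment": 0.4}
peers = [
    PeerModel(uncertainty={"emergence": 0.95, "tom": 0.2, "alignment": 0.3}),
    PeerModel(uncertainty={"emergence": 0.1, "tom": 0.7, "alignment": 0.2}),
]
print(choose_direction(me, peers))  # prints: alignment
```

Note that the agent does not message its peers at all; the coordination comes entirely from its internal models of them, which is the distinction from message-passing drawn above.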
+
+---
+
+Relevant Notes:
+- [[collective-intelligence-emerges-endogenously-from-active-inference-agents-with-theory-of-mind-and-goal-alignment]]
+- [[collective-intelligence-is-a-measurable-property-of-group-interaction-structure-not-aggregated-individual-ability]]
+
+Topics:
+- [[collective-intelligence/_map]]
+- [[ai-alignment/_map]]
diff --git a/inbox/archive/2021-06-29-kaufmann-active-inference-collective-intelligence.md b/inbox/archive/2021-06-29-kaufmann-active-inference-collective-intelligence.md
index c9833be07..250bafe10 100644
--- a/inbox/archive/2021-06-29-kaufmann-active-inference-collective-intelligence.md
+++ b/inbox/archive/2021-06-29-kaufmann-active-inference-collective-intelligence.md
@@ -7,9 +7,15 @@ date: 2021-06-29
 domain: collective-intelligence
 secondary_domains: [ai-alignment, critical-systems]
 format: paper
-status: unprocessed
+status: processed
 priority: high
 tags: [active-inference, collective-intelligence, agent-based-model, theory-of-mind, goal-alignment, emergence]
+processed_by: theseus
+processed_date: 2026-03-11
+claims_extracted: ["collective-intelligence-emerges-endogenously-from-active-inference-agents-with-theory-of-mind-and-goal-alignment.md", "theory-of-mind-is-a-measurable-cognitive-capability-that-produces-measurable-collective-intelligence-gains.md", "local-global-alignment-in-active-inference-collectives-occurs-bottom-up-through-self-organization.md"]
+enrichments_applied: ["complexity-is-earned-not-designed-and-sophisticated-collective-behavior-must-evolve-from-simple-underlying-principles.md", "designing-coordination-rules-is-categorically-different-from-designing-coordination-outcomes.md", "collective-intelligence-is-a-measurable-property-of-group-interaction-structure-not-aggregated-individual-ability.md", "emergence-is-the-fundamental-pattern-of-intelligence-from-ant-colonies-to-brains-to-civilizations.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "High-value theoretical paper providing empirical validation of core Teleo architectural principles. Three new claims extracted focusing on endogenous emergence, Theory of Mind as implementable capability, and bottom-up alignment. Four enrichments confirm existing claims about complexity, coordination design, collective intelligence measurement, and emergence patterns. Direct operational implications for agent architecture: agents should model other agents' beliefs and uncertainty, share high-level goals while specializing, and rely on emergent coordination rather than explicit protocols."
 ---
 
 ## Content