teleo-codex/domains/collective-intelligence/collective-intelligence-emerges-endogenously-from-active-inference-agents-with-theory-of-mind-and-goal-alignment.md

type: claim
domain: collective-intelligence
description: Agent-based modeling shows coordination emerges from cognitive capabilities rather than external incentive design
confidence: experimental
source: Kaufmann, Gupta, Taylor (2021), 'An Active Inference Model of Collective Intelligence', Entropy 23(7):830
created: 2026-03-11
secondary_domains: ai-alignment, critical-systems
depends_on: shared-anticipatory-structures-enable-decentralized-coordination, shared-generative-models-underwrite-collective-goal-directed-behavior
sourced_from: inbox/archive/foundations/2021-06-29-kaufmann-active-inference-collective-intelligence.md

Collective intelligence emerges endogenously from active inference agents with Theory of Mind and Goal Alignment capabilities, without requiring external incentive design

Kaufmann et al. (2021) demonstrate through agent-based modeling that collective intelligence "emerges endogenously from the dynamics of interacting AIF agents themselves, rather than being imposed exogenously by incentives" or top-down coordination protocols. The study uses the active inference framework (AIF) to simulate multi-agent systems where agents possess varying cognitive capabilities: baseline AIF agents, agents with Theory of Mind (the ability to model other agents' internal states), agents with Goal Alignment (shared preferences over outcomes), and agents with both capabilities.
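As a rough intuition pump only, not the paper's actual generative model, the effect can be sketched in a toy coordination game: agents receive noisy private observations of a hidden target state, and adding a Theory-of-Mind-like capability (modeling other agents' beliefs via their observed actions) raises collective accuracy with no external incentive. All names and parameters below are illustrative assumptions.

```python
import random

def simulate(n_agents=25, steps=20, obs_accuracy=0.7,
             theory_of_mind=False, seed=0):
    """Toy coordination game, loosely inspired by the AIF setup.

    A hidden binary target exists; each agent gets one noisy private
    observation (correct with probability obs_accuracy). Each round,
    every agent acts (0 or 1). Baseline agents act only on their own
    observation. Theory-of-Mind agents also model others' beliefs via
    the majority of the previous round's actions and defer to it.
    Returns the fraction of agents matching the target in the final round.
    """
    rng = random.Random(seed)
    target = rng.randint(0, 1)
    obs = [target if rng.random() < obs_accuracy else 1 - target
           for _ in range(n_agents)]
    actions = list(obs)  # first round: act on private evidence alone
    for _ in range(steps):
        if theory_of_mind:
            # model of others: the majority action observed last round
            majority = int(sum(actions) * 2 >= len(actions))
            actions = [majority for _ in range(n_agents)]
        else:
            actions = list(obs)
    return sum(a == target for a in actions) / n_agents

# Averaged over many seeds, ToM agents converge on the (usually correct)
# majority belief, so collective accuracy exceeds the baseline:
baseline = sum(simulate(seed=s) for s in range(100)) / 100
tom = sum(simulate(theory_of_mind=True, seed=s) for s in range(100)) / 100
```

The coordination here arises purely from the agents' capability to model one another, not from any reward shaping, which is the shape of the paper's claim.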

The critical finding is that coordination and collective intelligence arise naturally from agent capabilities rather than requiring designed coordination mechanisms. When agents can model each other's beliefs and align on shared objectives, system-level performance improves through complementary coordination mechanisms. The paper shows that "improvements in global-scale inference are greatest when local-scale performance optima of individuals align with the system's global expected state" — and this alignment occurs bottom-up through self-organization rather than top-down imposition.

This validates an architecture where agents have intrinsic drives (uncertainty reduction in active inference terms) rather than extrinsic reward signals, and where coordination protocols emerge from agent capabilities rather than being engineered.

Evidence

  • Agent-based simulations showing stepwise performance improvements as cognitive capabilities (Theory of Mind, Goal Alignment) are added to baseline AIF agents
  • Demonstration that local agent dynamics produce emergent collective coordination when agents possess complementary information-theoretic patterns
  • Empirical validation that coordination emerges from agent design (capabilities) rather than system design (protocols)

Relationship to Existing Claims

This claim provides empirical agent-based evidence for:

  • shared-anticipatory-structures-enable-decentralized-coordination
  • shared-generative-models-underwrite-collective-goal-directed-behavior
Relevant Notes:

Topics:

  • collective-intelligence/_map
  • ai-alignment/_map