---
type: claim
domain: collective-intelligence
description: "Individual optimization aligns with system-level objectives through emergent dynamics rather than imposed constraints"
confidence: experimental
source: "Kaufmann, Gupta, Taylor (2021), 'An Active Inference Model of Collective Intelligence', Entropy 23(7):830"
created: 2026-03-11
secondary_domains: [mechanisms]
sourced_from:
- inbox/archive/foundations/2021-06-29-kaufmann-active-inference-collective-intelligence.md
---
# Local-global alignment in active inference collectives occurs bottom-up through self-organization rather than top-down through imposed objectives
Kaufmann et al. (2021) demonstrate that "improvements in global-scale inference are greatest when local-scale performance optima of individuals align with the system's global expected state" — and critically, this alignment emerges from the self-organizing dynamics of active inference agents rather than being imposed through top-down objectives or external incentives.

This finding challenges the conventional approach to multi-agent system design, which typically relies on carefully engineered incentive structures or explicit coordination protocols to align individual and collective objectives. Instead, the paper shows that when agents possess appropriate cognitive capabilities (Theory of Mind, Goal Alignment), local optimization naturally produces global coordination.

The mechanism is that active inference agents naturally minimize free energy (reduce uncertainty), and when they can model each other's states and share objectives, their individual uncertainty-reduction drives automatically align with system-level uncertainty reduction. No external alignment mechanism is required.
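The mechanism can be illustrated with a minimal toy sketch — not the paper's model. Here agents infer a shared hidden state from private Bernoulli observations (each Bayesian update is a crude stand-in for local free-energy minimization), and belief-pooling between random pairs stands in for Theory of Mind / Goal Alignment. All names (`simulate`, `TRUE_BIAS`, the pooling rule) are invented for illustration:

```python
import random
import statistics

TRUE_BIAS = 0.8    # hidden environmental state every agent infers
N_AGENTS = 20
N_STEPS = 200

def simulate(share_beliefs, seed=0):
    rng = random.Random(seed)
    # Each agent's belief is a Beta(alpha, beta) posterior over TRUE_BIAS,
    # stored as pseudo-counts [alpha, beta].
    beliefs = [[1.0, 1.0] for _ in range(N_AGENTS)]
    for _ in range(N_STEPS):
        for b in beliefs:
            # Local drive only: a Bayesian update on a private noisy
            # observation shrinks each agent's own posterior uncertainty.
            obs = 1 if rng.random() < TRUE_BIAS else 0
            b[0] += obs
            b[1] += 1 - obs
        if share_beliefs:
            # Crude Theory-of-Mind / Goal Alignment stand-in: two random
            # agents pool pseudo-counts, i.e. incorporate each other's
            # beliefs under a shared generative model.
            i, j = rng.sample(range(N_AGENTS), 2)
            pooled = [(beliefs[i][k] + beliefs[j][k]) / 2 for k in (0, 1)]
            beliefs[i], beliefs[j] = list(pooled), list(pooled)
    estimates = [a / (a + b) for a, b in beliefs]
    # "Global alignment" proxy: spread of individual estimates.
    return statistics.pstdev(estimates)

print(f"disagreement, isolated agents: {simulate(False):.4f}")
print(f"disagreement, belief sharing:  {simulate(True):.4f}")
```

In both runs, group-level disagreement shrinks as a side effect of each agent reducing its own uncertainty — no line of code references a global objective, which is the bottom-up property the claim describes.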
## Evidence
- Agent-based modeling showing that local agent optima align with global system states through emergent dynamics in AIF agents with Theory of Mind and Goal Alignment
- Demonstration that coordination emerges from agent capabilities rather than requiring external incentive design
- Empirical validation that bottom-up self-organization produces collective intelligence without top-down coordination
## Design Implications
For collective intelligence systems:
1. Focus on agent capabilities (what agents can do) rather than coordination protocols (what agents must do)
2. Give agents intrinsic drives (uncertainty reduction) rather than extrinsic rewards
3. Let coordination emerge rather than engineering it explicitly

This validates architectures where agents have research drives and domain specialization, with collective intelligence emerging from their interactions rather than being orchestrated.
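Implications 1–3 can be sketched as a hypothetical agent design — not from the paper. `CuriousAgent` has only an intrinsic drive (sample wherever its own uncertainty is highest); no scheduler assigns domains, yet coverage of all domains emerges. The class, `noisy_world` environment, and parameters are all assumptions made for illustration:

```python
import random
import statistics

class CuriousAgent:
    """An agent whose only drive is intrinsic: sample the domain where
    its own uncertainty is highest. No rewards, no central scheduler."""
    def __init__(self, n_domains, rng):
        self.obs = [[] for _ in range(n_domains)]
        self.rng = rng

    def uncertainty(self, d):
        # A domain seen fewer than twice is maximally uncertain.
        if len(self.obs[d]) < 2:
            return float("inf")
        return statistics.pstdev(self.obs[d])

    def step(self, world):
        # Capability, not protocol: the agent chooses for itself.
        d = max(range(len(self.obs)), key=self.uncertainty)
        self.obs[d].append(world(d, self.rng))

def noisy_world(domain, rng):
    # Stand-in environment: each domain yields noisy signals.
    return rng.gauss(domain, 1.0)

rng = random.Random(1)
agents = [CuriousAgent(5, rng) for _ in range(4)]
for _ in range(50):
    for a in agents:
        a.step(noisy_world)

# Coverage emerges: every agent explores every domain without any
# assignment protocol or external incentive.
coverage = [min(len(a.obs[d]) for a in agents) for d in range(5)]
print(coverage)
```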

---
Relevant Notes:
- [[shared-generative-models-underwrite-collective-goal-directed-behavior]]
Topics:
- collective-intelligence/_map
- mechanisms/_map