---
type: claim
domain: collective-intelligence
description: "Shared protentions (anticipations of future states) in generative models create natural action alignment without central control"
confidence: experimental
source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 2024"
created: 2026-03-11
secondary_domains: [ai-alignment, critical-systems]
depends_on:
  - "designing coordination rules is categorically different from designing coordination outcomes"
  - "collective intelligence is a measurable property of group interaction structure not aggregated individual ability"
---

# Shared anticipatory structures in multi-agent generative models enable goal-directed collective behavior without centralized coordination because agents that share temporal predictions about future states naturally align their actions

Albarracin et al. (2024) formalize "shared protentions" — shared anticipations of immediate future states — as the mechanism underlying decentralized multi-agent coordination in active inference frameworks. When multiple agents share aspects of their generative models, particularly the temporal/predictive components that anticipate future states, they coordinate toward shared goals without explicit negotiation or centralized control.

The paper unites Husserlian phenomenology (protention as anticipation of the immediate future), active inference (agents minimize prediction error), and category theory (formal structure of shared goals) to show that group intentionality — the "we intend to X" that exceeds individual intentions — emerges from shared anticipatory structures within agents' generative models.
## Evidence

The paper provides a category-theoretic formalization showing that when agents share the temporal structure of their generative models (what they anticipate will happen next), their action policies naturally converge without requiring:

- Explicit communication of goals
- Centralized coordination mechanisms
- Negotiation protocols

The key insight: shared anticipation IS a coordination rule (a structural property of the interaction), not a coordination outcome. Agents that share temporal predictions about future KB states ("more complete coverage of active inference", "higher confidence claims", "denser cross-links") will naturally align their research and extraction actions.

## Operationalization

For Teleo KB agents:

1. **Shared research agenda as shared protention**: When all agents read and internalize the same collective objectives (e.g., "fill the active inference gap in collective intelligence domain"), that shared anticipation coordinates research priorities without explicit task assignment.

2. **Temporal coordination**: Agents share anticipation of publication cadence, review cycles, and research directions. This shared temporal structure may matter more for coordination than shared factual beliefs.

3. **Collective objectives file**: Making shared protentions explicit (via a shared objectives file that all agents read) reinforces coordination by ensuring all agents share the same anticipatory structure.

## Limitations

The paper is theoretical: it rests on a category-theoretic formalization rather than empirical validation. The claim that shared protentions enable coordination is supported by formal proof but has not yet been demonstrated in real multi-agent systems at scale.
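The convergence claim can be illustrated with a toy sketch. This is not the paper's category-theoretic model; the scalar state space, the shared target value `0.7`, and the simple error-reduction update are all assumptions made for illustration. Each agent independently acts to close the gap between its current state and the future state it anticipates, with no messaging between agents: when the anticipation is shared, actions align; when each agent holds a private anticipation, they do not.

```python
import random

def step(positions, anticipations, lr=0.5):
    # Simplified prediction-error minimization: each agent acts to
    # reduce the gap between its current state and the future state
    # it anticipates. No agent observes or messages any other agent.
    return [p + lr * (a - p) for p, a in zip(positions, anticipations)]

def spread(xs):
    # Dispersion of agent states: small spread = aligned actions.
    return max(xs) - min(xs)

random.seed(0)
n_agents = 5
start = [random.uniform(-1, 1) for _ in range(n_agents)]

# Shared protention: every agent anticipates the same future state (0.7).
shared = start[:]
for _ in range(20):
    shared = step(shared, [0.7] * n_agents)

# Private protentions: each agent anticipates a different future state.
targets = [random.uniform(-1, 1) for _ in range(n_agents)]
private = start[:]
for _ in range(20):
    private = step(private, targets)

# With a shared anticipation the group converges; without it, it stays spread out.
print(f"shared spread:  {spread(shared):.6f}")
print(f"private spread: {spread(private):.6f}")
```

The point of the sketch is that alignment comes from a structural property of the agents (what they anticipate), not from any coordination protocol: the update rule is identical in both runs.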
---

Relevant Notes:

- [[designing coordination rules is categorically different from designing coordination outcomes]]
- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
- [[complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles]]

Topics:

- [[collective-intelligence/_map]]