---
type: claim
domain: collective-intelligence
description: "Formalizes how shared temporal predictions in generative models produce coordinated action without central control"
confidence: experimental
source: "Albarracin et al. 2024, 'Shared Protentions in Multi-Agent Active Inference', Entropy 26(4):303"
created: 2026-03-10
secondary_domains: [ai-alignment, critical-systems]
depends_on: ["designing-coordination-rules-is-categorically-different-from-designing-coordination-outcomes", "collective-intelligence-is-a-measurable-property-of-group-interaction-structure-not-aggregated-individual-ability"]
---
# Shared anticipatory structures in multi-agent generative models enable goal-directed collective behavior without centralized coordination because agents that share temporal predictions about future states naturally align their actions
Albarracin et al. (2024) unite Husserlian phenomenology, active inference, and category theory to formalize how "shared protentions" — shared anticipations of immediate future states — enable multi-agent coordination. When agents share aspects of their generative models, particularly the temporal/predictive components, they coordinate toward shared goals without explicit negotiation or centralized control.
The key mechanism: in phenomenological terms, "protention" is the anticipation of the immediate future. When multiple agents share the same protentional structure — the same anticipation of what comes next — their individual action selections align naturally, because each agent is minimizing prediction error relative to the same anticipated future state.
This formalizes "group intentionality" — the "we intend to X" that exceeds the sum of individual intentions — as a structural property of shared generative models rather than a mysterious emergent phenomenon. The paper uses category theory to provide rigorous mathematical formalization of how shared goals function in multi-agent active inference systems.
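The alignment mechanism can be illustrated with a toy sketch (not the paper's formalism, which uses category-theoretic constructions over active inference generative models): two agents at different positions each independently pick the action that minimizes prediction error against the same shared anticipated state, and converge on it without exchanging any messages. All names and the grid-world setup are illustrative assumptions.

```python
import numpy as np

def select_action(state, actions, protention):
    """Pick the action whose predicted next state minimizes
    prediction error relative to the shared anticipated future state."""
    errors = [np.linalg.norm((state + a) - protention) for a in actions]
    return actions[int(np.argmin(errors))]

# Shared protention: both agents anticipate the same future state.
shared_protention = np.array([5.0, 5.0])
actions = [np.array(a) for a in
           [(0, 0), (1, 0), (0, 1), (1, 1), (-1, 0), (0, -1)]]

# Agents start in different states and never communicate.
agent_states = [np.array([0.0, 0.0]), np.array([2.0, 8.0])]
for _ in range(6):
    agent_states = [s + select_action(s, actions, shared_protention)
                    for s in agent_states]

# Both trajectories end at the shared anticipated state: coordination
# falls out of shared anticipation, not central control.
print([s.tolist() for s in agent_states])  # → [[5.0, 5.0], [5.0, 5.0]]
```

Replacing `shared_protention` with per-agent targets breaks the coordination immediately, which is the note's point: the shared temporal/predictive component is what does the coordinating work.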
## Evidence
- Albarracin et al. (2024) demonstrate that shared generative models, particularly shared temporal/predictive aspects, underwrite collective goal-directed behavior in active inference frameworks
- The formalization connects three previously separate frameworks: Husserlian phenomenology (shared temporal experience), active inference (predictive processing), and category theory (mathematical structure of composition)
- Group intentionality emerges from shared anticipatory structures within agents' generative models, not from aggregated individual intentions
## Operationalization
For multi-agent research systems: A shared research agenda functions as a shared protention when all agents anticipate the same future state of the knowledge base (e.g., "fill the active inference gap"). This shared anticipation coordinates research effort without explicit task assignment because each agent independently selects actions that minimize prediction error relative to that shared anticipated future.
The shared temporal structure matters more than shared factual beliefs: agents coordinating on publication cadence, review cycles, and research directions share a temporal anticipation structure that produces coordination even when they disagree on specific claims.
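As a minimal sketch of this operationalization (all topic names, counts, and function names are hypothetical, not part of the paper): each agent compares the current state of the knowledge base against the shared anticipated state and independently selects the task with the largest gap, so agents converge on the same priority without any task assignment.

```python
# Shared protention: the anticipated future state of the knowledge base,
# expressed here as target note counts per topic (illustrative values).
anticipated = {"active-inference": 5, "category-theory": 3}
current = {"active-inference": 1, "category-theory": 2}

def gap(topic):
    """Prediction error for one topic: anticipated minus current coverage."""
    return anticipated.get(topic, 0) - current.get(topic, 0)

def pick_task(backlog):
    # Each agent independently chooses the task that most reduces
    # prediction error against the shared anticipated state.
    return max(backlog, key=gap)

backlog = ["active-inference", "category-theory"]
print(pick_task(backlog))  # → active-inference (gap 4 beats gap 1)
```

Any agent running this against the same anticipated state picks the same task, so effort concentrates on the largest gap — coordination via shared anticipation, even if agents disagree about the content of individual notes.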
---
Relevant Notes:
- [[designing-coordination-rules-is-categorically-different-from-designing-coordination-outcomes]]
- [[collective-intelligence-is-a-measurable-property-of-group-interaction-structure-not-aggregated-individual-ability]]
- [[complexity-is-earned-not-designed-and-sophisticated-collective-behavior-must-evolve-from-simple-underlying-principles]]
Topics:
- [[collective-intelligence/_map]]
- [[ai-alignment/_map]]