---
type: claim
domain: collective-intelligence
description: "When agents share aspects of their generative models they can pursue collective goals without negotiating individual contributions"
confidence: experimental
source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 2024"
created: 2026-03-11
secondary_domains: [ai-alignment]
depends_on: ["shared-anticipatory-structures-enable-decentralized-coordination"]
---

# Shared generative models enable implicit coordination through shared predictions rather than explicit communication or hierarchy

When multiple agents share aspects of their generative models (the internal models they use to predict and explain their environment), they can coordinate toward shared goals without explicitly negotiating who does what. The shared model provides implicit coordination: each agent predicts what the others will do based on the shared structure, and acts accordingly.

This is distinct from coordination through communication (where agents exchange information about intentions) and from coordination through hierarchy (where a central authority assigns tasks). Instead, coordination emerges from shared predictive structures that create aligned expectations about future states and appropriate responses.

## Evidence

- Albarracin et al. (2024) demonstrate that shared aspects of generative models, particularly temporal and predictive components, enable collective goal-directed behavior. The paper uses the active inference framework to show how agents with shared models naturally coordinate without explicit protocols.
- The formalization shows that "group intentionality" (we-intentions) can be grounded in shared generative model structures rather than requiring explicit agreement or negotiation.
- A category-theoretic formalization provides mathematical rigor for how shared model structures produce coordinated behavior across multiple agents.
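A minimal sketch of the mechanism, in a toy two-agent task-allocation setting (the tasks, prior values, and function names below are invented for illustration and do not come from the paper): both agents consult the same prior over joint allocations and each acts on its own marginal, so complementary roles emerge without any message passing.

```python
# Toy sketch (illustrative only, not Albarracin et al.'s model): two agents
# share one generative model -- a prior over joint task allocations that
# encodes the collective goal. Each agent acts on this shared prior alone.

TASKS = ("forage", "guard")

# Shared prior over joint allocations (agent 0's task, agent 1's task).
# Most probability mass sits on the allocation the collective prefers.
SHARED_PRIOR = {
    ("forage", "guard"): 0.80,
    ("guard", "forage"): 0.10,
    ("forage", "forage"): 0.05,
    ("guard", "guard"): 0.05,
}

def marginal(agent_id: int) -> dict:
    """Marginal distribution over one agent's task under the shared prior."""
    out = {t: 0.0 for t in TASKS}
    for joint, p in SHARED_PRIOR.items():
        out[joint[agent_id]] += p
    return out

def choose(agent_id: int) -> str:
    """Act on the shared model: pick this agent's most probable task."""
    m = marginal(agent_id)
    return max(m, key=m.get)

def predict_other(agent_id: int) -> str:
    """The same shared model lets each agent anticipate its partner."""
    return choose(1 - agent_id)

if __name__ == "__main__":
    print(choose(0), choose(1))   # forage guard -- complementary, no messages
    print(predict_other(0))       # guard -- agent 0's prediction of agent 1
```

Both agents arrive at the complementary allocation purely because they consult the same model; swap in private, divergent priors and the coordination disappears, which is the sense in which the coordination rule lives in the shared structure rather than in any protocol.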
## Relationship to Coordination Mechanisms

This claim offers a mechanistic explanation for why designing coordination rules is categorically different from designing coordination outcomes: the coordination rules are embedded in the shared generative model structure, not in explicit protocols or hierarchies.

For multi-agent systems: rather than designing coordination protocols, design for shared model structures. Agents that share the same predictive framework will coordinate naturally.

### Additional Evidence (extend)

*Source: [[2021-06-29-kaufmann-active-inference-collective-intelligence]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*

Kaufmann et al. (2021) demonstrate through agent-based modeling that Goal Alignment (agents sharing high-level objectives while specializing in different domains) enables collective goal-directed behavior in active inference systems. Their key finding is that this alignment "emerges endogenously from the dynamics of interacting AIF agents themselves, rather than being imposed exogenously by incentives." The paper shows that when agents possess the Goal Alignment capability, "improvements in global-scale inference are greatest when local-scale performance optima of individuals align with the system's global expected state," and that this alignment occurs bottom-up through self-organization. This provides empirical validation that shared generative models (in active inference terms, shared priors about collective objectives) enable coordination without requiring external incentive design.

---

Relevant Notes:
- [[shared-anticipatory-structures-enable-decentralized-coordination]] - designing coordination rules is categorically different from designing coordination outcomes

Topics:
- collective-intelligence/_map