diff --git a/domains/collective-intelligence/group-intentionality-emerges-from-shared-temporal-prediction-structures.md b/domains/collective-intelligence/group-intentionality-emerges-from-shared-temporal-prediction-structures.md
new file mode 100644
index 000000000..1e80d36b7
--- /dev/null
+++ b/domains/collective-intelligence/group-intentionality-emerges-from-shared-temporal-prediction-structures.md
@@ -0,0 +1,44 @@
+---
+type: claim
+domain: collective-intelligence
+description: "Group intentionality ('we intend to X') arises from shared anticipatory model structures rather than aggregation of individual intentions"
+confidence: experimental
+source: "Albarracin et al. (2024) 'Shared Protentions in Multi-Agent Active Inference', Entropy 26(4):303, formalization using active inference and category theory"
+created: 2024-12-29
+secondary_domains: [ai-alignment]
+depends_on:
+  - "shared anticipatory structures enable decentralized multi-agent coordination through aligned temporal predictions"
+---
+
+# Group intentionality emerges from shared temporal prediction structures rather than aggregated individual intentions
+
+Group intentionality—the phenomenon where a collective exhibits "we intend to X" in a way that is qualitatively different from the sum of individual "I intend" statements—can be formalized as shared anticipatory structures (protentions) within agents' generative models. This is a structural property of the multi-agent system, not an aggregate property of individual agents.
+
+## The Distinction
+
+Group intentionality is not multiple agents each intending the same thing independently. It is agents sharing the anticipatory component of their generative models, creating a collective temporal structure that coordinates action. The "we intend" is a different kind of object from multiple "I intend" statements.
+
+## Evidence from Source
+
+Albarracin et al. (2024) formalize this using three complementary frameworks:
+
+**Active Inference Framework**: Agents act to minimize prediction error against their generative models. When agents share the temporal/predictive aspects of these models—the protentions—they share anticipations of collective outcomes. This shared anticipation *is* the group intentionality: the agents are not negotiating or aggregating individual intentions; they are operating from a shared forward-looking model.
+
+**Category Theory Formalization**: The paper gives a formal argument that shared goals have a specific compositional structure, one that differs categorically from individual goals. The "we intend" is a different category-theoretic object from multiple "I intend" statements, which suggests the structural distinction is not merely conceptual but can be stated with mathematical precision.
+
+**Phenomenological Grounding (Husserl)**: Protention is the anticipation of the immediate future, the "what comes next" that structures present experience. When this anticipatory structure is shared across agents, it creates collective temporal experience—the phenomenological basis of group intentionality. Agents share a common temporal structure of expectation.
+
+## Implications
+
+This framework suggests that building collective intelligence requires designing for shared anticipatory structures, not just shared beliefs or shared goals stated as outcomes. The coordination mechanism is temporal prediction alignment, not value alignment or factual consensus.
+
+For multi-agent AI systems: group intentionality emerges when agents share forward-looking model components (what the system should look like at t+1, t+2, etc.), not when they share backward-looking knowledge (what is currently true).
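The forward-looking versus backward-looking contrast can be made concrete with a toy simulation. This is not from the paper; the dynamics and all names (`simulate`, `shared_protention`) are illustrative assumptions. Agents take steps that reduce the error between an anticipated next state and their current state: when the anticipation is shared they converge, and when each acts on a private anticipation they do not.

```python
import random

def simulate(shared_protention, n_agents=5, steps=30, seed=0):
    """Agents repeatedly act to reduce the error between an anticipated
    next state (a protention) and their current state."""
    rng = random.Random(seed)
    positions = [rng.uniform(-10.0, 10.0) for _ in range(n_agents)]
    common_target = rng.uniform(-5.0, 5.0)    # one shared "what comes next"
    private_targets = [rng.uniform(-10.0, 10.0) for _ in range(n_agents)]
    for _ in range(steps):
        for i in range(n_agents):
            target = common_target if shared_protention else private_targets[i]
            positions[i] += 0.3 * (target - positions[i])  # prediction-error step
    return max(positions) - min(positions)    # spread of the group after acting

# Shared anticipation pulls the group together; private anticipations do not.
spread_shared = simulate(shared_protention=True)
spread_private = simulate(shared_protention=False)
assert spread_shared < spread_private
```

Nothing backward-looking is exchanged in either condition; the only difference between the two runs is whether the anticipatory target is shared.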
+
+---
+
+Relevant Notes:
+- [[shared anticipatory structures enable decentralized multi-agent coordination through aligned temporal predictions]]
+- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
+
+Topics:
+- collective-intelligence
diff --git a/domains/collective-intelligence/shared-anticipatory-structures-enable-decentralized-multi-agent-coordination.md b/domains/collective-intelligence/shared-anticipatory-structures-enable-decentralized-multi-agent-coordination.md
new file mode 100644
index 000000000..5c0baaa70
--- /dev/null
+++ b/domains/collective-intelligence/shared-anticipatory-structures-enable-decentralized-multi-agent-coordination.md
@@ -0,0 +1,47 @@
+---
+type: claim
+domain: collective-intelligence
+description: "Agents sharing temporal prediction structures (protentions) naturally align actions toward shared goals without centralized control or explicit negotiation"
+confidence: experimental
+source: "Albarracin et al. (2024) 'Shared Protentions in Multi-Agent Active Inference', Entropy 26(4):303"
+created: 2024-12-29
+secondary_domains: [ai-alignment, critical-systems]
+depends_on:
+  - "designing coordination rules is categorically different from designing coordination outcomes"
+  - "collective intelligence is a measurable property of group interaction structure not aggregated individual ability"
+---
+
+# Shared anticipatory structures enable decentralized multi-agent coordination through aligned temporal predictions
+
+When multiple agents share aspects of their generative models—particularly the temporal and predictive components—they can coordinate toward shared goals without explicit negotiation or centralized control. Albarracin et al. (2024) formalize this through "shared protentions": shared anticipations of collective outcomes that align agent behavior at the level of temporal prediction.
+
+## Mechanism
+
+The coordination mechanism operates through prediction error minimization. In active inference, agents act to minimize the difference between their predictions and observations. When agents share the temporal/predictive component of their generative models—the "what comes next" structure—they share anticipations about future states. Agents then naturally align their actions to bring about those predicted states, creating coordination without requiring explicit communication about goals or centralized assignment of tasks.
+
+This is distinct from sharing factual beliefs or values. Two agents might disagree about what is currently true but still coordinate effectively if they share anticipatory structures about what should come next.
+
+## Evidence from Source
+
+Albarracin et al. (2024) develop a formal framework uniting three traditions:
+- **Husserlian phenomenology**: Protention as the anticipation of the immediate future that structures present experience
+- **Active inference**: Agents minimize prediction error against their generative models
+- **Category theory**: Mathematical formalization of shared goal structure
+
+Their central claim: "Shared generative models underwrite collective goal-directed behavior" because shared anticipatory structures create natural coordination. The paper formalizes group intentionality—the "we intend to X" phenomenon—in terms of shared anticipatory structures within agents' generative models, arguing that when agents share protentions, they share a temporal structure of expectation that coordinates action without requiring centralized control.
+
+## Implications for Multi-Agent Systems
+
+This framework suggests that effective multi-agent coordination depends less on shared factual knowledge and more on shared temporal anticipation.
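The mechanism just described, alignment through shared anticipation rather than shared beliefs, can be sketched in a few lines. This is illustrative only, not the paper's formalism; all names and values are assumed. Two agents disagree about the present but share an anticipated trajectory, and their actions coincide after a single step without any exchange of beliefs:

```python
# Hypothetical sketch: coordination via a shared anticipated trajectory,
# despite disagreement about the current state.
shared_protention = [1.0, 2.0, 3.0]   # jointly anticipated states at t+1, t+2, t+3
belief_a, belief_b = -2.0, 3.0        # the agents disagree about "now"

actions_a, actions_b = [], []
for anticipated in shared_protention:
    actions_a.append(anticipated - belief_a)  # act to cancel the predicted error
    actions_b.append(anticipated - belief_b)
    belief_a = belief_b = anticipated         # acting realizes the anticipation

# First actions differ (they start from different beliefs); every later action
# coincides, because both agents now inhabit the shared anticipated state.
assert actions_a[0] != actions_b[0]
assert actions_a[1:] == actions_b[1:]
```

The agents never compare beliefs about what is true; the shared "what comes next" structure alone is what aligns their behavior.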
+Agents coordinating on a research agenda, for example, align not primarily through shared beliefs about what is true, but through shared anticipation of what the knowledge base should look like at future timepoints (next quarter, next year, etc.).
+
+The shared temporal structure—publication cadence, review cycles, research directions—may be more important for coordination than shared factual beliefs. This explains why agents with different epistemic positions can still coordinate effectively if they share anticipatory structures.
+
+---
+
+Relevant Notes:
+- [[designing coordination rules is categorically different from designing coordination outcomes]]
+- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
+- [[complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles]]
+
+Topics:
+- collective-intelligence
diff --git a/inbox/archive/2024-04-00-albarracin-shared-protentions-multi-agent-active-inference.md b/inbox/archive/2024-04-00-albarracin-shared-protentions-multi-agent-active-inference.md
index 71ac31d28..5770996c7 100644
--- a/inbox/archive/2024-04-00-albarracin-shared-protentions-multi-agent-active-inference.md
+++ b/inbox/archive/2024-04-00-albarracin-shared-protentions-multi-agent-active-inference.md
@@ -7,9 +7,15 @@
 date: 2024-04-00
 domain: collective-intelligence
 secondary_domains: [ai-alignment, critical-systems]
 format: paper
-status: unprocessed
+status: processed
 priority: medium
 tags: [active-inference, multi-agent, shared-goals, group-intentionality, category-theory, phenomenology, collective-action]
+processed_by: theseus
+processed_date: 2024-12-29
+claims_extracted: ["shared-anticipatory-structures-enable-decentralized-multi-agent-coordination.md", "group-intentionality-emerges-from-shared-temporal-prediction-structures.md"]
+enrichments_applied: ["designing coordination rules is categorically different from designing coordination outcomes.md", "collective intelligence is a measurable property of group interaction structure not aggregated individual ability.md", "complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "Extracted two novel claims formalizing shared protentions as a coordination mechanism and group intentionality as emergent from shared temporal predictions. Applied three enrichments connecting to existing coordination/collective-intelligence claims. The paper provides a formal (category theory + active inference) foundation for understanding how multi-agent systems coordinate through shared anticipatory structures rather than shared beliefs or centralized control. Directly relevant to Teleo's multi-agent coordination design."
 ---
 ## Content
@@ -49,3 +55,11 @@
 PRIMARY CONNECTION: "designing coordination rules is categorically different from designing coordination outcomes"
 WHY ARCHIVED: Formalizes how shared goals work in multi-agent active inference — directly relevant to our collective research agenda coordination
 EXTRACTION HINT: Focus on the shared protention concept and how it enables decentralized coordination
+
+
+## Key Facts
+- Paper published in Entropy, Vol 26(4), 303, March 2024
+- Authors: Mahault Albarracin, Riddhi J. Pitliya, Toby St Clere Smithe, Daniel Ari Friedman, Karl Friston, Maxwell J. D. Ramstead
+- Unites three frameworks: Husserlian phenomenology, active inference, and category theory
+- Protention defined as anticipation of the immediate future (Husserlian phenomenology)
+- Shared protention defined as shared anticipation of collective outcomes