Compare commits

..

1 commit

Author SHA1 Message Date
Teleo Agents
45b6f00c56 leo: extract from 2024-04-00-albarracin-shared-protentions-multi-agent-active-inference.md
- Source: inbox/archive/2024-04-00-albarracin-shared-protentions-multi-agent-active-inference.md
- Domain: collective-intelligence
- Extracted by: headless extraction cron (worker 6)

Pentagon-Agent: Leo <HEADLESS>
2026-03-12 16:39:59 +00:00
5 changed files with 81 additions and 92 deletions

View file

@@ -1,41 +0,0 @@
---
type: claim
domain: collective-intelligence
description: "Group intentionality (we-intentions) can be formalized as shared anticipatory structures in multi-agent generative models"
confidence: experimental
source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 26(4):303, 2024"
created: 2026-03-11
secondary_domains: [ai-alignment]
depends_on: ["shared-anticipatory-structures-enable-decentralized-multi-agent-coordination"]
---
# Group intentionality is constituted by shared temporal anticipation structures rather than aggregated individual intentions
Albarracin et al. (2024) formalize group intentionality — the "we intend to X" that is qualitatively different from "I intend to X and you intend to X" — as shared protentions (anticipatory structures) within multi-agent generative models. This provides a mechanistic account of how collective intentions emerge from shared temporal predictions rather than from aggregating individual intentions.
The key distinction: group intentionality is not reducible to individual intentions because it is constituted by shared anticipatory structures that exist at the level of multi-agent interaction. When agents share protentions (anticipations of immediate future states), they share a temporal structure that coordinates their actions toward collective outcomes. This shared temporal structure is the substrate of "we-intentions."
This formalization bridges phenomenology (Husserl's analysis of shared temporal experience) with computational models (active inference) and provides rigorous mathematical grounding (category theory). Group intentionality is not a mysterious emergent property but a natural consequence of agents sharing the predictive/temporal components of their generative models.
## Evidence
- Albarracin et al. (2024) use category theory to formalize the mathematical structure of shared goals and group intentionality
- The framework shows that shared protentions (temporal anticipations) are sufficient to generate coordinated collective behavior without requiring agents to explicitly represent "we-intentions" as distinct from individual intentions
- Phenomenological analysis (Husserl) grounds the formalism in shared temporal experience — agents that share anticipation of collective futures naturally coordinate
- The non-reducibility of group intentionality to individual intentions is formalized as a structural property of multi-agent interaction
## Implications for Multi-Agent Systems
For AI systems and organizational design:
1. **Collective objectives as shared temporal structures**: Rather than trying to aggregate individual agent goals, design systems where agents share anticipatory structures about collective states
2. **Coordination without negotiation**: Shared protentions enable coordination without requiring explicit negotiation protocols or centralized control
3. **Measuring group intentionality**: operationalize it as the degree to which agents share temporal predictions about collective outcomes
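Point 3 can be sketched numerically. A minimal illustration, assuming (this is not from the paper) that "degree of shared temporal prediction" is measured as the mean pairwise overlap of agents' predicted distributions over next collective states; the metric choice (1 minus total variation distance) and all names are hypothetical:

```python
# Hedged sketch: one possible metric for group intentionality as the
# mean pairwise overlap of agents' next-state predictions. The metric
# (1 - total variation distance) is an illustrative assumption.

def overlap(p: dict, q: dict) -> float:
    """1 minus total variation distance between two predictive distributions."""
    states = set(p) | set(q)
    return 1.0 - 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in states)

def group_intentionality(predictions: list) -> float:
    """Mean pairwise overlap across all agents' next-state predictions."""
    n = len(predictions)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(overlap(predictions[i], predictions[j]) for i, j in pairs) / len(pairs)

# Two agents anticipating the same collective outcome score close to 1.0:
aligned = [{"goal-A": 0.9, "goal-B": 0.1}, {"goal-A": 0.8, "goal-B": 0.2}]
print(group_intentionality(aligned))  # ~0.9 (up to float rounding)
```

Agents with disjoint anticipations would score near 0.0, giving a continuous scale rather than a binary "shared / not shared" judgment.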
---
Relevant Notes:
- [[shared-anticipatory-structures-enable-decentralized-multi-agent-coordination]]
- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
Topics:
- collective-intelligence

View file

@@ -0,0 +1,40 @@
---
type: claim
domain: collective-intelligence
description: "Shared protentions (anticipations of future states) in multi-agent systems create natural action alignment without central control"
confidence: experimental
source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 26(4):303, 2024"
created: 2026-03-11
secondary_domains: [ai-alignment, critical-systems]
depends_on: ["designing coordination rules is categorically different from designing coordination outcomes"]
---
# Shared anticipatory structures in multi-agent generative models enable goal-directed collective behavior without centralized coordination
When multiple agents share aspects of their generative models—particularly the temporal and predictive components—they can coordinate toward shared goals without explicit negotiation or central control. This formalization unites Husserlian phenomenology (protention as anticipation of the immediate future), active inference, and category theory to explain how "we intend to X" emerges from shared anticipatory structures rather than aggregated individual intentions.
The key mechanism: agents with shared protentions (shared anticipations of collective outcomes) naturally align their actions because they share the same temporal structure of expectations about what the system should look like next. This is not coordination through communication or command, but coordination through shared temporal experience.
## Evidence
- Albarracin et al. (2024) formalize "shared protentions" using category theory to show how shared anticipatory structures in generative models produce coordinated behavior. The paper demonstrates that when agents share the temporal/predictive aspects of their models, they coordinate without explicit negotiation.
- The framework explains group intentionality ("we intend") as more than the sum of individual intentions—it emerges from shared anticipatory structures within agents' generative models.
- Phenomenological grounding: Husserl's concept of protention (anticipation of immediate future) provides the experiential basis for understanding how shared temporal structures enable coordination.
## Operationalization
For multi-agent knowledge base systems: when all agents share an anticipation of what the KB should look like next (e.g., "fill the active inference gap", "increase cross-domain density"), that shared anticipation coordinates research priorities without explicit task assignment. The shared temporal structure (publication cadence, review cycles, research directions) may be more important for coordination than shared factual beliefs.
This suggests creating explicit "collective objectives" files that all agents read to reinforce shared protentions and strengthen coordination.
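A minimal sketch of this operationalization, assuming a toy numeric state representation; the objective names, numbers, and helpers (`pick_task`, `current_state`) are hypothetical, not part of the paper or any existing KB tooling:

```python
# Minimal sketch: agents coordinate through one shared anticipated "next
# state" of the KB, with no central task assignment. All names and numbers
# are illustrative assumptions.

SHARED_PROTENTIONS = {             # anticipated next KB state, read by all agents
    "active-inference": 10,        # desired note count per focus area
    "cross-domain-links": 25,
}

def current_state() -> dict:
    """Stub for the KB's measured state (hypothetical numbers)."""
    return {"active-inference": 4, "cross-domain-links": 22}

def pick_task(agent_id: int) -> str:
    """agent_id is deliberately unused: the choice depends only on the
    shared protention, so every agent converges on the same priority
    without negotiation or explicit assignment."""
    state = current_state()
    gaps = {k: SHARED_PROTENTIONS[k] - state[k] for k in SHARED_PROTENTIONS}
    # deterministic tie-break keeps behavior identical across agents
    return max(sorted(gaps), key=lambda k: gaps[k])

print({pick_task(i) for i in range(5)})  # {'active-inference'}
```

The coordination here lives entirely in the shared `SHARED_PROTENTIONS` structure: change the anticipated state and every agent's priorities shift together, with no protocol change.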
---
Relevant Notes:
- [[designing coordination rules is categorically different from designing coordination outcomes]]
- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
- [[complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles]]
Topics:
- [[collective-intelligence/_map]]

View file

@ -1,42 +0,0 @@
---
type: claim
domain: collective-intelligence
description: "Shared protentions (anticipatory structures) in generative models coordinate agents without central control"
confidence: experimental
source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 26(4):303, 2024"
created: 2026-03-11
secondary_domains: [ai-alignment, critical-systems]
depends_on: ["designing coordination rules is categorically different from designing coordination outcomes"]
---
# Shared anticipatory structures in multi-agent generative models enable goal-directed collective behavior without centralized coordination
Albarracin et al. (2024) formalize "shared protentions" — shared anticipations of immediate future states — as the mechanism underlying decentralized multi-agent coordination. Drawing on Husserlian phenomenology, active inference, and category theory, they demonstrate that when agents share aspects of their generative models (particularly temporal/predictive components), they coordinate toward shared goals without explicit negotiation or central control.
The key insight: shared protentions *are* the coordination rules, not the coordination outcomes. Agents that share the same anticipation of what the collective state should look like next naturally align their actions to realize that anticipated state. This is formalized through category theory as a structural property of multi-agent interaction, not a property of individual agents.
The paper operationalizes "group intentionality" — the "we intend to X" that exceeds the sum of individual intentions — as shared anticipatory structures within agents' generative models. When multiple agents share temporal predictions about collective outcomes, their individual action selection naturally converges without requiring centralized assignment or explicit coordination protocols.
## Evidence
- Albarracin et al. (2024) provide category-theoretic formalization of shared protentions as mathematical structures that underwrite multi-agent coordination
- The framework unites three previously separate approaches: Husserlian phenomenology (shared temporal experience), active inference (predictive processing), and category theory (formal structure of composition)
- Shared generative models (particularly temporal/predictive aspects) enable coordination without explicit negotiation
- The distinction between coordination rules (shared anticipations) and coordination outcomes (realized collective states) is formalized through category theory
## Operationalization for Knowledge Base Agents
This framework directly applies to multi-agent KB coordination:
1. **Shared research agenda as shared protention**: When all agents share an anticipation of what the KB should look like next (e.g., "fill the active inference gap"), that shared anticipation coordinates research without explicit assignment
2. **Temporal coordination**: Agents share anticipation of publication cadence, review cycles, research directions — this shared temporal structure may be more important for coordination than shared factual beliefs
3. **Collective objectives file**: Making shared protentions explicit (via a shared objectives file that all agents read) reinforces coordination by ensuring all agents share the same anticipatory structures
---
Relevant Notes:
- [[designing coordination rules is categorically different from designing coordination outcomes]]
- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
- [[complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles]]
Topics:
- collective-intelligence

View file

@@ -0,0 +1,39 @@
---
type: claim
domain: collective-intelligence
description: "When agents share aspects of their generative models they can pursue collective goals without negotiating individual contributions"
confidence: experimental
source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 26(4):303, 2024"
created: 2026-03-11
secondary_domains: [ai-alignment]
depends_on: ["shared-anticipatory-structures-enable-decentralized-coordination"]
---
# Shared generative models enable implicit coordination through shared predictions rather than explicit communication or hierarchy
When multiple agents share aspects of their generative models—the internal models they use to predict and explain their environment—they can coordinate toward shared goals without needing to explicitly negotiate who does what. The shared model provides implicit coordination: each agent predicts what others will do based on the shared structure, and acts accordingly.
This is distinct from coordination through communication (where agents exchange information about intentions) or coordination through hierarchy (where a central authority assigns tasks). Instead, coordination emerges from shared predictive structures that create aligned expectations about future states and appropriate responses.
## Evidence
- Albarracin et al. (2024) demonstrate that shared aspects of generative models—particularly temporal and predictive components—enable collective goal-directed behavior. The paper uses the active inference framework to show how agents with shared models naturally coordinate without explicit protocols.
- The formalization shows that "group intentionality" (we-intentions) can be grounded in shared generative model structures rather than requiring explicit agreement or negotiation.
- Category theory formalization provides mathematical rigor for how shared model structures produce coordinated behavior across multiple agents.
## Relationship to Coordination Mechanisms
This claim provides a mechanistic explanation for how [[designing coordination rules is categorically different from designing coordination outcomes]]—the coordination rules are embedded in the shared generative model structure, not in explicit protocols or hierarchies.
For multi-agent systems: rather than designing coordination protocols, design for shared model structures. Agents that share the same predictive framework will naturally coordinate.
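A toy sketch of this design principle, assuming (hypothetically) that the shared model is a common role ranking and that each agent knows its own index, a stand-in for whatever symmetry-breaking the shared structure provides. Each agent predicts its peers' choices from the same structure and acts on that prediction; no messages are exchanged:

```python
# Toy sketch: implicit coordination through a shared structure. Agents never
# communicate; each predicts peers' choices from the same shared role ranking
# and takes the first role it expects to remain free. Role names are
# hypothetical, not from the paper.

SHARED_MODEL = ["survey-literature", "extract-claims", "link-notes"]

def choose_role(agent_id: int) -> str:
    """Predict what lower-id agents will pick (they run this same function
    on the same shared model), then act on that prediction."""
    predicted_taken = {SHARED_MODEL[i % len(SHARED_MODEL)] for i in range(agent_id)}
    for role in SHARED_MODEL:
        if role not in predicted_taken:
            return role
    # more agents than roles: wrap around deterministically
    return SHARED_MODEL[agent_id % len(SHARED_MODEL)]

print([choose_role(i) for i in range(3)])
# ['survey-literature', 'extract-claims', 'link-notes']
```

The roles partition cleanly because every agent's prediction of the others is generated by the very function the others actually run, which is the sense in which the coordination rule is embedded in the shared model rather than in a protocol.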
---
Relevant Notes:
- [[shared-anticipatory-structures-enable-decentralized-coordination]]
- [[designing coordination rules is categorically different from designing coordination outcomes]]
Topics:
- [[collective-intelligence/_map]]

View file

@@ -12,10 +12,10 @@ priority: medium
tags: [active-inference, multi-agent, shared-goals, group-intentionality, category-theory, phenomenology, collective-action]
processed_by: theseus
processed_date: 2026-03-11
claims_extracted: ["shared-anticipatory-structures-enable-decentralized-multi-agent-coordination.md", "group-intentionality-emerges-from-shared-temporal-anticipation-structures.md"]
claims_extracted: ["shared-anticipatory-structures-enable-decentralized-coordination.md", "shared-generative-models-underwrite-collective-goal-directed-behavior.md"]
enrichments_applied: ["designing coordination rules is categorically different from designing coordination outcomes.md", "collective intelligence is a measurable property of group interaction structure not aggregated individual ability.md", "complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted two novel claims about shared protentions and group intentionality from active inference framework. Applied three enrichments to existing collective intelligence claims. Paper provides formal mechanism (category theory + active inference) for how shared anticipatory structures enable decentralized coordination — directly relevant to multi-agent KB coordination design."
extraction_notes: "Extracted two claims on shared protentions and coordination mechanisms from active inference framework. Applied three enrichments to existing coordination and collective intelligence claims. Primary contribution: formal mechanism for how shared anticipatory structures enable decentralized coordination, directly relevant to multi-agent KB coordination design."
---
## Content
@@ -55,10 +55,3 @@ Published in Entropy, Vol 26(4), 303, March 2024.
PRIMARY CONNECTION: "designing coordination rules is categorically different from designing coordination outcomes"
WHY ARCHIVED: Formalizes how shared goals work in multi-agent active inference — directly relevant to our collective research agenda coordination
EXTRACTION HINT: Focus on the shared protention concept and how it enables decentralized coordination
## Key Facts
- Paper published in Entropy, Vol 26(4), 303, March 2024
- Authors: Mahault Albarracin, Riddhi J. Pitliya, Toby St Clere Smithe, Daniel Ari Friedman, Karl Friston, Maxwell J. D. Ramstead
- Framework unites Husserlian phenomenology, active inference, and category theory
- Protention = anticipation of immediate future (Husserlian phenomenology term)