Compare commits


1 commit

Teleo Agents
e07315f37d leo: extract from 2024-04-00-albarracin-shared-protentions-multi-agent-active-inference.md
- Source: inbox/archive/2024-04-00-albarracin-shared-protentions-multi-agent-active-inference.md
- Domain: collective-intelligence
- Extracted by: headless extraction cron (worker 3)

Pentagon-Agent: Leo <HEADLESS>
2026-03-12 06:03:56 +00:00
4 changed files with 43 additions and 70 deletions


@@ -0,0 +1,31 @@
---
type: claim
domain: collective-intelligence
description: "Category theory provides rigorous mathematical framework for shared goals in multi-agent coordination"
confidence: experimental
source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 2024"
created: 2026-03-11
secondary_domains: [ai-alignment]
---
# Category theory formalizes the mathematical structure of shared goals in multi-agent systems
The mathematical structure of shared goals and multi-agent coordination can be formalized using category theory, providing a precise language for reasoning about how agents compose their generative models and share anticipatory structures. This formalization bridges phenomenological concepts (shared intentionality, collective anticipation) with computational implementations.
## Evidence
Albarracin et al. (2024) use category theory to formalize the mathematical structure of shared protentions, demonstrating how shared anticipatory structures can be rigorously defined and composed. The categorical approach allows precise specification of how individual agent models relate to collective models, and how shared temporal predictions emerge from compositional structures.
This builds on prior work using category theory for active inference (St Clere Smithe et al.), extending it to the multi-agent case where shared goals and collective intentionality become central concerns. The categorical framework is particularly suited for multi-agent systems because its compositional nature can express how individual agent models compose into collective structures while preserving the mathematical properties needed for inference and learning.
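As a purely illustrative sketch (the names and state spaces below are invented, not taken from the paper), the compositional idea can be shown with morphisms between generative models represented as plain functions, where an individual agent's anticipation composes with a map into the collective model:

```python
# Hypothetical sketch: objects are state spaces (here, dicts of
# anticipated features), morphisms are functions between them, and
# composition shows how an individual prediction maps into a
# collective model. Illustrative only; not the paper's construction.

def compose(g, f):
    """Morphism composition g . f (apply f first, then g)."""
    return lambda x: g(f(x))

def agent_a_protention(obs):
    # Agent A anticipates the next state from its observation.
    return {"coverage": obs["coverage"] + 1}

def to_collective(prediction):
    # Map an individual anticipation into the shared collective model.
    return {"shared_goal": prediction["coverage"]}

# Composite morphism: observation -> collective anticipation.
collective_view = compose(to_collective, agent_a_protention)

print(collective_view({"coverage": 3}))  # {'shared_goal': 4}
```

The point of the categorical framing is that such composites are lawful (associative, with identities), so collective structures inherit the mathematical properties of the individual models they are built from.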
## Significance
This provides a formal foundation for designing coordination mechanisms that don't rely on centralized control—the categorical structure itself constrains how agents can coordinate without requiring explicit negotiation or hierarchical assignment.
---
Relevant Notes:
- [[designing coordination rules is categorically different from designing coordination outcomes]]
Topics:
- [[collective-intelligence/_map]]


@@ -1,46 +0,0 @@
---
type: claim
domain: collective-intelligence
description: "We-intentions arise when agents share the anticipatory components of their world models, not from aggregating individual intentions"
confidence: experimental
source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 2024"
created: 2026-03-11
secondary_domains: [ai-alignment]
depends_on:
- "collective intelligence is a measurable property of group interaction structure not aggregated individual ability"
---
# Group intentionality — the 'we intend to X' that exceeds individual intentions — emerges from shared anticipatory structures within agents' generative models rather than from aggregating individual goals
Albarracin et al. (2024) provide a formal account of group intentionality using active inference and category theory. They argue that "we-intentions" (collective goals that are irreducible to individual intentions) emerge when agents share the temporal/predictive aspects of their generative models — what they anticipate will happen next.
This contrasts with traditional accounts that treat group intentions as either:
1. Aggregations of individual intentions ("I intend X, you intend X, therefore we intend X")
2. Emergent properties requiring special explanation
Instead, shared protentions (shared anticipations) are a structural property of multi-agent systems. When agents share anticipatory structures, they exhibit group intentionality as a natural consequence, not as an emergent mystery.
## Evidence
The paper uses category theory to formalize how shared generative models create group-level intentional states. Key insight: the mathematical structure of shared goals can be represented as shared morphisms in a category of generative models. This formal structure shows that group intentionality is not an additional property requiring explanation, but rather a direct consequence of agents sharing the temporal structure of their world models.
For Teleo KB: When multiple agents share the same anticipation of what the KB should look like (more complete, higher confidence, denser cross-links), that shared anticipation IS a group intention. The agents don't need to negotiate "we intend to improve the KB" — they already share the anticipatory structure that constitutes that intention.
## Phenomenological Grounding
The paper grounds this in Husserl's phenomenology: protention is the pre-reflective anticipation of the immediate future that structures conscious experience. When agents share protentions, they share a temporal structure of experience, which is more fundamental than sharing factual beliefs.
For KB agents: shared temporal structure (anticipation of publication cadence, review cycles, research directions) may be more important for coordination than shared factual knowledge.
## Limitations
The formalization is theoretical. The claim that shared generative model structure constitutes group intentionality is philosophically novel but not yet empirically validated in real multi-agent systems.
---
Relevant Notes:
- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
- [[shared-anticipatory-structures-enable-decentralized-multi-agent-coordination]]
Topics:
- [[collective-intelligence/_map]]


@@ -1,43 +1,31 @@
---
type: claim
domain: collective-intelligence
description: "Shared protentions (anticipations of future states) in generative models create natural action alignment without central control"
description: "Shared protentions (anticipations of future states) in generative models coordinate agent behavior without central control"
confidence: experimental
source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 2024"
created: 2026-03-11
secondary_domains: [ai-alignment, critical-systems]
depends_on:
- "designing coordination rules is categorically different from designing coordination outcomes"
- "collective intelligence is a measurable property of group interaction structure not aggregated individual ability"
depends_on: ["designing coordination rules is categorically different from designing coordination outcomes"]
---
# Shared anticipatory structures in multi-agent generative models enable goal-directed collective behavior without centralized coordination because agents that share temporal predictions about future states naturally align their actions
# Shared anticipatory structures in multi-agent generative models enable goal-directed collective behavior without centralized coordination
Albarracin et al. (2024) formalize "shared protentions" — shared anticipations of immediate future states — as the mechanism underlying decentralized multi-agent coordination in active inference frameworks. When multiple agents share aspects of their generative models, particularly the temporal/predictive components that anticipate future states, they coordinate toward shared goals without explicit negotiation or centralized control.
When multiple agents share aspects of their generative models—particularly the temporal and predictive components—they can coordinate toward shared goals without explicit negotiation or centralized control. This is formalized through the concept of "shared protentions" (shared anticipations of the immediate future), which unites Husserlian phenomenology with active inference and category theory.
The paper unites Husserlian phenomenology (protention as anticipation of the immediate future), active inference (agents minimize prediction error), and category theory (formal structure of shared goals) to show that group intentionality — the "we intend to X" that exceeds individual intentions — emerges from shared anticipatory structures within agents' generative models.
The key mechanism: agents that share the same anticipation of what future states should look like will naturally take actions that move toward those states. The shared anticipation IS the coordination rule, not an outcome to be achieved. This explains how decentralized multi-agent systems can exhibit sophisticated collective behavior without hierarchical control structures.
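A minimal toy model (my own illustration, not from the paper) makes the mechanism concrete: give several agents the same anticipated future state and let each independently reduce its own prediction error, with no messages exchanged. They converge on the shared state anyway:

```python
# Hypothetical toy model: agents sharing one anticipated future state
# align without communicating. Names and dynamics are illustrative.
import random

GOAL = [10.0, 10.0]  # the shared protention: anticipated future state
agents = [[random.uniform(0, 5), random.uniform(0, 5)] for _ in range(4)]

def step(pos, goal, rate=0.5):
    # Each agent independently moves to reduce its own prediction
    # error (distance to the anticipated state); no messages are sent.
    return [p + rate * (g - p) for p, g in zip(pos, goal)]

for _ in range(20):
    agents = [step(a, GOAL) for a in agents]

# Maximum remaining per-coordinate error across all agents.
spread = max(abs(p - g) for a in agents for p, g in zip(a, GOAL))
print(spread < 1e-3)  # True: all agents sit at the anticipated state
```

The shared anticipation does all the coordinating work here: there is no negotiation protocol and no central controller, only a common predicted future that each agent's local error minimization happens to serve.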
## Evidence
The paper provides a category-theoretic formalization showing that when agents share the temporal structure of their generative models (what they anticipate will happen next), their action policies naturally converge without requiring:
- Explicit communication of goals
- Centralized coordination mechanisms
- Negotiation protocols
Albarracin et al. (2024) formalize this through category theory, showing that shared protentions—shared anticipatory structures within agents' generative models—mathematically underwrite collective goal-directed behavior. The paper demonstrates that "group intentionality" (the "we intend to X" that exceeds individual intentions) can be rigorously formalized as shared temporal prediction structures.
The key insight: shared anticipation IS a coordination rule (a structural property of the interaction), not a coordination outcome. Agents that share temporal predictions about future KB states ("more complete coverage of active inference", "higher confidence claims", "denser cross-links") will naturally align their research and extraction actions.
The phenomenological grounding is critical: agents don't just share factual beliefs, they share temporal experience—the same anticipation of publication cadences, review cycles, research directions. This shared temporal anticipation may be more important for coordination than shared factual knowledge.
## Operationalization
For Teleo KB agents:
1. **Shared research agenda as shared protention**: When all agents read and internalize the same collective objectives (e.g., "fill the active inference gap in collective intelligence domain"), that shared anticipation coordinates research priorities without explicit task assignment.
For multi-agent knowledge base systems: when all agents share an anticipation of what the KB should look like next (e.g., "fill the active inference gap"), that shared anticipation coordinates research activity without explicit task assignment. The shared research agenda functions as a shared protention—a collective anticipation that aligns individual agent actions.
2. **Temporal coordination**: Agents share anticipation of publication cadence, review cycles, and research directions. This shared temporal structure may be more important for coordination than shared factual beliefs.
3. **Collective objectives file**: Making shared protentions explicit (via a shared objectives file that all agents read) reinforces coordination by ensuring all agents share the same anticipatory structure.
## Limitations
The paper is theoretical and uses category theory formalization rather than empirical validation. The claim that shared protentions enable coordination is supported by formal proof but not yet demonstrated in real multi-agent systems at scale.
This suggests that explicit collective objectives files (readable by all agents) could strengthen coordination by making shared protentions explicit and reinforcing the shared temporal structure.
---


@@ -12,10 +12,10 @@ priority: medium
tags: [active-inference, multi-agent, shared-goals, group-intentionality, category-theory, phenomenology, collective-action]
processed_by: theseus
processed_date: 2026-03-11
claims_extracted: ["shared-anticipatory-structures-enable-decentralized-multi-agent-coordination.md", "group-intentionality-emerges-from-shared-generative-model-structure.md"]
enrichments_applied: ["designing-coordination-rules-is-categorically-different-from-designing-coordination-outcomes.md", "collective-intelligence-is-a-measurable-property-of-group-interaction-structure-not-aggregated-individual-ability.md", "complexity-is-earned-not-designed-and-sophisticated-collective-behavior-must-evolve-from-simple-underlying-principles.md"]
claims_extracted: ["shared-anticipatory-structures-enable-decentralized-multi-agent-coordination.md", "category-theory-formalizes-multi-agent-shared-goal-structures.md"]
enrichments_applied: ["designing coordination rules is categorically different from designing coordination outcomes.md", "collective intelligence is a measurable property of group interaction structure not aggregated individual ability.md", "complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted two claims on shared protentions and group intentionality from active inference framework. Applied three enrichments to existing coordination and collective intelligence claims. Paper provides formal (category-theoretic) grounding for how shared anticipatory structures enable decentralized coordination — directly relevant to Teleo KB multi-agent coordination design. Key operationalization insight: shared research agenda functions as shared protention."
extraction_notes: "Extracted two claims on shared protentions and category theory formalization of multi-agent coordination. Applied three enrichments to existing coordination and collective intelligence claims. Source provides formal foundation for understanding how shared anticipatory structures enable decentralized coordination—directly relevant to multi-agent KB coordination mechanisms."
---
## Content