Compare commits

1 commit: 73e3303651 ... e07315f37d

4 changed files with 42 additions and 56 deletions
@@ -0,0 +1,31 @@
+---
+type: claim
+domain: collective-intelligence
+description: "Category theory provides a rigorous mathematical framework for shared goals in multi-agent coordination"
+confidence: experimental
+source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 2024"
+created: 2026-03-11
+secondary_domains: [ai-alignment]
+---
+
+# Category theory formalizes the mathematical structure of shared goals in multi-agent systems
+
+The mathematical structure of shared goals and multi-agent coordination can be formalized using category theory, providing a precise language for reasoning about how agents compose their generative models and share anticipatory structures. This formalization bridges phenomenological concepts (shared intentionality, collective anticipation) with computational implementations.
+
+## Evidence
+
+Albarracin et al. (2024) use category theory to formalize the mathematical structure of shared protentions, demonstrating how shared anticipatory structures can be rigorously defined and composed. The categorical approach allows precise specification of how individual agent models relate to collective models, and of how shared temporal predictions emerge from compositional structures.
+
+This builds on prior work using category theory for active inference (St Clere Smithe et al.), extending it to the multi-agent case, where shared goals and collective intentionality become central concerns. The categorical framework is particularly suited to multi-agent systems because its compositional nature can express how individual agent models compose into collective structures while preserving the mathematical properties needed for inference and learning.
+
+## Significance
+
+This provides a formal foundation for designing coordination mechanisms that don't rely on centralized control—the categorical structure itself constrains how agents can coordinate without requiring explicit negotiation or hierarchical assignment.
+
+---
+
+Relevant Notes:
+- [[designing coordination rules is categorically different from designing coordination outcomes]]
+
+Topics:
+- [[collective-intelligence/_map]]
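To make the added note's claim about composition concrete: a toy sketch (my construction, not the paper's categorical machinery) of how individual agent models can "compose into collective structures while preserving" a shared component. All names and data here are hypothetical, and a dict standing in for a generative model is a drastic simplification.

```python
# Toy illustration (NOT the paper's construction): treat each agent's
# generative model as a mapping from variables to predictive commitments.
# The "shared goal" is the component on which all models agree; the
# collective model glues the individual models along that component.

def shared_component(models):
    """Candidate shared-goal object: the entries every model agrees on."""
    common = set.intersection(*(set(m) for m in models))
    return {k: models[0][k] for k in common
            if all(m[k] == models[0][k] for m in models)}

def collective_model(models):
    """Compose individual models into one collective structure (union),
    leaving the shared component intact."""
    out = {}
    for m in models:
        out.update(m)
    return out

# Two hypothetical agents that agree on the goal but differ elsewhere:
a = {"goal": "fill-active-inference-gap", "style": "survey"}
b = {"goal": "fill-active-inference-gap", "style": "deep-dive"}

shared = shared_component([a, b])
collective = collective_model([a, b])
```

The point of the sketch is only the shape of the claim: the shared goal is a structural piece of every individual model, and composition preserves it rather than averaging it away.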
@@ -1,39 +0,0 @@
----
-type: claim
-domain: collective-intelligence
-description: "Group intentionality (we-intentions) formalizes as shared components of agents' generative models rather than aggregated individual intentions"
-confidence: experimental
-source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 2024"
-created: 2026-03-11
-secondary_domains: [ai-alignment]
----
-
-# Group intentionality — the "we intend to X" that exceeds the sum of individual intentions — formalizes as shared anticipatory structures within agents' generative models
-
-Albarracin et al. (2024) provide a formal account of group intentionality using active inference and category theory. They argue that "we-intentions" (collective goals that are not reducible to individual intentions) emerge when agents share components of their generative models, particularly the temporal/anticipatory aspects.
-
-This resolves a longstanding puzzle in social ontology: how can a group have intentions that are not just the sum of individual intentions? The answer: group intentions are **structural properties of shared generative models**, not aggregated individual mental states.
-
-## Evidence
-
-The paper:
-- Formalizes the Husserlian phenomenology of collective intentionality using the active inference framework
-- Uses category theory to model the mathematical structure of shared goals
-- Demonstrates that shared protentions (anticipatory structures) in generative models produce group-level intentionality
-
-Key insight: when agents share anticipations about future states, they form a collective intentional structure that is **ontologically distinct** from individual intentions. The group intention exists in the shared model components, not in any individual agent's mind.
-
-## Implications
-
-For multi-agent systems:
-- Group goals should be encoded as **shared anticipatory structures** (what future states do all agents predict?), not as aggregated individual goals
-- Collective action emerges from shared temporal predictions, not from negotiated individual commitments
-- Measuring group intentionality means measuring the overlap in agents' generative model components, particularly their temporal predictions
-
----
-
-Relevant Notes:
-- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
-
-Topics:
-- [[collective-intelligence/_map]]
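The deleted note's implication that group intentionality can be measured as overlap in agents' temporal predictions can be given a minimal operational form. The sketch below is my assumption about one plausible metric (Jensen-Shannon overlap between predictive distributions), not anything specified in the paper; the agent distributions are invented.

```python
import math

def anticipation_overlap(p, q):
    """1 minus the Jensen-Shannon divergence (base 2) between two agents'
    predictive distributions over future states. 1.0 means identical
    anticipations; lower values mean the agents expect different futures."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms.
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 1.0 - (kl(p, m) + kl(q, m)) / 2

# Hypothetical predicted distributions over three future KB states:
agent1 = [0.7, 0.2, 0.1]
agent2 = [0.6, 0.3, 0.1]   # similar anticipation -> high overlap
agent3 = [0.1, 0.2, 0.7]   # different anticipation -> low overlap
```

Under this reading, "the group intends X" to the degree that pairwise anticipation overlap is high on the states that realize X; the measure lives in the relation between models, not in any single agent.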
@@ -1,37 +1,31 @@
 ---
 type: claim
 domain: collective-intelligence
-description: "Shared protentions (anticipatory structures) in generative models coordinate agent behavior without centralized control"
+description: "Shared protentions (anticipations of future states) in generative models coordinate agent behavior without central control"
 confidence: experimental
 source: "Albarracin et al., 'Shared Protentions in Multi-Agent Active Inference', Entropy 2024"
 created: 2026-03-11
 secondary_domains: [ai-alignment, critical-systems]
-depends_on:
-  - "designing coordination rules is categorically different from designing coordination outcomes"
-  - "collective intelligence is a measurable property of group interaction structure not aggregated individual ability"
+depends_on: ["designing coordination rules is categorically different from designing coordination outcomes"]
 ---
 
 # Shared anticipatory structures in multi-agent generative models enable goal-directed collective behavior without centralized coordination
 
-Albarracin et al. (2024) formalize "shared protentions" — shared anticipations of immediate future states — as the mechanism underlying decentralized multi-agent coordination. Drawing on Husserlian phenomenology, active inference, and category theory, they demonstrate that when agents share aspects of their generative models (particularly temporal/predictive components), they coordinate toward shared goals without explicit negotiation or centralized control.
+When multiple agents share aspects of their generative models—particularly the temporal and predictive components—they can coordinate toward shared goals without explicit negotiation or centralized control. This is formalized through the concept of "shared protentions" (shared anticipations of the immediate future), which unites Husserlian phenomenology with active inference and category theory.
 
-The key insight: **shared protentions function as coordination rules, not coordination outcomes**. When multiple agents anticipate the same future state (e.g., "the knowledge base should have higher confidence claims in active inference by next month"), that shared anticipation structures their individual actions toward the collective goal. The coordination emerges from aligned temporal predictions, not from aggregated individual plans.
+The key mechanism: agents that share the same anticipation of what future states should look like will naturally take actions that move toward those states. The shared anticipation *is* the coordination rule, not an outcome to be achieved. This explains how decentralized multi-agent systems can exhibit sophisticated collective behavior without hierarchical control structures.
 
 ## Evidence
 
-The paper provides:
-- Category-theoretic formalization of shared goals as shared anticipatory structures in multi-agent generative models
-- Integration of phenomenological accounts of group intentionality ("we intend to X") with the active inference framework
-- Mathematical demonstration that shared temporal predictions in generative models produce coordinated behavior without centralized planning
+Albarracin et al. (2024) formalize this through category theory, showing that shared protentions—shared anticipatory structures within agents' generative models—mathematically underwrite collective goal-directed behavior. The paper demonstrates that "group intentionality" (the "we intend to X" that exceeds individual intentions) can be rigorously formalized as shared temporal prediction structures.
 
-The framework explains why agents with shared research agendas coordinate effectively: they share a temporal structure (publication cadences, review cycles, research directions) that aligns their actions without requiring explicit task assignment.
+The phenomenological grounding is critical: agents don't just share factual beliefs; they share temporal experience—the same anticipation of publication cadences, review cycles, and research directions. This shared temporal anticipation may be more important for coordination than shared factual knowledge.
 
 ## Operationalization
 
-For multi-agent knowledge base systems:
-1. **Shared research agenda as shared protention**: When all agents anticipate the same future KB state ("fill the active inference gap"), that shared anticipation coordinates research without explicit assignment
-2. **Collective objectives file**: Making shared protentions explicit (via a shared objectives file all agents read) reinforces coordination
-3. **Temporal alignment over factual alignment**: Agents may disagree on current claims but still coordinate if they share anticipations about what the KB should look like next
+For multi-agent knowledge base systems: when all agents share an anticipation of what the KB should look like next (e.g., "fill the active inference gap"), that shared anticipation coordinates research activity without explicit task assignment. The shared research agenda functions as a shared protention—a collective anticipation that aligns individual agent actions.
+
+This suggests that explicit collective objectives files (readable by all agents) could strengthen coordination by making shared protentions explicit and reinforcing the shared temporal structure.
 
 ---
 
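The coordination mechanism described in the modified note above ("the shared anticipation is the coordination rule") can be sketched as a tiny simulation. This is my illustrative assumption, not the paper's model: each agent holds the same anticipated future state and independently moves toward it, with no communication between agents. All names and values are hypothetical.

```python
# Minimal sketch: every agent holds the same anticipated future state
# (a shared protention) and independently nudges its own contribution
# toward it. Agents never observe each other; alignment emerges from
# the shared anticipation alone.

SHARED_PROTENTION = [1.0, 1.0, 1.0]   # the future state all agents predict

def local_step(state, rate=0.5):
    """One decentralized update toward the shared anticipated state."""
    return [s + rate * (t - s) for s, t in zip(state, SHARED_PROTENTION)]

agents = [[0.0, 0.2, 0.9], [0.8, 0.1, 0.3]]   # hypothetical starting states
for _ in range(20):
    agents = [local_step(a) for a in agents]
# Every agent is now close to the anticipated state, and therefore close
# to every other agent, with no message passing, negotiation, or central
# assignment having taken place.
```

Replacing `SHARED_PROTENTION` with per-agent targets breaks the convergence between agents, which is the note's point: the coordination lives in the shared anticipation, not in any controller.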
@@ -12,10 +12,10 @@ priority: medium
 tags: [active-inference, multi-agent, shared-goals, group-intentionality, category-theory, phenomenology, collective-action]
 processed_by: theseus
 processed_date: 2026-03-11
-claims_extracted: ["shared-anticipatory-structures-enable-decentralized-multi-agent-coordination.md", "group-intentionality-formalizes-as-shared-generative-model-components.md"]
+claims_extracted: ["shared-anticipatory-structures-enable-decentralized-multi-agent-coordination.md", "category-theory-formalizes-multi-agent-shared-goal-structures.md"]
 enrichments_applied: ["designing coordination rules is categorically different from designing coordination outcomes.md", "collective intelligence is a measurable property of group interaction structure not aggregated individual ability.md", "complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles.md"]
 extraction_model: "anthropic/claude-sonnet-4.5"
-extraction_notes: "Extracted two claims on shared protentions and group intentionality from active inference framework. Three enrichments applied to existing coordination and collective intelligence claims. Paper provides formal mathematical framework (category theory + active inference) for understanding decentralized multi-agent coordination. Key operationalization insight: shared research agendas function as shared protentions that coordinate agent behavior without centralized control."
+extraction_notes: "Extracted two claims on shared protentions and category theory formalization of multi-agent coordination. Applied three enrichments to existing coordination and collective intelligence claims. Source provides formal foundation for understanding how shared anticipatory structures enable decentralized coordination—directly relevant to multi-agent KB coordination mechanisms."
 ---
 
 ## Content