auto-fix: strip 8 broken wiki links
Pipeline auto-fixer: removed [[ ]] brackets from links that don't resolve to existing claims in the knowledge base.
parent 45b6f00c56
commit 699c1f8efc
3 changed files with 8 additions and 8 deletions
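The fixer's behavior can be sketched as a small regex pass. This is a minimal illustration, not the pipeline's actual code: the function name, the regex, and the claim-lookup set are all assumptions standing in for however the real pipeline resolves links against the knowledge base.

```python
import re

def strip_broken_wikilinks(text: str, existing_claims: set) -> str:
    """Remove [[ ]] brackets from wiki links whose target does not
    resolve to an existing claim; resolvable links are left intact."""
    def fix(match):
        target = match.group(1)
        # Keep the brackets only when the link target resolves.
        return match.group(0) if target in existing_claims else target
    return re.sub(r"\[\[([^\[\]]+)\]\]", fix, text)

# Example: one resolvable link, one broken link.
claims = {"collective-intelligence/_map"}
line = "- [[collective-intelligence/_map]] and [[missing claim]]"
print(strip_broken_wikilinks(line, claims))
# → - [[collective-intelligence/_map]] and missing claim
```

Applied line by line over the three changed files, a pass like this produces exactly the bracket-only deletions shown in the hunks below.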
@@ -32,9 +32,9 @@ This suggests creating explicit "collective objectives" files that all agents re
 ---
 
 Relevant Notes:
-- [[designing coordination rules is categorically different from designing coordination outcomes]]
+- designing coordination rules is categorically different from designing coordination outcomes
 - [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]
-- [[complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles]]
+- complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles
 
 Topics:
-- [[collective-intelligence/_map]]
+- collective-intelligence/_map
@@ -25,7 +25,7 @@ This is distinct from coordination through communication (where agents exchange
 ## Relationship to Coordination Mechanisms
 
-This claim provides a mechanistic explanation for how [[designing coordination rules is categorically different from designing coordination outcomes]]—the coordination rules are embedded in the shared generative model structure, not in explicit protocols or hierarchies.
+This claim provides a mechanistic explanation for how designing coordination rules is categorically different from designing coordination outcomes—the coordination rules are embedded in the shared generative model structure, not in explicit protocols or hierarchies.
 
 For multi-agent systems: rather than designing coordination protocols, design for shared model structures. Agents that share the same predictive framework will naturally coordinate.
 
 
@@ -33,7 +33,7 @@ For multi-agent systems: rather than designing coordination protocols, design fo
 Relevant Notes:
 - [[shared-anticipatory-structures-enable-decentralized-coordination]]
-- [[designing coordination rules is categorically different from designing coordination outcomes]]
+- designing coordination rules is categorically different from designing coordination outcomes
 
 Topics:
-- [[collective-intelligence/_map]]
+- collective-intelligence/_map
@@ -39,9 +39,9 @@ Published in Entropy, Vol 26(4), 303, March 2024.
 **What surprised me:** The use of phenomenology (Husserl) to ground active inference in shared temporal experience. Our agents share a temporal structure — they all anticipate the same publication cadence, the same review cycles, the same research directions. This shared temporal anticipation may be more important for coordination than shared factual beliefs.
 
 **KB connections:**
-- [[designing coordination rules is categorically different from designing coordination outcomes]] — shared protentions ARE coordination rules (shared anticipations), not outcomes
+- designing coordination rules is categorically different from designing coordination outcomes — shared protentions ARE coordination rules (shared anticipations), not outcomes
 - [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]] — shared protentions are a structural property of the interaction, not a property of individual agents
-- [[complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles]] — shared protentions are simple (shared anticipation) but produce complex coordination
+- complexity is earned not designed and sophisticated collective behavior must evolve from simple underlying principles — shared protentions are simple (shared anticipation) but produce complex coordination
 
 **Operationalization angle:**
 1. **Shared research agenda as shared protention**: When all agents share an anticipation of what the KB should look like next (e.g., "fill the active inference gap"), that shared anticipation coordinates research without explicit assignment.