- Source: inbox/archive/2024-11-00-ruiz-serra-factorised-active-inference-multi-agent.md - Domain: ai-alignment
| type | domain | secondary_domains | description | confidence | source | created |
|---|---|---|---|---|---|---|
| claim | ai-alignment | | Factorised generative models where agents maintain explicit beliefs about other agents' internal states enable strategic coordination without centralized control | experimental | Ruiz-Serra et al., 'Factorised Active Inference for Strategic Multi-Agent Interactions' (AAMAS 2025) | 2026-03-11 |
Factorised generative models enable decentralized multi-agent coordination through individual-level beliefs about other agents' internal states
Ruiz-Serra et al. introduce a factorisation approach where each active inference agent maintains "explicit, individual-level beliefs about the internal states of other agents" rather than relying on centralized coordination or shared world models. This factorisation enables agents to perform "strategic planning in a joint context" by modeling other agents' beliefs, preferences, and likely actions—essentially implementing Theory of Mind within the active inference framework.
The key architectural innovation is that agents don't need access to other agents' actual internal states or a global coordinator. Instead, each agent constructs and updates its own beliefs about what other agents believe and will do, using these beliefs for strategic action selection. This decentralized approach scales better than centralized coordination while enabling more sophisticated strategic interaction than agents without Theory of Mind capabilities.
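This belief structure can be sketched in a few lines. The sketch below is illustrative, not the paper's formulation: the class name, the categorical belief representation, and the per-agent Bayesian update are all simplifying assumptions chosen to show what "factorised" means here, namely that each agent stores one independent belief factor per other agent rather than a joint distribution.

```python
import numpy as np

class FactorisedAgent:
    """Hypothetical sketch: an agent holding one separate categorical
    belief over each other agent's hidden state (the factorisation)."""

    def __init__(self, n_states, other_ids):
        # One belief vector per other agent; uniform prior over n_states.
        self.beliefs = {j: np.ones(n_states) / n_states for j in other_ids}

    def update_belief(self, j, likelihood):
        # Bayesian update of the factor for agent j alone:
        # q(s_j) ∝ p(o | s_j) q(s_j). No joint over all agents is stored,
        # so evidence about j leaves beliefs about other agents untouched.
        posterior = likelihood * self.beliefs[j]
        self.beliefs[j] = posterior / posterior.sum()
        return self.beliefs[j]

agent = FactorisedAgent(n_states=2, other_ids=["B", "C"])
# Observe evidence about agent B; the belief about C stays uniform.
agent.update_belief("B", likelihood=np.array([0.9, 0.1]))
```

The design point the sketch makes concrete: storage and update cost grow linearly in the number of other agents, not exponentially as with a joint belief over all agents' states.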
The framework was tested on iterated normal-form games with 2-3 players, demonstrating that agents can navigate both cooperative and competitive strategic interactions using only their individual beliefs about others. This provides a computational implementation of Theory of Mind that could be applied to multi-agent AI systems requiring strategic coordination without centralized control.
Evidence
- Ruiz-Serra et al. (2024) formalize factorised generative models where each agent maintains individual-level beliefs about other agents' internal states
- The framework enables strategic planning in joint contexts without requiring centralized coordination or shared world models
- Validation through 2-3 player iterated normal-form games shows agents can handle cooperative and non-cooperative strategic interactions using factorised beliefs
- The approach operationalizes Theory of Mind within active inference by having agents model other agents' beliefs and preferences
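A minimal version of the strategic-planning step in the evidence above can be sketched as follows. This is a stand-in, not the paper's method: full active inference selects actions by minimising expected free energy, whereas this sketch uses plain expected payoff under the agent's belief about the opponent's mixed strategy; the payoff matrix and belief values are invented for illustration.

```python
import numpy as np

def best_response(payoff_matrix, belief_over_opponent):
    # Expected payoff of each of our actions under the belief q(a_opp),
    # i.e. acting on what we *believe* the opponent will do, not on
    # access to the opponent's actual internal state.
    expected = payoff_matrix @ belief_over_opponent
    return int(np.argmax(expected)), expected

# Row player's payoffs in a prisoner's dilemma (rows: C, D; cols: C, D).
payoffs = np.array([[3.0, 0.0],
                    [5.0, 1.0]])

# If we believe the opponent cooperates with probability 0.8:
action, expected = best_response(payoffs, np.array([0.8, 0.2]))
# action == 1 (defect): 5*0.8 + 1*0.2 = 4.2 beats 3*0.8 + 0*0.2 = 2.4
```

In the factorised setting, the belief vector fed into this step is itself derived from the agent's maintained beliefs about the opponent's internal state, so better state estimates translate directly into better strategic play.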
Architectural Implications
This factorisation approach provides a middle path between fully independent agents (no coordination) and centrally coordinated systems (single point of failure). For multi-agent AI architectures, it suggests that giving agents the capacity to model each other's internal states enables strategic coordination without requiring a central controller or shared knowledge base. However, as shown in the companion claim on individual-collective optimization tensions, this capability alone does not guarantee collectively optimal outcomes—interaction structure design remains critical.
Relevant Notes
- AI agent orchestration that routes data and tools between specialized models outperforms both single-model and human-coached approaches because the orchestrator contributes coordination not direction
- collective intelligence requires diversity as a structural precondition not a moral preference
- individual-free-energy-minimization-does-not-guarantee-collective-optimization-in-multi-agent-active-inference