---
type: claim
domain: ai-alignment
description: "Agents maintain explicit individual-level beliefs about other agents' internal states through model factorisation, enabling strategic planning without centralized coordination"
confidence: experimental
source: "Ruiz-Serra et al., 'Factorised Active Inference for Strategic Multi-Agent Interactions' (AAMAS 2025)"
created: 2026-03-11
secondary_domains: [collective-intelligence]
---

# Factorised generative models enable decentralized Theory of Mind in multi-agent active inference systems
Active inference agents can maintain explicit, individual-level beliefs about the internal states of other agents through factorisation of the generative model. This enables each agent to perform strategic planning in a joint context without requiring centralized coordination or a global model of the system.
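A minimal sketch of what this factorisation buys (illustrative Python, not the authors' implementation; the class and names are invented): each agent keeps an independent categorical belief over every other agent's hidden state, so the joint belief is a product of per-agent factors rather than one monolithic distribution over the whole system.

```python
# Hypothetical sketch of a factorised belief state: agent i keeps one
# categorical distribution per other agent, instead of a single joint
# distribution over every agent's hidden state at once.

def normalise(p):
    """Rescale a non-negative vector so it sums to 1."""
    total = sum(p)
    return [x / total for x in p]

class FactorisedBeliefs:
    def __init__(self, others, n_states):
        # One independent factor q(s_j) per other agent j,
        # initialised to a uniform distribution.
        self.q = {j: normalise([1.0] * n_states) for j in others}

    def joint(self, states):
        # Factorisation: q(s) = prod_j q(s_j), so the probability of a
        # profile of hidden states is the product of the factors.
        prob = 1.0
        for j, s in states.items():
            prob *= self.q[j][s]
        return prob

beliefs = FactorisedBeliefs(others=["B", "C"], n_states=2)
print(beliefs.joint({"B": 0, "C": 1}))  # 0.25 under uniform factors
```

Note the storage cost: the factorised representation grows linearly in the number of agents (one vector per agent) instead of exponentially (one entry per joint state profile), which is what makes a per-agent, decentralized representation tractable.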
The factorisation approach operationalizes Theory of Mind within the active inference framework: each agent models not just the observable behavior of others, but their internal states—beliefs, preferences, and decision-making processes. This allows agents to anticipate others' actions based on inferred mental states rather than just observed patterns.
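One way to read "anticipating actions from inferred mental states" is as a Bayes update: hold a belief over the other agent's hidden preference, score each observed action under a likelihood model p(action | preference), and update. A hedged sketch, with an invented two-preference likelihood table purely for illustration:

```python
# Hypothetical illustration of Theory of Mind as inference: update a
# belief over another agent's hidden preference from an observed action.

# Invented likelihood model p(action | preference): a "cooperative"
# agent mostly cooperates, a "selfish" one mostly defects.
LIKELIHOOD = {
    "cooperative": {"cooperate": 0.9, "defect": 0.1},
    "selfish":     {"cooperate": 0.2, "defect": 0.8},
}

def update_belief(prior, action):
    """Posterior over the other agent's preference after seeing `action`."""
    unnorm = {pref: prior[pref] * LIKELIHOOD[pref][action] for pref in prior}
    total = sum(unnorm.values())
    return {pref: p / total for pref, p in unnorm.items()}

belief = {"cooperative": 0.5, "selfish": 0.5}
belief = update_belief(belief, "cooperate")
# Seeing cooperation raises the probability of a cooperative preference,
# which in turn sharpens predictions about the agent's future actions.
```

This is the sense in which the inferred state, not the raw action history, drives anticipation: the posterior over preferences feeds forward into predictions about what the other agent will do next.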
## Evidence
Ruiz-Serra et al. (2025) demonstrate this through:
1. **Factorised generative models**: Each agent maintains a separate model component for each other agent's internal state
2. **Strategic planning**: Agents use these beliefs about others' internal states for planning in iterated normal-form games
3. **Decentralized representation**: The multi-agent system is represented in a decentralized way—no agent needs a global view
4. **Game-theoretic validation**: The framework successfully navigates cooperative and non-cooperative strategic interactions in 2- and 3-player games
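
The planning step in items 2 and 4 can be caricatured as expected-utility choice under the belief about the opponent. The paper's agents minimise expected free energy; this sketch keeps only the instrumental (payoff) term, and the payoff matrix is the standard Prisoner's Dilemma, chosen here purely for illustration:

```python
# Sketch under stated assumptions: pick the action maximising expected
# payoff given a belief q(their action) over the opponent's next move.
# The full active inference objective is expected free energy; only the
# payoff term is kept here for brevity.

# Row player's Prisoner's Dilemma payoffs: PAYOFF[mine][theirs].
PAYOFF = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

def best_response(belief_about_other):
    """Action maximising expected payoff under the belief."""
    def expected(mine):
        return sum(q * PAYOFF[mine][theirs]
                   for theirs, q in belief_about_other.items())
    return max(PAYOFF, key=expected)

# Even against a mostly-cooperative opponent, defection dominates under
# a pure payoff objective, which is one reason the richer active
# inference objective (epistemic value, outcome preferences) matters.
print(best_response({"cooperate": 0.9, "defect": 0.1}))  # defect
```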
## Implications
This architecture provides a computational implementation of Theory of Mind that:
- Scales to multi-agent systems without centralized coordination
- Enables strategic reasoning about others' likely actions based on inferred beliefs
- Maintains individual agent autonomy while supporting coordination
- Provides a formal framework for modeling how agents model each other
The approach bridges active inference (a normative theory of intelligent behavior) with game theory (a normative theory of strategic interaction).
---
Relevant Notes:
- [[AI alignment is a coordination problem not a technical problem]]
- [[intelligence is a property of networks not individuals]]
- [[coordination protocol design produces larger capability gains than model scaling because the same AI model performed 6x better with structured exploration than with human coaching on the same problem]]
Topics:
- [[domains/ai-alignment/_map]]
- [[foundations/collective-intelligence/_map]]