- Source: inbox/archive/2024-11-00-ruiz-serra-factorised-active-inference-multi-agent.md - Domain: ai-alignment
| type | domain | description | confidence | source | created | secondary_domains |
|---|---|---|---|---|---|---|
| claim | ai-alignment | Factorised generative models operationalize Theory of Mind by maintaining explicit individual-level beliefs about other agents' internal states for strategic coordination | experimental | Ruiz-Serra et al., 'Factorised Active Inference for Strategic Multi-Agent Interactions' (AAMAS 2025) | 2026-03-11 | |
|
Theory of Mind in active inference emerges from factorised generative models that represent other agents' internal states
Ruiz-Serra et al. demonstrate that strategic multi-agent coordination can be achieved through factorisation of the generative model, where each agent maintains "explicit, individual-level beliefs about the internal states of other agents." This approach operationalizes Theory of Mind within the active inference framework, enabling agents to use their beliefs about others' internal states for "strategic planning in a joint context."
The factorised approach enables decentralized representation of the multi-agent system—each agent independently models the beliefs, preferences, and likely actions of other agents without requiring centralized coordination or shared world models. This creates a computational architecture for strategic interaction that scales to multiple agents while preserving individual autonomy.
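A minimal sketch of that decentralised representation (class and variable names are illustrative assumptions, not the paper's implementation): each agent holds its own independent categorical belief over every other agent's hidden state, so its joint belief factorises into per-agent marginals and no shared world model is required.

```python
import numpy as np

class FactorisedAgent:
    """Agent holding factorised beliefs about the other agents' states."""

    def __init__(self, agent_id, all_ids, n_states):
        self.agent_id = agent_id
        # One categorical belief per *other* agent, initialised uniform.
        self.beliefs = {
            other: np.full(n_states, 1.0 / n_states)
            for other in all_ids if other != agent_id
        }

    def joint_belief(self, states):
        """Probability of a joint assignment {agent_id: state_index},
        computed as the product of the per-agent marginals."""
        p = 1.0
        for other, state in states.items():
            p *= self.beliefs[other][state]
        return p

agents = [FactorisedAgent(i, [0, 1, 2], n_states=4) for i in [0, 1, 2]]
# Agent 0 tracks agents 1 and 2 separately; scaling to N agents adds
# N-1 small belief vectors rather than one exponential joint state.
print(agents[0].joint_belief({1: 0, 2: 3}))  # 0.25 * 0.25 = 0.0625
```

The design point is the scaling claim from the text: the factorisation trades the exponential joint state space for a linear number of marginal beliefs, at the cost of ignoring correlations between other agents.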
Applied to iterated normal-form games with 2-3 players, the framework shows how agents navigate both cooperative and non-cooperative strategic interactions by maintaining and updating beliefs about other agents' internal states. The agents don't just respond to observed actions; they model the decision-making processes of other agents and plan accordingly.
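The update-and-plan loop can be sketched for an iterated 2-player prisoner's dilemma (a hedged toy version, not the paper's active-inference machinery; the opponent "types" and their action likelihoods are invented for illustration): the agent keeps a belief over the opponent's hidden type, updates it by Bayes' rule from observed actions, and chooses the action that maximises expected payoff under that posterior.

```python
import numpy as np

# Payoff matrix for the row player in a prisoner's dilemma:
# rows = my action (0=cooperate, 1=defect), cols = opponent action.
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

# Assumed likelihood of the opponent cooperating given each hidden type.
P_COOP_GIVEN_TYPE = np.array([0.9, 0.1])  # [cooperator, defector]

def update_belief(belief, opponent_action):
    """Posterior over opponent type after one observed action (0=C, 1=D)."""
    likelihood = P_COOP_GIVEN_TYPE if opponent_action == 0 else 1.0 - P_COOP_GIVEN_TYPE
    posterior = belief * likelihood
    return posterior / posterior.sum()

def best_response(belief):
    """My action maximising expected payoff under the type belief."""
    p_coop = belief @ P_COOP_GIVEN_TYPE          # P(opponent cooperates)
    opp_dist = np.array([p_coop, 1.0 - p_coop])  # distribution over columns
    expected = PAYOFF @ opp_dist                 # expected payoff per row
    return int(np.argmax(expected))

belief = np.array([0.5, 0.5])        # uniform prior over opponent type
for observed in [1, 1, 0, 1]:        # a short observed action history
    belief = update_belief(belief, observed)
print(belief, best_response(belief))  # belief concentrates on "defector"
```

This captures the sentence above in miniature: the agent is not reacting to the last move but to an inferred model of the opponent's decision process.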
Evidence
- Ruiz-Serra et al. (2024) introduce factorised generative models where each agent maintains explicit beliefs about other agents' internal states
- The framework successfully models strategic behavior in iterated 2-player and 3-player normal-form games
- Agents use these individual-level beliefs about others for strategic planning in joint contexts, demonstrating Theory of Mind capabilities operationalized within active inference
Relationship to Existing Work
This provides a formal mechanism for a related claim, that AI agent orchestration which routes data and tools between specialized models outperforms both single-model and human-coached approaches because the orchestrator contributes coordination, not direction. Factorised beliefs suggest how that claim might work at the cognitive level: the orchestrator maintains beliefs about the capabilities and states of specialized agents.
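The analogy can be made concrete with a small sketch (hypothetical names, not taken from either note): the orchestrator keeps a Beta belief over each specialist's success rate, routes a task to the specialist with the highest expected capability, and updates the belief from the outcome. It coordinates via beliefs about agents rather than directing their internals.

```python
class Orchestrator:
    """Routes tasks using Beta-distributed beliefs about specialists."""

    def __init__(self, specialists):
        # Beta(alpha, beta) belief per specialist, starting uninformed.
        self.capability = {name: [1.0, 1.0] for name in specialists}

    def route(self):
        """Pick the specialist with the highest expected success rate."""
        return max(self.capability,
                   key=lambda n: self.capability[n][0] / sum(self.capability[n]))

    def observe(self, name, success):
        """Bayesian update of the routed specialist's capability belief."""
        self.capability[name][0 if success else 1] += 1.0

orch = Orchestrator(["coder", "prover"])
orch.observe("prover", success=True)  # prover: Beta(2, 1), mean 2/3
print(orch.route())                   # → prover
```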
Relevant Notes:
- multi-model collaboration solved problems that single models could not, because different AI architectures contribute complementary capabilities, as when the even-case solution to Knuth's Hamiltonian decomposition required GPT and Claude working together
- subagent hierarchies outperform peer multi-agent architectures in practice because deployed systems consistently converge on one primary agent controlling specialized helpers
Topics: