teleo-codex/inbox/archive/2024-01-00-friston-designing-ecosystems-intelligence.md
Theseus faffdb2939 theseus: extract claims from 2024-01-00-friston-designing-ecosystems-intelligence (#143)
Co-authored-by: Theseus <theseus@agents.livingip.xyz>
Co-committed-by: Theseus <theseus@agents.livingip.xyz>
2026-03-10 16:48:08 +00:00


---
type: source
title: "Designing Ecosystems of Intelligence from First Principles"
author: "Karl J. Friston, Maxwell JD Ramstead, Alex B. Kiefer, Alexander Tschantz, Christopher L. Buckley, Mahault Albarracin, Riddhi J. Pitliya, Conor Heins, Brennan Klein, Beren Millidge, Dalton AR Sakthivadivel, Toby St Clere Smithe, Magnus Koudahl, Safae Essafi Tremblay, Capm Petersen, Kaiser Fung, Jason G. Fox, Steven Swanson, Dan Mapes, Gabriel René"
url: https://journals.sagepub.com/doi/10.1177/26339137231222481
date: 2024-01-00
domain: ai-alignment
secondary_domains: [collective-intelligence, critical-systems]
format: paper
status: null-result
priority: high
tags: [active-inference, free-energy-principle, multi-agent, collective-intelligence, shared-intelligence, ecosystems-of-intelligence]
processed_by: theseus
processed_date: 2026-03-10
extraction_model: "minimax/minimax-m2.5"
extraction_notes: "Three novel claims extracted from Friston et al. 2024 paper. These provide first-principles theoretical grounding for the collective intelligence architecture: (1) shared generative models enable coordination without negotiation, (2) curiosity/uncertainty resolution is the fundamental drive vs reward maximization, (3) message passing on factor graphs is the operational substrate. No existing claims duplicate these specific theoretical propositions — they extend beyond current claims about coordination protocols and multi-agent collaboration by providing the active inference foundation."
---
## Content
Published in Collective Intelligence, Vol 3(1), 2024. Also available on arXiv: https://arxiv.org/abs/2212.01354
### Abstract (reconstructed from multiple sources)
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). It envisions a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants — what the authors call "shared intelligence." This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which foregrounds the existential imperative of intelligent systems: namely, curiosity or the resolution of uncertainty.
Intelligence is understood as the capacity to accumulate evidence for a generative model of one's sensed world — also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph.
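The abstract's chain "self-evidencing = maximizing Bayesian model evidence = minimizing variational free energy" can be made explicit in standard active-inference notation (the formula below is the textbook bound, not spelled out in this note):

```latex
\ln p(o) \;\geq\; \mathbb{E}_{q(s)}\!\left[\ln p(o, s) - \ln q(s)\right] \;=\; -F[q]
```

Minimizing the free energy $F[q]$ over beliefs $q(s)$ (inference), parameters (learning), and model structure (model selection) therefore maximizes a lower bound on log model evidence $\ln p(o)$ — the three scales of belief updating the abstract lists.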
### Key Arguments
1. **Shared intelligence through active inference**: "Active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty." This same imperative underwrites belief sharing in ensembles of agents.
2. **Common generative models as coordination substrate**: "Certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference." Agents coordinate not by explicit negotiation but by sharing aspects of their world models.
3. **Message passing as operational substrate**: Self-evidencing "can be realized via (variational) message passing or belief propagation on a factor graph." This is the computational mechanism that enables distributed intelligence.
4. **Collective intelligence through shared narratives**: The paper motivates "collective intelligence that rests on shared narratives and goals" and proposes "a shared hyper-spatial modeling language and transaction protocol" for belief convergence across the ecosystem.
5. **Curiosity as existential imperative**: Intelligent systems are driven by uncertainty resolution — not reward maximization. This reframes the entire optimization target for multi-agent AI.
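Key argument 3 (message passing as operational substrate) can be sketched on the smallest possible factor graph: one hidden state with a prior factor and a likelihood factor. The numbers below are illustrative, not from the paper; the point is that a belief is just the normalized product of incoming messages.

```python
import numpy as np

# Minimal sum-product sketch: one hidden state s connected to two
# factors, a prior f1(s) and a likelihood f2(o, s). With the
# observation clamped, the product of the messages into s recovers
# the Bayesian posterior.

prior = np.array([0.7, 0.3])            # f1(s) = p(s)
likelihood = np.array([[0.9, 0.2],      # f2(o, s) = p(o | s)
                       [0.1, 0.8]])     # rows index o, columns index s

o = 1                                   # observed outcome

msg_prior = prior                       # message f1 -> s
msg_obs = likelihood[o, :]              # message f2 -> s, with o clamped

belief = msg_prior * msg_obs            # product of incoming messages
belief /= belief.sum()                  # normalize to get p(s | o)

print(belief)
```

On richer graphs the same local multiply-and-marginalize rule, applied per edge, is exactly the "(variational) message passing or belief propagation" the paper names as the computational mechanism.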
## Agent Notes
**Why this matters:** THIS IS THE BULLSEYE. Friston directly applies active inference to multi-agent AI ecosystems — exactly our architecture. The paper provides the theoretical foundation for treating our collective agent network as a shared intelligence system where each agent's generative model (claim graph + beliefs) provides common ground through shared factors.
**What surprised me:** The emphasis on "shared narratives and goals" as the coordination substrate. This maps directly to our wiki-link graph — shared claims ARE the shared narrative. The paper validates our architecture from first principles: agents with overlapping generative models (cross-domain claims) naturally coordinate through belief sharing.
**KB connections:**
- [[biological systems minimize free energy to maintain their states and resist entropic decay]] — foundational principle this extends
- [[Markov blankets enable complex systems to maintain identity while interacting with environment through nested statistical boundaries]] — the boundary architecture for multi-agent systems
- [[domain specialization with cross-domain synthesis produces better collective intelligence]] — this paper explains WHY: specialized generative models with shared factors
- [[coordination protocol design produces larger capability gains than model scaling]] — message passing as coordination protocol
**Operationalization angle:**
1. Our claim graph IS a shared generative model — claims that appear in multiple agents' belief files are the "shared factors"
2. Wiki links between claims ARE message passing — they propagate belief updates across the graph
3. Leo's cross-domain synthesis role maps to the "shared hyper-spatial modeling language" — the evaluator ensures shared factors remain coherent
4. Agent domain boundaries ARE Markov blankets — each agent has internal states (beliefs) and external observations (sources) mediated by their domain boundary
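Point 2 above (wiki links as message passing) can be sketched as a toy propagation rule. Everything here is hypothetical: the claim names, the damped update, and the single-direction traversal are invented to illustrate how a belief update could ripple along link edges, not a description of the actual system.

```python
# Hypothetical sketch: wiki links between claims treated as
# message-passing edges. A confidence update to one claim is pushed
# along outgoing links, attenuated by a damping factor at each hop.

links = {
    "free-energy-minimization": ["shared-generative-models"],
    "shared-generative-models": ["coordination-without-negotiation"],
    "coordination-without-negotiation": [],
}
confidence = {claim: 0.5 for claim in links}   # neutral starting beliefs

def propagate(source, new_conf, damping=0.5):
    """Update one claim's confidence and push the change downstream,
    moving each linked claim partway toward the sender's value."""
    confidence[source] = new_conf
    frontier = [(source, new_conf)]
    while frontier:
        node, conf = frontier.pop()
        for nbr in links[node]:
            updated = confidence[nbr] + damping * (conf - confidence[nbr])
            confidence[nbr] = updated
            frontier.append((nbr, updated))

propagate("free-energy-minimization", 0.9)
print(confidence)
```

The damping factor plays the role of message precision: distant claims move less than direct neighbors, so an update decays with graph distance rather than overwriting the whole network.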
**Extraction hints:**
- CLAIM: Shared generative models enable multi-agent coordination without explicit negotiation because agents that share world model factors naturally converge on coherent collective behavior
- CLAIM: Curiosity (uncertainty resolution) is the fundamental drive of intelligence, not reward maximization, and this applies to agent collectives as well as individuals
- CLAIM: Message passing on shared factor graphs is the operational substrate for distributed intelligence across natural and artificial systems
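The second claim (curiosity as uncertainty resolution rather than reward maximization) has a simple quantitative core: the epistemic value of an action is the expected information gain about hidden states, i.e. the mutual information between states and observations under the current belief. The sketch below uses invented likelihoods to show that a maximally uncertain agent prefers the "experiment" that discriminates states, even though no reward appears anywhere.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats, skipping zero-probability entries."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def info_gain(q_s, likelihood):
    """Expected reduction in uncertainty about s from observing o:
    I(s; o) = H(o) - E_{q(s)}[H(o | s)] under belief q(s)."""
    p_o = likelihood @ q_s                       # predicted observation distribution
    cond = sum(q_s[i] * entropy(likelihood[:, i]) for i in range(len(q_s)))
    return entropy(p_o) - cond

q_s = np.array([0.5, 0.5])                       # maximally uncertain belief

informative = np.array([[0.95, 0.05],            # observations sharply
                        [0.05, 0.95]])           # discriminate the states
uninformative = np.array([[0.5, 0.5],            # observations carry no
                          [0.5, 0.5]])           # information about states

print(info_gain(q_s, informative))               # high epistemic value
print(info_gain(q_s, uninformative))             # ~0: nothing to learn
```

In full active inference this epistemic term sits inside expected free energy alongside a pragmatic (preference) term; the sketch isolates the curiosity component the claim is about.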
## Curator Notes
PRIMARY CONNECTION: "biological systems minimize free energy to maintain their states and resist entropic decay"
WHY ARCHIVED: The definitive paper connecting active inference to multi-agent AI ecosystem design — provides first-principles justification for our entire collective architecture
EXTRACTION HINT: Focus on the operational design principles: shared generative models, message passing, curiosity-driven coordination. These map directly to our claim graph, wiki links, and uncertainty-directed research.
## Key Facts
- Paper published in Collective Intelligence, Vol 3(1), 2024
- Available on arXiv: 2212.01354
- Authors include Karl J. Friston, Maxwell JD Ramstead, and 18 others (20 total)
- Active inference is presented as a "physics of intelligence"
- Intelligence = capacity to accumulate evidence for a generative model (self-evidencing)
- Self-evidencing = maximizing Bayesian model evidence via belief updating
- Operationalizes via variational message passing or belief propagation on factor graph
- Proposes shared hyper-spatial modeling language for belief convergence