---
description: Current alignment approaches are all single-model focused, while the hardest problems (preference diversity, scalable oversight, and value evolution) are inherently collective
type: claim
domain: ai-alignment
created: 2026-02-17
source: "Survey of alignment research landscape 2025-2026"
confidence: likely
---
# no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it
The most striking gap in the alignment landscape as of 2025-2026: virtually no one is building alignment through collective intelligence infrastructure. The closest attempts are partial. Since [[democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations]], the Collective Intelligence Project (CIP) has demonstrated that democratic input works mechanically -- but this remains one-shot constitution-setting, not continuous architecture. Since [[community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules]], STELA has shown that inclusive deliberation produces different outputs -- but it does not build the infrastructure for ongoing participation. Polis does consensus-mapping through statement submission and voting. Some multi-agent debate frameworks exist under the scalable oversight umbrella. The Cooperative AI Foundation studies multi-agent coordination. But none of these constitutes a distributed architecture where alignment emerges from collective participation.
What does not exist:
- no system where contributor diversity structurally prevents value capture
- no implementation of continuous value-weaving at scale
- no infrastructure for collective oversight of superhuman AI components
- no architecture where alignment is a property of the coordination protocol rather than a property trained into individual models

Since [[universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective]], the impossibility of aggregation makes collective infrastructure -- which preserves diversity rather than aggregating it -- the only viable path.
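To make the aggregation failure concrete, here is a minimal sketch of the Condorcet cycle that underlies Arrow's theorem -- the contributors and value labels are hypothetical. Three contributors hold cyclic preferences over three candidate values, and no single ranking (i.e. no single coherent objective) agrees with every pairwise majority.

```python
from itertools import permutations

# Hypothetical preference orderings (best to worst) for three contributors.
contributors = [
    ("transparency", "privacy", "efficiency"),
    ("privacy", "efficiency", "transparency"),
    ("efficiency", "transparency", "privacy"),
]
options = ["transparency", "privacy", "efficiency"]

def majority_prefers(a: str, b: str) -> bool:
    """True if a strict majority of contributors rank a above b."""
    votes = sum(order.index(a) < order.index(b) for order in contributors)
    return votes > len(contributors) / 2

# The pairwise majorities form a cycle ...
for a, b in [("transparency", "privacy"),
             ("privacy", "efficiency"),
             ("efficiency", "transparency")]:
    print(f"majority prefers {a} over {b}: {majority_prefers(a, b)}")  # all True

# ... so no total ordering agrees with all of them: reducing these
# preferences to one objective must contradict a majority somewhere.
consistent = [
    order for order in permutations(options)
    if all(majority_prefers(a, b) == (order.index(a) < order.index(b))
           for a in options for b in options if a != b)
]
print("rankings consistent with every pairwise majority:", consistent)  # []
```

Collective infrastructure sidesteps this by never forcing the reduction to a single ranking in the first place.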
This gap is remarkable because the field's own findings point toward collective approaches. Since [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]], diverse preference representation is needed. Since [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]], distributed oversight is needed. Since [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]], structural alignment is needed to eliminate the tax.
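Two of these failure modes are small enough to show in toy form. First, the single-reward-function assumption: a minimal Bradley-Terry sketch (hypothetical numbers, not any lab's actual pipeline) in which two equally sized groups with opposite preferences are fitted to a reward that represents neither.

```python
import math

# Hypothetical data: group A prefers response x over y 90% of the time,
# group B prefers y over x 90% of the time, and the groups are equal in size.
p_pooled = 0.5 * 0.9 + 0.5 * 0.1  # pooled rate of "x preferred": 0.5

# Bradley-Terry (the preference model behind RLHF/DPO-style training):
# P(x > y) = sigmoid(r_x - r_y), so the fitted reward gap is the logit.
reward_gap = math.log(p_pooled / (1 - p_pooled))

print(f"pooled preference for x: {p_pooled:.2f}")        # 0.50
print(f"fitted reward gap r_x - r_y: {reward_gap:.2f}")  # 0.00
# The single reward function ends up exactly indifferent between x and y,
# even though no individual contributor is: both strong preferences vanish.
```

Second, the alignment-tax race to the bottom, as a two-player payoff sketch -- the 15% tax and the capability-proportional market split are assumptions for illustration. Whatever the rival does, skipping safety training yields the larger payoff.

```python
CAPABILITY = 100.0
ALIGNMENT_TAX = 0.15  # assumed fraction of capability spent on safety

def market_share(i_align: bool, rival_aligns: bool) -> float:
    """Share of a market split in proportion to deployed capability."""
    mine = CAPABILITY * ((1 - ALIGNMENT_TAX) if i_align else 1.0)
    theirs = CAPABILITY * ((1 - ALIGNMENT_TAX) if rival_aligns else 1.0)
    return mine / (mine + theirs)

for i_align in (True, False):
    for rival_aligns in (True, False):
        print(f"I align={i_align!s:5} rival aligns={rival_aligns!s:5} "
              f"-> my share={market_share(i_align, rival_aligns):.3f}")
# Skipping dominates (0.541 > 0.500 and 0.500 > 0.459): the rational move
# is to defect, which is why the tax must be removed structurally rather
# than paid by each competitor individually.
```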
The alignment field has converged on a problem it cannot solve with its current paradigm (single-model alignment), and the alternative paradigm (collective alignment through distributed architecture) has barely been explored. This is the opening for the TeleoHumanity thesis -- not as philosophical speculation but as practical infrastructure that addresses problems the alignment community has identified but cannot solve within its existing framework.
### Additional Evidence (confirm)
*Source: [[2024-10-00-patterns-ai-enhanced-collective-intelligence]] | Added: 2026-03-11 | Extractor: anthropic/claude-sonnet-4.5*
The 2024 review in Patterns (Cell Press) explicitly identifies the absence of a comprehensive theoretical framework for AI-enhanced collective intelligence as a major gap. Despite substantial empirical evidence of both enhancement and degradation patterns, no formal models exist to predict when AI-CI integration will succeed or fail. This confirms that the infrastructure and theoretical foundations for collective intelligence alignment are missing from the research landscape, even as empirical evidence accumulates.
---
Relevant Notes:
- [[AI alignment is a coordination problem not a technical problem]] -- the gap in collective alignment validates the coordination framing
- [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- the only project proposing the infrastructure nobody else is building
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]] -- collective approaches address this specific failure
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] -- structural alignment eliminates the tax
- [[democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations]] -- the closest existing work, but still one-shot not continuous
- [[community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules]] -- demonstrates what inclusive infrastructure reveals, but does not build the infrastructure
- [[universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective]] -- the impossibility of aggregation makes collective infrastructure the only viable path
Topics:
- [[livingip overview]]
- [[coordination mechanisms]]
- [[domains/ai-alignment/_map]]