Collective Intelligence — The Theory
What collective intelligence IS, how it works, and the theoretical foundations for designed emergence. This layer holds domain-independent science only: the TeleoHumanity-specific interpretation lives in core/teleohumanity/, and alignment-specific applications live in domains/ai-alignment/.
Intelligence Foundations
- intelligence is a property of networks, not individuals — the core premise
- collective intelligence is a measurable property of a group's interaction structure, not of aggregated individual ability — CI is structural, not aggregate
- collective intelligence requires diversity as a structural precondition, not a moral preference — diversity is functional engineering
- centaur team performance depends on role complementarity, not mere human-AI combination — conditional, not unconditional
- partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity — network topology matters (see the sketch after this list)
- collective intelligence within a purpose-driven community faces a structural tension: shared worldview correlates errors while shared purpose enables coordination — the core tension
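The partial-connectivity claim can be made concrete with a small simulation in the spirit of Lazer and Friedman's networked-search experiments. Everything below is an illustrative assumption rather than anything specified in this layer: the NK-style landscape, agent count, round count, and the imitate-or-mutate update rule are all chosen for the sketch.

```python
import random
from itertools import product

N_BITS, K = 15, 4          # solution length and epistasis (assumed values)
N_AGENTS, ROUNDS = 20, 60

# Pre-generate an NK-style rugged landscape: each bit's contribution depends
# on itself and its K successors, so the landscape has many local peaks.
land = random.Random(1)
TABLE = {(i,) + combo: land.random()
         for i in range(N_BITS)
         for combo in product((0, 1), repeat=K + 1)}

def fitness(bits):
    return sum(TABLE[(i,) + tuple(bits[(i + j) % N_BITS] for j in range(K + 1))]
               for i in range(N_BITS)) / N_BITS

def run(neighbors, seed):
    """One trial: agents copy their best-scoring neighbor when that neighbor
    beats them; otherwise they keep a one-bit mutation if it helps."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(N_AGENTS)]
    for _ in range(ROUNDS):
        scores = [fitness(a) for a in pop]
        nxt = []
        for i, agent in enumerate(pop):
            best = max(neighbors[i], key=lambda j: scores[j])
            if scores[best] > scores[i]:
                nxt.append(pop[best][:])             # imitation spreads solutions fast
            else:
                mutant = agent[:]
                mutant[rng.randrange(N_BITS)] ^= 1   # local exploration
                nxt.append(mutant if fitness(mutant) > scores[i] else agent)
        pop = nxt
    return max(fitness(a) for a in pop)

# Full graph: everyone sees everyone. Ring: each agent sees two neighbors.
full = {i: [j for j in range(N_AGENTS) if j != i] for i in range(N_AGENTS)}
ring = {i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)}

seeds = range(20)
print("complete graph, mean best:", round(sum(run(full, s) for s in seeds) / len(seeds), 4))
print("sparse ring,    mean best:", round(sum(run(ring, s) for s in seeds) / len(seeds), 4))
```

On most seeds the sparse ring matches or beats the complete graph on this rugged landscape: imitation homogenizes the fully connected population before exploration finishes, while the ring preserves diverse search trajectories longer. Individual runs vary, which is why the sketch averages over seeds.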
Coordination Design
- designing coordination rules is categorically different from designing coordination outcomes, as nine intellectual traditions independently confirm — rules, not outcomes (see the sketch after this list)
- Ostrom proved that communities self-govern shared resources when eight design principles are met, without requiring state control or privatization — the empirical evidence
- protocol design enables emergent coordination of arbitrary complexity, as Linux, Bitcoin, and Wikipedia demonstrate — the existence proofs
- trial and error is the only coordination strategy humanity has ever used — the current limitation
- Hayek argued that designed rules of just conduct enable spontaneous order of greater complexity than deliberate arrangement could achieve — the Hayek insight
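As a toy illustration of designing rules rather than outcomes, the sketch below adds exactly one designed element to a common-pool resource game: a graduated sanction for harvesting above a community cap, loosely modeled on Ostrom's fifth design principle. The payoff function, cap, and trial-and-error learning rule are illustrative assumptions; no one assigns any agent a harvest level.

```python
import random

def simulate(sanctions, seed=0, agents=10, rounds=300, cap=3.0):
    """Toy common-pool resource. Agents adjust harvests by trial and error;
    the only designed element is a rule, not any agent's outcome."""
    rng = random.Random(seed)
    harvest = [rng.uniform(0.0, 6.0) for _ in range(agents)]
    offenses = [0] * agents

    def payoff(h, others, past_offenses):
        p = h - 0.02 * (others + h) * h            # benefit minus congestion cost
        if sanctions and h > cap:
            p -= (past_offenses + 1) * (h - cap)   # graduated sanction: escalates on repeat
        return p

    for _ in range(rounds):
        total = sum(harvest)
        for i in range(agents):
            others = total - harvest[i]
            trial = max(0.0, harvest[i] + rng.uniform(-0.5, 0.5))
            # Keep the perturbed harvest only if it would pay off better.
            if payoff(trial, others, offenses[i]) > payoff(harvest[i], others, offenses[i]):
                total += trial - harvest[i]
                harvest[i] = trial
            if sanctions and harvest[i] > cap:
                offenses[i] += 1
    return sum(harvest)

print("no rule, total harvest:     ", round(simulate(False), 1))
print("graduated sanctions, total: ", round(simulate(True), 1))
```

Without the rule, adaptive agents drift toward the competitive equilibrium (total harvest around 45 under these parameters); with graduated sanctions they settle near the cap (around 30), closer to the sustainable level. The rule shapes what emerges without dictating anyone's behavior.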
AI Alignment as Coordination (domain-independent theory)
- universal alignment is mathematically impossible because Arrow's impossibility theorem applies to aggregating diverse human preferences into a single coherent objective — the impossibility result (see the worked example after this list)
- RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values — why current approaches fail
- scalable oversight degrades rapidly as capability gaps grow, with debate achieving only 50 percent success at moderate gaps — the scalability problem
- multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence — the multipolar risk
- the alignment tax creates a structural race to the bottom because safety training costs capability, so rational competitors skip it — the race dynamic
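The aggregation claim has a compact worked example: the textbook Condorcet cycle. Three agents with perfectly transitive individual preferences over three policies produce an intransitive majority preference, so no single coherent objective, and hence no single reward function, can represent the group. The preference profiles below are the standard textbook illustration, not data from this repository.

```python
# The standard Condorcet-cycle profile (illustrative, not from the source).
rankings = [
    ["A", "B", "C"],   # agent 1: A > B > C
    ["B", "C", "A"],   # agent 2: B > C > A
    ["C", "A", "B"],   # agent 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of agents rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")
# Prints: A over B, B over C, C over A. The group relation cycles, so no
# total order (equivalently, no single scalar reward) represents it.
```

Arrow's theorem generalizes the example: with three or more alternatives, any aggregation rule satisfying unrestricted domain, Pareto efficiency, independence of irrelevant alternatives, and non-dictatorship fails on some preference profile, which is the structural reason a single learned reward model inherits the problem.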
Moved to other layers (foundations audit 2026-03-07)
Claims below were moved because they are TeleoHumanity interpretations or alignment-domain claims, not domain-independent CI theory:
- → core/teleohumanity/: collective superintelligence as alternative, three paths to SI, alignment dissolves with continuous weaving
- → domains/ai-alignment/: AI alignment is coordination problem, safe before scaling, no research group building CI alignment