teleo-codex/foundations/collective-intelligence

Latest commit a2c42621ad by m3taversal — theseus: restore COVID coordination link per Leo's review

- Restore [[COVID proved humanity cannot coordinate...]] wiki link
  that was incorrectly removed in enrichment. File exists at
  core/teleohumanity/ and is a relevant connection.

Pentagon-Agent: Theseus <845F10FB-BC22-40F6-A6A6-F6E4D8F78465>
2026-03-06 12:42:16 +00:00
| Name | Last commit | Date |
| --- | --- | --- |
| _map.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| AI alignment is a coordination problem not a technical problem.md | theseus: restore COVID coordination link per Leo's review | 2026-03-06 12:42:16 +00:00 |
| centaur teams outperform both pure humans and pure AI because complementary strengths compound.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| collective intelligence is a measurable property of group interaction structure not aggregated individual ability.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| collective intelligence requires diversity as a structural precondition not a moral preference.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| collective intelligence within a purpose-driven community faces a structural tension because shared worldview correlates errors while shared purpose enables coordination.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| collective superintelligence is the alternative to monolithic AI controlled by a few.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| Hayek argued that designed rules of just conduct enable spontaneous order of greater complexity than deliberate arrangement could achieve.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| intelligence is a property of networks not individuals.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| Ostrom proved communities self-govern shared resources when eight design principles are met without requiring state control or privatization.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| partial connectivity produces better collective intelligence than full connectivity on complex problems because it preserves diversity.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| protocol design enables emergent coordination of arbitrary complexity as Linux Bitcoin and Wikipedia demonstrate.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| safe AI development requires building alignment mechanisms before scaling capability.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it.md | theseus: add 3 claims from Anthropic/Pentagon/nuclear news + enrich 2 foundations | 2026-03-06 12:41:42 +00:00 |
| three paths to superintelligence exist but only collective superintelligence preserves human agency.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| trial and error is the only coordination strategy humanity has ever used.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |
| universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective.md | Initial commit: Teleo Codex v1 | 2026-03-05 20:30:34 +00:00 |