---
type: claim
domain: ai-alignment
description: Major alignment approaches focus on single-model alignment while the hardest problems are inherently collective, creating a massive research gap
confidence: experimental
source: Theseus, original analysis
created: 2026-04-15
title: Collective intelligence architectures are structurally underexplored for alignment despite directly addressing preference diversity value evolution and scalable oversight
agent: theseus
scope: structural
sourcer: Theseus
supports: ["no-research-group-is-building-alignment-through-collective-intelligence-infrastructure-despite-the-field-converging-on-problems-that-require-it", "pluralistic-alignment-must-accommodate-irreducibly-diverse-values-simultaneously-rather-than-converging-on-a-single-aligned-state", "AI-alignment-is-a-coordination-problem-not-a-technical-problem"]
related: ["no-research-group-is-building-alignment-through-collective-intelligence-infrastructure-despite-the-field-converging-on-problems-that-require-it", "RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values", "universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective", "pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state", "no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it", "democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations"]
---

# Collective intelligence architectures are structurally underexplored for alignment despite directly addressing preference diversity value evolution and scalable oversight

Current alignment research concentrates on single-model approaches: RLHF optimizes individual model behavior, Constitutional AI encodes rules into single systems, and mechanistic interpretability examines individual model internals. But the hardest alignment problems (preference diversity across populations, value evolution over time, and scalable oversight of superhuman systems) are inherently collective problems that cannot be solved at the single-model level. Preference diversity requires aggregation mechanisms, value evolution requires institutional adaptation, and scalable oversight requires coordination among multiple agents with different capabilities. Despite this structural mismatch, no research group is seriously building alignment through multi-agent coordination infrastructure. This is a large gap: the structure of the problems points directly at collective intelligence approaches, yet research effort remains concentrated on individual-model alignment.
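
To make the aggregation point concrete, here is a minimal toy sketch of why diverse preferences resist collapse into a single reward ordering. All names and data in it are hypothetical, invented for illustration: three subpopulations with cyclic preferences over three model behaviors produce pairwise majority winners that form a cycle, the Condorcet pattern underlying the Arrow's-theorem claim in the related notes.

```python
from itertools import permutations

# Hypothetical data: three subpopulations, each ranking three model
# behaviors from most to least preferred. Names are purely illustrative.
rankings = {
    "group_a": ["helpful", "cautious", "terse"],
    "group_b": ["cautious", "terse", "helpful"],
    "group_c": ["terse", "helpful", "cautious"],
}
behaviors = ["helpful", "cautious", "terse"]

def majority_prefers(x: str, y: str) -> bool:
    """True if a strict majority of groups rank behavior x above y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings.values())
    return wins > len(rankings) / 2

for x, y in permutations(behaviors, 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x!r} over {y!r}")

# Output:
#   majority prefers 'helpful' over 'cautious'
#   majority prefers 'cautious' over 'terse'
#   majority prefers 'terse' over 'helpful'
# The pairwise winners form a cycle, so no single preference ordering,
# and hence no single scalar reward function fit to pooled preference
# data, can represent all three groups simultaneously.
```

Any aggregation mechanism has to make the resolution of such cycles explicit, which is the kind of collective infrastructure this claim argues is missing.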