---
confidence: likely
created: 2026-02-17
description: Social choice theory formally proves that no voting rule can simultaneously satisfy fairness, respect for individual preferences, and alignment with diverse values without dictatorial outcomes
domain: collective-intelligence
related:
  - "Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck"
  - futarchy-conditional-markets-aggregate-information-through-financial-stake-not-voting-participation
reweave_edges:
  - "Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|related|2026-04-17"
  - "Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|supports|2026-04-18"
  - "Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|related|2026-04-19"
source: "Conitzer et al., Social Choice for AI Alignment (arXiv 2404.10271, ICML 2024); Mishra, AI Alignment and Social Choice (arXiv 2310.16048, October 2023)"
supports:
  - "Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck"
tradition: social choice theory, formal methods
type: claim
---

universal alignment is mathematically impossible because Arrow's impossibility theorem applies to aggregating diverse human preferences into a single coherent objective

Arrow's impossibility theorem (1951) proves that no ranked voting system over three or more alternatives can simultaneously satisfy a set of minimal fairness criteria -- unrestricted domain, non-dictatorship, Pareto efficiency, and independence of irrelevant alternatives. Conitzer et al. (ICML 2024, co-authored with Stuart Russell) argue that social choice theory, not statistics, is the correct framework for handling diverse human feedback in alignment. Current RLHF treats feedback aggregation as a statistical estimation problem, but it is fundamentally a social choice problem, where strategic voting, fairness criteria, and impossibility results all apply.
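
To make the independence-of-irrelevant-alternatives condition concrete, here is a minimal sketch (my own illustration, not an example from the cited papers) of how the Borda count, a reasonable-looking ranked rule, violates it: removing a candidate who loses either way flips the collective ranking of the other two.

```python
def borda(ballots, candidates):
    """Borda count: within each ballot, a candidate ranked k-th from the
    bottom (among the candidates under consideration) scores k points."""
    scores = {c: 0 for c in candidates}
    for ballot in ballots:
        ranked = [c for c in ballot if c in candidates]  # restrict ballot
        for points, c in enumerate(reversed(ranked)):
            scores[c] += points
    return sorted(scores, key=scores.get, reverse=True), scores

# 3 voters rank A > B > C; 2 voters rank B > C > A
ballots = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2

print(borda(ballots, ["A", "B", "C"]))  # (['B', 'A', 'C'], {'A': 6, 'B': 7, 'C': 2})
print(borda(ballots, ["A", "B"]))       # (['A', 'B'],      {'A': 3, 'B': 2})
# B beats A while C is on the ballot; drop the "irrelevant" loser C and
# A beats B -- the A-vs-B outcome depended on a third alternative.
```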

Mishra (2023) applies Arrow's and Sen's impossibility theorems directly, proving that no democratic voting rule can simultaneously satisfy fairness, respect for individual preferences, and alignment with diverse user values without imposing a dictatorial outcome. The conclusion: universal AI alignment using RLHF is mathematically impossible. The policy implication is to mandate transparent voting rules and focus on narrow alignment to specific user groups rather than universal alignment.
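
A toy demonstration of why this is an impossibility rather than an engineering gap (again my own construction, in the spirit of the papers rather than taken from them): when annotator groups hold cyclically opposed rankings, pairwise-majority aggregation produces a Condorcet cycle, and no total order over outputs -- hence no single coherent objective -- is consistent with the majority preferences.

```python
from itertools import permutations

# Three equal-sized annotator groups with cyclic rankings over outputs A, B, C
groups = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a majority of groups rank x above y."""
    wins = sum(g.index(x) < g.index(y) for g in groups)
    return wins > len(groups) / 2

pairs = [("A", "B"), ("B", "C"), ("C", "A")]
print([(x, y, majority_prefers(x, y)) for x, y in pairs])
# [('A','B',True), ('B','C',True), ('C','A',True)] -- a cycle: A > B > C > A

# A single reward function induces a total order, so check every total order:
consistent = [
    order for order in permutations("ABC")
    if all(order.index(x) < order.index(y)
           for x, y in pairs if majority_prefers(x, y))
]
print(consistent)  # [] -- no ordering respects all majority preferences
```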

This has devastating implications for the "align once, deploy everywhere" paradigm. RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values; Arrow's theorem supplies the formal mathematical proof that this assumption cannot work in principle. It is not a limitation of current techniques but an impossibility result about the structure of the problem itself.
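
The statistical framing breaks down in a directly checkable way. A hedged sketch (setup and numbers are mine, for illustration): fit a Bradley-Terry model -- the standard preference model behind RLHF reward learning -- to the cyclic comparisons above, and the maximum-likelihood single reward converges to indifference between all three outputs, an "aggregate" that represents none of the groups.

```python
import math

# Pairwise comparisons implied by the three cyclic groups above
# (each group contributes its three pairwise judgments):
comparisons = (
    [("A", "B")] * 2 + [("B", "A")]    # A vs B: two groups prefer A
    + [("B", "C")] * 2 + [("C", "B")]  # B vs C: two groups prefer B
    + [("C", "A")] * 2 + [("A", "C")]  # C vs A: two groups prefer C
)

r = {"A": 1.0, "B": 0.0, "C": -1.0}  # one scalar reward per output

for _ in range(2000):
    # Gradient ascent on the Bradley-Terry log-likelihood,
    # where P(w beats l) = sigmoid(r[w] - r[l])
    g = {k: 0.0 for k in r}
    for w, l in comparisons:
        p = 1.0 / (1.0 + math.exp(r[l] - r[w]))
        g[w] += 1.0 - p
        g[l] -= 1.0 - p
    for k in r:
        r[k] += 0.1 * g[k]

print({k: round(v, 3) for k, v in r.items()})
# -> all rewards ~0.0: the maximum-likelihood reward is indifferent between
#    A, B, and C, so the fitted objective encodes none of the groups' views.
```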

The way out is not better aggregation but a different architecture entirely. Since the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance, continuous context-sensitive alignment sidesteps the impossibility by never attempting a single universal aggregation. Since collective intelligence requires diversity as a structural precondition, not a moral preference, collective architectures can preserve preference diversity structurally rather than trying to compress it into one objective function.
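
As a purely hypothetical sketch of what "preserving diversity structurally" could mean in code (the routing design and names here are my invention, not a system from the sources): keep a separate preference model per group or context and dispatch each request to its own model, so the pipeline contains no step at which Arrow-style aggregation across groups is ever attempted.

```python
from typing import Callable

# Hypothetical architecture: one preference model per group/context.
# A "model" here is just a function ranking candidate outputs for its group.
PreferenceModel = Callable[[list[str]], list[str]]

def make_model(scores: dict[str, float]) -> PreferenceModel:
    """Build a toy group-specific ranker from per-output scores."""
    return lambda candidates: sorted(candidates, key=lambda c: -scores.get(c, 0.0))

models: dict[str, PreferenceModel] = {
    "group_1": make_model({"A": 2, "B": 1, "C": 0}),
    "group_2": make_model({"B": 2, "C": 1, "A": 0}),
    "group_3": make_model({"C": 2, "A": 1, "B": 0}),
}

def align_for(context: str, candidates: list[str]) -> list[str]:
    # Route to the requesting context's own model. There is deliberately no
    # step that merges the rankings into one global objective, so Arrow's
    # preconditions (a single social ranking) never arise.
    return models[context](candidates)

for ctx in models:
    print(ctx, align_for(ctx, ["A", "B", "C"]))
# group_1 ['A', 'B', 'C'] / group_2 ['B', 'C', 'A'] / group_3 ['C', 'A', 'B']
```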


Relevant Notes:

Topics: