description: The dominant alignment paradigms share a core limitation -- human preferences are diverse, distributional, and context-dependent, not reducible to one reward function
type: claim
domain: livingip
created: 2026-02-17
source: DPO Survey 2025 (arXiv 2503.11701)
confidence: likely

RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values

RLHF (Reinforcement Learning from Human Feedback) and DPO (Direct Preference Optimization) are the two dominant alignment paradigms as of 2025. RLHF trains a reward model on human preference rankings, then optimizes the language model against it. DPO eliminates the reward model entirely, using the policy itself as an implicit reward function. Both are more computationally tractable than their predecessors.
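For reference, the standard forms of the two objectives (as formulated in the original DPO paper, Rafailov et al. 2023, which the survey builds on) make the shared assumption visible: a single scalar reward orders all completions, explicit as $r_\phi$ in RLHF and implicit in DPO's log-probability ratio.

$$\max_{\pi_\theta}\;\mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\big[r_\phi(x, y)\big] \;-\; \beta\,\mathbb{D}_{\mathrm{KL}}\!\big[\pi_\theta(y \mid x)\,\|\,\pi_{\mathrm{ref}}(y \mid x)\big] \quad \text{(RLHF)}$$

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\, \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right] \quad \text{(DPO)}$$

Both expectations average over a single pooled preference dataset $\mathcal{D}$: whichever reward function, explicit or implicit, best explains the aggregate comparisons is the one the policy is optimized toward.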

But both share a fundamental limitation: they implicitly assume human preferences can be accurately captured by a single reward function. In reality, human preferences are diverse, context-dependent, and distributional. A comprehensive 2025 survey (arXiv 2503.11701) identifies four evolving dimensions of DPO research -- data strategy, learning framework, constraint mechanism, and model property -- yet none addresses the core representational inadequacy. When preferences genuinely conflict between populations, a single reward function cannot represent both without distortion. And since universal alignment is mathematically impossible because Arrow's impossibility theorem applies to aggregating diverse human preferences into a single coherent objective, this is not merely a practical limitation -- Arrow's and Sen's impossibility theorems prove formally that no aggregation procedure can satisfy minimal fairness criteria while faithfully representing diverse preferences.
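To make the distortion concrete, here is a minimal numeric sketch (the group sizes and preference rates are hypothetical, not from the survey): when two subpopulations hold opposed preferences between two responses, the maximum-likelihood reward gap of a single Bradley-Terry model fits the pooled win rate and ends up representing neither group.

```python
import numpy as np

def logit(p: float) -> float:
    """Inverse sigmoid: the reward gap that makes sigmoid(gap) equal p."""
    return float(np.log(p / (1 - p)))

# Hypothetical populations with opposed preferences over responses A and B:
# group 1 strongly prefers A, group 2 strongly prefers B (illustrative numbers).
groups = [
    {"fraction": 0.6, "p_prefers_A": 0.9},
    {"fraction": 0.4, "p_prefers_A": 0.1},
]

# Pooled rate at which annotators, sampled across both groups, label A the winner.
pooled = sum(g["fraction"] * g["p_prefers_A"] for g in groups)   # 0.58

# A single Bradley-Terry reward model assumes P(A beats B) = sigmoid(r_A - r_B),
# so its maximum-likelihood reward gap is simply the logit of the pooled rate.
single_gap = logit(pooled)                                        # ~ +0.32

# Reward gaps each group would need if it were modeled on its own.
group_gaps = [logit(g["p_prefers_A"]) for g in groups]            # ~ +2.20, -2.20

print(f"pooled win rate for A : {pooled:.2f}")
print(f"single-model gap      : {single_gap:+.2f}")
print("per-group gaps        : " + ", ".join(f"{d:+.2f}" for d in group_gaps))
# The fitted gap (+0.32) encodes a lukewarm preference for A that neither
# group actually holds: the averaged reward function distorts both.
```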

This is precisely the gap that collective intelligence approaches could fill. Since specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception, compressing diverse human preferences into one function is a special case of the specification problem. And since collective intelligence requires diversity as a structural precondition, not a moral preference, a collective alignment architecture could preserve preference diversity structurally rather than flattening it into a single reward signal.

Constitutional AI (Anthropic) partially addresses this by training on principles rather than preference rankings, but the constitution must still be written before training -- it cannot evolve with the values it encodes. The entire paradigm of "align once during training" is what the continuous value-weaving thesis challenges.


Relevant Notes:

Topics: