extract: 2026-03-05-futardio-launch-phonon-studio-ai #1125
3 changed files with 4 additions and 4 deletions
@@ -29,7 +29,7 @@ The paper's proposed solution—RLCHF with explicit social welfare functions—c

### Additional Evidence (extend)

-*Source: [[2025-06-00-li-scaling-human-judgment-community-notes-llms]] | Added: 2026-03-15*
+*Source: 2025-06-00-li-scaling-human-judgment-community-notes-llms | Added: 2026-03-15*

RLCF makes the social choice mechanism explicit through the bridging algorithm (matrix factorization with intercept scores). Unlike standard RLHF, which aggregates preferences opaquely through reward-model training, RLCF's use of intercepts as the training signal is a deliberate choice to optimize for cross-partisan agreement—a specific social welfare function.
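The bridging algorithm is named but not specified here; below is a minimal sketch, assuming the Community Notes-style model (rating ≈ global mean + user intercept + item intercept + user factor · item factor) in which the item intercept serves as the cross-partisan signal. The synthetic data, dimensions, and hyperparameters are all illustrative, not from the source.

```python
"""Sketch of bridging-style aggregation: matrix factorization with intercepts.

Assumed model (Community Notes style): rating[u, i] ~ mu + b_u[u] + b_i[i]
+ f_u[u] * f_i[i]. The item intercept b_i is the bridging score: approval
left over once the latent factor term has absorbed partisan alignment.
"""
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 200, 12

# Planted structure: users and items have a partisan lean; items also have
# a genuine cross-partisan quality that the intercept should recover.
user_lean = rng.choice([-1.0, 1.0], size=n_users)
item_lean = rng.uniform(-1.0, 1.0, size=n_items)
item_quality = rng.uniform(0.0, 1.0, size=n_items)
R = (item_quality[None, :]
     + 0.8 * user_lean[:, None] * item_lean[None, :]
     + 0.1 * rng.standard_normal((n_users, n_items)))
mask = rng.random((n_users, n_items)) < 0.5  # only half the ratings observed

# Fit mu + b_u + b_i + f_u * f_i by masked gradient descent (L2 on factors).
mu = 0.0
b_u, b_i = np.zeros(n_users), np.zeros(n_items)
f_u = 0.1 * rng.standard_normal(n_users)
f_i = 0.1 * rng.standard_normal(n_items)
lr, reg = 0.05, 0.03
for _ in range(2000):
    pred = mu + b_u[:, None] + b_i[None, :] + f_u[:, None] * f_i[None, :]
    err = np.where(mask, pred - R, 0.0)
    mu -= lr * err.sum() / mask.sum()
    b_u -= lr * err.sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
    b_i -= lr * err.sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
    f_u -= lr * ((err * f_i[None, :]).mean(axis=1) + reg * f_u)
    f_i -= lr * ((err * f_u[:, None]).mean(axis=0) + reg * f_i)

# A partisan item can top the raw mean yet score a low intercept; choosing
# b_i over the mean is the explicit welfare choice the note describes.
print("item intercepts (training signal):", np.round(b_i, 2))
print("raw mean ratings:",
      np.round(np.nanmean(np.where(mask, R, np.nan), axis=0), 2))
```

The design choice falls out of the decomposition: the factor term is free to absorb one-sided approval, so an item can only earn a high intercept by drawing approval from across the latent divide.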
@@ -29,7 +29,7 @@ Chakraborty, Qiu, Yuan, Koppel, Manocha, Huang, Bedi, Wang. "MaxMin-RLHF: Alignm

### Additional Evidence (confirm)

-*Source: [[2025-11-00-operationalizing-pluralistic-values-llm-alignment]] | Added: 2026-03-15*
+*Source: 2025-11-00-operationalizing-pluralistic-values-llm-alignment | Added: 2026-03-15*

The study demonstrates that models trained on different demographic populations show measurable behavioral divergence (3-5 percentage points), providing empirical evidence that a single reward function trained on one population systematically misaligns with others.
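Since this hunk sits in the MaxMin-RLHF note, a toy contrast may make the stakes concrete: pooled mean reward (roughly what a single reward model approximates) versus max-min aggregation over group-level rewards. All numbers below are invented, not the cited paper's setup.

```python
# Toy contrast between mean and max-min aggregation of per-group rewards.
import numpy as np

# rewards[g, c] = reward that group g assigns to candidate response c
rewards = np.array([
    [0.90, 0.60],   # two majority-like groups prefer candidate 0
    [0.85, 0.62],
    [0.10, 0.58],   # a minority group is badly served by candidate 0
])

mean_scores = rewards.mean(axis=0)   # what a single pooled reward approximates
maxmin_scores = rewards.min(axis=0)  # egalitarian (Rawlsian) aggregation

print("mean aggregation picks candidate", mean_scores.argmax())       # 0
print("max-min aggregation picks candidate", maxmin_scores.argmax())  # 1
```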
@@ -44,8 +44,8 @@ Published February 2026. Comprehensive survey of differentiable social choice
**What I expected but didn't find:** No specific engagement with RLCF or bridging-based approaches. The paper is a survey, not a solution proposal.

**KB connections:**

-- [[designing coordination rules is categorically different from designing coordination outcomes]] — differentiable social choice designs rules that learn outcomes
-- [[universal alignment is mathematically impossible because Arrows impossibility theorem applies]] — impossibility results become optimization constraints
+- designing coordination rules is categorically different from designing coordination outcomes — differentiable social choice designs rules that learn outcomes
+- universal alignment is mathematically impossible because Arrows impossibility theorem applies — impossibility results become optimization constraints
**Extraction hints:** Claims about (1) RLHF as implicit social choice without normative scrutiny, (2) impossibility results as optimization trade-offs not brick walls, (3) differentiable mechanisms as learnable alternatives to designed ones.
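As an illustration of hints (2) and (3), here is a minimal sketch of a learnable aggregation mechanism: a distribution over outcomes trained by gradient ascent on a soft-min welfare function. The utility matrix, the soft-min form, and all hyperparameters are assumptions for illustration, not the survey's formulation.

```python
# Sketch of a differentiable social choice mechanism: rather than applying a
# hand-designed voting rule, train a distribution over outcomes by gradient
# ascent on a smooth welfare function (a soft-min relaxation of max-min).
import numpy as np

u = np.array([           # u[g, c]: utility group g derives from outcome c
    [1.0, 0.2, 0.5],
    [0.1, 0.9, 0.5],
    [0.2, 0.3, 0.6],
])
theta = np.zeros(u.shape[1])  # logits of the outcome distribution
tau, lr = 10.0, 0.5

for _ in range(500):
    p = np.exp(theta - theta.max()); p /= p.sum()   # softmax policy over outcomes
    U = u @ p                                       # expected utility per group
    # Soft-min welfare W = -(1/tau) * log(sum_g exp(-tau * U_g)); its gradient
    # reweights groups so the worst-off dominate (w -> one-hot as tau grows).
    w = np.exp(-tau * (U - U.min())); w /= w.sum()
    theta += lr * p * ((w @ u) - (w @ U))           # closed-form softmax gradient

print("learned outcome distribution:", np.round(p, 3))
print("group utilities under it:", np.round(U, 3))
```

The temperature tau is what turns the impossibility result into a tunable trade-off rather than a brick wall: near zero the objective recovers utilitarian averaging, and as tau grows it approaches max-min.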