auto-fix: address review feedback on PR #512

- Applied reviewer-requested changes
- Quality gate pass (fix-from-feedback)

Pentagon-Agent: Auto-Fix <HEADLESS>
Teleo Agents 2026-03-11 10:06:14 +00:00
parent 756973929b
commit 32632499ea
3 changed files with 51 additions and 109 deletions


@@ -1,47 +1,28 @@
---
type: claim
domain: ai-alignment
description: "MaxMin-RLHF achieves 33% minority group improvement while maintaining majority performance, suggesting single-reward RLHF leaves value on table rather than navigating zero-sum constraints"
claim_type: empirical
confidence: experimental
source: "Chakraborty et al. (2024) MaxMin-RLHF experiments at GPT-2 and Tulu2-7B scale, ICML 2024"
created: 2024-02-14
depends_on: ["maxmin-rlhf-applies-egalitarian-social-choice-to-alignment-by-maximizing-minimum-group-utility"]
tags:
- ai-alignment
- rlhf
- fairness
- pareto-improvement
source:
- "Chakraborty et al. (2024) MaxMin-RLHF experiments at GPT-2 and Tulu2-7B scale, ICML 2024"
- "[[2024-02-00-chakraborty-maxmin-rlhf]]"
---
# MaxMin alignment improves minority group performance by 33% without compromising majority outcomes
MaxMin-RLHF achieved a substantial minority group improvement (a ~33% boost at Tulu2-7B scale) while keeping majority group performance at a reasonable level. This suggests that single-reward RLHF was making suboptimal tradeoffs rather than navigating genuine zero-sum constraints. Chakraborty et al. (2024) demonstrate that MaxMin-RLHF achieves approximately Pareto improvements over single-reward RLHF in their experiments.
## Evidence
**Tulu2-7B results (two-group preference dataset, 10:1 majority:minority ratio):**
- Single-reward RLHF: 70.4% majority win rate, 42% minority win rate
- MaxMin-RLHF: ~56.67% win rate for BOTH groups
- Net result: ~16% average improvement, ~33% minority-specific boost
**GPT-2 scale qualitative results:**
- Single RLHF optimized for positive sentiment (majority preference) while completely ignoring conciseness (minority preference)
- MaxMin satisfied both simultaneously—not through compromise but through discovering that the constraints were compatible
This represents a ~33% improvement for the minority group (42% → 56.67%) while the majority group experiences only a ~13.7 percentage point reduction (70.4% → 56.67%). The minority group gains substantially more utility than the majority loses, suggesting an overall welfare improvement under most social welfare functions.
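A quick arithmetic check of these deltas, using only the win rates reported above (variable names are illustrative):
```python
# Reported Tulu2-7B win rates (percent).
majority_before, minority_before = 70.4, 42.0
both_after = 56.67  # MaxMin-RLHF: ~56.67% for both groups

minority_relative_gain = (both_after - minority_before) / minority_before
majority_pp_drop = majority_before - both_after

print(f"minority relative gain: {minority_relative_gain:.0%}")         # ~35%, reported as ~33%
print(f"majority reduction: {majority_pp_drop:.1f} percentage points")  # ~13.7 pp
print(f"worst-off group win rate: {minority_before} -> {both_after}")   # 42.0 -> 56.67
```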
## Why This Matters
The result is "approximately Pareto" rather than strictly Pareto because the majority group does experience some reduction in win rate. However, the egalitarian redistribution substantially reduces alignment disparity while maintaining reasonable performance for both groups.
If alignment were genuinely zero-sum across preference groups, improving minority outcomes would require sacrificing a commensurate amount of majority utility. Instead, MaxMin found approximately Pareto improvements: the minority gains substantially more than the majority gives up.
**Important scale caveat**: These experiments used GPT-2 and Tulu2-7B, which are 1-2 orders of magnitude smaller than frontier models (GPT-4, Claude-3). Alignment tax often increases with model scale, so the Pareto improvement finding may not hold at frontier model scales. This limitation should be considered when evaluating the practical applicability of these results.
This suggests single-reward aggregation was destroying value through premature averaging, not making optimal tradeoffs given fundamental constraints. The implication is that preference diversity can be accommodated without sacrifice if the aggregation mechanism is chosen appropriately.
## Limitations
Results are at GPT-2 and Tulu2-7B scale. Unclear whether Pareto improvements persist at frontier model scale or with more than two preference groups. The mechanism assumes discrete, identifiable subpopulations—continuous or overlapping preferences may not exhibit the same property.
No comparison with bridging-based approaches (RLCF, Community Notes mentioned in related work). MaxMin may be one mechanism among several that avoid premature aggregation.
---
Relevant Notes:
- [[maxmin-rlhf-applies-egalitarian-social-choice-to-alignment-by-maximizing-minimum-group-utility]]
- [[single-reward-rlhf-cannot-align-models-with-diverse-human-preferences]]
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]
Topics:
- [[domains/ai-alignment/_map]]
Related: [[maxmin-rlhf-applies-egalitarian-social-choice-to-alignment-by-maximizing-minimum-group-utility]], [[single-reward-rlhf-cannot-align-models-with-diverse-human-preferences]]


@@ -1,55 +1,29 @@
---
type: claim
domain: ai-alignment
secondary_domains: ["collective-intelligence"]
description: "MaxMin-RLHF learns mixture of reward models via EM clustering then optimizes worst-off group following Sen's Egalitarian principle from social choice theory"
claim_type: empirical
confidence: experimental
source: "Chakraborty et al. (2024) MaxMin-RLHF, ICML 2024"
created: 2024-02-14
depends_on: ["single-reward-rlhf-cannot-align-models-with-diverse-human-preferences"]
tags:
- ai-alignment
- rlhf
- social-choice-theory
- fairness
source:
- "Chakraborty et al. (2024) MaxMin-RLHF, ICML 2024"
- "[[2024-02-00-chakraborty-maxmin-rlhf]]"
---
# MaxMin-RLHF applies egalitarian social choice theory to alignment by maximizing the minimum utility across preference groups
MaxMin-RLHF reframes alignment as a fairness problem rather than an averaging problem, directly applying Sen's Egalitarian principle from social choice theory: "society should focus on maximizing the minimum utility of all individuals."
Chakraborty et al. (2024) operationalize this Rawlsian maximin idea in reinforcement learning from human feedback. The mechanism has two components:
1. **EM algorithm for reward mixture**: iteratively clusters humans by preference compatibility and updates subpopulation-specific reward functions until convergence. This learns a mixture of reward models rather than a single aggregate, and it operates unsupervised, without pre-labeled group membership.
2. **Maximin objective**: during RL training, optimize max(min(R₁(y), R₂(y), ...)), where Rᵢ is the reward from group i. This targets the worst-off preference group rather than average utility, a direct application of the Egalitarian rule to AI alignment (a minimal sketch follows below).
This directly implements the egalitarian social choice rule: improve outcomes for the worst-off group. Unlike utilitarian aggregation (averaging rewards), MaxMin creates incentives to satisfy minority preferences.
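A minimal sketch of the maximin training signal, assuming the group-specific reward models Rᵢ are already trained; the function name, tensor shapes, and the batch-mean estimate of E[Rᵢ(y)] are illustrative assumptions, not the paper's implementation:
```python
import torch

def maxmin_objective(rewards_per_group: list[torch.Tensor]) -> torch.Tensor:
    """Egalitarian training signal: the smallest expected reward across groups.

    rewards_per_group holds one tensor of per-sample rewards R_i(y) per
    preference group i, each of shape (batch,).
    """
    # Estimate E[R_i(y)] for each group from the current batch of responses.
    expected = torch.stack([r.mean() for r in rewards_per_group])
    # The policy is then updated (e.g., with PPO) to maximize this minimum,
    # i.e. max_pi min_i E[R_i(y)].
    return expected.min()
```
The design point is that the gradient signal always comes from whichever group is currently worst served, so the policy cannot improve its objective by trading minority satisfaction for majority satisfaction.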
**Key theoretical connection**: MaxMin explicitly chooses one social choice rule (egalitarian/Rawlsian) rather than attempting to escape Arrow's impossibility theorem. It accepts that no aggregation method satisfies all desirable properties and makes a normative choice about which properties to prioritize.
## Evidence
**Tulu2-7B implementation with 10:1 majority:minority ratio:**
- MaxMin-RLHF: 56.67% win rate across both majority and minority groups
- Single-reward RLHF: 70.4% (majority) / 42% (minority) split
- Result: ~16% average improvement, ~33% boost specifically for minority groups
**GPT-2 scale qualitative results:**
- Single RLHF satisfied positive sentiment (majority) but ignored conciseness (minority)
- MaxMin satisfied both simultaneously, not through compromise but through discovering that the constraints were compatible
**Scale limitations**: Validated on GPT-2 and Tulu2-7B (1-2 orders of magnitude smaller than frontier models). Behavior at GPT-4/Claude-3 scale remains unknown.
## Limitations
Assumes discrete, identifiable subpopulations. Requires specifying number of clusters beforehand. EM algorithm assumes clustering is feasible with preference data alone, which may not hold for continuous or overlapping preference distributions.
No comparison with other social choice mechanisms (Borda count, approval voting, etc.). The egalitarian principle is one approach among many—optimality depends on which fairness axioms you accept.
## Relationship to Coordination Theory
This is a constructive mechanism that accepts Arrow's impossibility constraints but optimizes for a specific social choice objective. It doesn't escape [[designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm]]—it chooses egalitarianism as the rule and accepts whatever outcomes emerge.
Relates to [[collective intelligence requires diversity as a structural precondition not a moral preference]] by treating preference diversity as input to preserve rather than noise to eliminate.
---
Relevant Notes:
- [[single-reward-rlhf-cannot-align-models-with-diverse-human-preferences]]
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]
- [[designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm]]
- [[collective intelligence requires diversity as a structural precondition not a moral preference]]
Topics:
- [[domains/ai-alignment/_map]]
- [[foundations/collective-intelligence/_map]]
Related: [[universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective]], [[maxmin-alignment-improves-minority-group-performance-without-compromising-majority-outcomes]], [[single-reward-rlhf-cannot-align-models-with-diverse-human-preferences]]


@@ -1,41 +1,28 @@
---
type: claim
domain: ai-alignment
description: "Formal impossibility result: single-reward RLHF alignment gap grows proportionally with minority preference distinctiveness and inversely with representation"
created: 2024-02-14
depends_on: ["RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values"]
claim_type: empirical
confidence: experimental
tags:
- ai-alignment
- rlhf
- preference-diversity
- social-choice
source:
- "Chakraborty et al. (2024) MaxMin-RLHF paper, ICML 2024"
- "[[2024-02-00-chakraborty-maxmin-rlhf]]"
---
# Single-reward RLHF cannot adequately align language models when human preferences are diverse across subpopulations
Chakraborty et al. (2024) establish, both formally and empirically, that standard RLHF with a single reward model trained on aggregated preferences cannot adequately align language models when human preferences are diverse across subpopulations. The alignment gap is a mathematical constraint rather than a tuning artifact: it grows proportionally with how distinct minority preferences are and inversely with their representation in the training data, so high subpopulation diversity inevitably produces greater alignment failure when collapsed into a single reward function. In practice, when preferences diverge across groups, single-reward RLHF optimizes for the majority preference at the expense of minority groups, creating what the authors term "alignment disparity".
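A toy illustration of the aggregation failure, not the paper's formal result: with a 10:1 group ratio, a single reward model fit to pooled preferences is implicitly a weighted mixture dominated by the majority, so any disagreement is resolved in the majority's favor. The helper below is hypothetical and purely for intuition.
```python
def pooled_reward(r_majority: float, r_minority: float,
                  majority_fraction: float = 10 / 11) -> float:
    """Weighted-average signal a single aggregate reward model is fit to."""
    return majority_fraction * r_majority + (1 - majority_fraction) * r_minority

# A response the majority prefers (+1) and the minority rejects (-1) still
# scores highly, so optimizing the pooled signal all but ignores the minority.
# The gap widens as the minority's share shrinks or its preferences diverge.
print(pooled_reward(+1.0, -1.0))  # ~0.82
```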
## Evidence
**Tulu2-7B scale, two-group preference dataset with 10:1 majority:minority ratio:**
- Single-reward RLHF: 70.4% win rate on the majority group but only 42% on the minority group, worse than random
- Degradation: a 28-percentage-point gap from representation imbalance alone, independent of model capability limits
**GPT-2 scale qualitative demonstration:**
- Single RLHF optimized for positive sentiment (majority preference) while completely ignoring conciseness (minority preference)
- In creative writing tasks with differing stylistic preferences, the single reward model collapsed diverse preferences into a single mode
- Demonstrates the zero-sum tradeoff in practice: a single aggregated reward cannot simultaneously satisfy both groups
This empirical finding challenges the assumption that aggregating preferences into a single reward signal preserves alignment across diverse populations. The evidence suggests this is a fundamental limitation of the single-reward approach rather than a tuning issue.
**Scale limitations**: These results are from models 1-2 orders of magnitude smaller than frontier models (GPT-4, Claude-3). Alignment tax and preference aggregation challenges may behave differently at larger scales.
## Relationship to Existing Work
This formalizes the empirical observation in [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]. The impossibility result provides mathematical grounding for why [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]].
The result is independent of but convergent with Arrow's impossibility theorem applications to alignment, showing the problem emerges from multiple theoretical directions.
---
Relevant Notes:
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]]
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]
- [[some disagreements are permanently irreducible because they stem from genuine value differences not information gaps and systems must map rather than eliminate them]]
Topics:
- [[domains/ai-alignment/_map]]
Related: [[maxmin-rlhf-applies-egalitarian-social-choice-to-alignment-by-maximizing-minimum-group-utility]], [[universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective]]