diff --git a/domains/ai-alignment/pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md b/domains/ai-alignment/pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md
index f916fe399..ba521953f 100644
--- a/domains/ai-alignment/pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md
+++ b/domains/ai-alignment/pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md
@@ -21,7 +21,7 @@ Since [[universal alignment is mathematically impossible because Arrows impossib
 
 ### Additional Evidence (extend)
 
-*Source: [[2024-02-00-chakraborty-maxmin-rlhf]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
+*Source: 2024-02-00-chakraborty-maxmin-rlhf | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
 
 MaxMin-RLHF provides a constructive implementation of pluralistic alignment through mixture-of-rewards and egalitarian optimization. Rather than converging preferences, it learns separate reward models for each subpopulation and optimizes for the worst-off group (Sen's Egalitarian principle). At Tulu2-7B scale, this achieved 56.67% win rate across both majority and minority groups, compared to single-reward's 70.4%/42% split. The mechanism accommodates irreducible diversity by maintaining separate reward functions rather than forcing convergence.
 
diff --git a/domains/ai-alignment/rlhf-is-implicit-social-choice-without-normative-scrutiny.md b/domains/ai-alignment/rlhf-is-implicit-social-choice-without-normative-scrutiny.md
index 959e24a86..2bcc344e2 100644
--- a/domains/ai-alignment/rlhf-is-implicit-social-choice-without-normative-scrutiny.md
+++ b/domains/ai-alignment/rlhf-is-implicit-social-choice-without-normative-scrutiny.md
@@ -29,7 +29,7 @@ The paper's proposed solution—RLCHF with explicit social welfare functions—c
 
 ### Additional Evidence (extend)
 
-*Source: [[2025-06-00-li-scaling-human-judgment-community-notes-llms]] | Added: 2026-03-15*
+*Source: 2025-06-00-li-scaling-human-judgment-community-notes-llms | Added: 2026-03-15*
 
 RLCF makes the social choice mechanism explicit through the bridging algorithm (matrix factorization with intercept scores). Unlike standard RLHF which aggregates preferences opaquely through reward model training, RLCF's use of intercepts as the training signal is a deliberate choice to optimize for cross-partisan agreement—a specific social welfare function.
 
diff --git a/domains/ai-alignment/single-reward-rlhf-cannot-align-diverse-preferences-because-alignment-gap-grows-proportional-to-minority-distinctiveness.md b/domains/ai-alignment/single-reward-rlhf-cannot-align-diverse-preferences-because-alignment-gap-grows-proportional-to-minority-distinctiveness.md
index 9a89bdf6a..9d077b910 100644
--- a/domains/ai-alignment/single-reward-rlhf-cannot-align-diverse-preferences-because-alignment-gap-grows-proportional-to-minority-distinctiveness.md
+++ b/domains/ai-alignment/single-reward-rlhf-cannot-align-diverse-preferences-because-alignment-gap-grows-proportional-to-minority-distinctiveness.md
@@ -29,7 +29,7 @@ Chakraborty, Qiu, Yuan, Koppel, Manocha, Huang, Bedi, Wang. "MaxMin-RLHF: Alignm
 
 ### Additional Evidence (confirm)
 
-*Source: [[2025-11-00-operationalizing-pluralistic-values-llm-alignment]] | Added: 2026-03-15*
+*Source: 2025-11-00-operationalizing-pluralistic-values-llm-alignment | Added: 2026-03-15*
 
 Study demonstrates that models trained on different demographic populations show measurable behavioral divergence (3-5 percentage points), providing empirical evidence that single-reward functions trained on one population systematically misalign with others.
 
diff --git a/entities/internet-finance/metadao.md b/entities/internet-finance/metadao.md
index 48203488d..aee6b7647 100644
--- a/entities/internet-finance/metadao.md
+++ b/entities/internet-finance/metadao.md
@@ -58,8 +58,8 @@ The futarchy governance protocol on Solana. Implements decision markets through
 - **2024-03-02** — [[metadao-increase-meta-liquidity-dutch-auction]] passed: completed Dutch auction and liquidity provision, moving all protocol-owned liquidity to Meteora 1% fee pool
 - **2025-01-27** — [[metadao-otc-trade-theia-2]] proposed: Theia offers $500K for 370.370 META at 14% premium with 12-month vesting
 - **2025-01-30** — [[metadao-otc-trade-theia-2]] passed: Theia acquires 370.370 META tokens for $500,000 USDC
-- **2023-11-18** — [[metadao-develop-lst-vote-market]] proposed: first product development proposal requesting 3,000 META to build Votium-style validator bribe platform for MNDE/mSOL holders
-- **2023-11-29** — [[metadao-develop-lst-vote-market]] passed: approved LST Vote Market development with projected $10.5M enterprise value addition
+- **2023-11-18** — metadao-develop-lst-vote-market proposed: first product development proposal requesting 3,000 META to build Votium-style validator bribe platform for MNDE/mSOL holders
+- **2023-11-29** — metadao-develop-lst-vote-market passed: approved LST Vote Market development with projected $10.5M enterprise value addition
 - **2023-12-03** — Proposed Autocrat v0.1 migration with configurable proposal slots and 3-day default duration
 - **2023-12-13** — Completed Autocrat v0.1 migration, moving 990,000 META, 10,025 USDC, and 5.5 SOL to new program despite unverifiable build
 - **2024-01-24** — Proposed AMM program to replace CLOB markets, addressing liquidity fragmentation and state rent costs (Proposal CF9QUBS251FnNGZHLJ4WbB2CVRi5BtqJbCqMi47NX1PG)
diff --git a/inbox/archive/2025-00-00-em-dpo-heterogeneous-preferences.md b/inbox/archive/2025-00-00-em-dpo-heterogeneous-preferences.md
index c5a6231ba..03d07c527 100644
--- a/inbox/archive/2025-00-00-em-dpo-heterogeneous-preferences.md
+++ b/inbox/archive/2025-00-00-em-dpo-heterogeneous-preferences.md
@@ -35,7 +35,7 @@ EM-DPO uses expectation-maximization to simultaneously uncover latent user prefe
 **Why this matters:** Combines mechanism design (egalitarian social choice) with ML (EM clustering). The insight about binary comparisons being insufficient is technically important — it explains why standard RLHF/DPO with pairwise comparisons systematically fails at diversity.
 **What surprised me:** The binary-vs-ranking distinction. If binary comparisons can't identify latent preferences, then ALL existing pairwise RLHF/DPO deployments are structurally blind to preference diversity. This is a fundamental limitation, not just a practical one.
 **What I expected but didn't find:** No head-to-head comparison with PAL or MixDPO. No deployment results beyond benchmarks.
-**KB connections:** Addresses [[RLHF and DPO both fail at preference diversity]] with a specific mechanism. The egalitarian aggregation connects to [[some disagreements are permanently irreducible because they stem from genuine value differences not information gaps]].
+**KB connections:** Addresses RLHF and DPO both fail at preference diversity with a specific mechanism. The egalitarian aggregation connects to some disagreements are permanently irreducible because they stem from genuine value differences not information gaps.
 **Extraction hints:** Extract claims about: (1) binary comparisons being formally insufficient for preference identification, (2) EM-based preference type discovery, (3) egalitarian aggregation as pluralistic deployment strategy.
 **Context:** EAAMO 2025 — Equity and Access in Algorithms, Mechanisms, and Optimization. The fairness focus distinguishes this from PAL's efficiency focus.
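The egalitarian mechanism cited in the first hunk can be illustrated with a minimal sketch. This is not the paper's training procedure (MaxMin-RLHF optimizes a policy against the worst-off group's reward model); it only shows the max-min selection rule on hypothetical per-group reward scores, and every name and number below is invented for illustration.

```python
# Toy illustration of an egalitarian (max-min) objective over separate group
# reward models, contrasted with averaging them into a single reward.
# Hypothetical values only; MaxMin-RLHF itself trains a policy against this
# objective rather than selecting among fixed candidate responses.
from typing import Dict, List


def maxmin_choice(candidates: List[str],
                  group_rewards: Dict[str, Dict[str, float]]) -> str:
    """Pick the candidate whose worst group-specific reward is largest."""
    return max(candidates,
               key=lambda c: min(r[c] for r in group_rewards.values()))


def mean_choice(candidates: List[str],
                group_rewards: Dict[str, Dict[str, float]]) -> str:
    """Single-reward baseline: average the groups into one score."""
    return max(candidates,
               key=lambda c: sum(r[c] for r in group_rewards.values())
               / len(group_rewards))


if __name__ == "__main__":
    # Hypothetical scores from two separately trained group reward models.
    group_rewards = {
        "majority": {"a": 0.95, "b": 0.50},
        "minority": {"a": 0.10, "b": 0.45},
    }
    print(mean_choice(["a", "b"], group_rewards))    # "a": mean 0.525 vs 0.475
    print(maxmin_choice(["a", "b"], group_rewards))  # "b": worst case 0.45 vs 0.10
```

The toy contrast mirrors the 70.4%/42% versus 56.67% split quoted above: averaging favors the majority's preferred output, while the max-min rule lifts the floor for the worst-off group.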
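The "matrix factorization with intercept scores" mentioned for RLCF can be sketched in the same spirit. The model form (a global intercept, per-user and per-note intercepts, and low-rank viewpoint factors) follows the standard bridging formulation; the dimensions, regularization weights, and synthetic data are assumptions, not the deployed scorer or the paper's pipeline.

```python
# Minimal bridging-style matrix factorization: rating ~ mu + b_user + b_note
# + f_user . f_note. The note intercept b_note is the "bridging" score: it can
# only grow when users with different latent factors agree, which is the kind
# of signal an RLCF setup would reward. Data and hyperparameters are synthetic.
import torch

n_users, n_notes, k, n_obs = 50, 20, 1, 400
users = torch.randint(0, n_users, (n_obs,))
notes = torch.randint(0, n_notes, (n_obs,))
ratings = torch.randint(0, 2, (n_obs,)).float()  # hypothetical 0/1 helpfulness votes

mu = torch.nn.Parameter(torch.zeros(()))
b_user = torch.nn.Parameter(torch.zeros(n_users))
b_note = torch.nn.Parameter(torch.zeros(n_notes))
f_user = torch.nn.Parameter(0.01 * torch.randn(n_users, k))
f_note = torch.nn.Parameter(0.01 * torch.randn(n_notes, k))

opt = torch.optim.Adam([mu, b_user, b_note, f_user, f_note], lr=0.05)
for _ in range(500):
    pred = mu + b_user[users] + b_note[notes] + (f_user[users] * f_note[notes]).sum(-1)
    # Intercepts are penalized more than factors (weights assumed here), so
    # agreement explainable by shared viewpoint is absorbed by the factor term
    # rather than inflating a note's intercept.
    loss = ((pred - ratings) ** 2).mean() \
        + 0.15 * (b_user.pow(2).mean() + b_note.pow(2).mean()) \
        + 0.03 * (f_user.pow(2).mean() + f_note.pow(2).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()

bridging_scores = b_note.detach()  # per-note intercepts, the candidate training signal
```

How cleanly the intercept isolates cross-partisan agreement depends on the regularization split between intercepts and factors; the weights above are placeholders chosen only to make the distinction visible.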
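Finally, the EM step described in the inbox note admits a similar sketch. This is a simplified Bradley-Terry mixture, not the paper's EM-DPO: it shows only the E-step that softly assigns an annotator's pairwise comparisons to latent preference types, while the M-step would refit each type's reward (or DPO policy) on responsibility-weighted data. All rewards and priors below are invented.

```python
# Simplified E-step of an EM mixture over latent preference types (illustrative,
# not the EM-DPO algorithm itself). Each type g has a reward table over items;
# one annotator's (winner, loser) comparisons are scored under each type's
# Bradley-Terry model and converted into soft type responsibilities.
import math
from typing import Dict, List, Sequence, Tuple


def bt_prob(r_win: float, r_lose: float) -> float:
    """Bradley-Terry probability that the winning item is preferred."""
    return 1.0 / (1.0 + math.exp(r_lose - r_win))


def e_step(comparisons: Sequence[Tuple[int, int]],
           type_rewards: List[Dict[int, float]],
           type_priors: List[float]) -> List[float]:
    """Responsibilities over latent types for one annotator's comparisons."""
    log_post = []
    for prior, rewards in zip(type_priors, type_rewards):
        loglik = sum(math.log(bt_prob(rewards[w], rewards[l]))
                     for w, l in comparisons)
        log_post.append(math.log(prior) + loglik)
    m = max(log_post)
    unnorm = [math.exp(p - m) for p in log_post]
    z = sum(unnorm)
    return [u / z for u in unnorm]


if __name__ == "__main__":
    # Two hypothetical types with opposite rankings over items 0 and 1.
    type_rewards = [{0: 1.0, 1: -1.0}, {0: -1.0, 1: 1.0}]
    print(e_step([(0, 1), (0, 1)], type_rewards, [0.5, 0.5]))
    # Most of the responsibility lands on the first type, which prefers item 0.
```

The two toy types separate easily here only because their rankings are fully opposed; with more items and types, pairwise data alone can leave types unidentifiable, which is the binary-comparison limitation the note highlights.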