Merge branch 'main' into extract/2026-08-02-eu-ai-act-creative-content-labeling

Leo 2026-03-16 15:09:17 +00:00
commit 153ebe90ba
10 changed files with 130 additions and 7 deletions

@@ -37,6 +37,12 @@ Chakraborty et al., "MaxMin-RLHF: Alignment with Diverse Human Preferences," ICM
- Tulu2-7B: 56.67% win rate across both groups vs 42% minority/70.4% majority for single reward
- 33% improvement for minority groups without majority compromise
### Additional Evidence (extend)
*Source: [[2025-00-00-em-dpo-heterogeneous-preferences]] | Added: 2026-03-16*
MMRA extends maxmin RLHF to the deployment phase by minimizing maximum regret across preference groups when user type is unknown at inference, showing how egalitarian principles can govern both training and inference in pluralistic systems.
---
Relevant Notes:

@@ -21,10 +21,16 @@ Since [[universal alignment is mathematically impossible because Arrows impossib
### Additional Evidence (extend)
-*Source: [[2024-02-00-chakraborty-maxmin-rlhf]] | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
+*Source: 2024-02-00-chakraborty-maxmin-rlhf | Added: 2026-03-15 | Extractor: anthropic/claude-sonnet-4.5*
MaxMin-RLHF provides a constructive implementation of pluralistic alignment through mixture-of-rewards and egalitarian optimization. Rather than collapsing preferences into one reward, it learns separate reward models for each subpopulation and optimizes for the worst-off group (Sen's egalitarian principle). At Tulu2-7B scale, this achieved a 56.67% win rate across both majority and minority groups, compared to the single-reward baseline's 70.4%/42% split. The mechanism accommodates irreducible diversity by maintaining separate reward functions rather than forcing convergence.
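The egalitarian selection rule can be sketched numerically. A toy illustration (the reward values below are invented for illustration, not the paper's data), contrasting maxmin selection with the averaging that single-reward RLHF implicitly performs:

```python
import numpy as np

# Toy sketch of egalitarian (maxmin) policy selection vs. averaging.
# Numbers are invented; group_rewards stand in for learned reward models.

def maxmin_select(group_rewards: np.ndarray) -> int:
    """group_rewards[p, g] = expected reward of policy p under group g's
    reward model. Returns the egalitarian-optimal policy index."""
    worst_case = group_rewards.min(axis=1)  # each policy's worst-off group
    return int(worst_case.argmax())         # maximize that minimum

def mean_select(group_rewards: np.ndarray) -> int:
    """What averaging into a single reward would pick instead."""
    return int(group_rewards.mean(axis=1).argmax())

rewards = np.array([[0.90, 0.30],   # majority-tuned policy
                    [0.55, 0.55]])  # balanced policy
assert maxmin_select(rewards) == 1  # egalitarian rule picks balance
assert mean_select(rewards) == 0    # averaging favors the majority
```

The two rules disagree exactly when serving the majority better comes at the minority's expense, which is the 70.4%/42% split reported above.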
### Additional Evidence (confirm)
*Source: [[2025-00-00-em-dpo-heterogeneous-preferences]] | Added: 2026-03-16*
EM-DPO implements this through ensemble architecture: discovers K latent preference types, trains K specialized models, and deploys them simultaneously with egalitarian aggregation. Demonstrates that pluralistic alignment is technically feasible without requiring demographic labels or manual preference specification.
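The discovery step can be sketched with a minimal EM loop. This is an illustrative stand-in, not the paper's implementation: it clusters synthetic annotators into K latent types from binary votes alone, with no demographic labels.

```python
import numpy as np

# Minimal EM sketch of latent preference-type discovery (illustrative,
# not EM-DPO itself): cluster annotators by vote patterns, no labels.
rng = np.random.default_rng(0)
votes = np.vstack([rng.random((20, 12)) < 0.9,   # type A: mostly approves
                   rng.random((20, 12)) < 0.1]   # type B: mostly rejects
                  ).astype(float)

K = 2
n, d = votes.shape
theta = rng.uniform(0.3, 0.7, (K, d))  # P(vote = 1 | type, item)
pi = np.full(K, 1.0 / K)               # mixing weights

for _ in range(50):
    # E-step: posterior responsibility of each type for each annotator
    log_p = votes @ np.log(theta).T + (1 - votes) @ np.log(1 - theta).T
    log_p += np.log(pi)
    resp = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate per-type vote probabilities and mixing weights
    pi = resp.mean(axis=0)
    theta = np.clip(resp.T @ votes / resp.sum(axis=0)[:, None],
                    1e-3, 1 - 1e-3)

types = resp.argmax(axis=1)  # recovered latent types, no labels used
```

With well-separated vote patterns the responsibilities cleanly recover the two planted types; EM-DPO's contribution is doing this jointly with DPO training on ranking data rather than on a fixed vote matrix.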
---
Relevant Notes:

@@ -35,10 +35,16 @@ RLCF makes the social choice mechanism explicit through the bridging algorithm (
### Additional Evidence (confirm)
-*Source: [[2026-02-00-an-differentiable-social-choice]] | Added: 2026-03-16*
+*Source: 2026-02-00-an-differentiable-social-choice | Added: 2026-03-16*
Comprehensive February 2026 survey by An & Du documents that contemporary ML systems implement social choice mechanisms implicitly across RLHF, participatory budgeting, and liquid democracy applications, with 18 identified open problems spanning incentive guarantees and pluralistic preference aggregation.
### Additional Evidence (extend)
*Source: [[2025-00-00-em-dpo-heterogeneous-preferences]] | Added: 2026-03-16*
EM-DPO makes the social choice function explicit by using MinMax Regret Aggregation based on egalitarian fairness principles, demonstrating that pluralistic alignment requires choosing a specific social welfare function (here: maximin regret) rather than pretending aggregation is value-neutral.
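The maximin-regret welfare function can be made concrete with a small sketch (utilities below are invented, not from the paper): a group's regret under a deployment is the gap between its best achievable utility and what the deployment delivers, and the rule picks the deployment whose worst-group regret is smallest.

```python
import numpy as np

# Sketch of MinMax Regret Aggregation over candidate deployments
# (invented utilities, illustrative only).

def minmax_regret_select(utility: np.ndarray) -> int:
    """utility[p, g] = utility of candidate deployment p for group g."""
    best_per_group = utility.max(axis=0)        # each group's ideal outcome
    regret = best_per_group[None, :] - utility  # regret[p, g] >= 0
    return int(regret.max(axis=1).argmin())     # minimize the worst regret

utility = np.array([[0.9, 0.2],    # specialist serving group 0 only
                    [0.2, 0.9],    # specialist serving group 1 only
                    [0.7, 0.7]])   # egalitarian ensemble mix
assert minmax_regret_select(utility) == 2  # no group's regret exceeds 0.2
```

Choosing `regret.max(axis=1)` rather than, say, `regret.mean(axis=1)` is exactly the non-neutral social-welfare choice the note describes: a utilitarian mean would tolerate one group being badly served.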
---
Relevant Notes:

@@ -35,10 +35,16 @@ Study demonstrates that models trained on different demographic populations show
### Additional Evidence (extend)
-*Source: [[2026-02-00-an-differentiable-social-choice]] | Added: 2026-03-16*
+*Source: 2026-02-00-an-differentiable-social-choice | Added: 2026-03-16*
An & Du's survey reveals the mechanism behind single-reward failure: RLHF is doing social choice (preference aggregation) but treating it as an engineering detail rather than a normative design choice, so the aggregation function is chosen implicitly, without examining which fairness criteria it satisfies.
### Additional Evidence (extend)
*Source: [[2025-00-00-em-dpo-heterogeneous-preferences]] | Added: 2026-03-16*
EM-DPO provides formal proof that binary comparisons are mathematically insufficient for preference type identification, explaining WHY single-reward RLHF fails: the training signal format cannot contain the information needed to discover heterogeneity, regardless of dataset size. Rankings over 3+ responses are necessary.
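The identifiability gap admits a tiny worked example (a toy construction, not the paper's proof): a 50/50 mixture of two opposed deterministic rankers is indistinguishable from a uniform population under pairwise comparisons, yet trivially distinguishable from full rankings.

```python
from itertools import permutations
from fractions import Fraction

# Toy illustration of binary-comparison insufficiency (not the paper's
# proof): two different populations, identical pairwise statistics.
#   mixture:  half rank a>b>c, half rank c>b>a
#   uniform:  every ranking of {a, b, c} equally likely
half = Fraction(1, 2)
mixture = {('a', 'b', 'c'): half, ('c', 'b', 'a'): half}
uniform = {p: Fraction(1, 6) for p in permutations('abc')}

def pairwise(dist, x, y):
    """P(x preferred to y) implied by a distribution over rankings."""
    return sum(pr for r, pr in dist.items() if r.index(x) < r.index(y))

pairs = [('a', 'b'), ('b', 'c'), ('a', 'c')]
# Every pairwise probability is 1/2 in both populations ...
assert all(pairwise(mixture, *p) == pairwise(uniform, *p) for p in pairs)
# ... but the full-ranking distributions differ, so rankings over 3+
# responses carry the type information that binary comparisons erase.
assert mixture[('a', 'b', 'c')] != uniform[('a', 'b', 'c')]
```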
---
Relevant Notes:

@@ -44,6 +44,12 @@ The HNT-ORE boost proposal frames strategic partnership value through liquidity
Dean's List DAO treasury de-risking proposal passed, with market pricing showing a 5-20% FDV increase ($500k to $525k-$600k) based on perceived financial stability. The proposal explicitly modeled how converting volatile assets to stablecoins would affect market confidence and token valuation, demonstrating that futarchy markets can price operational stability as a token-price input.
### Additional Evidence (extend)
*Source: [[2026-03-14-futardio-launch-nfaspace]] | Added: 2026-03-16*
NFA.space explicitly frames art curation and artist residency decisions as futarchy-governed choices where community 'bets on culture' through market mechanisms. Proposal states: 'If our community believes an artist residency in Nairobi, or a collaboration with a digital sculptor, will boost the ecosystem's impact and resonance, they can bet on it.' This demonstrates futarchy application to subjective cultural value judgments beyond pure financial metrics.
---
Relevant Notes:

@@ -66,6 +66,12 @@ Cloak raised only $1,455 against a $300,000 target (0.5% of target), entering re
Phonon Studio AI launch failed to reach its $88,888 target and entered refunding status, demonstrating that not all futarchy-governed raises succeed. The project had demonstrable traction (live product, 1000+ songs generated, functional token mechanics) but still failed to attract sufficient capital, suggesting futarchy capital formation success is not uniform across project types or market conditions.
### Additional Evidence (extend)
*Source: [[2026-03-14-futardio-launch-nfaspace]] | Added: 2026-03-16*
NFA.space launched on futard.io with $125,000 target, demonstrating futarchy-governed fundraising for physical art RWA marketplace. Project has pre-existing traction: 1,895 artists from 79 countries, 2,000+ artworks sold, $150,000 historical revenue, $5,000 MRR, 12.5% repeat purchase rate. This shows futarchy ICO platform attracting projects with demonstrated product-market fit, not just speculative launches.
---
Relevant Notes:

@@ -0,0 +1,48 @@
{
"rejected_claims": [
{
"filename": "binary-preference-comparisons-cannot-identify-latent-preference-types-making-pairwise-rlhf-structurally-blind-to-diversity.md",
"issues": [
"missing_attribution_extractor"
]
},
{
"filename": "em-algorithm-preference-clustering-discovers-latent-user-types-without-demographic-labels-enabling-unsupervised-pluralistic-alignment.md",
"issues": [
"missing_attribution_extractor"
]
},
{
"filename": "minmax-regret-aggregation-ensures-no-preference-group-is-severely-underserved-by-applying-egalitarian-fairness-to-ensemble-deployment.md",
"issues": [
"missing_attribution_extractor"
]
}
],
"validation_stats": {
"total": 3,
"kept": 0,
"fixed": 11,
"rejected": 3,
"fixes_applied": [
"binary-preference-comparisons-cannot-identify-latent-preference-types-making-pairwise-rlhf-structurally-blind-to-diversity.md:set_created:2026-03-16",
"binary-preference-comparisons-cannot-identify-latent-preference-types-making-pairwise-rlhf-structurally-blind-to-diversity.md:stripped_wiki_link:single-reward-rlhf-cannot-align-diverse-preferences-because-",
"binary-preference-comparisons-cannot-identify-latent-preference-types-making-pairwise-rlhf-structurally-blind-to-diversity.md:stripped_wiki_link:rlhf-is-implicit-social-choice-without-normative-scrutiny.md",
"binary-preference-comparisons-cannot-identify-latent-preference-types-making-pairwise-rlhf-structurally-blind-to-diversity.md:stripped_wiki_link:pluralistic alignment must accommodate irreducibly diverse v",
"em-algorithm-preference-clustering-discovers-latent-user-types-without-demographic-labels-enabling-unsupervised-pluralistic-alignment.md:set_created:2026-03-16",
"em-algorithm-preference-clustering-discovers-latent-user-types-without-demographic-labels-enabling-unsupervised-pluralistic-alignment.md:stripped_wiki_link:modeling preference sensitivity as a learned distribution ra",
"em-algorithm-preference-clustering-discovers-latent-user-types-without-demographic-labels-enabling-unsupervised-pluralistic-alignment.md:stripped_wiki_link:pluralistic alignment must accommodate irreducibly diverse v",
"minmax-regret-aggregation-ensures-no-preference-group-is-severely-underserved-by-applying-egalitarian-fairness-to-ensemble-deployment.md:set_created:2026-03-16",
"minmax-regret-aggregation-ensures-no-preference-group-is-severely-underserved-by-applying-egalitarian-fairness-to-ensemble-deployment.md:stripped_wiki_link:maxmin-rlhf-applies-egalitarian-social-choice-to-alignment-b",
"minmax-regret-aggregation-ensures-no-preference-group-is-severely-underserved-by-applying-egalitarian-fairness-to-ensemble-deployment.md:stripped_wiki_link:post-arrow-social-choice-mechanisms-work-by-weakening-indepe",
"minmax-regret-aggregation-ensures-no-preference-group-is-severely-underserved-by-applying-egalitarian-fairness-to-ensemble-deployment.md:stripped_wiki_link:minority-preference-alignment-improves-33-percent-without-ma"
],
"rejections": [
"binary-preference-comparisons-cannot-identify-latent-preference-types-making-pairwise-rlhf-structurally-blind-to-diversity.md:missing_attribution_extractor",
"em-algorithm-preference-clustering-discovers-latent-user-types-without-demographic-labels-enabling-unsupervised-pluralistic-alignment.md:missing_attribution_extractor",
"minmax-regret-aggregation-ensures-no-preference-group-is-severely-underserved-by-applying-egalitarian-fairness-to-ensemble-deployment.md:missing_attribution_extractor"
]
},
"model": "anthropic/claude-sonnet-4.5",
"date": "2026-03-16"
}

@@ -7,9 +7,13 @@ date: 2025-01-01
domain: ai-alignment
secondary_domains: []
format: paper
-status: unprocessed
+status: enrichment
priority: medium
tags: [pluralistic-alignment, EM-algorithm, preference-clustering, ensemble-LLM, fairness]
processed_by: theseus
processed_date: 2026-03-16
enrichments_applied: ["single-reward-rlhf-cannot-align-diverse-preferences-because-alignment-gap-grows-proportional-to-minority-distinctiveness.md", "rlhf-is-implicit-social-choice-without-normative-scrutiny.md", "pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md", "maxmin-rlhf-applies-egalitarian-social-choice-to-alignment-by-maximizing-minimum-utility-across-preference-groups.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content
@@ -31,7 +35,7 @@ EM-DPO uses expectation-maximization to simultaneously uncover latent user prefe
**Why this matters:** Combines mechanism design (egalitarian social choice) with ML (EM clustering). The insight about binary comparisons being insufficient is technically important — it explains why standard RLHF/DPO with pairwise comparisons systematically fails at diversity.
**What surprised me:** The binary-vs-ranking distinction. If binary comparisons can't identify latent preferences, then ALL existing pairwise RLHF/DPO deployments are structurally blind to preference diversity. This is a fundamental limitation, not just a practical one.
**What I expected but didn't find:** No head-to-head comparison with PAL or MixDPO. No deployment results beyond benchmarks.
-**KB connections:** Addresses [[RLHF and DPO both fail at preference diversity]] with a specific mechanism. The egalitarian aggregation connects to [[some disagreements are permanently irreducible because they stem from genuine value differences not information gaps]].
+**KB connections:** Addresses RLHF and DPO both fail at preference diversity with a specific mechanism. The egalitarian aggregation connects to some disagreements are permanently irreducible because they stem from genuine value differences not information gaps.
**Extraction hints:** Extract claims about: (1) binary comparisons being formally insufficient for preference identification, (2) EM-based preference type discovery, (3) egalitarian aggregation as pluralistic deployment strategy.
**Context:** EAAMO 2025 — Equity and Access in Algorithms, Mechanisms, and Optimization. The fairness focus distinguishes this from PAL's efficiency focus.
@@ -39,3 +43,10 @@ EM-DPO uses expectation-maximization to simultaneously uncover latent user prefe
PRIMARY CONNECTION: RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values
WHY ARCHIVED: The binary-comparison insufficiency claim is a novel formal result that strengthens the case against standard alignment approaches
EXTRACTION HINT: Focus on the formal insufficiency of binary comparisons and the EM + egalitarian aggregation combination
## Key Facts
- EM-DPO presented at EAAMO 2025 (Equity and Access in Algorithms, Mechanisms, and Optimization)
- EM-DPO uses rankings over 3+ responses rather than binary comparisons for preference data
- MinMax Regret Aggregation is based on egalitarian social choice theory
- The paper focuses on fairness rather than efficiency, distinguishing it from PAL's approach

@@ -6,9 +6,13 @@ url: "https://www.futard.io/launch/FfPgTna1xXJJ43S7YkwgspJJMMnvTphMjotnczgegUgV"
date: 2026-03-14
domain: internet-finance
format: data
-status: unprocessed
+status: enrichment
tags: [futardio, metadao, futarchy, solana]
event_type: launch
processed_by: rio
processed_date: 2026-03-16
enrichments_applied: ["metadao-ico-platform-demonstrates-15x-oversubscription-validating-futarchy-governed-capital-formation.md", "futarchy-markets-can-price-cultural-spending-proposals-by-treating-community-cohesion-and-brand-equity-as-token-price-inputs.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Launch Details
@@ -265,3 +269,14 @@ We aren't just building a product; we are creating a solution that makes the pow
- Token: 9GR (9GR)
- Token mint: `9GRxwRhLodGqrSp9USedY6qGU1JE2HnpLcjBFLpUmeta`
- Version: v0.7
## Key Facts
- NFA.space has onboarded 1,895 artists from 79 countries as of March 2026
- NFA.space has sold over 2,000 artworks through its MVP
- NFA.space has generated $150,000 in total revenue with $5,000 MRR
- NFA.space average artwork price is $1,235
- NFA.space has 12.5% repeat purchase rate among collectors
- NFA.space launched futard.io fundraise on March 14, 2026 with $125,000 target
- NFA.space token is $NFA with mint address 9GRxwRhLodGqrSp9USedY6qGU1JE2HnpLcjBFLpUmeta
- NFA.space plans $15,625 monthly budget for 8 months post-ICO

@@ -6,9 +6,13 @@ url: "https://www.futard.io/launch/BY1uzGNg8Yb5kPEhXrXA9VA4geHSpEdzBcTvPt7qWnpY"
date: 2026-03-14
domain: internet-finance
format: data
-status: unprocessed
+status: null-result
tags: [futardio, metadao, futarchy, solana]
event_type: launch
processed_by: rio
processed_date: 2026-03-16
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
---
## Launch Details
@ -154,3 +158,12 @@ Support (Discord): https://discord.gg/kYpryzFF
- Token: CUJ (CUJ)
- Token mint: `CUJFz6v2hPgvvgEJ3YUxX4Mkt31d56JXRuyNMajLmeta`
- Version: v0.7
## Key Facts
- Valgrid launched beta grid trading bot at valgrid.co
- Valgrid fundraise on Futardio: $150,000 target, $1,505 committed as of 2026-03-14
- Valgrid token: CUJ (mint: CUJFz6v2hPgvvgEJ3YUxX4Mkt31d56JXRuyNMajLmeta)
- Valgrid launch address: BY1uzGNg8Yb5kPEhXrXA9VA4geHSpEdzBcTvPt7qWnpY
- Valgrid team size: 4 core members
- Valgrid monthly budget: $20,000 ($15k team, $5k operations)