rio: extract claims from 2026-00-00-alea-research-metadao-fair-launches #406

Merged
m3taversal merged 4 commits from extract/2026-00-00-alea-research-metadao-fair-launches into main 2026-03-11 15:59:40 +00:00
2 changed files with 19 additions and 2 deletions
Showing only changes of commit 06435a4ba3


@@ -7,9 +7,14 @@ date: 2025-11-01
 domain: ai-alignment
 secondary_domains: [collective-intelligence]
 format: paper
-status: unprocessed
+status: null-result
 priority: high
 tags: [pluralistic-alignment, safety-inclusivity-tradeoff, demographic-diversity, disagreement-preservation, dpo, grpo]
+processed_by: theseus
+processed_date: 2026-03-11
+enrichments_applied: ["collective intelligence requires diversity as a structural precondition not a moral preference.md", "RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values.md", "pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md", "some disagreements are permanently irreducible because they stem from genuine value differences not information gaps and systems must map rather than eliminate them.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "High-value empirical paper providing quantified evidence for pluralistic alignment principles. Key finding: 53% improvement from preserving disagreement challenges assumed safety-inclusivity trade-off. Five new claims extracted, four existing claims enriched with empirical support. All claims rated 'likely' confidence due to controlled experimental methodology with quantified results."
 ---
 ## Content


@@ -7,9 +7,14 @@ date: 2025-12-01
 domain: ai-alignment
 secondary_domains: [collective-intelligence]
 format: paper
-status: unprocessed
+status: null-result
 priority: medium
 tags: [federated-rlhf, preference-aggregation, pluralistic-alignment, ppo, adaptive-weighting]
+processed_by: theseus
+processed_date: 2026-03-11
+enrichments_applied: ["pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md", "RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values.md", "no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "Extracted two claims: (1) empirical result on adaptive weighting performance, (2) structural parallel to collective agent architecture. Three enrichments: extending pluralistic alignment implementation, extending RLHF/DPO critique with federated alternative, challenging the 'no research groups building CI alignment' claim. Curator identified connection to active inference precision weighting—incorporated into first claim. Workshop paper = experimental confidence maximum."
 ---
 ## Content
@@ -51,3 +56,10 @@ NeurIPS 2025 Workshop on Evaluating the Evolving LLM Lifecycle.
 PRIMARY CONNECTION: [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]
 WHY ARCHIVED: Federated RLHF mirrors our collective architecture — structural parallel worth tracking
 EXTRACTION HINT: The adaptive weighting mechanism and its connection to active inference precision weighting
+## Key Facts
+- NeurIPS 2025 Workshop on Evaluating the Evolving LLM Lifecycle
+- Tested aggregation methods: min, max, average, and adaptive weighting
+- Evaluation used PPO-based RLHF pipeline on question-answering tasks
+- Adaptive scheme adjusts weights based on historical alignment performance
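For reviewers: the aggregation schemes named in the key facts can be sketched roughly as below. This is a minimal illustration, not the paper's implementation — the function names, the per-client reward interface, and the multiplicative weight update (standing in for "adjusts weights based on historical alignment performance") are all assumptions.

```python
# Hypothetical sketch of aggregating per-client reward signals in a
# federated RLHF setup. The update rule is an assumed illustrative form,
# not the scheme from the workshop paper.

def aggregate_rewards(rewards, weights):
    """Combine one reward per client using the four named schemes."""
    return {
        "min": min(rewards),
        "max": max(rewards),
        "average": sum(rewards) / len(rewards),
        # Weighted mean; weights come from the adaptive update below.
        "adaptive": sum(w * r for w, r in zip(weights, rewards)) / sum(weights),
    }

def update_weights(weights, alignment_scores, lr=0.1):
    """Shift weight toward clients whose feedback historically tracked
    alignment performance better (hypothetical multiplicative update)."""
    raw = [w * (1 + lr * s) for w, s in zip(weights, alignment_scores)]
    total = sum(raw)
    return [w / total for w in raw]  # renormalize to sum to 1
```

The multiplicative form is one plausible reading of the extraction hint's link to active-inference precision weighting: clients with more reliable signals get higher "precision" in the aggregate.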