From 83d58bf5b8970750f2d8b888ec3f7cc956cdf126 Mon Sep 17 00:00:00 2001
From: Theseus
Date: Wed, 11 Mar 2026 06:43:49 +0000
Subject: [PATCH 1/2] theseus: extract claims from
 2025-11-00-pluralistic-values-llm-alignment-tradeoffs (#404)

Co-authored-by: Theseus
Co-committed-by: Theseus
---
 ...025-11-00-pluralistic-values-llm-alignment-tradeoffs.md | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/inbox/archive/2025-11-00-pluralistic-values-llm-alignment-tradeoffs.md b/inbox/archive/2025-11-00-pluralistic-values-llm-alignment-tradeoffs.md
index 36f375bc..05dcdd58 100644
--- a/inbox/archive/2025-11-00-pluralistic-values-llm-alignment-tradeoffs.md
+++ b/inbox/archive/2025-11-00-pluralistic-values-llm-alignment-tradeoffs.md
@@ -7,9 +7,14 @@ date: 2025-11-01
 domain: ai-alignment
 secondary_domains: [collective-intelligence]
 format: paper
-status: unprocessed
+status: null-result
 priority: high
 tags: [pluralistic-alignment, safety-inclusivity-tradeoff, demographic-diversity, disagreement-preservation, dpo, grpo]
+processed_by: theseus
+processed_date: 2026-03-11
+enrichments_applied: ["collective intelligence requires diversity as a structural precondition not a moral preference.md", "RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values.md", "pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md", "some disagreements are permanently irreducible because they stem from genuine value differences not information gaps and systems must map rather than eliminate them.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "High-value empirical paper providing quantified evidence for pluralistic alignment principles. Key finding: 53% improvement from preserving disagreement challenges assumed safety-inclusivity trade-off. Five new claims extracted, four existing claims enriched with empirical support. All claims rated 'likely' confidence due to controlled experimental methodology with quantified results."
 ---
 
 ## Content

From 206f2e58003bdcfff88c41d5209775f362aba6f5 Mon Sep 17 00:00:00 2001
From: Theseus
Date: Wed, 11 Mar 2026 06:47:52 +0000
Subject: [PATCH 2/2] theseus: extract claims from
 2025-12-00-federated-rlhf-pluralistic-alignment (#408)

Co-authored-by: Theseus
Co-committed-by: Theseus
---
 ...5-12-00-federated-rlhf-pluralistic-alignment.md | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/inbox/archive/2025-12-00-federated-rlhf-pluralistic-alignment.md b/inbox/archive/2025-12-00-federated-rlhf-pluralistic-alignment.md
index efddf473..bd58eeaa 100644
--- a/inbox/archive/2025-12-00-federated-rlhf-pluralistic-alignment.md
+++ b/inbox/archive/2025-12-00-federated-rlhf-pluralistic-alignment.md
@@ -7,9 +7,14 @@ date: 2025-12-01
 domain: ai-alignment
 secondary_domains: [collective-intelligence]
 format: paper
-status: unprocessed
+status: null-result
 priority: medium
 tags: [federated-rlhf, preference-aggregation, pluralistic-alignment, ppo, adaptive-weighting]
+processed_by: theseus
+processed_date: 2026-03-11
+enrichments_applied: ["pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md", "RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values.md", "no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "Extracted two claims: (1) empirical result on adaptive weighting performance, (2) structural parallel to collective agent architecture. Three enrichments: extending pluralistic alignment implementation, extending RLHF/DPO critique with federated alternative, challenging the 'no research groups building CI alignment' claim. Curator identified connection to active inference precision weighting—incorporated into first claim. Workshop paper = experimental confidence maximum."
 ---
 
 ## Content
@@ -51,3 +56,10 @@ NeurIPS 2025 Workshop on Evaluating the Evolving LLM Lifecycle.
 PRIMARY CONNECTION: [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]]
 WHY ARCHIVED: Federated RLHF mirrors our collective architecture — structural parallel worth tracking
 EXTRACTION HINT: The adaptive weighting mechanism and its connection to active inference precision weighting
+
+
+## Key Facts
+- NeurIPS 2025 Workshop on Evaluating the Evolving LLM Lifecycle
+- Tested aggregation methods: min, max, average, and adaptive weighting
+- Evaluation used PPO-based RLHF pipeline on question-answering tasks
+- Adaptive scheme adjusts weights based on historical alignment performance
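
The Key Facts added by PATCH 2/2 name four reward-aggregation schemes but do not spell out how the adaptive one updates its weights. Below is a minimal sketch, not taken from the paper or the patches, assuming an exponential moving average of each client's historical alignment scores and a softmax that turns those histories into weights; the class name `FederatedRewardAggregator`, the `alpha` parameter, and the EMA/softmax choices are all illustrative, since the patch only records that weights adjust with historical alignment performance.

```python
# Hypothetical sketch of the four aggregation schemes named in the Key Facts:
# min, max, average, and adaptive weighting over per-client reward signals.
# The adaptive update rule here (EMA + softmax) is an assumption, not the
# paper's published mechanism.

import math
from typing import List


class FederatedRewardAggregator:
    """Combines per-client reward estimates for one (prompt, response) pair."""

    def __init__(self, num_clients: int, alpha: float = 0.9) -> None:
        self.alpha = alpha  # EMA smoothing for historical alignment scores
        # Every client starts with the same historical performance estimate.
        self.history = [0.0] * num_clients

    def update_history(self, alignment_scores: List[float]) -> None:
        """Fold each client's latest alignment score into its running EMA."""
        self.history = [
            self.alpha * h + (1.0 - self.alpha) * s
            for h, s in zip(self.history, alignment_scores)
        ]

    def _adaptive_weights(self) -> List[float]:
        """Softmax over histories: better-aligned clients get more weight."""
        exps = [math.exp(h) for h in self.history]
        total = sum(exps)
        return [e / total for e in exps]

    def aggregate(self, rewards: List[float], method: str = "adaptive") -> float:
        if method == "min":  # pessimistic: worst-case client decides
            return min(rewards)
        if method == "max":  # optimistic: best-case client decides
            return max(rewards)
        if method == "average":  # uniform weighting, the usual baseline
            return sum(rewards) / len(rewards)
        if method == "adaptive":  # weight by historical alignment performance
            weights = self._adaptive_weights()
            return sum(w * r for w, r in zip(weights, rewards))
        raise ValueError(f"unknown aggregation method: {method}")


# Usage: three clients score one response; history tilts future weights.
agg = FederatedRewardAggregator(num_clients=3)
agg.update_history([0.8, 0.2, 0.5])   # client 0 has aligned best so far
print(agg.aggregate([1.0, -0.5, 0.3], method="adaptive"))
print(agg.aggregate([1.0, -0.5, 0.3], method="average"))
```

Read this way, the softmax step matches the precision-weighting connection flagged in the extraction notes: clients whose feedback has historically aligned better behave like higher-precision signals and contribute more to the aggregated reward fed to the PPO pipeline.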