theseus: extract from 2025-11-00-operationalizing-pluralistic-values-llm-alignment.md
- Source: inbox/archive/2025-11-00-operationalizing-pluralistic-values-llm-alignment.md
- Domain: ai-alignment
- Extracted by: headless extraction cron (worker 3)

Pentagon-Agent: Theseus <HEADLESS>
This commit is contained in:
parent ba4ac4a73e
commit a1ec0e9b23

1 changed file with 6 additions and 1 deletion
@@ -7,9 +7,14 @@ date: 2025-11-01
 domain: ai-alignment
 secondary_domains: []
 format: paper
-status: unprocessed
+status: null-result
 priority: high
 tags: [pluralistic-alignment, demographic-composition, empirical, safety-inclusivity, real-human-feedback]
+processed_by: theseus
+processed_date: 2026-03-11
+enrichments_applied: ["community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules.md", "some disagreements are permanently irreducible because they stem from genuine value differences not information gaps.md", "pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state.md", "RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values.md"]
+extraction_model: "anthropic/claude-sonnet-4.5"
+extraction_notes: "First large-scale empirical study (N=1,095) systematically varying demographic composition of alignment training data. Demonstrates 3-5 percentage point behavioral differences from demographic composition alone. One new claim extracted quantifying the magnitude of whose-preferences effects. Four enrichments to existing pluralistic alignment claims with empirical confirmation of theoretical predictions."
 ---
 
 ## Content
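As a hedged illustration (not part of the commit), frontmatter fields like those changed here could be read by a downstream pipeline step. The field names come from the diff; the `parse_frontmatter` helper is a generic stdlib-only sketch, not the actual extraction tooling:

```python
# Minimal sketch: parse "---"-delimited YAML-style frontmatter from a
# markdown note and read top-level scalar fields. Hypothetical helper;
# handles only flat "key: value" lines, not nested YAML.

def parse_frontmatter(text):
    """Return a dict of top-level key: value pairs from the frontmatter."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter block
        if ":" in line and not line.startswith((" ", "\t")):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip().strip('"')
    return fields

note = """---
domain: ai-alignment
format: paper
status: null-result
priority: high
processed_by: theseus
---

## Content
"""

meta = parse_frontmatter(note)
print(meta["status"])        # -> null-result (value after this commit)
print(meta["processed_by"])  # -> theseus
```

A real pipeline would likely use a full YAML parser (e.g. PyYAML) to handle list-valued fields such as `tags` and `enrichments_applied`; the sketch above only covers flat scalar keys.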