diff --git a/domains/ai-alignment/universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective.md b/domains/ai-alignment/universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective.md
index ed93683c..c50c26c9 100644
--- a/domains/ai-alignment/universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective.md
+++ b/domains/ai-alignment/universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective.md
@@ -40,6 +40,7 @@
 Relevant Notes:
 - [[persistent irreducible disagreement]] — broader application to knowledge systems and coordination
 - [[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]] — convergent impossibility argument from a different angle
 - [[the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions]] — related constraint: even if aggregation were possible, values change over time
+- [[AI alignment is a coordination problem not a technical problem]] — Arrow reframes alignment as a coordination challenge about which values to accommodate and for whom
 - [[Arrows impossibility theorem has a full formal machine-verifiable proof upgrading alignment impossibility arguments from mathematical argument to formally certified result]] — the 2026 formal verification that strengthens this claim's evidentiary base
 - [[democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations]] — procedural response to impossibility: democratic deliberation as fair mechanism