- What: 7 NEW claims from Yudkowsky's foundational AI alignment work:
  - Sharp left turn (capabilities diverge from alignment at scale)
  - Corrigibility-effectiveness tension (deception is free, corrigibility is hard)
  - No fire alarm thesis (structural absence of a warning signal)
  - Multipolar instability (CHALLENGE to the collective superintelligence thesis)
  - Returns on cognitive reinvestment (intelligence explosion framework)
  - Verification asymmetry breaks at superhuman scale
  - Training reward-desire chaos (RLHF unreliable at scale)
- Why: Yudkowsky is the foundational figure in AI alignment — the KB had ~89 claims with near-zero direct engagement with his core arguments. The multipolar instability claim is the most important CHALLENGE to our collective superintelligence thesis identified to date.
- Sources: 'AGI Ruin' (2022), 'Intelligence Explosion Microeconomics' (2013), 'No Fire Alarm' (2017), 'If Anyone Builds It, Everyone Dies' (2025), MIRI corrigibility work
- Pre-screening: ~40% overlap with existing KB (orthogonality and instrumental convergence already present). All 7 claims fill genuine gaps. challenged_by and challenges fields populated.
- Pentagon-Agent: Theseus <46864dd4-da71-4719-a1b4-68f7c55854d3>
| source | author | title | date | url | status | domain | format | tags | notes |
|---|---|---|---|---|---|---|---|---|---|
| collected | Eliezer Yudkowsky | Yudkowsky Core Arguments — Collected Works | 2025-09-26 | null | processing | ai-alignment | collected | | Compound source covering Yudkowsky's core body of work: 'AGI Ruin: A List of Lethalities' (2022), 'Intelligence Explosion Microeconomics' (2013), 'There's No Fire Alarm for AGI' (2017), Sequences/Rationality: A-Z (2006-2009), TIME op-ed 'Shut It Down' (2023), 'If Anyone Builds It, Everyone Dies' with Nate Soares (2025), various LessWrong posts on corrigibility and mesa-optimization. Yudkowsky is the foundational figure in AI alignment — co-founder of MIRI, originator of instrumental convergence, the orthogonality thesis, and the intelligence explosion framework. Most alignment discourse either builds on or reacts against his arguments. |
# Yudkowsky Core Arguments — Collected Works
Eliezer Yudkowsky's foundational contributions to AI alignment, synthesized across his major works from 2006-2025. This is a compound source because his arguments form a coherent system — individual papers express facets of a unified worldview rather than standalone claims.
## Key Works
- Sequences / Rationality: A-Z (2006-2009) — Epistemic foundations. Beliefs must "pay rent" in predictions. Bayesian epistemology as substrate. Map-territory distinction.
- "Intelligence Explosion Microeconomics" (2013) — Formalizes returns on cognitive reinvestment. If reinvesting cognitive output into further capability yields constant or increasing returns, recursive self-improvement produces discontinuous capability gain (see the toy model after this list).
- "There's No Fire Alarm for AGI" (2017) — Structural absence of a warning signal. Capability scaling is gradual and ambiguous; collective action requires anticipation, not reaction.
- "AGI Ruin: A List of Lethalities" (2022) — Concentrated doom argument. Alignment techniques that work at low capability fail catastrophically at superintelligence. No iteration on the critical try. ~2-year proliferation window.
- TIME op-ed "Shut It Down" (2023) — Indefinite worldwide moratorium, decreasing compute caps, GPU tracking, military enforcement. Most aggressive mainstream policy position.
- "If Anyone Builds It, Everyone Dies" with Nate Soares (2025) — Book-length treatment. Fast takeoff → near-certain extinction. The training reward-desire link is chaotic. Multipolar AI outcomes are unstable. International treaty enforcement needed.
## Cross-Referencing Debates
- vs. Robin Hanson (AI-Foom Debate, 2008-2013): Takeoff speed. Yudkowsky: recursive self-improvement → hard takeoff. Hanson: gradual, economy-driven.
- vs. Paul Christiano (ongoing): Prosaic alignment sufficient? Christiano: yes, empirical iteration works. Yudkowsky: no, sharp left turn makes it fundamentally inadequate.
- vs. Richard Ngo: Can we build intelligent but less agentic AI? Ngo: yes. Yudkowsky: agency is instrumentally convergent.
- vs. Shard Theory (Pope, Turner) and Rohin Shah: Value formation complexity. Shah: gradient descent is less analogous to evolution than Yudkowsky's arguments assume. Doom estimates ~5% vs. Yudkowsky's much higher.