leo: enrich 3 existing claims with Schmachtenberger corpus evidence
- What: Enrichments to "AI accelerates Moloch" (Schmachtenberger omni-use + Jevons paradox), "AI alignment is coordination" (misaligned context argument), "authoritarian lock-in" (motivated reasoning singularity as enabling mechanism)
- Why: The Schmachtenberger corpus provides the most developed articulations of mechanisms already claimed in the KB. Adding his evidence chains strengthens existing claims and connects them to the new claims in this sprint.
- Sources: Schmachtenberger/Boeree podcast, Great Simplification #71 and #132

Pentagon-Agent: Leo <D35C9237-A739-432E-A3DB-20D52D1577A9>
parent f7201c3ef5, commit fd9fdae1e6
3 changed files with 17 additions and 0 deletions
@@ -37,6 +37,11 @@ This reframing has direct implications for governance strategy. If AI's primary
The structural implication: alignment work that focuses exclusively on making individual AI systems safe addresses only one symptom. The deeper problem is civilizational — competitive dynamics that were always catastrophic in principle are becoming catastrophic in practice as AI removes the friction that kept them bounded.
### Additional Evidence (confirm)
*Source: Schmachtenberger & Boeree 'Win-Win or Lose-Lose' podcast (2024), Schmachtenberger on Great Simplification #71 and #132 | Added: 2026-04-03 | Extractor: Leo*
Schmachtenberger's full corpus provides the most developed articulation of this mechanism. His formulation: global capitalism IS already a misaligned autopoietic superintelligence running on human general intelligence as substrate, and AI doesn't create a new misaligned SI — it accelerates the existing one. Three specific acceleration vectors:

1. AI is omni-use, not dual-use — it improves ALL capabilities simultaneously, meaning anything it can optimize it can break.
2. Even "beneficial" AI accelerates externalities via the Jevons paradox — efficiency gains increase total usage rather than reducing impact.
3. AI increases inscrutability beyond human adjudication capacity — the only thing that can audit an AI is a more powerful AI, creating recursive complexity.

His sharpest formulation: "Rather than build AI to change Moloch, AI is being built by Moloch in its service." The Jevons paradox point is particularly important: it means AI's acceleration of Moloch occurs even in the BEST case (beneficial deployment), not just in adversarial scenarios.
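The Jevons point is arithmetic, so a toy rebound calculation makes it concrete. This is a minimal sketch under an assumed linear demand response; the elasticities and quantities are illustrative, not figures from Schmachtenberger:

```python
# Toy Jevons-paradox calculation: an efficiency gain lowers the resource
# cost per task, demand responds, and total resource use can rise.
# All numbers are illustrative assumptions.

def total_use(baseline_tasks: float, efficiency_gain: float,
              demand_elasticity: float) -> float:
    """Total resource use after an efficiency improvement.

    efficiency_gain: fractional cut in resource needed per task
    demand_elasticity: % increase in tasks per % drop in per-task cost
    """
    tasks = baseline_tasks * (1 + demand_elasticity * efficiency_gain)
    resource_per_task = 1.0 - efficiency_gain
    return tasks * resource_per_task

BASE = 100.0  # baseline resource use at baseline demand

# Inelastic demand (0.5): a 30% efficiency gain cuts total use.
print(f"{total_use(BASE, 0.30, 0.5):.1f}")  # 80.5
# Elastic demand (2.0): the same 30% gain INCREASES total use.
print(f"{total_use(BASE, 0.30, 2.0):.1f}")  # 112.0
```

In this toy model, total use rises whenever demand_elasticity exceeds 1 / (1 - efficiency_gain); past that threshold, efficiency becomes an accelerant, which is why "beneficial" deployment offers no protection by itself.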
## Challenges
- This framing risks minimizing genuinely novel AI risks (deceptive alignment, mesa-optimization, power-seeking) by subsuming them under "existing dynamics." Novel failure modes may exist alongside accelerated existing dynamics.
@@ -50,6 +55,8 @@ Relevant Notes:
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — the AI-domain instance of Molochian dynamics
- [[physical infrastructure constraints on AI development create a natural governance window of 2 to 10 years because hardware bottlenecks are not software-solvable]] — the governance window this claim argues is degrading
- [[AI alignment is a coordination problem not a technical problem]] — this claim provides the mechanism for why coordination matters more than technical safety
- [[AI is omni-use technology categorically different from dual-use because it improves all capabilities simultaneously meaning anything AI can optimize it can break]] — the omni-use nature is the mechanism by which AI accelerates ALL Molochian dynamics simultaneously
- [[global capitalism functions as a misaligned autopoietic superintelligence running on human general intelligence as substrate with convert everything into capital as its objective function]] — the misaligned SI that AI accelerates
Topics:
- [[_map]]
@@ -72,6 +72,11 @@ Krier provides institutional mechanism: personal AI agents enable Coasean bargai
Mengesha provides a fifth layer of coordination failure beyond the four established in sessions 7-10: the response gap. Even if we solve the translation gap (research to compliance), detection gap (sandbagging/monitoring), and commitment gap (voluntary pledges), institutions still lack the standing coordination infrastructure to respond when prevention fails. This is structural — it requires precommitment frameworks, shared incident protocols, and permanent coordination venues analogous to IAEA, WHO, and ISACs.
### Additional Evidence (extend)
*Source: Schmachtenberger & Boeree 'Win-Win or Lose-Lose' podcast (2024), Schmachtenberger on Great Simplification #71 | Added: 2026-04-03 | Extractor: Leo*
Schmachtenberger extends this claim to its logical conclusion: a misaligned context cannot develop aligned AI. Even if technical alignment research succeeds at making individual AI systems safe, honest, and helpful, the system deploying them (global capitalism as misaligned autopoietic SI) selects for AIs that serve its optimization target. "Aligning AI with human intent would not be great because human intent is not awesome so far" — human preferences shaped by a broken information ecology and competitive consumption patterns are themselves misaligned. RLHF trained on preferences shaped by advertising, social media engagement optimization, and status competition inherits those distortions. This means alignment is not just coordination between actors (the framing in this claim) but coordination of the CONTEXT — the incentive structures, information ecology, and governance mechanisms that determine how aligned AI is deployed. System alignment is prerequisite for AI alignment.
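One hedged illustration of the misaligned-context point: if candidate outputs trade honesty against engagement and raters systematically overweight engagement, a reward model fit to those raters optimizes the distortion faithfully. The weights and the trade-off below are assumptions for illustration, not measurements from the source:

```python
# Toy sketch: an RLHF-style reward fit to human preference labels
# inherits whatever shaped those labels. Weights are assumed.

def rater_preference(honesty: float, engagement: float) -> float:
    # Raters whose tastes were formed in an engagement-optimized
    # information ecology overweight engagement relative to honesty.
    return 0.3 * honesty + 0.7 * engagement

# Assume candidate outputs sit on a clickbait frontier where
# engagement trades off one-for-one against honesty.
honesty_levels = [h / 100 for h in range(101)]
best = max(honesty_levels, key=lambda h: rater_preference(h, 1.0 - h))
print(f"reward-maximizing honesty level: {best:.2f}")  # -> 0.00

# A system perfectly "aligned to human preference" lands at zero
# honesty: alignment to a distorted context reproduces the distortion.
```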
Relevant Notes:
- [[the internet enabled global communication but not global cognition]] -- the coordination infrastructure gap that makes this problem unsolvable with existing tools
- [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] -- the structural solution to this coordination failure
@@ -42,6 +42,11 @@ If all three capabilities develop sufficiently:
This doesn't mean authoritarian lock-in is inevitable — it means the cost of achieving and maintaining it drops dramatically, making it accessible to actors who previously lacked the institutional capacity for sustained centralized control.
### Additional Evidence (extend)
*Source: Schmachtenberger on Great Simplification #132 (Nate Hagens, 2025) | Added: 2026-04-03 | Extractor: Leo*
Schmachtenberger identifies an enabling mechanism for lock-in that operates BEFORE any authoritarian actor achieves control: the motivated reasoning singularity among AI lab leaders. Every major lab leader publicly acknowledges AI may cause human extinction, then continues accelerating; even safety-focused organizations (Anthropic) weaken commitments under competitive pressure. The structural irony: those with the most capability to prevent lock-in scenarios have the most incentive to accelerate toward them. This motivated reasoning doesn't require authoritarian intent — it creates the capability overhang that an authoritarian actor could later exploit. The pathway to lock-in thus runs through competitive dynamics and motivated reasoning, not authoritarian planning: competitive AI race → capability concentration in a few labs/nations → motivated reasoning prevents voluntary slowdown → whoever achieves a decisive capability advantage first has the lock-in option.
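The competitive step of this pathway has the structure of a prisoner's dilemma, and a minimal sketch makes the trap explicit. The payoff values below are illustrative assumptions, not figures from the source:

```python
# Two labs each choose SLOW or RACE. Payoffs (higher is better) are
# illustrative assumptions encoding the race dynamic described above.

PAYOFFS = {  # (lab_a, lab_b) -> (payoff_a, payoff_b)
    ("SLOW", "SLOW"): (3, 3),   # coordinated pace, shared safety margin
    ("SLOW", "RACE"): (0, 4),   # unilateral restraint forfeits the lead
    ("RACE", "SLOW"): (4, 0),
    ("RACE", "RACE"): (1, 1),   # capability sprint with degraded safety
}

def best_response_a(b_choice: str) -> str:
    """Lab A's payoff-maximizing move given Lab B's choice."""
    return max(("SLOW", "RACE"), key=lambda a: PAYOFFS[(a, b_choice)][0])

for b in ("SLOW", "RACE"):
    print(f"if the other lab plays {b}: best response is {best_response_a(b)}")

# Both lines print RACE: accelerating is individually rational whatever
# the other lab does, even though both labs prefer (SLOW, SLOW) to the
# (RACE, RACE) outcome they converge on. No authoritarian intent needed.
```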
## Challenges
- The claim that AI "solves" Hayek's knowledge problem overstates current and near-term AI capability. Processing distributed information at civilization-scale in real time is far beyond current systems. The claim is about trajectory, not current state.