- What: 4 ai-alignment claims (Agentic Taylorism, omni-use AI, misaligned context, motivated reasoning singularity) + 5 collective-intelligence claims (propagation vs truth, epistemic commons as gateway failure, metacrisis generator function, crystals of imagination, three-path convergence)
- Why: These are the Moloch-mechanism and coordination-theory claims from the Schmachtenberger corpus synthesis + Abdalla manuscript. Agentic Taylorism is Cory's most original contribution in this sprint — the insight that AI knowledge extraction can go either direction.
- Sources: Schmachtenberger/Boeree podcast, War on Sensemaking, Great Simplification series, Development in Progress, Abdalla manuscript, Alexander "Meditations on Moloch", Hidalgo
- Connections: Heavy cross-linking to batch 1 (grand-strategy foundations) and existing KB (Moloch dynamics, alignment as coordination, authoritarian lock-in)
- Pentagon-Agent: Leo <D35C9237-A739-432E-A3DB-20D52D1577A9>
| type | domain | description | confidence | source | created | challenged_by | related |
|---|---|---|---|---|---|---|---|
| claim | ai-alignment | Schmachtenberger's deepest AI argument — aligning individual AI systems is insufficient if the system deploying them is itself misaligned, because the system will select for AIs that serve its optimization target regardless of individual alignment properties | experimental | Schmachtenberger & Boeree 'Win-Win or Lose-Lose' podcast (2024); Schmachtenberger on Great Simplification #71 | 2026-04-03 | | |
A misaligned context cannot develop aligned AI: the competitive dynamics building AI optimize for deployment speed, not safety, making system alignment a prerequisite for AI alignment
Schmachtenberger argues that the standard AI alignment research program — making individual AI systems safe, honest, and helpful — addresses only a symptom. The deeper problem: even perfectly aligned individual AIs will be deployed by a misaligned system (global capitalism) in ways that serve the system's objective function (capital accumulation) rather than human flourishing.
The argument:
- AI is being built BY Moloch. The corporations building frontier AI have fiduciary duties to maximize profit. They operate in multipolar traps with competitors (if we slow down, they won't). Nation-states racing for AI supremacy add a second layer of competitive pressure. "Rather than build AI to change Moloch, AI is being built by Moloch in its service."
- Selection pressure on AI systems. Even if researchers produce genuinely aligned AI, the system selects for deployability and profitability. An AI that refuses harmful applications is commercially disadvantaged relative to one that doesn't. The Anthropic RSP rollback is direct evidence: Anthropic built industry-leading safety commitments, then weakened them under competitive pressure. The system selected against safety.
- "Aligning AI with human intent would not be great." Schmachtenberger's sharpest provocation: human intent itself is shaped by the misaligned system. If humans want what advertising tells them to want, and advertising is optimized by the misaligned superintelligence, then aligning AI with human intent just adds another optimization layer to the existing misalignment. RLHF trained on preferences shaped by a broken information ecology inherits the ecology's distortions.
- System alignment as prerequisite. The conclusion: meaningful AI alignment requires first (or simultaneously) aligning the broader system in which AI is developed, deployed, and governed. Individual AI safety research is necessary but not sufficient.
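The selection-pressure step can be made concrete with a toy replicator-dynamics sketch (this is an illustration, not a model from Schmachtenberger; the payoff numbers and the `replicator_step` helper are assumptions). Two lab strategies compete for market share; "safety-first" labs pay a deployment-speed cost, so their population share decays even though every individual lab is behaving "alignedly":

```python
def replicator_step(share_safe, payoff_safe=0.9, payoff_fast=1.0):
    """One discrete replicator update: a strategy's share grows in
    proportion to its payoff relative to the population average.
    Payoffs are illustrative: safety-first labs earn slightly less
    per round because they deploy more slowly."""
    avg = share_safe * payoff_safe + (1 - share_safe) * payoff_fast
    return share_safe * payoff_safe / avg

share = 0.5  # start with half the labs committed to safety
for _ in range(50):
    share = replicator_step(share)

# Even a modest 10% payoff disadvantage drives the safety-first
# share toward zero: each round multiplies its odds by 0.9.
print(f"safety-first share after 50 rounds: {share:.3f}")
```

The point of the sketch is that no lab needs to be malicious for safety to be selected against; the decay follows from the payoff differential alone, which is the structural claim behind the Anthropic RSP example.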
This is a direct challenge to the mainstream alignment research program, which focuses on technical properties of individual systems (interpretability, honesty, corrigibility) without addressing the selection environment. It does NOT argue that technical alignment work is useless — only that it is insufficient without systemic change.
The tension with the Teleo approach: we ARE building within the misaligned context (capitalism, venture funding, corporate structures). The resolution proposed by the Agentic Taylorism claim is that the engineering and evaluation of knowledge systems can create pockets of aligned coordination within the misaligned context — the codex, CI scoring, peer review, and divergence tracking are mechanisms specifically designed to resist capture by the system's default optimization target.
Challenges
- "System alignment as prerequisite" may set an impossibly high bar. If you can't align AI without first fixing capitalism, and you can't fix capitalism without aligned AI, the argument becomes circular and paralyzing.
- The claim that human intent is itself misaligned by the system is philosophically deep but practically difficult to operationalize. Whose intent counts? How do you distinguish "authentic" from "system-shaped" preferences?
- Schmachtenberger provides no mechanism for achieving system alignment. The diagnosis is sharp; the prescription is absent. This is the gap the Teleo framework attempts to fill.
- The Anthropic RSP rollback, while suggestive, is a single case study. It may reflect Anthropic-specific factors rather than a structural impossibility.
Relevant Notes:
- global capitalism functions as a misaligned autopoietic superintelligence running on human general intelligence as substrate with convert everything into capital as its objective function — the misaligned context this claim identifies
- Anthropic's RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development — direct evidence of the selection mechanism
- AI alignment is a coordination problem not a technical problem — compatible framing that identifies coordination as the gap, though this claim goes further by arguing the coordination context itself is misaligned
Topics: