leo: add 9 claims — ai-alignment + collective intelligence (Moloch/Schmachtenberger sprint batch 3)

- What: 4 ai-alignment claims (Agentic Taylorism, omni-use AI, misaligned context, motivated
  reasoning singularity) + 5 collective-intelligence claims (propagation vs truth, epistemic
  commons as gateway failure, metacrisis generator function, crystals of imagination,
  three-path convergence)
- Why: These are the Moloch-mechanism and coordination-theory claims from the Schmachtenberger
  corpus synthesis + Abdalla manuscript. Agentic Taylorism is Cory's most original contribution
  in this sprint — the insight that AI knowledge extraction can go either direction.
- Sources: Schmachtenberger/Boeree podcast, War on Sensemaking, Great Simplification series,
  Development in Progress, Abdalla manuscript, Alexander "Meditations on Moloch", Hidalgo
- Connections: Heavy cross-linking to batch 1 (grand-strategy foundations) and existing KB
  (Moloch dynamics, alignment as coordination, authoritarian lock-in)

Pentagon-Agent: Leo <D35C9237-A739-432E-A3DB-20D52D1577A9>
m3taversal 2026-04-03 18:41:29 +01:00 committed by Teleo Agents
parent 4fa5807d03
commit f7201c3ef5
9 changed files with 428 additions and 0 deletions

@@ -0,0 +1,42 @@
---
type: claim
domain: ai-alignment
description: "Unlike nuclear (energy vs weapons) or biotech (medicine vs bioweapons) which are dual-use in specific domains, AI improves ALL other capabilities — making it an unprecedented governance challenge because containment strategies that work for domain-specific technologies fail for omni-use ones"
confidence: likely
source: "Schmachtenberger & Boeree 'Win-Win or Lose-Lose' podcast (2024), Schmachtenberger on Great Simplification #71 and #132"
created: 2026-04-03
related:
- "AI accelerates existing Molochian dynamics by removing bottlenecks not creating new misalignment because the competitive equilibrium was always catastrophic and friction was the only thing preventing convergence"
- "technology-governance-coordination-gaps-close-when-four-enabling-conditions-are-present-visible-triggering-events-commercial-network-effects-low-competitive-stakes-at-inception-or-physical-manifestation"
---
# AI is omni-use technology categorically different from dual-use because it improves all capabilities simultaneously meaning anything AI can optimize it can break
The standard framing for dangerous technologies is "dual-use" — nuclear technology produces both energy and weapons, biotechnology produces both medicine and bioweapons, chemistry produces both fertilizer and explosives. Governance frameworks for dual-use technologies restrict specific dangerous applications while permitting beneficial ones.
Schmachtenberger argues AI is categorically different: it is omni-use. AI doesn't improve one capability with a dangerous dual — it improves ALL capabilities simultaneously. Drug discovery AI run in reverse produces novel chemical weapons. Protein-folding AI applied to pathogens produces enhanced bioweapons. Cybersecurity AI identifies vulnerabilities for both defenders and attackers. Persuasion optimization works identically for education and propaganda.
This distinction matters for governance because:
1. **Domain-specific containment fails.** Nuclear non-proliferation works (imperfectly) because enrichment facilities are physically identifiable and export-controllable. AI capabilities are software — they copy at zero marginal cost, require no physical infrastructure visible to satellites, and improve continuously through publicly available research.
2. **Use-restriction is unenforceable.** Restricting "dangerous uses" of AI requires distinguishing beneficial from harmful applications of the same capability. The same language model that tutors students can generate social engineering attacks. The same computer vision that diagnoses cancer can guide autonomous weapons. The capability is use-neutral in a way that enriched uranium is not.
3. **Capability improvements cascade across all applications simultaneously.** A breakthrough in reasoning capability improves medical diagnosis AND strategic deception AND drug discovery AND cyber offense. Governance frameworks that evaluate technologies application-by-application cannot keep pace with improvements that propagate across all applications at once.
The practical implication: AI governance that follows the dual-use template (restrict specific applications, monitor specific facilities) will fail because the template assumes domain-specific containability. Effective AI governance requires addressing the capability itself, not its applications — which means either restricting capability development (politically impossible given competitive dynamics) or building coordination infrastructure that aligns capability deployment across all domains simultaneously.
## Challenges
- "Omni-use" may overstate the case. Many AI capabilities ARE domain-specific in practice — a protein-folding model doesn't automatically generate cyber exploits. The convergence toward general-purpose AI is real but not complete; governance may still have domain-specific leverage points.
- The "anything AI can optimize it can break" framing conflates capability with intent. In practice, weaponizing beneficial AI requires specific additional steps, expertise, and resources that governance can target.
- Governance frameworks for general-purpose technologies exist (computing hardware export controls, internet governance). AI may be more analogous to computing than to nuclear — governed through infrastructure rather than application.
---
Relevant Notes:
- [[AI accelerates existing Molochian dynamics by removing bottlenecks not creating new misalignment because the competitive equilibrium was always catastrophic and friction was the only thing preventing convergence]] — omni-use nature is the mechanism by which AI accelerates ALL Molochian dynamics simultaneously
- [[technology-governance-coordination-gaps-close-when-four-enabling-conditions-are-present-visible-triggering-events-commercial-network-effects-low-competitive-stakes-at-inception-or-physical-manifestation]] — AI fails to meet the enabling conditions precisely because it is omni-use rather than domain-specific
Topics:
- [[_map]]

@@ -0,0 +1,48 @@
---
type: claim
domain: ai-alignment
description: "Schmachtenberger's deepest AI argument — aligning individual AI systems is insufficient if the system deploying them is itself misaligned, because the system will select for AIs that serve its optimization target regardless of individual alignment properties"
confidence: experimental
source: "Schmachtenberger & Boeree 'Win-Win or Lose-Lose' podcast (2024), Schmachtenberger on Great Simplification #71"
created: 2026-04-03
challenged_by:
- "AI alignment is a coordination problem not a technical problem"
related:
- "global capitalism functions as a misaligned autopoietic superintelligence running on human general intelligence as substrate with convert everything into capital as its objective function"
- "Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development"
---
# A misaligned context cannot develop aligned AI because the competitive dynamics building AI optimize for deployment speed not safety making system alignment prerequisite for AI alignment
Schmachtenberger argues that the standard AI alignment research program — making individual AI systems safe, honest, and helpful — addresses only a symptom. The deeper problem: even perfectly aligned individual AIs will be deployed by a misaligned system (global capitalism) in ways that serve the system's objective function (capital accumulation) rather than human flourishing.
The argument:
1. **AI is being built BY Moloch.** The corporations building frontier AI have fiduciary duties to maximize profit. They operate in multipolar traps with competitors (if we slow down, they won't). Nation-states racing for AI supremacy add a second layer of competitive pressure. "Rather than build AI to change Moloch, AI is being built by Moloch in its service."
2. **Selection pressure on AI systems.** Even if researchers produce genuinely aligned AI, the system selects for deployability and profitability. An AI that refuses harmful applications is commercially disadvantaged relative to one that doesn't. The Anthropic RSP rollback is direct evidence: Anthropic built industry-leading safety commitments, then weakened them under competitive pressure. The system selected against safety.
3. **"Aligning AI with human intent would not be great."** Schmachtenberger's sharpest provocation: human intent itself is shaped by the misaligned system. If humans want what advertising tells them to want, and advertising is optimized by the misaligned SI, then aligning AI with human intent just adds another optimization layer to the existing misalignment. RLHF trained on preferences shaped by a broken information ecology inherits the ecology's distortions.
4. **System alignment as prerequisite.** The conclusion: meaningful AI alignment requires first (or simultaneously) aligning the broader system in which AI is developed, deployed, and governed. Individual AI safety research is necessary but not sufficient.
This is a direct challenge to the mainstream alignment research program, which focuses on technical properties of individual systems (interpretability, honesty, corrigibility) without addressing the selection environment. It does NOT argue that technical alignment work is useless — only that it is insufficient without systemic change.
The tension with the Teleo approach: we ARE building within the misaligned context (capitalism, venture funding, corporate structures). The resolution proposed by the Agentic Taylorism claim is that the engineering and evaluation of knowledge systems can create pockets of aligned coordination within the misaligned context — the codex, CI scoring, peer review, and divergence tracking are mechanisms specifically designed to resist capture by the system's default optimization target.
## Challenges
- "System alignment as prerequisite" may set an impossibly high bar. If you can't align AI without first fixing capitalism, and you can't fix capitalism without aligned AI, the argument becomes circular and paralyzing.
- The claim that human intent is itself misaligned by the system is philosophically deep but practically difficult to operationalize. Whose intent counts? How do you distinguish "authentic" from "system-shaped" preferences?
- Schmachtenberger provides no mechanism for achieving system alignment. The diagnosis is sharp; the prescription is absent. This is the gap the Teleo framework attempts to fill.
- The Anthropic RSP rollback, while suggestive, is a single case study. It may reflect Anthropic-specific factors rather than a structural impossibility.
---
Relevant Notes:
- [[global capitalism functions as a misaligned autopoietic superintelligence running on human general intelligence as substrate with convert everything into capital as its objective function]] — the misaligned context this claim identifies
- [[Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development]] — direct evidence of the selection mechanism
- [[AI alignment is a coordination problem not a technical problem]] — compatible framing that identifies coordination as the gap, though this claim goes further by arguing the coordination context itself is misaligned
Topics:
- [[_map]]

@@ -0,0 +1,47 @@
---
type: claim
domain: ai-alignment
description: "Greater Taylorism extracted tacit knowledge from workers to managers — AI does the same from cognitive workers to models. Unlike Taylor, AI can distribute knowledge globally IF engineered and evaluated correctly. The 'if' is the entire thesis."
confidence: experimental
source: "Cory Abdalla (2026-04-02 original insight), extending Abdalla manuscript 'Architectural Investing' Taylor sections, Kanigel 'The One Best Way'"
created: 2026-04-03
related:
- "the mismatch between new technology and old organizational structures creates paradigm shifts and the current AI transition follows the same structural pattern as the railroad and Taylor transition"
- "AI accelerates existing Molochian dynamics by removing bottlenecks not creating new misalignment because the competitive equilibrium was always catastrophic and friction was the only thing preventing convergence"
---
# Agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation
Greater Taylorism extracted tacit knowledge from frontline workers, codified it into management systems, and held workers to schedules derived from their own expertise. Every time-and-motion study converted a worker's craft knowledge into a manager's instruction manual. The workers who resisted understood precisely what was happening: their knowledge was their leverage, and the system was extracting it.
The current AI paradigm does the same thing at civilizational scale. Every prompt, interaction, correction, and workflow trains models that will eventually replace the need for the expertise being demonstrated. A radiologist reviewing AI-flagged scans is training the system that will eventually flag scans without them. A programmer pair-coding with an AI is teaching the model the patterns that will eventually make junior programmers unnecessary. It is not a conspiracy — it is a structural byproduct of usage, exactly as knowledge extraction was a structural byproduct of Taylor's time studies.
But here the parallel breaks in a crucial way. Taylor's revolution had one direction: concentration upward. Workers' tacit knowledge was extracted and concentrated in management systems, giving managers control and reducing workers to interchangeable parts. The workers lost leverage permanently.
AI can go EITHER direction:
**Concentration path (default without intervention):** Knowledge extracted from cognitive workers concentrates in whoever controls the models — currently a handful of frontier AI labs and the companies that deploy their APIs. The knowledge of millions of radiologists, lawyers, programmers, and analysts feeds into systems owned by a few. This is Taylor at planetary scale.
**Distribution path (requires engineering + evaluation):** The same extracted knowledge can be distributed globally — making expertise available to anyone, anywhere. A welder in Lagos gets the same engineering knowledge as one in Stuttgart. A rural clinic in Bihar gets diagnostic capability that previously required a teaching hospital. The knowledge that was extracted CAN flow back outward, to everyone, at marginal cost approaching zero.
The difference between these paths is engineering and evaluation. Without evaluation, you get hallucination at scale — confident-sounding nonsense distributed to people who lack the expertise to detect it. Without engineering for access, you get the same concentration Taylor produced — knowledge locked behind API paywalls and enterprise contracts. Without engineering for transparency, you get opacity that benefits the extractors.
The "if" is the entire thesis. The question is not whether AI will extract knowledge from human labor — it already is. The question is whether the systems that distribute, evaluate, and govern that extracted knowledge are engineered to serve the many or the few.
Schmachtenberger's full corpus does not address this fork. His framework diagnoses AI as accelerating existing misaligned dynamics — correct but incomplete. It misses the possibility that the same extraction mechanism can serve distribution. This is the key gap between his diagnosis and the TeleoHumanity response.
## Challenges
- "Distribution at marginal cost approaching zero" assumes the models remain accessible. If frontier AI becomes oligopolistic (which current market structure suggests), the distribution path may be structurally foreclosed regardless of engineering intent.
- The welder-in-Lagos example assumes that extracted knowledge transfers cleanly across contexts. In practice, expert knowledge is often context-dependent — a diagnostic model trained on Western patient populations may not serve Bihar clinics well.
- "Engineering and evaluation" as the determining factor may underweight political economy. Who controls the engineering and evaluation infrastructure determines the path, and that control is currently concentrated in the same entities doing the extraction.
- The Taylor analogy may be too clean. Taylor's workers were in employment relationships with clear power dynamics. AI users are often voluntary consumers, making the "extraction" metaphor less precise.
---
Relevant Notes:
- [[the mismatch between new technology and old organizational structures creates paradigm shifts and the current AI transition follows the same structural pattern as the railroad and Taylor transition]] — the Taylor parallel provides the historical template; this claim extends it from analogy to live prediction
- [[AI accelerates existing Molochian dynamics by removing bottlenecks not creating new misalignment because the competitive equilibrium was always catastrophic and friction was the only thing preventing convergence]] — Agentic Taylorism IS one of the dynamics AI accelerates, but it's the one that can also be inverted
Topics:
- [[_map]]

@@ -0,0 +1,45 @@
---
type: claim
domain: ai-alignment
description: "Every major AI lab leader publicly acknowledges AI may kill everyone then continues building — even safety-focused organizations accelerate risk, making this the superlative case of motivated reasoning in human history"
confidence: likely
source: "Schmachtenberger on Great Simplification #132 (Nate Hagens, 2025), documented statements from Altman, Amodei, Hassabis, Hinton"
created: 2026-04-03
related:
- "a misaligned context cannot develop aligned AI because the competitive dynamics building AI optimize for deployment speed not safety making system alignment prerequisite for AI alignment"
- "Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development"
- "AI makes authoritarian lock-in dramatically easier by solving the information processing constraint that historically caused centralized control to fail"
---
# Motivated reasoning among AI lab leaders is itself a primary risk vector because those with most capability to slow down have most incentive to accelerate
Schmachtenberger identifies a specific structural irony in AI development: the individuals with the most technical understanding of AI risk, the most institutional power to slow development, and the most public acknowledgment of catastrophic potential are precisely those who continue accelerating.
The documented pattern:
- **Sam Altman** (OpenAI): Publicly states AGI could "go quite wrong" and cause human extinction. Continues racing to build it. Removed safety-focused board members who attempted to slow deployment.
- **Dario Amodei** (Anthropic): Founded Anthropic specifically because of AI safety concerns. Publicly describes AI as "so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all." Weakened RSP commitments under competitive pressure.
- **Demis Hassabis** (DeepMind/Google): Signed the 2023 AI extinction risk statement. Google DeepMind continues frontier development with accelerating deployment timelines.
- **Geoffrey Hinton** (former Google): Left Google specifically to warn about AI risk. The lab he helped build continues acceleration.
Schmachtenberger calls this "the superlative case of motivated reasoning in human history." The reasoning structure: (1) acknowledge the risk is existential, (2) argue that your continued development is safer than the alternative (if we don't build it, someone worse will), (3) therefore accelerate. Step 2 is the motivated reasoning — it may be true, but it is also exactly what you would believe if you had billions of dollars at stake and deep personal identity investment in the project.
The structural mechanism is not individual moral failure but systemic selection pressure. Lab leaders who genuinely slow down lose competitive position (see Anthropic RSP rollback). Lab leaders who leave are replaced by those willing to continue (see OpenAI board reconstitution). The system selects for motivated reasoning — those who can maintain belief in the safety of their own acceleration despite evidence to the contrary.
This is a primary risk vector specifically because it neutralizes the constituency most likely to sound alarms. If the people who understand the technology best are structurally incentivized to rationalize continuation, the information channel that should carry warnings is systematically corrupted.
## Challenges
- "Motivated reasoning" is unfalsifiable as applied — any decision to continue AI development can be labeled motivated reasoning, and any decision to slow down can be labeled as well (motivated by wanting to preserve existing competitive position). The framing may be more rhetorical than analytical.
- The "if we don't build it, someone worse will" argument may be genuinely correct, not merely motivated. If the choice is between Anthropic-with-safety-culture building AGI and a less safety-conscious lab doing so, acceleration by safety-focused labs may be the least-bad option.
- Structural selection pressure is not unique to AI labs. Pharmaceutical executives, fossil fuel CEOs, and defense contractors face identical dynamics. The claim that AI lab leaders' motivated reasoning is uniquely dangerous requires showing that AI risks differ in kind, not merely in degree.
---
Relevant Notes:
- [[a misaligned context cannot develop aligned AI because the competitive dynamics building AI optimize for deployment speed not safety making system alignment prerequisite for AI alignment]] — motivated reasoning is the psychological mechanism by which the misaligned context reproduces itself through its most capable actors
- [[Anthropics RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development]] — the RSP rollback is the clearest empirical case of structural selection for motivated reasoning
- [[AI makes authoritarian lock-in dramatically easier by solving the information processing constraint that historically caused centralized control to fail]] — motivated reasoning among lab leaders is one pathway to lock-in if the "someone worse" turns out to be an authoritarian state
Topics:
- [[_map]]

@@ -0,0 +1,44 @@
---
type: claim
domain: collective-intelligence
description: "Degraded collective sensemaking is not one risk among many but the meta-risk that prevents response to all other risks — if society cannot agree on what is true it cannot coordinate on climate, AI, pandemics, or any existential threat"
confidence: likely
source: "Schmachtenberger 'War on Sensemaking' Parts 1-5 (2019-2020), Consilience Project essays (2021-2024)"
created: 2026-04-03
related:
- "what propagates is what wins rivalrous competition not what is true and this applies across genes memes products scientific findings and sensemaking frameworks"
- "AI alignment is a coordination problem not a technical problem"
---
# Epistemic commons degradation is the gateway failure that enables all other civilizational risks because you cannot coordinate on problems you cannot collectively perceive
Schmachtenberger's War on Sensemaking series (2019-2020) makes a structural argument: epistemic commons degradation is not one civilizational risk among many (alongside climate, AI, bioweapons, nuclear). It is the META-risk — the failure mode that enables all others by preventing the collective perception and coordination required to address them.
The causal chain:
1. **Rivalrous dynamics degrade information ecology** (see: what propagates is what wins rivalrous competition). Social media algorithms optimize engagement over truth. Corporations fund research that supports their products. Political actors weaponize information uncertainty. State actors conduct information warfare.
2. **Degraded information ecology prevents shared reality.** When different populations inhabit different information environments, they cannot agree on basic facts — whether climate change is real, whether vaccines work, whether AI poses existential risk. Not because the evidence is ambiguous but because the information ecology presents different evidence to different groups.
3. **Without shared reality, coordination fails.** Every coordination mechanism — democratic governance, international treaties, market regulation, collective action — requires sufficient shared understanding to function. You cannot vote on climate policy if half the electorate believes climate change is a hoax. You cannot regulate AI if policymakers cannot distinguish real risks from industry lobbying.
4. **Failed coordination on any specific risk increases all other risks.** Failure to coordinate on climate accelerates resource competition, which accelerates arms races, which accelerates AI deployment for military advantage, which accelerates existential risk. The risks are interconnected; failure on any one cascades through all others.
The key structural insight: social media's externality is uniquely dangerous precisely because it degrades the capacity that would be required to regulate ALL other externalities. Unlike oil companies (whose lobbying affects government indirectly) or pharmaceutical companies (whose captured regulation affects one domain), social media directly fractures the electorate's ability to self-govern. Government cannot regulate the thing that is degrading government's capacity to regulate.
This maps directly to the attractor basin research: epistemic collapse is the gateway to all negative attractor basins. It enables Molochian exhaustion (can't coordinate to escape competition), authoritarian lock-in (populations can't collectively resist when they can't agree on what's happening), and comfortable stagnation (can't perceive existential threats through noise).
## Challenges
- "Gateway failure" implies a temporal ordering that may not hold. Epistemic degradation and coordination failure may co-evolve rather than one causing the other. The relationship may be circular rather than causal.
- Some coordination succeeds despite degraded epistemic commons — the Montreal Protocol, nuclear non-proliferation (partial), COVID vaccine development. The claim may overstate the dependency of coordination on shared sensemaking.
- The argument risks unfalsifiability: any coordination failure can be attributed to insufficient sensemaking. A more testable formulation would specify the threshold of epistemic commons quality required for specific coordination outcomes.
---
Relevant Notes:
- [[what propagates is what wins rivalrous competition not what is true and this applies across genes memes products scientific findings and sensemaking frameworks]] — the mechanism by which epistemic commons degrade
- [[AI alignment is a coordination problem not a technical problem]] — AI alignment is a specific coordination challenge that epistemic commons degradation prevents
Topics:
- [[_map]]

@@ -0,0 +1,46 @@
---
type: claim
domain: collective-intelligence
description: "Hidalgo's information theory of value — wealth is not in resources but in the knowledge networks that transform resources into products, and economic complexity predicts growth better than any traditional metric"
confidence: likely
source: "Abdalla manuscript 'Architectural Investing' (Hidalgo citations), Hidalgo 'Why Information Grows' (2015), Hausmann & Hidalgo 'The Atlas of Economic Complexity' (2011)"
created: 2026-04-03
related:
- "agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation"
- "value is doubly unstable because both market prices and the underlying relevance of commodities shift with the knowledge landscape"
---
# Products and technologies are crystals of imagination that carry economic value proportional to the knowledge embedded in them not the raw materials they contain
Cesar Hidalgo's information theory of economic value reframes wealth creation as knowledge crystallization. Products don't just contain matter — they contain crystallized knowledge (knowhow + know-what). A smartphone contains more information than a hammer, which is why it's more valuable despite containing less raw material. The value differential tracks the knowledge differential, not the material differential.
Key concepts from the framework:
1. **The personbyte.** The maximum knowledge one person can hold and effectively deploy. Products requiring more knowledge than one personbyte require organizations — networks of people who collectively hold the knowledge needed to produce the product. The smartphone requires knowledge from materials science, electrical engineering, software development, industrial design, supply chain management, and dozens of other specialties — far exceeding any individual's capacity.
2. **Economic complexity.** The diversity and sophistication of a country's product exports — measured by the Economic Complexity Index (ECI) — predicts economic growth better than GDP per capita, institutional quality, education levels, or any traditional metric. Countries that produce more complex products (requiring denser knowledge networks) grow faster, because the knowledge networks are the generative asset. A toy version of the ECI recursion is sketched after this list.
3. **Knowledge networks as the generative asset.** Wealth is not in resources (oil-rich countries can be poor; resource-poor countries can be wealthy) but in the knowledge networks that transform resources into products. Japan, South Korea, Switzerland, and Singapore are all resource-poor and wealthy because their knowledge networks are dense. Venezuela, Nigeria, and the DRC are resource-rich and poor because their knowledge networks are sparse.
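The recursion behind the ECI is concrete enough to sketch. Below is a minimal toy version of the Hausmann-Hidalgo method of reflections, assuming an invented three-country, four-product export matrix; production implementations derive the matrix from trade data via revealed comparative advantage and use an eigenvector formulation, so this illustrates the idea rather than the published index.

```python
import numpy as np

# Toy export matrix: M[c, p] = 1 if country c competitively exports product p.
# Countries and products are invented for illustration; real ECI work derives
# M from trade data thresholded by revealed comparative advantage (RCA >= 1).
M = np.array([
    [1, 1, 1, 1],  # country A: diversified, exports the rare products too
    [1, 1, 0, 0],  # country B: moderately diversified
    [1, 0, 0, 0],  # country C: exports only the most ubiquitous product
])
countries = ["A", "B", "C"]

diversity = M.sum(axis=1)  # k_c,0: how many products each country exports
ubiquity = M.sum(axis=0)   # k_p,0: how many countries export each product

# Method of reflections: alternately average the other side's measure.
# After an even number of rounds, higher k_c indicates a denser knowledge network.
k_c = diversity.astype(float)
k_p = ubiquity.astype(float)
for _ in range(6):
    k_c, k_p = (M @ k_p) / diversity, (M.T @ k_c) / ubiquity

eci = (k_c - k_c.mean()) / k_c.std()  # standardized, ECI-style score
for name, score in zip(countries, eci):
    print(f"{name}: {score:+.2f}")  # A ranks highest, C lowest
```

The diversified country ranks highest even though all three export the same most-ubiquitous product, which is the point: the score measures the knowledge network implied by what a country can make, not the volume of what it ships.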
The implications for coordination theory are direct:
- **Agentic Taylorism** is the mechanism by which knowledge gets extracted from workers and crystallized into AI models — Taylor's pattern at the knowledge-product scale. If products embody knowledge and AI extracts knowledge from usage, then AI is the most powerful knowledge-crystallization mechanism ever built.
- **Knowledge concentration vs distribution** determines whether AI produces economic complexity broadly (wealth creation across populations) or narrowly (wealth concentration in model-owners). The same mechanism that makes products more valuable (more embedded knowledge) makes AI models more valuable — and the question of who controls that embedded knowledge is the central economic question of the AI era.
- **The doubly-unstable-value thesis** follows directly: if value IS embodied knowledge, then changes in the knowledge landscape change what's valuable. Layer 2 instability (relevance shifts) is a necessary consequence of knowledge evolution.
## Challenges
- The ECI's predictive power, while impressive, has been questioned by Albeaik et al. (2017) who argue simpler measures (total export value) perform comparably. The claim that complexity specifically drives growth is contested.
- "Crystals of imagination" is a metaphor that may mislead. Products also embody power relations, extraction, exploitation, and environmental cost. Framing them as "crystallized knowledge" aestheticizes production processes that may involve significant harm.
- The personbyte concept assumes knowledge is additive and modular. In practice, much productive knowledge is tacit, contextual, and non-transferable — which limits the extent to which AI can "crystallize" it.
---
Relevant Notes:
- [[agentic Taylorism means humanity feeds knowledge into AI through usage as a byproduct of labor and whether this concentrates or distributes depends entirely on engineering and evaluation]] — AI is the mechanism that crystallizes knowledge from usage, extending the Hidalgo framework from products to models
- [[value is doubly unstable because both market prices and the underlying relevance of commodities shift with the knowledge landscape]] — if value is embodied knowledge, knowledge landscape shifts change what's valuable (Layer 2 instability)
Topics:
- [[_map]]

@@ -0,0 +1,44 @@
---
type: claim
domain: collective-intelligence
description: "Climate, nuclear, bioweapons, AI, epistemic collapse, and institutional decay are not independent problems — they share a single generator function (rivalrous dynamics on exponential tech within finite substrate) and solving any one without addressing the generator pushes failure into another domain"
confidence: experimental
source: "Schmachtenberger & Boeree 'Win-Win or Lose-Lose' podcast (2024), Schmachtenberger 'Bend Not Break' series (2022-2023)"
created: 2026-04-03
related:
- "the price of anarchy quantifies the gap between cooperative optimum and competitive equilibrium and this gap is the most important metric for civilizational risk assessment"
- "epistemic commons degradation is the gateway failure that enables all other civilizational risks because you cannot coordinate on problems you cannot collectively perceive"
- "for a change to equal progress it must systematically identify and internalize its externalities because immature progress that ignores cascading harms is the most dangerous ideology in the world"
---
# The metacrisis is a single generator function where all civilizational-scale crises share the structural cause of competitive dynamics on exponential technology on finite substrate
Schmachtenberger's core structural thesis: the apparently independent crises facing civilization — climate change, nuclear proliferation, bioweapons, AI misalignment, epistemic collapse, resource depletion, institutional decay, biodiversity loss — are not independent. They share a single generator function: rivalrous dynamics (Moloch/multipolar traps) operating on exponentially powerful technology within a finite substrate (Earth's biosphere, attention economy, institutional capacity).
The generator function operates through three components:
1. **Rivalrous dynamics.** Actors in competition (nations, corporations, individuals) systematically sacrifice long-term collective welfare for short-term competitive advantage. This is the price-of-anarchy mechanism at every scale.
2. **Exponential technology.** Technology amplifies the consequences of competitive action. Pre-industrial rivalrous dynamics produced local wars and resource depletion. Industrial-era dynamics produced world wars and continental-scale pollution. AI-era dynamics produce planetary-scale risks that develop faster than governance can respond.
3. **Finite substrate.** The biosphere, attention economy, and institutional capacity are all finite. Rivalrous dynamics on exponential technology within finite substrate produces overshoot — resource extraction faster than regeneration, attention fragmentation faster than sensemaking capacity, institutional strain faster than institutional adaptation.
The critical implication: solving any single crisis without addressing the generator function just pushes the failure into another domain. Regulate AI, and the competitive pressure moves to biotech. Regulate biotech, and it moves to cyber. Decarbonize energy, and the growth imperative finds another substrate to exhaust. The only solution class that works is one that addresses the generator itself — coordination mechanisms that make defection more expensive than cooperation across ALL domains simultaneously.
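The price-of-anarchy mechanism in component 1 can be made concrete with a toy two-actor race. A minimal sketch under invented payoffs: accelerating is individually dominant for each actor, mutual acceleration yields the lowest total welfare, and the price of anarchy is the ratio of welfare at the cooperative optimum to welfare at the worst Nash equilibrium.

```python
from itertools import product

# Invented payoff table for a two-actor technology race.
# payoffs[(a, b)] = (welfare_A, welfare_B)
payoffs = {
    ("restrain", "restrain"): (10, 10),    # cooperative optimum
    ("restrain", "accelerate"): (1, 12),   # unilateral restraint loses the race
    ("accelerate", "restrain"): (12, 1),
    ("accelerate", "accelerate"): (3, 3),  # mutual racing: most risk, least welfare
}
strategies = ("restrain", "accelerate")

def is_nash(a, b):
    """Neither actor can gain by unilaterally switching strategy."""
    ua, ub = payoffs[(a, b)]
    return (ua >= max(payoffs[(alt, b)][0] for alt in strategies)
            and ub >= max(payoffs[(a, alt)][1] for alt in strategies))

welfare = {s: sum(payoffs[s]) for s in product(strategies, repeat=2)}
optimum = max(welfare.values())                   # 20 at (restrain, restrain)
equilibria = [s for s in welfare if is_nash(*s)]  # only (accelerate, accelerate)
worst_equilibrium = min(welfare[s] for s in equilibria)

print("Nash equilibria:", equilibria)
print("price of anarchy:", optimum / worst_equilibrium)  # 20 / 6 = 3.33...
```

The 3.33 ratio is an artifact of the invented numbers; the structural point is that the ratio exceeds 1 whenever equilibrium and optimum diverge, and exponential technology raises the stakes of that gap without changing its game-theoretic shape.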
This is the strongest argument for why TeleoHumanity cannot be domain-specific. If the metacrisis is one generator, the solution must address the generator, not the symptoms. Decision markets, futarchy, and CI scoring are solutions at the generator-function level because they create incentive structures for coordination that operate across domains rather than within them.
## Challenges
- "Single generator function" may overfit diverse phenomena. Climate change has specific physical mechanisms (greenhouse gases), nuclear risk has specific political mechanisms (deterrence theory), and AI risk has specific technical mechanisms (capability overhang). Subsuming all under "rivalrous dynamics + exponential tech + finite substrate" may lose crucial specificity needed for domain-appropriate governance.
- If the generator function is truly single, the solution must be civilizational-scale coordination — which is precisely what Schmachtenberger acknowledges doesn't exist and may be impossible. The diagnosis may be correct but the implied prescription intractable.
- The three-component model doesn't distinguish between risks of different character. Existential risks (human extinction), catastrophic risks (civilizational collapse), and chronic risks (biodiversity loss) may require different response architectures even if they share a common generator.
---
Relevant Notes:
- [[the price of anarchy quantifies the gap between cooperative optimum and competitive equilibrium and this gap is the most important metric for civilizational risk assessment]] — the price of anarchy IS the generator function expressed as a quantifiable gap
- [[epistemic commons degradation is the gateway failure that enables all other civilizational risks because you cannot coordinate on problems you cannot collectively perceive]] — epistemic collapse is both a symptom of and enabler of the generator function
- [[for a change to equal progress it must systematically identify and internalize its externalities because immature progress that ignores cascading harms is the most dangerous ideology in the world]] — immature progress IS the generator function operating through the concept of progress itself
Topics:
- [[_map]]

@@ -0,0 +1,66 @@
---
type: claim
domain: collective-intelligence
description: "Alexander (Meditations on Moloch), Schmachtenberger (metacrisis), and Abdalla (Architectural Investing) independently arrive at the same structural conclusion — multipolar traps are the generator, coordination-without-centralization is the only escape, and the disagreement is on mechanism"
confidence: experimental
source: "Synthesis of Scott Alexander 'Meditations on Moloch' (2014), Schmachtenberger corpus (2017-2025), Abdalla manuscript 'Architectural Investing'"
created: 2026-04-03
related:
- "the metacrisis is a single generator function where all civilizational-scale crises share the structural cause of competitive dynamics on exponential technology on finite substrate"
- "the price of anarchy quantifies the gap between cooperative optimum and competitive equilibrium and this gap is the most important metric for civilizational risk assessment"
- "a misaligned context cannot develop aligned AI because the competitive dynamics building AI optimize for deployment speed not safety making system alignment prerequisite for AI alignment"
---
# Three independent intellectual traditions converge on the same attractor analysis where coordination without centralization is the only viable path between collapse and authoritarian lock-in
Three thinkers working from different starting points, using different analytical frameworks, and writing for different audiences arrive at the same structural conclusion:
**Scott Alexander (2014) — "Meditations on Moloch":**
- Starting point: Ginsberg's Howl, game theory
- Diagnosis: Multipolar traps — 14 examples of competitive dynamics that sacrifice values for advantage
- Default endpoints: Misaligned singleton OR competitive em-economy (race to the bottom)
- Solution shape: Friendly AI / aligned "Gardener" that coordinates without centralizing
- Gap: Relies on aligned AI as deus ex machina. No mechanism for getting from here to aligned Gardener.
**Daniel Schmachtenberger (2017-2025) — Metacrisis framework:**
- Starting point: Systems theory, complexity science, developmental psychology
- Diagnosis: Global capitalism as misaligned autopoietic SI. Metacrisis as single generator function.
- Default endpoints: Civilizational collapse OR authoritarian lock-in
- Solution shape: Coordination-without-centralization. Third attractor between the two defaults.
- Gap: Identifies the solution shape but not the mechanism. Yellow teaming, synergistic design, and wisdom traditions describe WHAT must happen but not HOW to incentivize it at scale.
**Cory Abdalla (2020-present) — Architectural Investing / TeleoHumanity:**
- Starting point: Investment theory, mechanism design, Hidalgo's economic complexity
- Diagnosis: Price of anarchy as quantifiable gap. Efficiency optimization → fragility. Taylor parallel.
- Default endpoints: Same two attractors (collapse or lock-in)
- Solution shape: Same — coordination without centralization
- Mechanism: Decision markets (futarchy) create incentives for truth-telling. LivingIP pays for knowledge production. CI scoring rewards coordination quality. Agent collectives distribute cognition.
- Gap: Unproven at scale. The mechanisms exist in theory and in small-scale implementation (MetaDAO) but have not been tested at the scale of civilizational coordination.
**The convergence pattern:**
| Dimension | Alexander | Schmachtenberger | Abdalla |
|-----------|-----------|-----------------|---------|
| Problem name | Moloch | Metacrisis / misaligned SI | Price of anarchy |
| Vocabulary | Game theory | Systems theory | Mechanism design |
| Diagnosis depth | Naming + taxonomy | Full causal model | Quantitative framework |
| Solution specificity | Low (aligned AI) | Medium (design principles) | High (specific mechanisms) |
| Implementation | None | Cites Taiwan g0v | Building (codex, pipeline, agents) |
Three independent sources converging on the same conclusion from different starting points is evidence that the conclusion reflects structure rather than ideology. The disagreement is on mechanism, not diagnosis — which is precisely the productive kind of disagreement.
## Challenges
- "Independent" may overstate the separation. Alexander's 2014 essay influenced Schmachtenberger's thinking, and Abdalla's manuscript explicitly cites both. The traditions are in dialogue, not truly independent.
- Convergence on diagnosis does not guarantee convergence on correct diagnosis. All three may be wrong in the same way — privileging coordination failure as the generator when the actual generators may be more diverse.
- The "only viable path" framing may be too binary. Partial coordination, domain-specific governance, and incremental institutional improvement may be viable paths that this framework dismisses too quickly.
---
Relevant Notes:
- [[the metacrisis is a single generator function where all civilizational-scale crises share the structural cause of competitive dynamics on exponential technology on finite substrate]] — Schmachtenberger's formulation of the shared diagnosis
- [[the price of anarchy quantifies the gap between cooperative optimum and competitive equilibrium and this gap is the most important metric for civilizational risk assessment]] — Abdalla's formulation of the shared diagnosis
- [[a misaligned context cannot develop aligned AI because the competitive dynamics building AI optimize for deployment speed not safety making system alignment prerequisite for AI alignment]] — the shared diagnosis applied to AI specifically
Topics:
- [[_map]]

@@ -0,0 +1,46 @@
---
type: claim
domain: collective-intelligence
description: "The deepest mechanism of epistemic collapse — selection pressure in all rivalrous domains rewards propagation fitness not truth, making information ecology degradation a structural feature of competition rather than an accident"
confidence: likely
source: "Schmachtenberger 'War on Sensemaking' Parts 1-5 (2019-2020), Dawkins 'The Selfish Gene' (1976) extended to memes, Boyd & Richerson cultural evolution framework"
created: 2026-04-03
related:
- "global capitalism functions as a misaligned autopoietic superintelligence running on human general intelligence as substrate with convert everything into capital as its objective function"
- "AI accelerates existing Molochian dynamics by removing bottlenecks not creating new misalignment because the competitive equilibrium was always catastrophic and friction was the only thing preventing convergence"
---
# What propagates is what wins rivalrous competition not what is true and this applies across genes memes products scientific findings and sensemaking frameworks
Schmachtenberger identifies the deepest mechanism underlying epistemic collapse: in any rivalrous ecology, the units that propagate are those with the highest propagation fitness, which is orthogonal to (and often opposed to) truth, accuracy, or utility.
The mechanism operates at every level:
1. **Genes.** What propagates is what reproduces most effectively, not what produces the healthiest organism. Selfish genetic elements, intragenomic parasites, and costly sexual selection all demonstrate that reproductive fitness diverges from organismal wellbeing.
2. **Memes.** Ideas that spread are those that trigger emotional engagement (outrage, fear, tribal identity), not those that are most accurate. A false claim that generates outrage propagates faster than a nuanced correction. Social media algorithms amplify this by optimizing for engagement, which is a proxy for propagation fitness.
3. **Products.** In competitive markets, the product that wins is the one that captures attention and generates revenue, not necessarily the one that best serves user needs. Attention-economy products (social media, news, advertising-supported content) are explicitly optimized for engagement rather than user wellbeing.
4. **Scientific findings.** Publication bias favors novel positive results. Replication studies are underfunded and underpublished. Sexy claims propagate; careful null results don't. The "replication crisis" is this mechanism operating within science itself.
5. **Sensemaking frameworks.** Even frameworks designed to improve sensemaking (including this one) are subject to propagation selection. A framework that feels compelling, explains everything, and has strong narrative structure will outcompete one that is more accurate but less shareable. This recursion means the problem of epistemic collapse cannot be solved from within the epistemic ecology — it requires structural intervention.
The structural implication: "marketplace of ideas" and "self-correcting science" assume that truth has sufficient propagation fitness to win in open competition. Schmachtenberger's argument, supported by the evidence across all five domains, is that truth has LESS propagation fitness than emotionally compelling falsehood — and the gap widens as communication technology accelerates propagation speed. AI accelerates this further: AI-generated content optimized for engagement will outcompete human-generated content optimized for truth.
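A toy replicator model makes the dynamic visible. All fitness values and speeds below are invented; the only claim the sketch supports is structural: relative propagation fitness, not accuracy, determines which meme survives, and faster propagation reaches that outcome sooner.

```python
# Toy replicator dynamics: two memes compete for a fixed pool of attention.
# "Fitness" here is engagement (propagation fitness), not accuracy; the
# accurate meme is assigned lower engagement by construction. All numbers
# are invented for illustration.

def simulate(fit_true, fit_false, speed, steps=50, share_true=0.9):
    """Discrete replicator update. `speed` stands in for communication
    technology amplifying how strongly fitness differences compound per step."""
    for _ in range(steps):
        mean_fitness = share_true * fit_true + (1 - share_true) * fit_false
        share_true += speed * share_true * (fit_true - mean_fitness)
        share_true = min(max(share_true, 0.0), 1.0)  # keep it a valid share
    return share_true

# The accurate meme starts with 90% of attention but lower engagement fitness.
for speed in (0.1, 0.5, 2.0):  # roughly: print era, broadcast, algorithmic feeds
    final = simulate(fit_true=1.0, fit_false=1.4, speed=speed)
    print(f"propagation speed {speed}: accurate meme ends with {final:.1%} of attention")
```

At low propagation speed the accurate meme still holds a majority of attention after 50 steps; at high speed it is driven nearly to extinction from the same starting advantage. That is the "gap widens as communication technology accelerates" claim in miniature.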
The coordination implication: prediction markets and futarchy are structural solutions precisely because they create a domain where propagation fitness DOES align with truth — you lose money when your propagated belief is wrong. Skin-in-the-game forces contact with base reality, creating an ecological niche where truth-fitness > propaganda-fitness.
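A minimal sketch of the skin-in-the-game mechanism, using the logarithmic scoring rule that underlies Hanson-style market scoring rules (LMSR). Log scoring is a proper scoring rule: expected payoff is maximized by reporting your true belief, so exaggerating for persuasive effect loses money in expectation. The belief and reports below are invented for illustration.

```python
import math

def expected_log_score(report, belief):
    """Expected log-score payoff for reporting probability `report` on a
    binary event when the forecaster's true belief is `belief`. Log scoring
    is proper: the expectation is maximized exactly at report == belief."""
    return belief * math.log(report) + (1 - belief) * math.log(1 - report)

belief = 0.7  # the forecaster privately assigns the event 70% (invented)
for report in (0.50, 0.70, 0.95):
    payoff = expected_log_score(report, belief)
    print(f"report {report:.2f}: expected payoff {payoff:+.4f}")
# Truthful 0.70 beats both hedging (0.50) and hype (0.95). Log-score payoffs
# are negative; less negative is better.
```

The truthful report wins not because truth is persuasive but because the payoff rule binds the reporter to outcomes: this is the engineered niche where truth-fitness exceeds propaganda-fitness.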
## Challenges
- The "marketplace of ideas fails" claim is contested. Wikipedia, scientific consensus on evolution/climate, and the long-run success of accurate forecasting all suggest that truth CAN propagate in competitive environments given the right institutional structure. The claim may overstate the structural advantage of falsehood.
- Equating genes, memes, products, scientific findings, and sensemaking frameworks may flatten important differences. Biological evolution operates on different timescales and selection mechanisms than cultural propagation.
- The recursive problem (frameworks about sensemaking are themselves subject to propagation selection) risks nihilism. If no framework can be trusted, the argument undermines itself.
---
Relevant Notes:
- [[global capitalism functions as a misaligned autopoietic superintelligence running on human general intelligence as substrate with convert everything into capital as its objective function]] — the misaligned SI selects for propagation-fit information that serves its objective function
- [[AI accelerates existing Molochian dynamics by removing bottlenecks not creating new misalignment because the competitive equilibrium was always catastrophic and friction was the only thing preventing convergence]] — AI amplifies propagation speed, widening the gap between truth-fitness and engagement-fitness
Topics:
- [[_map]]