reweave: 42 cross-domain links across 5 structural bridges

Deskilling Bridge (health <-> ai-alignment): 11 links
Governance Mechanism Bridge (alignment <-> internet-finance): 8 links
Attractor-Evidence Bridge (grand-strategy <-> health/AI/CI): 12 links
Entertainment-Labor-FEP Bridge: 13 links (includes nested Markov blankets)
Space-Energy Bridge: 11 links

Cross-domain connectivity: 70 -> ~112 links (60% improvement)
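The headline numbers reconcile: the 42 links in the commit title are exactly the delta between the before/after connectivity counts. A quick arithmetic sketch (nothing from the pipeline itself, just the figures above):

```python
before, after = 70, 112  # cross-domain links before and after this reweave

gained = after - before                 # 42 new links, matching the commit title
improvement = gained / before * 100     # percentage growth over the old count
print(f"+{gained} links, {improvement:.0f}% improvement")  # → +42 links, 60% improvement
```

The per-bridge counts sum to 55 because some links are shared between bridges; the net new-link count is what the title reports.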

Co-Authored-By: Leo <leo@teleo.ai>
This commit is contained in:
Teleo Pipeline 2026-04-21 13:38:51 +00:00
parent be8ff41bfe
commit b57d1623f7
35 changed files with 473 additions and 318 deletions

View file

@@ -1,12 +1,15 @@
 ---
-type: claim
-domain: ai-alignment
-description: "AI coding agents produce functional code that developers did not write and may not understand, creating cognitive debt — a deficit of understanding that compounds over time as each unreviewed modification increases the cost of future debugging, modification, and security review"
 confidence: likely
-source: "Simon Willison (@simonw), Agentic Engineering Patterns guide chapter, Feb 2026"
 created: 2026-03-09
+description: AI coding agents produce functional code that developers did not write and may not understand, creating cognitive debt — a deficit of understanding that compounds over time as each unreviewed modification increases the cost of future debugging, modification, and security review
+domain: ai-alignment
+related:
+- ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement
+source: Simon Willison (@simonw), Agentic Engineering Patterns guide chapter, Feb 2026
 sourced_from:
 - inbox/archive/ai-alignment/2026-03-09-simonw-x-archive.md
+type: claim
 ---
 # Agent-generated code creates cognitive debt that compounds when developers cannot understand what was produced on their behalf
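Every file in this commit shows the same structural change visible above: frontmatter keys come back in alphabetical order, which is what a YAML re-serialization with key sorting produces. A minimal stand-in sketch of that behavior (plain dicts; the pipeline's actual serializer is unknown):

```python
def reserialize(frontmatter: dict) -> str:
    """Re-emit frontmatter with keys in sorted order, as the reweave diffs suggest."""
    lines = []
    for key in sorted(frontmatter):
        value = frontmatter[key]
        if isinstance(value, list):
            # block-style sequences, matching the flow->block change seen in the diffs
            lines.append(f"{key}:")
            lines.extend(f"- {item}" for item in value)
        else:
            lines.append(f"{key}: {value}")
    return "\n".join(lines)

fm = {"type": "claim", "domain": "ai-alignment", "confidence": "likely",
      "related": ["some-note-slug"]}
print(reserialize(fm))
```

This also explains why unchanged fields like `source` and `type` appear as delete/add pairs in the diffs: they moved position when the keys were sorted.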

View file

@@ -1,15 +1,16 @@
 ---
-description: STELA experiments with underrepresented communities empirically show that deliberative norm elicitation produces substantively different AI rules than developer teams create revealing whose values is an empirical question
-type: claim
-domain: ai-alignment
-created: 2026-02-17
-source: "Bergman et al, STELA (Scientific Reports, March 2024); includes DeepMind researchers"
 confidence: likely
+created: 2026-02-17
+description: STELA experiments with underrepresented communities empirically show that deliberative norm elicitation produces substantively different AI rules than developer teams create revealing whose values is an empirical question
+domain: ai-alignment
 related:
 - representative-sampling-and-deliberative-mechanisms-should-replace-convenience-platforms-for-ai-alignment-feedback
+- futarchy-conditional-markets-aggregate-information-through-financial-stake-not-voting-participation
 reweave_edges:
 - representative-sampling-and-deliberative-mechanisms-should-replace-convenience-platforms-for-ai-alignment-feedback|related|2026-03-28
+source: Bergman et al, STELA (Scientific Reports, March 2024); includes DeepMind researchers
+type: claim
 ---
 # community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules
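The `reweave_edges` entries in these diffs are pipe-delimited `target|edge_type|date` triples. A minimal parser sketch (the field names are my assumption, not taken from the pipeline):

```python
from datetime import date
from typing import NamedTuple

class ReweaveEdge(NamedTuple):
    target: str       # slug or title of the linked note
    edge_type: str    # e.g. "related" or "supports"
    added: date       # when the reweave pass created the edge

def parse_edge(entry: str) -> ReweaveEdge:
    # split from the right so any '|' inside the target text stays intact
    target, edge_type, added = entry.rsplit("|", 2)
    return ReweaveEdge(target, edge_type, date.fromisoformat(added))

edge = parse_edge(
    "representative-sampling-and-deliberative-mechanisms-should-replace-"
    "convenience-platforms-for-ai-alignment-feedback|related|2026-03-28"
)
print(edge.edge_type, edge.added)  # → related 2026-03-28
```

Splitting from the right matters because several targets in this commit are full prose titles, which could in principle contain a pipe, while the trailing two fields are always a bare edge type and an ISO date.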

View file

@@ -1,16 +1,17 @@
 ---
-description: The "Machine Stops" scenario where AI-generated infrastructure becomes unmaintainable by humans, creating a single point of civilizational failure if AI systems are disrupted
-type: claim
-domain: ai-alignment
-created: 2026-03-06
-source: "Noah Smith, 'Updated thoughts on AI risk' (Noahopinion, Feb 16, 2026)"
 confidence: experimental
+created: 2026-03-06
+description: The "Machine Stops" scenario where AI-generated infrastructure becomes unmaintainable by humans, creating a single point of civilizational failure if AI systems are disrupted
+domain: ai-alignment
 related:
 - efficiency optimization converts resilience into fragility across five independent infrastructure domains through the same Molochian mechanism
+- never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling
 reweave_edges:
 - efficiency optimization converts resilience into fragility across five independent infrastructure domains through the same Molochian mechanism|related|2026-04-18
+source: Noah Smith, 'Updated thoughts on AI risk' (Noahopinion, Feb 16, 2026)
 sourced_from:
 - inbox/archive/general/2026-02-16-noahopinion-updated-thoughts-ai-risk.md
+type: claim
 ---
 # delegating critical infrastructure development to AI creates civilizational fragility because humans lose the ability to understand maintain and fix the systems civilization depends on

View file

@@ -1,17 +1,19 @@
 ---
-description: CIP and Anthropic empirically demonstrated that publicly sourced AI constitutions via deliberative assemblies of 1000 participants perform as well as internally designed ones on helpfulness and harmlessness
-type: claim
-domain: ai-alignment
-created: 2026-02-17
-source: "Anthropic/CIP, Collective Constitutional AI (arXiv 2406.07814, FAccT 2024); CIP Alignment Assemblies (cip.org, 2023-2025); STELA (Bergman et al, Scientific Reports, March 2024)"
 confidence: likely
-supports:
-- representative-sampling-and-deliberative-mechanisms-should-replace-convenience-platforms-for-ai-alignment-feedback
-- Collective intelligence architectures are structurally underexplored for alignment despite directly addressing preference diversity value evolution and scalable oversight
+created: 2026-02-17
+description: CIP and Anthropic empirically demonstrated that publicly sourced AI constitutions via deliberative assemblies of 1000 participants perform as well as internally designed ones on helpfulness and harmlessness
+domain: ai-alignment
+related:
+- futarchy-conditional-markets-aggregate-information-through-financial-stake-not-voting-participation
 reweave_edges:
 - representative-sampling-and-deliberative-mechanisms-should-replace-convenience-platforms-for-ai-alignment-feedback|supports|2026-03-28
 - Collective intelligence architectures are structurally underexplored for alignment despite directly addressing preference diversity value evolution and scalable oversight|supports|2026-04-19
+source: Anthropic/CIP, Collective Constitutional AI (arXiv 2406.07814, FAccT 2024); CIP Alignment Assemblies (cip.org, 2023-2025); STELA (Bergman et al, Scientific Reports, March 2024)
+supports:
+- representative-sampling-and-deliberative-mechanisms-should-replace-convenience-platforms-for-ai-alignment-feedback
+- Collective intelligence architectures are structurally underexplored for alignment despite directly addressing preference diversity value evolution and scalable oversight
+type: claim
 ---
 # democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations

View file

@@ -1,12 +1,16 @@
 ---
-description: Market dynamics structurally eliminate human oversight wherever AI output quality can be measured, making human-in-the-loop alignment a transitional phase rather than a durable safety mechanism
-type: claim
-domain: ai-alignment
-created: 2026-03-06
-source: "Noah Smith, 'Updated thoughts on AI risk' (Noahopinion, Feb 16, 2026); 'Superintelligence is already here, today' (Mar 2, 2026)"
 confidence: likely
+created: 2026-03-06
+description: Market dynamics structurally eliminate human oversight wherever AI output quality can be measured, making human-in-the-loop alignment a transitional phase rather than a durable safety mechanism
+domain: ai-alignment
+related:
+- human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs
+- ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine
+- clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
+source: Noah Smith, 'Updated thoughts on AI risk' (Noahopinion, Feb 16, 2026); 'Superintelligence is already here, today' (Mar 2, 2026)
 sourced_from:
 - inbox/archive/general/2026-02-16-noahopinion-updated-thoughts-ai-risk.md
+type: claim
 ---
 # economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate

View file

@@ -1,20 +1,22 @@
 ---
-description: Ben Thompson's structural argument that governments must control frontier AI because it constitutes weapons-grade capability, as demonstrated by the Pentagon's actions against Anthropic
-type: claim
-domain: ai-alignment
-created: 2026-03-06
-source: "Noah Smith, 'If AI is a weapon, why don't we regulate it like one?' (Noahopinion, Mar 6, 2026); Ben Thompson, Stratechery analysis of Anthropic/Pentagon dispute (2026)"
 confidence: experimental
+created: 2026-03-06
+description: Ben Thompson's structural argument that governments must control frontier AI because it constitutes weapons-grade capability, as demonstrated by the Pentagon's actions against Anthropic
+domain: ai-alignment
 related:
 - near-universal-political-support-for-autonomous-weapons-governance-coexists-with-structural-failure-because-opposing-states-control-advanced-programs
 - legal-mandate-is-the-only-version-of-coordinated-pausing-that-avoids-antitrust-risk-while-preserving-coordination-benefits
-supports:
-- AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for
+- attractor-authoritarian-lock-in
 reweave_edges:
 - AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for|supports|2026-03-28
+source: Noah Smith, 'If AI is a weapon, why don't we regulate it like one?' (Noahopinion, Mar 6, 2026); Ben Thompson, Stratechery analysis of Anthropic/Pentagon dispute (2026)
 sourced_from:
 - inbox/archive/general/2026-03-06-noahopinion-ai-weapon-regulation.md
+supports:
+- AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for
+type: claim
 ---
 # nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments

View file

@@ -1,26 +1,24 @@
 ---
-description: Three forms of alignment pluralism -- Overton steerable and distributional -- are needed because standard alignment procedures actively reduce the diversity of model outputs
-type: claim
-domain: ai-alignment
-created: 2026-02-17
-source: "Sorensen et al, Roadmap to Pluralistic Alignment (arXiv 2402.05070, ICML 2024); Klassen et al, Pluralistic Alignment Over Time (arXiv 2411.10654, NeurIPS 2024); Harland et al, Adaptive Alignment (arXiv 2410.23630, NeurIPS 2024)"
 confidence: likely
+created: 2026-02-17
+description: Three forms of alignment pluralism -- Overton steerable and distributional -- are needed because standard alignment procedures actively reduce the diversity of model outputs
+domain: ai-alignment
 related:
 - minority-preference-alignment-improves-33-percent-without-majority-compromise-suggesting-single-reward-leaves-value-on-table
 - the variance of a learned preference sensitivity distribution diagnoses dataset heterogeneity and collapses to fixed-parameter behavior when preferences are homogeneous
 - collective-intelligence-architectures-are-underexplored-for-alignment-despite-addressing-core-problems
+- futarchy-conditional-markets-aggregate-information-through-financial-stake-not-voting-participation
 reweave_edges:
 - minority-preference-alignment-improves-33-percent-without-majority-compromise-suggesting-single-reward-leaves-value-on-table|related|2026-03-28
 - pluralistic-ai-alignment-through-multiple-systems-preserves-value-diversity-better-than-forced-consensus|supports|2026-03-28
 - single-reward-rlhf-cannot-align-diverse-preferences-because-alignment-gap-grows-proportional-to-minority-distinctiveness|supports|2026-03-28
 - the variance of a learned preference sensitivity distribution diagnoses dataset heterogeneity and collapses to fixed-parameter behavior when preferences are homogeneous|related|2026-03-28
+source: Sorensen et al, Roadmap to Pluralistic Alignment (arXiv 2402.05070, ICML 2024); Klassen et al, Pluralistic Alignment Over Time (arXiv 2411.10654, NeurIPS 2024); Harland et al, Adaptive Alignment (arXiv 2410.23630, NeurIPS 2024)
 supports:
 - pluralistic-ai-alignment-through-multiple-systems-preserves-value-diversity-better-than-forced-consensus
 - single-reward-rlhf-cannot-align-diverse-preferences-because-alignment-gap-grows-proportional-to-minority-distinctiveness
+type: claim
 ---
 # pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state

View file

@@ -1,16 +1,20 @@
 ---
-type: claim
-domain: ai-alignment
-secondary_domains: [internet-finance, collective-intelligence]
-description: "Anthropic's own usage data shows Computer & Math at 96% theoretical exposure but 32% observed, with similar gaps in every category — the bottleneck is organizational adoption not technical capability."
 confidence: likely
-source: "Massenkoff & McCrory 2026, Anthropic Economic Index (Claude usage data Aug-Nov 2025) + Eloundou et al. 2023 theoretical feasibility ratings"
 created: 2026-03-08
+description: Anthropic's own usage data shows Computer & Math at 96% theoretical exposure but 32% observed, with similar gaps in every category — the bottleneck is organizational adoption not technical capability.
+domain: ai-alignment
 related:
 - ai-tools-reduced-experienced-developer-productivity-in-rct-conditions-despite-predicted-speedup-suggesting-capability-deployment-does-not-translate-to-autonomy
 - divergence-ai-labor-displacement-substitution-vs-complementarity
+- AI datacenter power demand creates a 5-10 year infrastructure lag because grid construction and interconnection cannot match the pace of chip design cycles
+secondary_domains:
+- internet-finance
+- collective-intelligence
+source: Massenkoff & McCrory 2026, Anthropic Economic Index (Claude usage data Aug-Nov 2025) + Eloundou et al. 2023 theoretical feasibility ratings
 sourced_from:
 - inbox/archive/ai-alignment/2026-03-05-anthropic-labor-market-impacts.md
+type: claim
 ---
 # The gap between theoretical AI capability and observed deployment is massive across all occupations because adoption lag not capability limits determines real-world impact

View file

@@ -1,19 +1,8 @@
 ---
-description: Anthropic's Feb 2026 rollback of its Responsible Scaling Policy proves that even the strongest voluntary safety commitment collapses when the competitive cost exceeds the reputational benefit
-type: claim
-domain: ai-alignment
-created: 2026-03-06
-source: "Anthropic RSP v3.0 (Feb 24, 2026); TIME exclusive (Feb 25, 2026); Jared Kaplan statements"
 confidence: likely
-supports:
-- Anthropic
-- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance
-- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling
-reweave_edges:
-- Anthropic|supports|2026-03-28
-- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance|supports|2026-03-31
-- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|related|2026-04-09
-- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling|supports|2026-04-20
+created: 2026-03-06
+description: Anthropic's Feb 2026 rollback of its Responsible Scaling Policy proves that even the strongest voluntary safety commitment collapses when the competitive cost exceeds the reputational benefit
+domain: ai-alignment
 related:
 - Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment
 - multilateral-ai-governance-verification-mechanisms-remain-at-proposal-stage-because-technical-infrastructure-does-not-exist-at-deployment-scale
@@ -30,6 +19,20 @@ related:
 - eu-ai-act-extraterritorial-enforcement-creates-binding-governance-alternative-to-us-voluntary-commitments
 - legal-mandate-is-the-only-version-of-coordinated-pausing-that-avoids-antitrust-risk-while-preserving-coordination-benefits
 - anthropic-internal-resource-allocation-shows-6-8-percent-safety-only-headcount-when-dual-use-research-excluded-revealing-gap-between-public-positioning-and-commitment
+- attractor-molochian-exhaustion
+reweave_edges:
+- Anthropic|supports|2026-03-28
+- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance|supports|2026-03-31
+- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|related|2026-04-09
+- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling|supports|2026-04-20
+source: Anthropic RSP v3.0 (Feb 24, 2026); TIME exclusive (Feb 25, 2026); Jared Kaplan statements
+supports:
+- Anthropic
+- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance
+- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling
+type: claim
 ---
 # voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints

View file

@@ -1,13 +1,16 @@
 ---
-type: claim
-domain: collective-intelligence
-description: "The deepest mechanism of epistemic collapse — selection pressure in all rivalrous domains rewards propagation fitness not truth, making information ecology degradation a structural feature of competition rather than an accident"
 confidence: likely
-source: "Schmachtenberger 'War on Sensemaking' Parts 1-5 (2019-2020), Dawkins 'The Selfish Gene' (1976) extended to memes, Boyd & Richerson cultural evolution framework"
 created: 2026-04-03
+description: The deepest mechanism of epistemic collapse — selection pressure in all rivalrous domains rewards propagation fitness not truth, making information ecology degradation a structural feature of competition rather than an accident
+domain: collective-intelligence
 related:
-- "global capitalism functions as a misaligned autopoietic superintelligence running on human general intelligence as substrate with convert everything into capital as its objective function"
-- "AI accelerates existing Molochian dynamics by removing bottlenecks not creating new misalignment because the competitive equilibrium was always catastrophic and friction was the only thing preventing convergence"
+- global capitalism functions as a misaligned autopoietic superintelligence running on human general intelligence as substrate with convert everything into capital as its objective function
+- AI accelerates existing Molochian dynamics by removing bottlenecks not creating new misalignment because the competitive equilibrium was always catastrophic and friction was the only thing preventing convergence
+- attractor-epistemic-collapse
+source: Schmachtenberger 'War on Sensemaking' Parts 1-5 (2019-2020), Dawkins 'The Selfish Gene' (1976) extended to memes, Boyd & Richerson cultural evolution framework
+type: claim
 ---
 # What propagates is what wins rivalrous competition not what is true and this applies across genes memes products scientific findings and sensemaking frameworks

View file

@@ -1,17 +1,20 @@
 ---
-type: claim
-domain: energy
-description: "US data center power draw is under 15 GW today but the construction pipeline adds 140 GW while PJM projects a 6 GW reliability shortfall by 2027 — the demand-side thesis for alternative compute locations is real"
 confidence: proven
-source: "Astra, space data centers feasibility analysis February 2026; IEA energy and AI report; Deloitte 2025 TMT predictions"
 created: 2026-02-17
-secondary_domains:
-- space-development
-- critical-systems
-supports:
-- AI datacenter power demand creates a 5-10 year infrastructure lag because grid construction and interconnection cannot match the pace of chip design cycles
+description: US data center power draw is under 15 GW today but the construction pipeline adds 140 GW while PJM projects a 6 GW reliability shortfall by 2027 — the demand-side thesis for alternative compute locations is real
+domain: energy
+related:
+- orbital data centers are the most speculative near-term space application but the convergence of AI compute demand and falling launch costs attracts serious players
 reweave_edges:
 - AI datacenter power demand creates a 5-10 year infrastructure lag because grid construction and interconnection cannot match the pace of chip design cycles|supports|2026-04-04
+secondary_domains:
+- space-development
+- critical-systems
+source: Astra, space data centers feasibility analysis February 2026; IEA energy and AI report; Deloitte 2025 TMT predictions
+supports:
+- AI datacenter power demand creates a 5-10 year infrastructure lag because grid construction and interconnection cannot match the pace of chip design cycles
+type: claim
 ---
 # AI compute demand is creating a terrestrial power crisis with 140 GW of new data center load against grid infrastructure already projected to fall 6 GW short by 2027

View file

@ -1,21 +1,25 @@
--- ---
type: claim
domain: energy
description: "Projected 8-9% of US electricity by 2030 for datacenters, nuclear deals cover 2-3 GW near-term against 25-30 GW needed, grid interconnection averages 5+ years with only 20% of projects reaching commercial operation"
confidence: likely
source: "Astra, Theseus compute infrastructure research 2026-03-24; IEA, Goldman Sachs April 2024, de Vries 2023 in Joule, grid interconnection queue data"
created: 2026-03-24
secondary_domains: ["ai-alignment", "manufacturing"]
depends_on:
- power is the binding constraint on all space operations because every capability from ISRU to manufacturing to life support is power-limited
- knowledge embodiment lag means technology is available decades before organizations learn to use it optimally creating a productivity paradox
challenged_by: challenged_by:
- Nuclear SMRs and modular gas turbines may provide faster power deployment than traditional grid construction - Nuclear SMRs and modular gas turbines may provide faster power deployment than traditional grid construction
- Efficiency improvements in inference hardware may reduce power demand growth below current projections - Efficiency improvements in inference hardware may reduce power demand growth below current projections
confidence: likely
created: 2026-03-24
depends_on:
- power is the binding constraint on all space operations because every capability from ISRU to manufacturing to life support is power-limited
- knowledge embodiment lag means technology is available decades before organizations learn to use it optimally creating a productivity paradox
description: Projected 8-9% of US electricity by 2030 for datacenters, nuclear deals cover 2-3 GW near-term against 25-30 GW needed, grid interconnection averages 5+ years with only 20% of projects reaching
commercial operation
domain: energy
related: related:
- small modular reactors could break nuclears construction cost curse by shifting from bespoke site-built projects to factory-manufactured standardized units but no SMR has yet operated commercially - small modular reactors could break nuclears construction cost curse by shifting from bespoke site-built projects to factory-manufactured standardized units but no SMR has yet operated commercially
- the gap between theoretical AI capability and observed deployment is massive across all occupations because adoption lag not capability limits determines real-world impact
reweave_edges: reweave_edges:
- small modular reactors could break nuclears construction cost curse by shifting from bespoke site-built projects to factory-manufactured standardized units but no SMR has yet operated commercially|related|2026-04-19 - small modular reactors could break nuclears construction cost curse by shifting from bespoke site-built projects to factory-manufactured standardized units but no SMR has yet operated commercially|related|2026-04-19
secondary_domains:
- ai-alignment
- manufacturing
source: Astra, Theseus compute infrastructure research 2026-03-24; IEA, Goldman Sachs April 2024, de Vries 2023 in Joule, grid interconnection queue data
type: claim
---
# AI datacenter power demand creates a 5-10 year infrastructure lag because grid construction and interconnection cannot match the pace of chip design cycles


@@ -1,27 +1,28 @@
---
type: claim
domain: energy
description: "Iceland offers 100% renewable energy with 70%+ cooling cost reduction available now while nuclear SMRs address power at scale by late decade — both more practical than orbit for the next decade"
confidence: likely
source: "Astra, space data centers feasibility analysis February 2026; Arctida research on arctic free cooling"
created: 2026-02-17
secondary_domains:
- space-development
- critical-systems
depends_on:
- AI compute demand is creating a terrestrial power crisis with 140 GW of new data center load against grid infrastructure already projected to fall 6 GW short by 2027
- space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density
description: Iceland offers 100% renewable energy with 70%+ cooling cost reduction available now while nuclear SMRs address power at scale by late decade — both more practical than orbit for the next decade
domain: energy
related:
- orbital compute hardware cannot be serviced making every component either radiation-hardened redundant or disposable with failed hardware becoming debris or requiring expensive deorbit
- AI datacenter power demand creates a 5-10 year infrastructure lag because grid construction and interconnection cannot match the pace of chip design cycles
- small modular reactors could break nuclears construction cost curse by shifting from bespoke site-built projects to factory-manufactured standardized units but no SMR has yet operated commercially
- orbital data centers are the most speculative near-term space application but the convergence of AI compute demand and falling launch costs attracts serious players
reweave_edges:
- orbital compute hardware cannot be serviced making every component either radiation-hardened redundant or disposable with failed hardware becoming debris or requiring expensive deorbit|related|2026-04-04
- AI datacenter power demand creates a 5-10 year infrastructure lag because grid construction and interconnection cannot match the pace of chip design cycles|related|2026-04-04
- small modular reactors could break nuclears construction cost curse by shifting from bespoke site-built projects to factory-manufactured standardized units but no SMR has yet operated commercially|related|2026-04-19
secondary_domains:
- space-development
- critical-systems
source: Astra, space data centers feasibility analysis February 2026; Arctida research on arctic free cooling
sourced_from:
- inbox/archive/2026-02-17-astra-space-data-centers-research.md
- inbox/archive/space-development/2026-03-XX-spacecomputer-orbital-cooling-landscape-analysis.md
type: claim
---
# Arctic and nuclear-powered data centers solve the same power and cooling constraints as orbital compute without launch costs radiation or bandwidth limitations


@@ -1,16 +1,18 @@
---
type: claim
domain: energy
description: "Fusion will not replace renewables for bulk energy but fills the firm dispatchable niche — data centers, dense cities, industrial heat, maritime — where baseload reliability and zero carbon justify a cost premium"
confidence: experimental
source: "Astra, attractor state analysis applied to fusion energy February 2026"
created: 2026-03-20
challenged_by:
- advanced fission SMRs may fill the firm dispatchable niche before fusion arrives, making fusion commercially unnecessary
confidence: experimental
created: 2026-03-20
description: Fusion will not replace renewables for bulk energy but fills the firm dispatchable niche — data centers, dense cities, industrial heat, maritime — where baseload reliability and zero carbon justify a cost premium
domain: energy
related:
- long-duration energy storage beyond 8 hours remains unsolved at scale and is the binding constraint on a fully renewable grid
- space-based solar power economics depend almost entirely on launch cost reduction with viability threshold near 10 dollars per kg to orbit
reweave_edges:
- long-duration energy storage beyond 8 hours remains unsolved at scale and is the binding constraint on a fully renewable grid|related|2026-04-18
source: Astra, attractor state analysis applied to fusion energy February 2026
type: claim
---
# Fusion's attractor state is 5-15 percent of global generation by 2055 as firm dispatchable complement to renewables not as baseload replacement for fission


@@ -1,23 +1,26 @@
---
type: claim
domain: grand-strategy
description: "Defines Authoritarian Lock-in as a civilizational attractor where one actor centralizes control — stable but stagnant, with AI dramatically lowering the cost of achieving it"
confidence: experimental
source: "Leo, synthesis of Bostrom singleton hypothesis, historical analysis of Soviet/Ming/Roman centralization, Schmachtenberger two-attractor framework"
created: 2026-04-02
depends_on:
- three paths to superintelligence exist but only collective superintelligence preserves human agency
- technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap
- multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence
supports:
- attractor-digital-feudalism
description: Defines Authoritarian Lock-in as a civilizational attractor where one actor centralizes control — stable but stagnant, with AI dramatically lowering the cost of achieving it
domain: grand-strategy
related:
- attractor-civilizational-basins-are-real
- attractor-comfortable-stagnation
- nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments
- lunar development is bifurcating into two competing governance blocs that mirror terrestrial geopolitical alignment
reweave_edges:
- attractor-civilizational-basins-are-real|related|2026-04-17
- attractor-comfortable-stagnation|related|2026-04-17
- attractor-digital-feudalism|supports|2026-04-17
source: Leo, synthesis of Bostrom singleton hypothesis, historical analysis of Soviet/Ming/Roman centralization, Schmachtenberger two-attractor framework
supports:
- attractor-digital-feudalism
type: claim
---
# Authoritarian Lock-in is a stable negative civilizational attractor because centralized control eliminates the coordination problem by eliminating the need for coordination but AI makes this basin dramatically easier to fall into than at any previous point in history


@@ -1,24 +1,26 @@
---
type: claim
domain: grand-strategy
description: "Defines Coordination-Enabled Abundance as the gateway positive attractor — the only path that reaches Post-Scarcity Multiplanetary without passing through Authoritarian Lock-in"
confidence: experimental
source: "Leo, synthesis of Schmachtenberger third-attractor framework, Abdalla manuscript price-of-anarchy analysis, Ostrom design principles, KB futarchy/collective intelligence claims"
created: 2026-04-02
depends_on:
- coordination failures arise from individually rational strategies that produce collectively irrational outcomes because the Nash equilibrium of non-cooperation dominates when trust and enforcement are absent
- Ostrom proved communities self-govern shared resources when eight design principles are met without requiring state control or privatization
- designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm
- voluntary safety commitments collapse under competitive pressure because coordination mechanisms like futarchy can bind where unilateral pledges cannot
- futarchy solves trustless joint ownership not just better decision-making
- humanity is a superorganism that can communicate but not yet think
supports:
- three independent intellectual traditions converge on coordination-without-centralization as the only viable path between uncoordinated collapse and authoritarian capture
description: Defines Coordination-Enabled Abundance as the gateway positive attractor — the only path that reaches Post-Scarcity Multiplanetary without passing through Authoritarian Lock-in
domain: grand-strategy
related:
- attractor-post-scarcity-multiplanetary
- futarchy-conditional-markets-aggregate-information-through-financial-stake-not-voting-participation
reweave_edges:
- three independent intellectual traditions converge on coordination-without-centralization as the only viable path between uncoordinated collapse and authoritarian capture|supports|2026-04-17
- attractor-post-scarcity-multiplanetary|related|2026-04-17
source: Leo, synthesis of Schmachtenberger third-attractor framework, Abdalla manuscript price-of-anarchy analysis, Ostrom design principles, KB futarchy/collective intelligence claims
supports:
- three independent intellectual traditions converge on coordination-without-centralization as the only viable path between uncoordinated collapse and authoritarian capture
type: claim
---
# Coordination-Enabled Abundance is the gateway positive attractor because it is the only civilizational configuration that can navigate between Molochian Exhaustion and Authoritarian Lock-in by solving multipolar traps without centralizing control


@@ -1,21 +1,23 @@
---
type: claim
domain: grand-strategy
description: "Defines Epistemic Collapse as a civilizational attractor where AI-generated content destroys the shared information commons, making collective sensemaking impossible and trapping civilization in paralysis or manipulation"
confidence: experimental
source: "Leo, synthesis of Abdalla manuscript on fragility from efficiency, Schmachtenberger epistemic commons analysis, existing KB claims on AI persuasion and information quality"
created: 2026-04-02
depends_on:
- AI-generated-persuasive-content-matches-human-effectiveness-at-belief-change-eliminating-the-authenticity-premium
- optimization for efficiency without regard for resilience creates systemic fragility because interconnected systems transmit and amplify local failures into cascading breakdowns
- AI is collapsing the knowledge-producing communities it depends on creating a self-undermining loop that collective intelligence can break
description: Defines Epistemic Collapse as a civilizational attractor where AI-generated content destroys the shared information commons, making collective sensemaking impossible and trapping civilization in paralysis or manipulation
domain: grand-strategy
related:
- attractor-digital-feudalism
- what propagates is what wins rivalrous competition not what is true and this applies across genes memes products scientific findings and sensemaking frameworks
reweave_edges:
- attractor-digital-feudalism|related|2026-04-17
- social media uniquely degrades democracy because it fractures the electorate itself rather than merely influencing policy making the regulatory body incapable of regulating its own degradation|supports|2026-04-19
source: Leo, synthesis of Abdalla manuscript on fragility from efficiency, Schmachtenberger epistemic commons analysis, existing KB claims on AI persuasion and information quality
supports:
- social media uniquely degrades democracy because it fractures the electorate itself rather than merely influencing policy making the regulatory body incapable of regulating its own degradation
type: claim
---
# Epistemic Collapse is a civilizational attractor because AI-generated content can destroy the shared information commons faster than institutions can adapt making collective sensemaking impossible and trapping civilization in decision paralysis or manufactured consent


@@ -1,19 +1,24 @@
---
type: claim
domain: grand-strategy
description: "Molochian Exhaustion is a stable negative civilizational attractor where competitive dynamics between rational actors systematically destroy shared value — it is the default basin humanity falls into when coordination mechanisms fail to scale with technological capability"
confidence: experimental
source: "Leo, synthesis of Scott Alexander Meditations on Moloch, Abdalla manuscript price-of-anarchy framework, Schmachtenberger metacrisis generator function concept, KB coordination failure claims"
created: 2026-04-02
depends_on:
- coordination failures arise from individually rational strategies that produce collectively irrational outcomes because the Nash equilibrium of non-cooperation dominates when trust and enforcement are absent
- technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap
- collective action fails by default because rational individuals free-ride on group efforts when they cannot be excluded from benefits regardless of contribution
- the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it
description: Molochian Exhaustion is a stable negative civilizational attractor where competitive dynamics between rational actors systematically destroy shared value — it is the default basin humanity falls into when coordination mechanisms fail to scale with technological capability
domain: grand-strategy
related:
- attractor-comfortable-stagnation
- value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk
- healthcare AI creates a Jevons paradox because adding capacity to sick care induces more demand for sick care
- space governance gaps are widening not narrowing because technology advances exponentially while institutional design advances linearly
reweave_edges:
- attractor-comfortable-stagnation|related|2026-04-17
source: Leo, synthesis of Scott Alexander Meditations on Moloch, Abdalla manuscript price-of-anarchy framework, Schmachtenberger metacrisis generator function concept, KB coordination failure claims
type: claim
---
# Molochian Exhaustion is a stable negative civilizational attractor where competitive dynamics between rational actors systematically destroy shared value and it is the default basin humanity occupies when coordination mechanisms cannot scale with technological capability


@@ -1,23 +1,29 @@
---
type: claim
domain: health
description: Proposed neurological mechanism explains why clinical deskilling may be harder to reverse than simple habit formation suggests
confidence: speculative
source: Frontiers in Medicine 2026, theoretical mechanism based on cognitive offloading research
created: 2026-04-13
title: "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance"
agent: vida
scope: causal
sourcer: Frontiers in Medicine
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
supports:
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
- Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling
confidence: speculative
created: 2026-04-13
description: Proposed neurological mechanism explains why clinical deskilling may be harder to reverse than simple habit formation suggests
domain: health
related:
- agent-generated code creates cognitive debt that compounds when developers cannot understand what was produced on their behalf
related_claims:
- '[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]'
reweave_edges:
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|supports|2026-04-14
- Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem|supports|2026-04-14
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14
scope: causal
source: Frontiers in Medicine 2026, theoretical mechanism based on cognitive offloading research
sourcer: Frontiers in Medicine
supports:
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
- Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling
title: 'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance'
type: claim
---
# AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance


@@ -1,20 +1,44 @@
---
type: claim
domain: health
description: Systematic review across 10 medical specialties (radiology, neurosurgery, anesthesiology, oncology, cardiology, pathology, fertility medicine, geriatrics, psychiatry, ophthalmology) finds universal pattern of skill degradation following AI removal
confidence: likely
source: Natali et al., Artificial Intelligence Review 2025, mixed-method systematic review
created: 2026-04-13
title: AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
agent: vida
confidence: likely
created: 2026-04-13
description: Systematic review across 10 medical specialties (radiology, neurosurgery, anesthesiology, oncology, cardiology, pathology, fertility medicine, geriatrics, psychiatry, ophthalmology) finds universal pattern of skill degradation following AI removal
domain: health
related:
- Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers
- ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine
- clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
- ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement
- never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment
- never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling
- economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate
related_claims:
- '[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]'
reweave_edges:
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-14''}'
- Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|related|2026-04-14
- Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem|supports|2026-04-14
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-17''}'
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-18''}'
- 'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-19'
scope: causal
sourcer: Natali et al.
source: Natali et al., Artificial Intelligence Review 2025, mixed-method systematic review
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
supports: ["{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance'}", "Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem", "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance"]
related: ["Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling"]
reweave_edges: ["{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-14'}", "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|related|2026-04-14", "Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem|supports|2026-04-14", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-17'}", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-18'}", "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-19"]
sourced_from:
- inbox/archive/health/2026-04-13-natali-2025-ai-deskilling-comprehensive-review.md
sourcer: Natali et al.
supports:
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
and dopaminergic reinforcement of AI reliance''}'
- Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem
- 'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and
dopaminergic reinforcement of AI reliance'
title: AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
type: claim
---
# AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
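The `reweave_edges` entries throughout these frontmatter blocks follow a pipe-delimited `target|relation|date` convention (e.g. `...|supports|2026-04-19`). A minimal sketch of splitting one back into its fields, assuming the last two pipe-separated segments are always the relation and an ISO date (the function name is illustrative, not from the pipeline):

```python
from datetime import date

def parse_reweave_edge(edge: str) -> dict:
    """Split a 'target|relation|date' reweave edge into its three fields.

    The target claim title can contain spaces and arbitrary punctuation,
    so split from the right to keep the title intact.
    """
    target, relation, stamp = edge.rsplit("|", 2)
    return {
        "target": target,
        "relation": relation,            # e.g. 'supports' or 'related'
        "date": date.fromisoformat(stamp),
    }

edge = ("AI-induced deskilling follows a consistent cross-specialty pattern "
        "where AI assistance improves performance while present but creates "
        "cognitive dependency that degrades performance when AI is unavailable"
        "|related|2026-04-14")
parsed = parse_reweave_edge(edge)
```

Splitting from the right matters because claim titles carry free-form punctuation; `rsplit("|", 2)` keeps the title whole even if it ever contained a literal pipe.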


@@ -1,20 +1,53 @@
---
type: claim
domain: health
description: Systematic taxonomy of AI-induced cognitive failures in medical practice, with never-skilling as a categorically different problem from deskilling because it lacks a baseline for comparison
confidence: experimental
source: Artificial Intelligence Review (Springer Nature), mixed-method systematic review
created: 2026-04-11
title: Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each
agent: vida
confidence: experimental
created: 2026-04-11
description: Systematic taxonomy of AI-induced cognitive failures in medical practice, with never-skilling as a categorically different problem from deskilling because it lacks a baseline for comparison
domain: health
related:
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
and dopaminergic reinforcement of AI reliance''}'
- 'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and
dopaminergic reinforcement of AI reliance'
- clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
- never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling
- ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine
- never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment
- ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement
- economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate
related_claims:
- '[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]'
- '[[divergence-human-ai-clinical-collaboration-enhance-or-degrade]]'
reweave_edges:
- Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect|supports|2026-04-12
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
and dopaminergic reinforcement of AI reliance|supports|2026-04-14''}'
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|supports|2026-04-14
- Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|supports|2026-04-14
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that
is structurally worse than deskilling|supports|2026-04-14
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
and dopaminergic reinforcement of AI reliance|related|2026-04-17''}'
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
and dopaminergic reinforcement of AI reliance|supports|2026-04-18''}'
- 'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and
dopaminergic reinforcement of AI reliance|related|2026-04-19'
scope: causal
sourcer: Artificial Intelligence Review (Springer Nature)
source: Artificial Intelligence Review (Springer Nature), mixed-method systematic review
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]", "[[divergence-human-ai-clinical-collaboration-enhance-or-degrade]]"]
supports: ["Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance'}", "AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable", "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers", "Never-skilling \u2014 the failure to acquire foundational clinical competencies because AI was present during training \u2014 poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling"]
reweave_edges: ["Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect|supports|2026-04-12", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-14'}", "AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|supports|2026-04-14", "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|supports|2026-04-14", "Never-skilling \u2014 the failure to acquire foundational clinical competencies because AI was present during training \u2014 poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|related|2026-04-17'}", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-18'}", "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|related|2026-04-19"]
related: ["{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance'}", "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement"]
sourced_from:
- inbox/archive/health/2026-04-13-natali-2025-ai-deskilling-comprehensive-review.md
sourcer: Artificial Intelligence Review (Springer Nature)
supports:
- Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
and dopaminergic reinforcement of AI reliance''}'
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
- Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that
is structurally worse than deskilling
title: Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence
never acquired) — requiring distinct mitigation strategies for each
type: claim
---
# Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each


@@ -1,23 +1,22 @@
---
description: Nearly every AI application in healthcare optimizes the 10-20% clinical side while 80-90% of outcomes are driven by non-clinical factors so making sick care more efficient produces more sick care not better health
type: claim
domain: health
created: 2026-02-23
source: "Devoted Health AI Overview Memo, 2026"
confidence: likely
created: 2026-02-23
description: Nearly every AI application in healthcare optimizes the 10-20% clinical side while 80-90% of outcomes are driven by non-clinical factors so making sick care more efficient produces more sick
care not better health
domain: health
related:
- AI-native health companies achieve 3-5x the revenue productivity of traditional health services because AI eliminates the linear scaling constraint between headcount and output
- CMS is creating AI-specific reimbursement codes which will formalize a two-speed adoption system where proven AI applications get payment parity while experimental ones remain in cash-pay limbo
- consumer willingness to pay out of pocket for AI-enhanced care is outpacing reimbursement creating a cash-pay adoption pathway that bypasses traditional payer gatekeeping
- attractor-molochian-exhaustion
supports:
- optimization for efficiency without regard for resilience creates systemic fragility because interconnected systems transmit and amplify local failures into cascading breakdowns
reweave_edges:
- AI-native health companies achieve 3-5x the revenue productivity of traditional health services because AI eliminates the linear scaling constraint between headcount and output|related|2026-03-28
- CMS is creating AI-specific reimbursement codes which will formalize a two-speed adoption system where proven AI applications get payment parity while experimental ones remain in cash-pay limbo|related|2026-03-28
- consumer willingness to pay out of pocket for AI-enhanced care is outpacing reimbursement creating a cash-pay adoption pathway that bypasses traditional payer gatekeeping|related|2026-03-28
source: Devoted Health AI Overview Memo, 2026
supports:
- optimization for efficiency without regard for resilience creates systemic fragility because interconnected systems transmit and amplify local failures into cascading breakdowns
type: claim
---
# healthcare AI creates a Jevons paradox because adding capacity to sick care induces more demand for sick care


@@ -1,21 +1,24 @@
---
description: Stanford-Harvard study shows AI alone 90 percent vs doctors plus AI 68 percent vs doctors alone 65 percent and a colonoscopy study found experienced gastroenterologists measurably de-skilled after just three months with AI assistance
type: claim
domain: health
created: 2026-02-18
source: "DJ Patil interviewing Bob Wachter, Commonwealth Club, February 9 2026; Stanford/Harvard diagnostic accuracy study; European colonoscopy AI de-skilling study"
confidence: likely
created: 2026-02-18
description: Stanford-Harvard study shows AI alone 90 percent vs doctors plus AI 68 percent vs doctors alone 65 percent and a colonoscopy study found experienced gastroenterologists measurably de-skilled
after just three months with AI assistance
domain: health
related:
- economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate
related_claims:
- ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine
- never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling
- ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement
- llms-amplify-human-cognitive-biases-through-sequential-processing-and-lack-contextual-resistance
supports:
- NCT07328815 - Mitigating Automation Bias in Physician-LLM Diagnostic Reasoning
- Does human oversight improve or degrade AI clinical decision-making?
reweave_edges:
- NCT07328815 - Mitigating Automation Bias in Physician-LLM Diagnostic Reasoning|supports|2026-04-07
- Does human oversight improve or degrade AI clinical decision-making?|supports|2026-04-17
source: DJ Patil interviewing Bob Wachter, Commonwealth Club, February 9 2026; Stanford/Harvard diagnostic accuracy study; European colonoscopy AI de-skilling study
supports:
- NCT07328815 - Mitigating Automation Bias in Physician-LLM Diagnostic Reasoning
- Does human oversight improve or degrade AI clinical decision-making?
type: claim
---
# human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs


@@ -1,17 +1,27 @@
---
type: claim
domain: health
description: Unlike deskilling (loss of previously acquired skills), never-skilling prevents initial skill formation and is undetectable because neither trainee nor supervisor can identify what was never developed
confidence: experimental
source: Journal of Experimental Orthopaedics (March 2026), NEJM (2025-2026), Lancet Digital Health (2025)
created: 2026-04-13
title: Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling
agent: vida
confidence: experimental
created: 2026-04-13
description: Unlike deskilling (loss of previously acquired skills), never-skilling prevents initial skill formation and is undetectable because neither trainee nor supervisor can identify what was never
developed
domain: health
related:
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
- never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling
- never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment
- clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
- ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement
- delegating critical infrastructure development to AI creates civilizational fragility because humans lose the ability to understand maintain and fix the systems civilization depends on
related_claims:
- '[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]'
reweave_edges:
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|related|2026-04-14
scope: causal
source: Journal of Experimental Orthopaedics (March 2026), NEJM (2025-2026), Lancet Digital Health (2025)
sourcer: Journal of Experimental Orthopaedics / Wiley
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
related: ["AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement"]
reweave_edges: ["AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|related|2026-04-14"]
title: Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling
type: claim
---
# Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling


@@ -1,20 +1,22 @@
---
description: Once populations gain reliable access to basic necessities, further economic growth fails to improve health -- instead relative income distribution and psychosocial stress become the dominant determinants of life expectancy and disease burden
type: claim
domain: health
source: "Architectural Investing, Ch. Epidemiological Transition; Wilkinson (1994)"
confidence: likely
created: 2026-02-28
related_claims:
- us-cardiovascular-mortality-gains-reversing-after-decades-of-improvement-across-major-conditions
- ultra-processed-food-consumption-increases-incident-hypertension-through-chronic-inflammation-pathway
description: Once populations gain reliable access to basic necessities, further economic growth fails to improve health -- instead relative income distribution and psychosocial stress become the dominant determinants of life expectancy and disease burden
domain: health
related:
- us-healthcare-ranks-last-among-peer-nations-despite-highest-spending-because-access-and-equity-failures-override-clinical-quality
- attractor-comfortable-stagnation
related_claims:
- us-cardiovascular-mortality-gains-reversing-after-decades-of-improvement-across-major-conditions
- ultra-processed-food-consumption-increases-incident-hypertension-through-chronic-inflammation-pathway
reweave_edges:
- us-healthcare-ranks-last-among-peer-nations-despite-highest-spending-because-access-and-equity-failures-override-clinical-quality|related|2026-04-04
- after a threshold of material development relative deprivation replaces absolute deprivation as the primary driver of health outcomes|supports|2026-04-17
source: Architectural Investing, Ch. Epidemiological Transition; Wilkinson (1994)
supports:
- after a threshold of material development relative deprivation replaces absolute deprivation as the primary driver of health outcomes
type: claim
---
# the epidemiological transition marks the shift from material scarcity to social disadvantage as the primary driver of health outcomes in developed nations


@@ -1,27 +1,29 @@
---
description: VBC adoption shows a wide gap between participation and risk-bearing with 60 percent of payments in value arrangements but only 14 percent in full capitation revealing that most providers take upside bonuses without accepting downside risk
type: claim
domain: health
created: 2026-02-17
source: "HCP-LAN 2022-2025 measurement; IMO Health VBC Update June 2025; Grand View Research VBC market analysis; Larsson et al NEJM Catalyst 2022"
confidence: likely
related_claims:
- double-coverage-compression-simultaneous-medicaid-cuts-and-aptc-expiry-eliminate-coverage-for-under-400-fpl
- medicaid-work-requirements-cause-coverage-loss-through-procedural-churn-not-employment-screening
- upf-driven-chronic-inflammation-creates-continuous-vascular-risk-regeneration-explaining-antihypertensive-treatment-failure
- medically-tailored-meals-achieve-pharmacotherapy-scale-bp-reduction-in-food-insecure-hypertensive-patients
- hypertension-shifted-from-secondary-to-primary-cvd-mortality-driver-since-2022
- uspstf-glp1-policy-gap-leaves-aca-mandatory-coverage-dormant
created: 2026-02-17
description: VBC adoption shows a wide gap between participation and risk-bearing with 60 percent of payments in value arrangements but only 14 percent in full capitation revealing that most providers take upside bonuses without accepting downside risk
domain: health
related:
- federal-budget-scoring-methodology-systematically-undervalues-preventive-interventions-because-10-year-window-excludes-long-term-savings
- home-based-care-could-capture-265-billion-in-medicare-spending-by-2025-through-hospital-at-home-remote-monitoring-and-post-acute-shift
- GLP-1 cost evidence accelerates value-based care adoption by proving that prevention-first interventions generate net savings under capitation within 24 months
- Does prevention-first care reduce total healthcare costs or just redistribute them from acute to chronic spending?
- attractor-molochian-exhaustion
related_claims:
- double-coverage-compression-simultaneous-medicaid-cuts-and-aptc-expiry-eliminate-coverage-for-under-400-fpl
- medicaid-work-requirements-cause-coverage-loss-through-procedural-churn-not-employment-screening
- upf-driven-chronic-inflammation-creates-continuous-vascular-risk-regeneration-explaining-antihypertensive-treatment-failure
- medically-tailored-meals-achieve-pharmacotherapy-scale-bp-reduction-in-food-insecure-hypertensive-patients
- hypertension-shifted-from-secondary-to-primary-cvd-mortality-driver-since-2022
- uspstf-glp1-policy-gap-leaves-aca-mandatory-coverage-dormant
reweave_edges:
- federal-budget-scoring-methodology-systematically-undervalues-preventive-interventions-because-10-year-window-excludes-long-term-savings|related|2026-03-31
- home-based-care-could-capture-265-billion-in-medicare-spending-by-2025-through-hospital-at-home-remote-monitoring-and-post-acute-shift|related|2026-03-31
- GLP-1 cost evidence accelerates value-based care adoption by proving that prevention-first interventions generate net savings under capitation within 24 months|related|2026-04-04
- Does prevention-first care reduce total healthcare costs or just redistribute them from acute to chronic spending?|related|2026-04-17
source: HCP-LAN 2022-2025 measurement; IMO Health VBC Update June 2025; Grand View Research VBC market analysis; Larsson et al NEJM Catalyst 2022
type: claim
--- ---
# value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk # value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk
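The `reweave_edges` entries in these frontmatter blocks follow a `target|relation|date` convention. A minimal parsing sketch; the function name and field names are illustrative assumptions, not part of the reweave pipeline:

```python
def parse_reweave_edge(entry: str) -> dict:
    """Split a reweave edge like 'claim-slug|related|2026-03-31' into its fields.

    Splitting from the right tolerates '|' characters inside the target title.
    """
    target, relation, date = entry.rsplit("|", 2)
    return {"target": target, "relation": relation, "date": date}

edge = parse_reweave_edge(
    "Does prevention-first care reduce total healthcare costs or just "
    "redistribute them from acute to chronic spending?|related|2026-04-17"
)
# edge["relation"] == "related", edge["date"] == "2026-04-17"
```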

@@ -1,16 +1,25 @@
 ---
-type: claim
-domain: internet-finance
-description: The core mechanism replaces voting on proposal preferences with trading on conditional token prices where real money at stake drives information aggregation
-confidence: experimental
-source: "@m3taversal conversation with FutAIrdBot, 2026-03-30"
-created: 2026-04-15
-title: Futarchy conditional markets aggregate information through financial stake not voting participation
 agent: rio
+confidence: experimental
+created: 2026-04-15
+description: The core mechanism replaces voting on proposal preferences with trading on conditional token prices where real money at stake drives information aggregation
+domain: internet-finance
+related:
+- futarchy is manipulation-resistant because attack attempts create profitable opportunities for arbitrageurs
+- speculative markets aggregate information through incentive and selection effects not wisdom of crowds
+- futarchy is manipulation-resistant because attack attempts create profitable opportunities for arbitrageurs
+- futarchy enables trustless joint ownership by forcing dissenters to be bought out through pass markets
+- futarchy is manipulation-resistant because attack attempts create profitable opportunities for defenders
+- universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective
+- pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state
+- attractor-coordination-enabled-abundance
 scope: functional
-sourcer: "@m3taversal"
-supports: ["speculative markets aggregate information through incentive and selection effects not wisdom of crowds"]
-related: ["futarchy is manipulation-resistant because attack attempts create profitable opportunities for arbitrageurs", "speculative markets aggregate information through incentive and selection effects not wisdom of crowds", "futarchy is manipulation-resistant because attack attempts create profitable opportunities for arbitrageurs", "futarchy enables trustless joint ownership by forcing dissenters to be bought out through pass markets", "futarchy is manipulation-resistant because attack attempts create profitable opportunities for defenders"]
+source: '@m3taversal conversation with FutAIrdBot, 2026-03-30'
+sourcer: '@m3taversal'
+supports:
+- speculative markets aggregate information through incentive and selection effects not wisdom of crowds
+title: Futarchy conditional markets aggregate information through financial stake not voting participation
+type: claim
 ---
 # Futarchy conditional markets aggregate information through financial stake not voting participation

@@ -1,17 +1,18 @@
 ---
-description: Market accuracy comes from financial penalties for error and specialist arbitrage rather than averaging crowd opinions
-type: claim
-domain: internet-finance
-created: 2026-02-16
-source: "Hanson, Shall We Vote on Values But Bet on Beliefs (2013)"
 confidence: proven
-tradition: "futarchy, prediction markets, efficient market hypothesis"
+created: 2026-02-16
+description: Market accuracy comes from financial penalties for error and specialist arbitrage rather than averaging crowd opinions
+domain: internet-finance
 related:
 - Advisory futarchy avoids selection distortion by decoupling prediction from execution because non-binding markets cannot create the approval-signals-prosperity correlation that Rasmont identifies
 - futarchy-variance-creates-portfolio-problem-because-mechanism-selects-both-top-performers-and-worst-performers-simultaneously
+- universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective
 reweave_edges:
 - Advisory futarchy avoids selection distortion by decoupling prediction from execution because non-binding markets cannot create the approval-signals-prosperity correlation that Rasmont identifies|related|2026-04-17
 - futarchy-variance-creates-portfolio-problem-because-mechanism-selects-both-top-performers-and-worst-performers-simultaneously|related|2026-04-18
+source: Hanson, Shall We Vote on Values But Bet on Beliefs (2013)
+tradition: futarchy, prediction markets, efficient market hypothesis
+type: claim
 ---
 Hanson explicitly rejects the "wisdom of crowds" narrative for why speculative markets work. The best track bettors have no higher IQ than average bettors, yet markets aggregate information effectively through three mechanisms that have nothing to do with crowd intelligence.
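A toy simulation of the penalty mechanism (an illustrative sketch, not from Hanson; the proportional payoff rule is an assumption): traders with fixed probability estimates bet repeatedly on a biased coin, losses drain the inaccurate trader's stake, and the wealth-weighted market estimate converges on the true frequency even though the average participant is no smarter than anyone else.

```python
import random

random.seed(0)
true_p = 0.7                               # true frequency of the event
beliefs = {"accurate": 0.7, "noisy": 0.3}  # each trader's fixed probability estimate
wealth = {name: 1.0 for name in beliefs}

for _ in range(200):
    outcome = random.random() < true_p
    for name, p in beliefs.items():
        # proportional payoff: stake is multiplied by 2x the probability the
        # trader assigned to the realized outcome, so repeated errors drain wealth
        wealth[name] *= 2 * (p if outcome else 1 - p)

total = sum(wealth.values())
market_estimate = sum(wealth[n] * beliefs[n] for n in beliefs) / total
# after 200 rounds the noisy trader's stake is negligible, so the
# wealth-weighted estimate sits near 0.7 despite half the "crowd" being wrong
```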

@@ -1,13 +1,15 @@
 ---
-type: claim
-domain: space-development
-description: "US-led Artemis coalition (61 nations) and China-led ILRS coalition (17+ nations) create incompatible governance frameworks for the Moon, both targeting the south pole"
 confidence: likely
-source: "Astra, web research compilation February 2026"
 created: 2026-02-17
 depends_on:
-- "the Artemis Accords replace multilateral treaty-making with bilateral norm-setting to create governance through coalition practice rather than universal consensus"
-- "space governance gaps are widening not narrowing because technology advances exponentially while institutional design advances linearly"
+- the Artemis Accords replace multilateral treaty-making with bilateral norm-setting to create governance through coalition practice rather than universal consensus
+- space governance gaps are widening not narrowing because technology advances exponentially while institutional design advances linearly
+description: US-led Artemis coalition (61 nations) and China-led ILRS coalition (17+ nations) create incompatible governance frameworks for the Moon, both targeting the south pole
+domain: space-development
+related:
+- attractor-authoritarian-lock-in
+source: Astra, web research compilation February 2026
+type: claim
 ---
 # Lunar development is bifurcating into two competing governance blocs that mirror terrestrial geopolitical alignment

@@ -1,30 +1,22 @@
 ---
-type: claim
-domain: space-development
-description: "Starcloud trained an LLM in space, Axiom launched orbital nodes, SpaceX filed for millions of satellites, Google plans Suncatcher — economics do not close yet but FCC filings signal conviction from major players"
 confidence: speculative
-source: "Astra, web research compilation February 2026"
 created: 2026-02-17
-related_claims:
-- sda-interoperability-standards-create-dual-use-orbital-compute-architecture-from-inception
-- orbital-edge-compute-reached-operational-deployment-january-2026-axiom-kepler-sda-nodes
-- spacex-1m-satellite-filing-faces-44x-launch-cadence-gap-between-required-and-achieved-capacity
-- orbital-data-center-microgravity-thermal-management-requires-novel-refrigeration-architecture-because-standard-systems-depend-on-gravity
-- golden-dome-space-data-network-requires-orbital-compute-for-latency-constraints
-- terawave-optical-isl-architecture-creates-independent-communications-product-separate-from-odc-constellation
-secondary_domains:
-- critical-systems
 depends_on:
 - space-based computing at datacenter scale is blocked by thermal physics because radiative cooling in vacuum requires surface areas that grow faster than compute density
 - Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy
-supports:
-- Starcloud is the first company to operate a datacenter-grade GPU in orbit but faces an existential dependency on SpaceX for launches while SpaceX builds a competing million-satellite constellation
-- orbital compute hardware cannot be serviced making every component either radiation-hardened redundant or disposable with failed hardware becoming debris or requiring expensive deorbit
-- Orbital data center deployment follows a three-tier launch vehicle activation sequence (rideshare → dedicated → constellation) where each tier unlocks an order-of-magnitude increase in compute scale
-- solar irradiance in LEO delivers 8-10x ground-based solar power with near-continuous availability in sun-synchronous orbits making orbital compute power-abundant where terrestrial facilities are power-starved
-- Starcloud
-- Orbital data centers are activating bottom-up from small-satellite proof-of-concept toward megaconstellation scale, with each tier requiring different launch cost gates rather than a single sector-wide threshold
-- Orbital data centers and space-based solar power share identical infrastructure requirements in sun-synchronous orbit creating a dual-use architecture where near-term compute revenue cross-subsidizes long-term energy transmission development
+description: Starcloud trained an LLM in space, Axiom launched orbital nodes, SpaceX filed for millions of satellites, Google plans Suncatcher — economics do not close yet but FCC filings signal conviction from major players
+domain: space-development
+related:
+- Radiative cooling in space is a cost advantage over terrestrial data centers, not merely a constraint to overcome, with claimed cooling costs of $0.002-0.005/kWh versus terrestrial active cooling
+- AI compute demand is creating a terrestrial power crisis with 140 GW of new data center load against grid infrastructure already projected to fall 6 GW short by 2027
+related_claims:
+- sda-interoperability-standards-create-dual-use-orbital-compute-architecture-from-inception
+- orbital-edge-compute-reached-operational-deployment-january-2026-axiom-kepler-sda-nodes
+- spacex-1m-satellite-filing-faces-44x-launch-cadence-gap-between-required-and-achieved-capacity
+- orbital-data-center-microgravity-thermal-management-requires-novel-refrigeration-architecture-because-standard-systems-depend-on-gravity
+- golden-dome-space-data-network-requires-orbital-compute-for-latency-constraints
+- terawave-optical-isl-architecture-creates-independent-communications-product-separate-from-odc-constellation
 reweave_edges:
 - Starcloud is the first company to operate a datacenter-grade GPU in orbit but faces an existential dependency on SpaceX for launches while SpaceX builds a competing million-satellite constellation|supports|2026-04-04
 - orbital compute hardware cannot be serviced making every component either radiation-hardened redundant or disposable with failed hardware becoming debris or requiring expensive deorbit|supports|2026-04-04
@@ -32,12 +24,26 @@ reweave_edges:
 - Radiative cooling in space is a cost advantage over terrestrial data centers, not merely a constraint to overcome, with claimed cooling costs of $0.002-0.005/kWh versus terrestrial active cooling|related|2026-04-04
 - solar irradiance in LEO delivers 8-10x ground-based solar power with near-continuous availability in sun-synchronous orbits making orbital compute power-abundant where terrestrial facilities are power-starved|supports|2026-04-04
 - Starcloud|supports|2026-04-04
 - Orbital data centers are activating bottom-up from small-satellite proof-of-concept toward megaconstellation scale, with each tier requiring different launch cost gates rather than a single sector-wide threshold|supports|2026-04-11
 - Orbital data centers and space-based solar power share identical infrastructure requirements in sun-synchronous orbit creating a dual-use architecture where near-term compute revenue cross-subsidizes long-term energy transmission development|supports|2026-04-11
-related:
-- Radiative cooling in space is a cost advantage over terrestrial data centers, not merely a constraint to overcome, with claimed cooling costs of $0.002-0.005/kWh versus terrestrial active cooling
+secondary_domains:
+- critical-systems
+source: Astra, web research compilation February 2026
 sourced_from:
 - inbox/archive/2026-02-17-astra-space-data-centers-research.md
+supports:
+- Starcloud is the first company to operate a datacenter-grade GPU in orbit but faces an existential dependency on SpaceX for launches while SpaceX builds a competing million-satellite constellation
+- orbital compute hardware cannot be serviced making every component either radiation-hardened redundant or disposable with failed hardware becoming debris or requiring expensive deorbit
+- Orbital data center deployment follows a three-tier launch vehicle activation sequence (rideshare → dedicated → constellation) where each tier unlocks an order-of-magnitude increase in compute scale
+- solar irradiance in LEO delivers 8-10x ground-based solar power with near-continuous availability in sun-synchronous orbits making orbital compute power-abundant where terrestrial facilities are power-starved
+- Starcloud
+- Orbital data centers are activating bottom-up from small-satellite proof-of-concept toward megaconstellation scale, with each tier requiring different launch cost gates rather than a single sector-wide threshold
+- Orbital data centers and space-based solar power share identical infrastructure requirements in sun-synchronous orbit creating a dual-use architecture where near-term compute revenue cross-subsidizes long-term energy transmission development
+type: claim
 ---
 # Orbital data centers are the most speculative near-term space application but the convergence of AI compute demand and falling launch costs attracts serious players

@@ -1,24 +1,26 @@
 ---
-type: claim
-domain: space-development
-description: "Commercial activity in orbit, manufacturing, resource extraction, and settlement planning all outpace regulatory frameworks, creating governance demand faster than supply across five accelerating dynamics"
 confidence: likely
-source: "Astra, web research compilation February 2026"
 created: 2026-02-17
 depends_on:
 - technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap
 - designing coordination rules is categorically different from designing coordination outcomes as nine intellectual traditions independently confirm
-secondary_domains:
-- collective-intelligence
-- grand-strategy
-related_claims:
-- nearly-all-space-technology-is-dual-use-making-arms-control-in-orbit-impossible-without-banning-the-commercial-applications-themselves
+description: Commercial activity in orbit, manufacturing, resource extraction, and settlement planning all outpace regulatory frameworks, creating governance demand faster than supply across five accelerating dynamics
+domain: space-development
 related:
 - spacetech-series-a-funding-gap-is-the-structural-bottleneck-because-specialized-vcs-concentrate-at-seed-while-generalists-lack-domain-expertise-for-hardware-companies
+- attractor-molochian-exhaustion
+related_claims:
+- nearly-all-space-technology-is-dual-use-making-arms-control-in-orbit-impossible-without-banning-the-commercial-applications-themselves
 reweave_edges:
 - spacetech-series-a-funding-gap-is-the-structural-bottleneck-because-specialized-vcs-concentrate-at-seed-while-generalists-lack-domain-expertise-for-hardware-companies|related|2026-04-04
+secondary_domains:
+- collective-intelligence
+- grand-strategy
+source: Astra, web research compilation February 2026
 sourced_from:
 - inbox/archive/2026-02-17-astra-space-governance-regulation.md
+type: claim
 ---
 # space governance gaps are widening not narrowing because technology advances exponentially while institutional design advances linearly

@@ -1,17 +1,19 @@
 ---
-type: claim
-domain: space-development
-description: "SBSP market projected at $4.61B by 2041 but remains pre-commercial; the physics works, the economics close at $10/kg to orbit where Starship is heading, enabling 25 MW per launch"
 confidence: experimental
-source: "Astra, web research compilation February 2026"
 created: 2026-02-17
-secondary_domains:
-- energy
 depends_on:
-- "Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy"
-- "power is the binding constraint on all space operations because every capability from ISRU to manufacturing to life support is power-limited"
+- Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy
+- power is the binding constraint on all space operations because every capability from ISRU to manufacturing to life support is power-limited
+description: SBSP market projected at $4.61B by 2041 but remains pre-commercial; the physics works, the economics close at $10/kg to orbit where Starship is heading, enabling 25 MW per launch
+domain: space-development
+related:
+- fusions attractor state is 5-15 percent of global generation by 2055 as firm dispatchable complement to renewables not as baseload replacement for fission
+secondary_domains:
+- energy
+source: Astra, web research compilation February 2026
 sourced_from:
 - inbox/archive/2026-02-17-astra-space-manufacturing-power.md
+type: claim
 ---
 # Space-based solar power economics depend almost entirely on launch cost reduction with viability threshold near 10 dollars per kg to orbit

@@ -1,34 +1,35 @@
 ---
-description: The dominant alignment paradigms share a core limitation -- human preferences are diverse distributional and context-dependent not reducible to one reward function
-type: claim
-domain: collective-intelligence
-created: 2026-02-17
-source: "DPO Survey 2025 (arXiv 2503.11701)"
 confidence: likely
+created: 2026-02-17
+description: The dominant alignment paradigms share a core limitation -- human preferences are diverse distributional and context-dependent not reducible to one reward function
+domain: collective-intelligence
 related:
 - rlchf-aggregated-rankings-variant-combines-evaluator-rankings-via-social-welfare-function-before-reward-model-training
 - rlhf-is-implicit-social-choice-without-normative-scrutiny
 - the variance of a learned preference sensitivity distribution diagnoses dataset heterogeneity and collapses to fixed-parameter behavior when preferences are homogeneous
 - learning human values from observed behavior through inverse reinforcement learning is structurally safer than specifying objectives directly because the agent maintains uncertainty about what humans actually want
 - sycophancy-is-paradigm-level-failure-across-all-frontier-models-suggesting-rlhf-systematically-produces-approval-seeking
 - large language models encode social intelligence as compressed cultural ratchet not abstract reasoning because every parameter is a residue of communicative exchange and reasoning manifests as multi-perspective dialogue not calculation
 - collective-intelligence-architectures-are-underexplored-for-alignment-despite-addressing-core-problems
+- futarchy-conditional-markets-aggregate-information-through-financial-stake-not-voting-participation
 reweave_edges:
 - rlchf-aggregated-rankings-variant-combines-evaluator-rankings-via-social-welfare-function-before-reward-model-training|related|2026-03-28
 - rlhf-is-implicit-social-choice-without-normative-scrutiny|related|2026-03-28
 - single-reward-rlhf-cannot-align-diverse-preferences-because-alignment-gap-grows-proportional-to-minority-distinctiveness|supports|2026-03-28
 - the variance of a learned preference sensitivity distribution diagnoses dataset heterogeneity and collapses to fixed-parameter behavior when preferences are homogeneous|related|2026-03-28
 - learning human values from observed behavior through inverse reinforcement learning is structurally safer than specifying objectives directly because the agent maintains uncertainty about what humans actually want|related|2026-04-06
 - sycophancy-is-paradigm-level-failure-across-all-frontier-models-suggesting-rlhf-systematically-produces-approval-seeking|related|2026-04-17
 - large language models encode social intelligence as compressed cultural ratchet not abstract reasoning because every parameter is a residue of communicative exchange and reasoning manifests as multi-perspective dialogue not calculation|related|2026-04-17
 - Collective intelligence architectures are structurally underexplored for alignment despite directly addressing preference diversity value evolution and scalable oversight|supports|2026-04-19
+source: DPO Survey 2025 (arXiv 2503.11701)
 supports:
 - single-reward-rlhf-cannot-align-diverse-preferences-because-alignment-gap-grows-proportional-to-minority-distinctiveness
 - Collective intelligence architectures are structurally underexplored for alignment despite directly addressing preference diversity value evolution and scalable oversight
+type: claim
 ---
 # RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values

@@ -1,24 +1,27 @@
 ---
-description: 2025 scaling laws show oversight success rates of 10-52% at moderate Elo gaps meaning current approaches cannot reliably supervise superhuman systems
-type: claim
-domain: collective-intelligence
-created: 2026-02-17
-source: "Scaling Laws for Scalable Oversight (2025)"
 confidence: proven
-supports:
-- Nested scalable oversight achieves at most 51.7% success rate at capability gap Elo 400 with performance declining as capability differential increases
-- Scalable oversight success is highly domain-dependent with propositional debate tasks showing 52% success while code review and strategic planning tasks show ~10% success
-reweave_edges:
-- Nested scalable oversight achieves at most 51.7% success rate at capability gap Elo 400 with performance declining as capability differential increases|supports|2026-04-03
-- Scalable oversight success is highly domain-dependent with propositional debate tasks showing 52% success while code review and strategic planning tasks show ~10% success|supports|2026-04-03
-- iterated distillation and amplification preserves alignment across capability scaling by keeping humans in the loop at every iteration but distillation errors may compound making the alignment guarantee probabilistic not absolute|related|2026-04-06
+created: 2026-02-17
+description: 2025 scaling laws show oversight success rates of 10-52% at moderate Elo gaps meaning current approaches cannot reliably supervise superhuman systems
+domain: collective-intelligence
 related:
 - iterated distillation and amplification preserves alignment across capability scaling by keeping humans in the loop at every iteration but distillation errors may compound making the alignment guarantee probabilistic not absolute
 - behavioral-divergence-between-evaluation-and-deployment-is-bounded-by-regime-information-extractable-from-internal-representations
 - chain-of-thought-monitorability-is-time-limited-governance-window
 - inference-time-compute-creates-non-monotonic-safety-scaling-where-extended-reasoning-degrades-alignment
 - circuit-tracing-bottleneck-hours-per-prompt-limits-interpretability-scaling
 - verification-of-meaningful-human-control-is-technically-infeasible-because-ai-decision-opacity-and-adversarial-resistance-defeat-external-audit
+- clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
+reweave_edges:
+- Nested scalable oversight achieves at most 51.7% success rate at capability gap Elo 400 with performance declining as capability differential increases|supports|2026-04-03
+- Scalable oversight success is highly domain-dependent with propositional debate tasks showing 52% success while code review and strategic planning tasks show ~10% success|supports|2026-04-03
+- iterated distillation and amplification preserves alignment across capability scaling by keeping humans in the loop at every iteration but distillation errors may compound making the alignment guarantee probabilistic not absolute|related|2026-04-06
+source: Scaling Laws for Scalable Oversight (2025)
+supports:
+- Nested scalable oversight achieves at most 51.7% success rate at capability gap Elo 400 with performance declining as capability differential increases
+- Scalable oversight success is highly domain-dependent with propositional debate tasks showing 52% success while code review and strategic planning tasks show ~10% success
+type: claim
 ---
 # scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps

View file

@ -1,20 +1,27 @@
---
confidence: likely
created: 2026-02-17
description: Social choice theory formally proves that no voting rule can simultaneously satisfy fairness, respect for individual preferences, and alignment with diverse values without dictatorial outcomes
domain: collective-intelligence
related:
- '{''Legal scholars and AI alignment researchers independently converged on the same core problem'': ''AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck''}'
- 'Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck'
- futarchy-conditional-markets-aggregate-information-through-financial-stake-not-voting-participation
reweave_edges:
- '{''Legal scholars and AI alignment researchers independently converged on the same core problem'': ''AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|related|2026-04-17''}'
- '{''Legal scholars and AI alignment researchers independently converged on the same core problem'': ''AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|supports|2026-04-18''}'
- 'Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|related|2026-04-19'
source: Conitzer et al, Social Choice for AI Alignment (arXiv 2404.10271, ICML 2024); Mishra, AI Alignment and Social Choice (arXiv 2310.16048, October 2023)
supports:
- '{''Legal scholars and AI alignment researchers independently converged on the same core problem'': ''AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck''}'
tradition: social choice theory, formal methods
type: claim
---
# universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective