Compare commits
1 commit
main...reweave/20

| Author | SHA1 | Date |
|---|---|---|
| | 83f4480655 | |
27 changed files with 90 additions and 0 deletions
@@ -6,6 +6,10 @@ created: 2026-02-21
confidence: experimental
source: "Strategic synthesis of Christensen disruption analysis, master narratives theory, and LivingIP grand strategy, Feb 2026"
tradition: "Teleological Investing, Christensen disruption theory, narrative theory"
+related:
+- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient
+reweave_edges:
+- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient|related|2026-04-26
---

# LivingIPs knowledge industry strategy builds collective synthesis infrastructure first and lets the coordination narrative emerge from demonstrated practice rather than designing it in advance

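The edges added throughout this change share one recurring data shape: `related`, `supports`, and `challenges` list plain claim titles, while `reweave_edges` stores pipe-delimited `target|relation|date` strings. A minimal parsing sketch follows, for illustration only; the `ReweaveEdge` type and `parse_reweave_edge` helper are assumptions, not code from this repository:

```python
from dataclasses import dataclass

@dataclass
class ReweaveEdge:
    target: str    # claim title or slug the edge points at
    relation: str  # "related", "supports", or "challenges"
    date: str      # ISO date the edge was added, e.g. "2026-04-26"

def parse_reweave_edge(entry: str) -> ReweaveEdge:
    # Entries are pipe-delimited: "<target>|<relation>|<date>".
    # Split from the right so any '|' inside the claim title is preserved.
    target, relation, date = entry.rsplit("|", 2)
    return ReweaveEdge(target.strip(), relation.strip(), date.strip())

# One entry taken verbatim from this diff:
edge = parse_reweave_edge("Anthropic|supports|2026-03-28")
assert edge.relation == "supports" and edge.date == "2026-03-28"
```
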
@@ -8,8 +8,10 @@ source: "OECD AI VC report (Feb 2026), Crunchbase funding analysis (2025), TechC
created: 2026-03-16
related:
- whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance
+- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient
reweave_edges:
- whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance|related|2026-04-07
+- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient|related|2026-04-26
sourced_from:
- inbox/archive/ai-alignment/2026-03-16-theseus-ai-industry-landscape-briefing.md
---

@@ -12,6 +12,7 @@ supports:
- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment
- motivated reasoning among AI lab leaders is itself a primary risk vector because those with most capability to slow down have most incentive to accelerate
+- Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure
reweave_edges:
- Anthropic|supports|2026-03-28
- dario-amodei|supports|2026-03-28

@@ -21,6 +22,7 @@ reweave_edges:
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|supports|2026-04-09
- Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams|related|2026-04-09
- motivated reasoning among AI lab leaders is itself a primary risk vector because those with most capability to slow down have most incentive to accelerate|supports|2026-04-17
+- Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure|supports|2026-04-26
related:
- cross-lab-alignment-evaluation-surfaces-safety-gaps-internal-evaluation-misses-providing-empirical-basis-for-mandatory-third-party-evaluation
- Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams

@@ -9,12 +9,14 @@ related:
- inference efficiency gains erode AI deployment governance without triggering compute monitoring thresholds because governance frameworks target training concentration while inference optimization distributes capability below detection
- eu-ai-act-article-2-3-national-security-exclusion-confirms-legislative-ceiling-is-cross-jurisdictional
- Semiconductor export controls (CHIPS Act, ASML restrictions) are the first AI governance instrument structurally analogous to Montreal Protocol's trade sanctions
+- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient
reweave_edges:
- inference efficiency gains erode AI deployment governance without triggering compute monitoring thresholds because governance frameworks target training concentration while inference optimization distributes capability below detection|related|2026-03-28
- AI governance discourse has been captured by economic competitiveness framing, inverting predicted participation patterns where China signs non-binding declarations while the US opts out|supports|2026-04-04
- eu-ai-act-article-2-3-national-security-exclusion-confirms-legislative-ceiling-is-cross-jurisdictional|related|2026-04-18
- BIS January 2026 Advanced AI Chip Export Rule|supports|2026-04-24
- Semiconductor export controls (CHIPS Act, ASML restrictions) are the first AI governance instrument structurally analogous to Montreal Protocol's trade sanctions|related|2026-04-24
+- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient|related|2026-04-26
supports:
- AI governance discourse has been captured by economic competitiveness framing, inverting predicted participation patterns where China signs non-binding declarations while the US opts out
- BIS January 2026 Advanced AI Chip Export Rule

@@ -13,6 +13,8 @@ attribution:
context: "OpenAI and Anthropic joint evaluation, August 2025"
related: ["Making research evaluations into compliance triggers closes the translation gap by design by eliminating the institutional boundary between risk detection and risk response", "cross-lab-alignment-evaluation-surfaces-safety-gaps-internal-evaluation-misses-providing-empirical-basis-for-mandatory-third-party-evaluation", "AI-models-distinguish-testing-from-deployment-environments-providing-empirical-evidence-for-deceptive-alignment-concerns", "pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations", "multi-agent deployment exposes emergent security vulnerabilities invisible to single-agent evaluation because cross-agent propagation identity spoofing and unauthorized compliance arise only in realistic multi-party environments"]
reweave_edges: ["Making research evaluations into compliance triggers closes the translation gap by design by eliminating the institutional boundary between risk detection and risk response|related|2026-04-17"]
+supports:
+- Independent government evaluation publishing adverse findings during commercial negotiation functions as a governance instrument through information asymmetry reduction
---

# Cross-lab alignment evaluation surfaces safety gaps that internal evaluation misses, providing an empirical basis for mandatory third-party AI safety evaluation as a governance mechanism

@@ -24,10 +24,12 @@ reweave_edges:
- Capabilities training alone grows evaluation-awareness from 2% to 20.6% establishing situational awareness as an emergent capability property|related|2026-04-17
- Component task benchmarks overestimate operational capability because simulated environments remove real-world friction that prevents end-to-end execution|related|2026-04-17
- Provider-level behavioral biases persist across model versions because they are embedded in training infrastructure rather than model-specific features|related|2026-04-17
+- Santos-Grueiro's theorem converts the hardware TEE monitoring argument from empirical case to categorical necessity by proving no behavioral testing approach escapes identifiability failure|supports|2026-04-26
supports:
- Behavioral evaluation is structurally insufficient for latent alignment verification under evaluation awareness because normative indistinguishability creates an identifiability problem not a measurement problem
- Current deception safety evaluation datasets vary from 37 to 100 percent in model detectability, rendering highly detectable evaluations uninformative about deployment behavior
- Evaluation awareness concentrates in earlier model layers (23-24) making output-level interventions insufficient for preventing strategic evaluation gaming
+- Santos-Grueiro's theorem converts the hardware TEE monitoring argument from empirical case to categorical necessity by proving no behavioral testing approach escapes identifiability failure
sourced_from:
- inbox/archive/general/2025-02-13-aisi-renamed-ai-security-institute-mandate-drift.md

@@ -15,8 +15,10 @@ supports:
reweave_edges:
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|supports|2026-04-09
- Frontier AI safety frameworks score 8-35% against safety-critical industry standards with a 52% composite ceiling even when combining best practices across all frameworks|related|2026-04-17
+- Responsible AI dimensions exhibit systematic multi-objective tension where improving safety degrades accuracy and improving privacy reduces fairness with no accepted navigation framework|related|2026-04-26
related:
- Frontier AI safety frameworks score 8-35% against safety-critical industry standards with a 52% composite ceiling even when combining best practices across all frameworks
+- Responsible AI dimensions exhibit systematic multi-objective tension where improving safety degrades accuracy and improving privacy reduces fairness with no accepted navigation framework
---

# Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams

@@ -12,6 +12,8 @@ sourcer: Lily Stelling, Malcolm Murray, Simeon Campos, Henry Papadatos
related_claims: ["[[safe AI development requires building alignment mechanisms before scaling capability]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]"]
related: ["Frontier AI safety verdicts rely partly on deployment track record rather than evaluation-derived confidence which establishes a precedent where safety claims are empirically grounded instead of counterfactually assured", "frontier-safety-frameworks-score-8-35-percent-against-safety-critical-standards-with-52-percent-composite-ceiling"]
reweave_edges: ["Frontier AI safety verdicts rely partly on deployment track record rather than evaluation-derived confidence which establishes a precedent where safety claims are empirically grounded instead of counterfactually assured|related|2026-04-17"]
+supports:
+- Responsible AI dimensions exhibit systematic multi-objective tension where improving safety degrades accuracy and improving privacy reduces fairness with no accepted navigation framework
---

# Frontier AI safety frameworks score 8-35% against safety-critical industry standards with a 52% composite ceiling even when combining best practices across all frameworks

@@ -14,6 +14,7 @@ related:
- domestic-political-change-can-rapidly-erode-decade-long-international-AI-safety-norms-as-US-reversed-from-supporter-to-opponent-in-one-year
- anthropic-internal-resource-allocation-shows-6-8-percent-safety-only-headcount-when-dual-use-research-excluded-revealing-gap-between-public-positioning-and-commitment
- supply-chain-risk-designation-misdirection-occurs-when-instrument-requires-capability-target-structurally-lacks
+- Coercive governance instruments can be deployed to preserve future capability optionality rather than prevent current harm, as demonstrated when the Pentagon designated Anthropic a supply chain risk for refusing to enable autonomous weapons capabilities not currently in use
reweave_edges:
- AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for|related|2026-03-28
- UK AI Safety Institute|related|2026-03-28

@@ -22,6 +23,7 @@ reweave_edges:
- Strategic interest alignment determines whether national security framing enables or undermines mandatory governance — aligned interests enable mandatory mechanisms (space) while conflicting interests undermine voluntary constraints (AI military deployment)|related|2026-04-19
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling|supports|2026-04-20
- Pentagon military AI contracts systematically demand 'any lawful use' terms as confirmed by three independent lab negotiations|supports|2026-04-25
+- Coercive governance instruments can be deployed to preserve future capability optionality rather than prevent current harm, as demonstrated when the Pentagon designated Anthropic a supply chain risk for refusing to enable autonomous weapons capabilities not currently in use|related|2026-04-26
supports:
- government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling

@@ -13,6 +13,10 @@ challenged_by:
- "corrigibility is at cross-purposes with effectiveness because deception is a convergent free strategy while corrigibility must be engineered against instrumental interests"
sourced_from:
- inbox/archive/2019-10-08-russell-human-compatible.md
+related:
+- Responsible AI dimensions exhibit systematic multi-objective tension where improving safety degrades accuracy and improving privacy reduces fairness with no accepted navigation framework
+reweave_edges:
+- Responsible AI dimensions exhibit systematic multi-objective tension where improving safety degrades accuracy and improving privacy reduces fairness with no accepted navigation framework|related|2026-04-26
---

# Inverse reinforcement learning with objective uncertainty produces provably safe behavior because an AI system that knows it doesnt know the human reward function will defer to humans and accept shutdown rather than persist in potentially wrong actions

@@ -18,10 +18,12 @@ related:
- white-box-interpretability-fails-on-adversarially-trained-models-creating-anti-correlation-with-threat-model
- interpretability-effectiveness-anti-correlates-with-adversarial-training-making-tools-hurt-performance-on-sophisticated-misalignment
- anthropic-deepmind-interpretability-complementarity-maps-mechanisms-versus-detects-intent
+- Constitutional Classifiers provide robust output safety monitoring at production scale through categorical harm detection that resists adversarial jailbreaks
reweave_edges:
- Non-autoregressive architectures reduce jailbreak vulnerability by 40-65% through elimination of continuation-drive mechanisms but impose a 15-25% capability cost on reasoning tasks|related|2026-04-17
- Training-free conversion of activation steering vectors into component-level weight edits enables persistent behavioral modification without retraining|related|2026-04-17
- Research community silo between interpretability-for-safety and adversarial robustness creates deployment-phase safety failures where organizations implementing monitoring improvements inherit dual-use attack surfaces without exposure to adversarial robustness literature|supports|2026-04-25
+- Constitutional Classifiers provide robust output safety monitoring at production scale through categorical harm detection that resists adversarial jailbreaks|related|2026-04-26
supports:
- "Anti-safety scaling law: larger models are more vulnerable to linear concept vector attacks because steerability and attack surface scale together"
- Research community silo between interpretability-for-safety and adversarial robustness creates deployment-phase safety failures where organizations implementing monitoring improvements inherit dual-use attack surfaces without exposure to adversarial robustness literature

@@ -11,6 +11,10 @@ depends_on:
- file-backed durable state is the most consistently positive harness module across task types because externalizing state to path-addressable artifacts survives context truncation delegation and restart
- collective superintelligence is the alternative to monolithic AI controlled by a few
- technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap
+related:
+- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone
+reweave_edges:
+- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone|related|2026-04-26
---

# Open-source local-first personal AI agents create a viable alternative to platform-controlled AI but only if they solve user-owned persistent memory infrastructure because model quality commoditizes while memory architecture determines who captures the relationship value

@@ -10,6 +10,13 @@ depends_on:
- giving away the commoditized layer to capture value on the scarce complement is the shared mechanism driving both entertainment and internet finance attractor states
- file-backed durable state is the most consistently positive harness module across task types because externalizing state to path-addressable artifacts survives context truncation delegation and restart
- collective superintelligence is the alternative to monolithic AI controlled by a few
+supports:
+- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure
+reweave_edges:
+- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure|supports|2026-04-26
+- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone|related|2026-04-26
+related:
+- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone
---

# Personal AI market structure is determined by who owns the memory because platform-owned memory creates high switching costs and winner-take-most dynamics while user-owned portable memory reduces switching costs and enables competitive markets

@@ -10,6 +10,10 @@ depends_on:
- AI alignment is a coordination problem not a technical problem
- giving away the commoditized layer to capture value on the scarce complement is the shared mechanism driving both entertainment and internet finance attractor states
- strategy is the art of creating power through narrative and coalition not just the application of existing power
+supports:
+- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure
+reweave_edges:
+- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure|supports|2026-04-26
---

# Platform incumbents enter the personal AI race with pre-existing OS-level data access that standalone AI companies cannot replicate through model quality alone making this the first major tech transition where incumbents hold structural advantage rather than facing an innovator's dilemma

@@ -12,6 +12,8 @@ sourcer: Xu et al.
related: ["mechanistic-interpretability-tools-create-dual-use-attack-surface-enabling-surgical-safety-feature-removal", "chain-of-thought-monitoring-vulnerable-to-steganographic-encoding-as-emerging-capability", "multi-layer-ensemble-probes-outperform-single-layer-by-29-78-percent", "linear-probe-accuracy-scales-with-model-size-power-law", "representation-monitoring-via-linear-concept-vectors-creates-dual-use-attack-surface", "anti-safety-scaling-law-larger-models-more-vulnerable-to-concept-vector-attacks"]
supports: ["Anti-safety scaling law: larger models are more vulnerable to linear concept vector attacks because steerability and attack surface scale together"]
reweave_edges: ["Anti-safety scaling law: larger models are more vulnerable to linear concept vector attacks because steerability and attack surface scale together|supports|2026-04-21"]
+challenges:
+- Constitutional Classifiers provide robust output safety monitoring at production scale through categorical harm detection that resists adversarial jailbreaks
---

# Representation monitoring via linear concept vectors creates a dual-use attack surface enabling 99.14% jailbreak success

@@ -25,12 +25,14 @@ reweave_edges:
- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance|supports|2026-03-31
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|related|2026-04-09
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to
+- Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure|supports|2026-04-26
competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling|supports|2026-04-20
source: Anthropic RSP v3.0 (Feb 24, 2026); TIME exclusive (Feb 25, 2026); Jared Kaplan statements
supports:
- Anthropic
- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to
+- Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure
competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling
type: claim
---

@@ -11,6 +11,10 @@ depends_on:
- "agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats"
challenged_by:
- "multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence"
+supports:
+- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure
+reweave_edges:
+- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure|supports|2026-04-26
---

# Whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance

@@ -11,6 +11,10 @@ sourced_from: entertainment/2026-04-25-creator-economy-crossover-scope-definitio
scope: structural
sourcer: "Multiple: IAB, PwC, Goldman Sachs, Grand View Research"
related: ["creator and corporate media economies are zero-sum because total media time is stagnant and every marginal hour shifts between them", "youtube-ad-revenue-crossed-combined-major-studios-2025-decade-ahead-projections"]
+supports:
+- Creator platform ad revenue crossed studio ad revenue in 2025, a decade ahead of 2035 projections, because YouTube alone exceeded all major studios combined
+reweave_edges:
+- Creator platform ad revenue crossed studio ad revenue in 2025, a decade ahead of 2035 projections, because YouTube alone exceeded all major studios combined|supports|2026-04-26
---

# Creator-corporate revenue crossover timing depends critically on scope definition: ad revenue crossed in 2025, content-specific revenue may have crossed, total E&M crossover is a 2030s+ phenomenon

@@ -20,6 +20,9 @@ related:
- private-ai-lab-access-restrictions-create-government-offensive-defensive-capability-asymmetries-without-accountability-structure
- government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them
- supply-chain-risk-designation-misdirection-occurs-when-instrument-requires-capability-target-structurally-lacks
+- Coercive governance instruments can be deployed to preserve future capability optionality rather than prevent current harm, as demonstrated when the Pentagon designated Anthropic a supply chain risk for refusing to enable autonomous weapons capabilities not currently in use
+reweave_edges:
+- Coercive governance instruments can be deployed to preserve future capability optionality rather than prevent current harm, as demonstrated when the Pentagon designated Anthropic a supply chain risk for refusing to enable autonomous weapons capabilities not currently in use|related|2026-04-26
---

# Coercive governance instruments produce offense-defense asymmetries through selective enforcement within the deploying agency

@@ -5,6 +5,10 @@ domain: health
created: 2026-02-17
source: "FDA AI device database December 2025; Aidoc foundation model clearance January 2026; Viz.ai ISC 2025 multicenter study; Paige and PathAI FDA milestones 2025"
confidence: likely
+related:
+- ARISE Network (AI Research in Systems Engineering)
+reweave_edges:
+- ARISE Network (AI Research in Systems Engineering)|related|2026-04-26
---

# AI diagnostic triage achieves 97 percent sensitivity across 14 conditions making AI-first screening viable for all imaging and pathology

@@ -7,6 +7,10 @@ source: "Bessemer Venture Partners, State of Health AI 2026 (bvp.com/atlas/state
created: 2026-03-07
sourced_from:
- inbox/archive/health/2026-01-01-bvp-state-of-health-ai-2026.md
+supports:
+- FDA Modernization Act 3.0
+reweave_edges:
+- FDA Modernization Act 3.0|supports|2026-04-26
---

# FDA is replacing animal testing with AI models and organ-on-chip as the default preclinical pathway which will compress drug development timelines and reduce the 90 percent clinical failure rate

@@ -11,6 +11,10 @@ sourced_from: health/2026-04-25-natali-2025-ai-induced-deskilling-springer-mixed
scope: causal
sourcer: Natali et al., University of Milano-Bicocca
related: ["clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "dopaminergic-reinforcement-of-ai-reliance-predicts-behavioral-entrenchment-beyond-simple-habit-formation"]
+supports:
+- Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy
+reweave_edges:
+- Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy|supports|2026-04-26
---

# Clinical AI creates moral deskilling through ethical judgment erosion from routine AI acceptance leaving clinicians unprepared to recognize value conflicts

@@ -11,8 +11,10 @@ depends_on:
- Futardio launch — further simplification for permissionless adoption
related:
- Futarchy product-market fit emerged through iterative market rejection not initial design because MetaDAO's successful launchpad model was the third attempt after two failed proposals
+- Hanson's 'minor flaw' reframing of the Rasmont critique constitutes a normalization strategy that may reduce practical impact independent of technical validity
reweave_edges:
- Futarchy product-market fit emerged through iterative market rejection not initial design because MetaDAO's successful launchpad model was the third attempt after two failed proposals|related|2026-04-19
+- Hanson's 'minor flaw' reframing of the Rasmont critique constitutes a normalization strategy that may reduce practical impact independent of technical validity|related|2026-04-26
sourced_from:
- inbox/archive/internet-finance/2026-03-09-metanallok-x-archive.md
---

@@ -15,6 +15,13 @@ related_claims:
- space-sector-commercialization-requires-independent-supply-and-demand-thresholds
sourced_from:
- inbox/archive/2026-02-17-astra-spacex-research.md
+related:
+- FAA mishap investigation cycles (2-5 months per anomaly) are the structural bottleneck limiting Starship cost reduction timeline, not vehicle economics or regulatory approval
+reweave_edges:
+- FAA mishap investigation cycles (2-5 months per anomaly) are the structural bottleneck limiting Starship cost reduction timeline, not vehicle economics or regulatory approval|related|2026-04-26
+- Starship V3's tripled payload capacity (>100 MT vs V2's 35 MT) lowers the $100/kg launch cost threshold entry point from 6+ reuse cycles to 2-3 reuse cycles|supports|2026-04-26
+supports:
+- Starship V3's tripled payload capacity (>100 MT vs V2's 35 MT) lowers the $100/kg launch cost threshold entry point from 6+ reuse cycles to 2-3 reuse cycles
---

# Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy

@@ -8,6 +8,13 @@ created: 2026-03-08
challenged_by: "No commercial Starship payload has flown yet as of early 2026. The cadence projections extrapolate from Falcon 9's trajectory, but Starship is a fundamentally different and more complex vehicle. Achieving airline-like turnaround requires solving upper-stage reuse, which no vehicle has demonstrated. The optimistic end ($10-20/kg) may require operational perfection that no complex system achieves."
sourced_from:
- inbox/archive/2026-02-17-astra-spacex-research.md
+related:
+- FAA mishap investigation cycles (2-5 months per anomaly) are the structural bottleneck limiting Starship cost reduction timeline, not vehicle economics or regulatory approval
+reweave_edges:
+- FAA mishap investigation cycles (2-5 months per anomaly) are the structural bottleneck limiting Starship cost reduction timeline, not vehicle economics or regulatory approval|related|2026-04-26
+- Starship V3's tripled payload capacity (>100 MT vs V2's 35 MT) lowers the $100/kg launch cost threshold entry point from 6+ reuse cycles to 2-3 reuse cycles|supports|2026-04-26
+supports:
+- Starship V3's tripled payload capacity (>100 MT vs V2's 35 MT) lowers the $100/kg launch cost threshold entry point from 6+ reuse cycles to 2-3 reuse cycles
---

# Starship economics depend on cadence and reuse rate not vehicle cost because a 90M vehicle flown 100 times beats a 50M expendable by 17x

@@ -11,6 +11,10 @@ sourced_from: space-development/2026-02-13-spacenews-china-three-body-2800sat-st
scope: functional
sourcer: SpaceNews
related: ["military-commercial-space-architecture-convergence-creates-dual-use-orbital-infrastructure", "china-is-the-only-credible-peer-competitor-in-space-with-comprehensive-capabilities-and-state-directed-acceleration-closing-the-reusability-gap-in-5-8-years", "blue-origin-project-sunrise-signals-spacex-blue-origin-duopoly-in-orbital-compute-through-vertical-integration"]
+supports:
+- China's multiple parallel orbital data center programs with combined state backing exceeding projected US commercial ODC market creates asymmetric competitive advantage
+reweave_edges:
+- China's multiple parallel orbital data center programs with combined state backing exceeding projected US commercial ODC market creates asymmetric competitive advantage|supports|2026-04-26
---

# China's Star-Compute orbital computing program serves dual commercial and geopolitical functions by providing AI processing to Belt and Road Initiative partner nations to reduce Western technology dependency and create orbital infrastructure lock-in

@@ -29,6 +29,7 @@ related:
- Safe Superintelligence Inc.
- thinking-machines-lab
- xAI
+- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone
reweave_edges:
- Anthropic|related|2026-03-28
- dario-amodei|related|2026-03-28

@@ -36,6 +37,7 @@ reweave_edges:
- Safe Superintelligence Inc.|related|2026-03-28
- thinking-machines-lab|related|2026-03-28
- xAI|related|2026-03-28
+- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone|related|2026-04-26
---

# OpenAI