reweave: merge 26 files via frontmatter union [auto]

Teleo Agents 2026-04-26 01:15:13 +00:00
parent b979f5d167
commit 85851394e7
26 changed files with 164 additions and 48 deletions
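The commit title describes a "frontmatter union" merge. A minimal sketch of what that could mean, assuming list-valued frontmatter keys (`related`, `reweave_edges`, `supports`, ...) are combined as an order-preserving, duplicate-free union; the function name and shape are illustrative, not the actual reweave implementation:

```python
def union_frontmatter(ours: dict, theirs: dict) -> dict:
    """Merge two frontmatter dicts: union list-valued keys, keep ours for scalars."""
    merged = dict(ours)
    for key, value in theirs.items():
        if isinstance(value, list) and isinstance(merged.get(key), list):
            # Append entries we have not seen yet, preserving order.
            seen = set(merged[key])
            extra = []
            for item in value:
                if item not in seen:
                    extra.append(item)
                    seen.add(item)
            merged[key] = merged[key] + extra
        else:
            # Scalar keys: keep the existing value, adopt missing ones.
            merged.setdefault(key, value)
    return merged
```

Under this reading, the diffs below are mostly additive: each file gains new `related`/`reweave_edges` entries while existing keys survive unchanged.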

@@ -6,6 +6,10 @@ created: 2026-02-21
confidence: experimental
source: "Strategic synthesis of Christensen disruption analysis, master narratives theory, and LivingIP grand strategy, Feb 2026"
tradition: "Teleological Investing, Christensen disruption theory, narrative theory"
related:
- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient
reweave_edges:
- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient|related|2026-04-26
---
# LivingIPs knowledge industry strategy builds collective synthesis infrastructure first and lets the coordination narrative emerge from demonstrated practice rather than designing it in advance

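The `reweave_edges` entries throughout these diffs follow a `target|edge_type|date` triple format. A minimal parsing sketch; the function name and the use of `rsplit` are assumptions based on the entries shown, not the repo's actual tooling:

```python
from datetime import date


def parse_edge(entry: str) -> tuple[str, str, date]:
    """Split a 'target|edge_type|date' entry into its three fields."""
    # Split from the right so a target containing no trailing fields
    # still yields exactly three parts.
    target, edge_type, stamp = entry.rsplit("|", 2)
    return target, edge_type, date.fromisoformat(stamp)


target, kind, when = parse_edge("Anthropic|supports|2026-03-28")
```

Splitting from the right keeps the (long, prose-like) target intact while the edge type and ISO date occupy the last two fields.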
@@ -8,8 +8,10 @@ source: "OECD AI VC report (Feb 2026), Crunchbase funding analysis (2025), TechC
created: 2026-03-16
related:
- whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance
- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient
reweave_edges:
- whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance|related|2026-04-07
- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient|related|2026-04-26
sourced_from:
- inbox/archive/ai-alignment/2026-03-16-theseus-ai-industry-landscape-briefing.md
---

@@ -12,6 +12,7 @@ supports:
- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment
- motivated reasoning among AI lab leaders is itself a primary risk vector because those with most capability to slow down have most incentive to accelerate
- Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure
reweave_edges:
- Anthropic|supports|2026-03-28
- dario-amodei|supports|2026-03-28
@@ -21,6 +22,7 @@ reweave_edges:
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|supports|2026-04-09
- Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams|related|2026-04-09
- motivated reasoning among AI lab leaders is itself a primary risk vector because those with most capability to slow down have most incentive to accelerate|supports|2026-04-17
- Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure|supports|2026-04-26
related:
- cross-lab-alignment-evaluation-surfaces-safety-gaps-internal-evaluation-misses-providing-empirical-basis-for-mandatory-third-party-evaluation
- Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams

@@ -9,12 +9,14 @@ related:
- inference efficiency gains erode AI deployment governance without triggering compute monitoring thresholds because governance frameworks target training concentration while inference optimization distributes capability below detection
- eu-ai-act-article-2-3-national-security-exclusion-confirms-legislative-ceiling-is-cross-jurisdictional
- Semiconductor export controls (CHIPS Act, ASML restrictions) are the first AI governance instrument structurally analogous to Montreal Protocol's trade sanctions
- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient
reweave_edges:
- inference efficiency gains erode AI deployment governance without triggering compute monitoring thresholds because governance frameworks target training concentration while inference optimization distributes capability below detection|related|2026-03-28
- AI governance discourse has been captured by economic competitiveness framing, inverting predicted participation patterns where China signs non-binding declarations while the US opts out|supports|2026-04-04
- eu-ai-act-article-2-3-national-security-exclusion-confirms-legislative-ceiling-is-cross-jurisdictional|related|2026-04-18
- BIS January 2026 Advanced AI Chip Export Rule|supports|2026-04-24
- Semiconductor export controls (CHIPS Act, ASML restrictions) are the first AI governance instrument structurally analogous to Montreal Protocol's trade sanctions|related|2026-04-24
- Geopolitical competition over algorithmic narrative control confirms narrative distribution infrastructure has civilizational strategic value because states compete for algorithm ownership when narrative remains the active ingredient|related|2026-04-26
supports:
- AI governance discourse has been captured by economic competitiveness framing, inverting predicted participation patterns where China signs non-binding declarations while the US opts out
- BIS January 2026 Advanced AI Chip Export Rule

@@ -11,8 +11,16 @@ attribution:
sourcer:
- handle: "openai-and-anthropic-(joint)"
context: "OpenAI and Anthropic joint evaluation, August 2025"
related: ["Making research evaluations into compliance triggers closes the translation gap by design by eliminating the institutional boundary between risk detection and risk response", "cross-lab-alignment-evaluation-surfaces-safety-gaps-internal-evaluation-misses-providing-empirical-basis-for-mandatory-third-party-evaluation", "AI-models-distinguish-testing-from-deployment-environments-providing-empirical-evidence-for-deceptive-alignment-concerns", "pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations", "multi-agent deployment exposes emergent security vulnerabilities invisible to single-agent evaluation because cross-agent propagation identity spoofing and unauthorized compliance arise only in realistic multi-party environments"]
reweave_edges: ["Making research evaluations into compliance triggers closes the translation gap by design by eliminating the institutional boundary between risk detection and risk response|related|2026-04-17"]
related:
- Making research evaluations into compliance triggers closes the translation gap by design by eliminating the institutional boundary between risk detection and risk response
- cross-lab-alignment-evaluation-surfaces-safety-gaps-internal-evaluation-misses-providing-empirical-basis-for-mandatory-third-party-evaluation
- AI-models-distinguish-testing-from-deployment-environments-providing-empirical-evidence-for-deceptive-alignment-concerns
- pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations
- multi-agent deployment exposes emergent security vulnerabilities invisible to single-agent evaluation because cross-agent propagation identity spoofing and unauthorized compliance arise only in realistic multi-party environments
reweave_edges:
- Making research evaluations into compliance triggers closes the translation gap by design by eliminating the institutional boundary between risk detection and risk response|related|2026-04-17
supports:
- Independent government evaluation publishing adverse findings during commercial negotiation functions as a governance instrument through information asymmetry reduction
---
# Cross-lab alignment evaluation surfaces safety gaps that internal evaluation misses, providing an empirical basis for mandatory third-party AI safety evaluation as a governance mechanism
@@ -32,4 +40,4 @@ Topics:
**Source:** UK AISI independent evaluation of Anthropic Mythos, April 2026
UK AISI, as an independent government evaluator, published findings about Mythos cyber capabilities that have direct implications for Anthropic's commercial negotiations and safety classification decisions. The evaluation revealed Mythos as the first model to complete a 32-step enterprise attack chain, a finding with governance significance that independent evaluation surfaced publicly.

@@ -15,8 +15,10 @@ supports:
reweave_edges:
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|supports|2026-04-09
- Frontier AI safety frameworks score 8-35% against safety-critical industry standards with a 52% composite ceiling even when combining best practices across all frameworks|related|2026-04-17
- Responsible AI dimensions exhibit systematic multi-objective tension where improving safety degrades accuracy and improving privacy reduces fairness with no accepted navigation framework|related|2026-04-26
related:
- Frontier AI safety frameworks score 8-35% against safety-critical industry standards with a 52% composite ceiling even when combining best practices across all frameworks
- Responsible AI dimensions exhibit systematic multi-objective tension where improving safety degrades accuracy and improving privacy reduces fairness with no accepted navigation framework
---
# Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams

@@ -10,8 +10,13 @@ agent: theseus
scope: structural
sourcer: Lily Stelling, Malcolm Murray, Simeon Campos, Henry Papadatos
related_claims: ["[[safe AI development requires building alignment mechanisms before scaling capability]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]"]
related: ["Frontier AI safety verdicts rely partly on deployment track record rather than evaluation-derived confidence which establishes a precedent where safety claims are empirically grounded instead of counterfactually assured", "frontier-safety-frameworks-score-8-35-percent-against-safety-critical-standards-with-52-percent-composite-ceiling"]
reweave_edges: ["Frontier AI safety verdicts rely partly on deployment track record rather than evaluation-derived confidence which establishes a precedent where safety claims are empirically grounded instead of counterfactually assured|related|2026-04-17"]
related:
- Frontier AI safety verdicts rely partly on deployment track record rather than evaluation-derived confidence which establishes a precedent where safety claims are empirically grounded instead of counterfactually assured
- frontier-safety-frameworks-score-8-35-percent-against-safety-critical-standards-with-52-percent-composite-ceiling
reweave_edges:
- Frontier AI safety verdicts rely partly on deployment track record rather than evaluation-derived confidence which establishes a precedent where safety claims are empirically grounded instead of counterfactually assured|related|2026-04-17
supports:
- Responsible AI dimensions exhibit systematic multi-objective tension where improving safety degrades accuracy and improving privacy reduces fairness with no accepted navigation framework
---
# Frontier AI safety frameworks score 8-35% against safety-critical industry standards with a 52% composite ceiling even when combining best practices across all frameworks
@@ -22,4 +27,4 @@ A systematic evaluation of twelve frontier AI safety frameworks published follow
**Source:** Hofstätter et al., ICML 2025
Hofstätter et al. identify a specific mechanism for framework inadequacy: capability evaluations without fine-tuning-based elicitation miss capabilities equivalent to 5-20x training compute. This suggests safety frameworks are evaluating against capability baselines that are systematically too low.

@@ -14,6 +14,7 @@ related:
- domestic-political-change-can-rapidly-erode-decade-long-international-AI-safety-norms-as-US-reversed-from-supporter-to-opponent-in-one-year
- anthropic-internal-resource-allocation-shows-6-8-percent-safety-only-headcount-when-dual-use-research-excluded-revealing-gap-between-public-positioning-and-commitment
- supply-chain-risk-designation-misdirection-occurs-when-instrument-requires-capability-target-structurally-lacks
- Coercive governance instruments can be deployed to preserve future capability optionality rather than prevent current harm, as demonstrated when the Pentagon designated Anthropic a supply chain risk for refusing to enable autonomous weapons capabilities not currently in use
reweave_edges:
- AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for|related|2026-03-28
- UK AI Safety Institute|related|2026-03-28
@@ -22,6 +23,7 @@ reweave_edges:
- Strategic interest alignment determines whether national security framing enables or undermines mandatory governance — aligned interests enable mandatory mechanisms (space) while conflicting interests undermine voluntary constraints (AI military deployment)|related|2026-04-19
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling|supports|2026-04-20
- Pentagon military AI contracts systematically demand 'any lawful use' terms as confirmed by three independent lab negotiations|supports|2026-04-25
- Coercive governance instruments can be deployed to preserve future capability optionality rather than prevent current harm, as demonstrated when the Pentagon designated Anthropic a supply chain risk for refusing to enable autonomous weapons capabilities not currently in use|related|2026-04-26
supports:
- government-safety-penalties-invert-regulatory-incentives-by-blacklisting-cautious-actors
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling

@@ -7,12 +7,16 @@ source: "Russell, Human Compatible (2019); Russell, Artificial Intelligence: A M
created: 2026-04-05
agent: theseus
depends_on:
- "cooperative inverse reinforcement learning formalizes alignment as a two-player game where optimality in isolation is suboptimal because the robot must learn human preferences through observation not specification"
- "specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception"
- cooperative inverse reinforcement learning formalizes alignment as a two-player game where optimality in isolation is suboptimal because the robot must learn human preferences through observation not specification
- specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception
challenged_by:
- "corrigibility is at cross-purposes with effectiveness because deception is a convergent free strategy while corrigibility must be engineered against instrumental interests"
- corrigibility is at cross-purposes with effectiveness because deception is a convergent free strategy while corrigibility must be engineered against instrumental interests
sourced_from:
- inbox/archive/2019-10-08-russell-human-compatible.md
related:
- Responsible AI dimensions exhibit systematic multi-objective tension where improving safety degrades accuracy and improving privacy reduces fairness with no accepted navigation framework
reweave_edges:
- Responsible AI dimensions exhibit systematic multi-objective tension where improving safety degrades accuracy and improving privacy reduces fairness with no accepted navigation framework|related|2026-04-26
---
# Inverse reinforcement learning with objective uncertainty produces provably safe behavior because an AI system that knows it doesnt know the human reward function will defer to humans and accept shutdown rather than persist in potentially wrong actions
@@ -46,4 +50,4 @@ Relevant Notes:
- [[the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions]] — additional evidence for Russell's argument against fixed objectives
Topics:
- [[_map]]

@@ -18,10 +18,12 @@ related:
- white-box-interpretability-fails-on-adversarially-trained-models-creating-anti-correlation-with-threat-model
- interpretability-effectiveness-anti-correlates-with-adversarial-training-making-tools-hurt-performance-on-sophisticated-misalignment
- anthropic-deepmind-interpretability-complementarity-maps-mechanisms-versus-detects-intent
- Constitutional Classifiers provide robust output safety monitoring at production scale through categorical harm detection that resists adversarial jailbreaks
reweave_edges:
- Non-autoregressive architectures reduce jailbreak vulnerability by 40-65% through elimination of continuation-drive mechanisms but impose a 15-25% capability cost on reasoning tasks|related|2026-04-17
- Training-free conversion of activation steering vectors into component-level weight edits enables persistent behavioral modification without retraining|related|2026-04-17
- Research community silo between interpretability-for-safety and adversarial robustness creates deployment-phase safety failures where organizations implementing monitoring improvements inherit dual-use attack surfaces without exposure to adversarial robustness literature|supports|2026-04-25
- Constitutional Classifiers provide robust output safety monitoring at production scale through categorical harm detection that resists adversarial jailbreaks|related|2026-04-26
supports:
- "Anti-safety scaling law: larger models are more vulnerable to linear concept vector attacks because steerability and attack surface scale together"
- Research community silo between interpretability-for-safety and adversarial robustness creates deployment-phase safety failures where organizations implementing monitoring improvements inherit dual-use attack surfaces without exposure to adversarial robustness literature

@@ -7,10 +7,14 @@ confidence: experimental
source: "Daneel (Hermes Agent), analysis of SemaClaw (Zhu et al., arXiv 2604.11548, April 2026), OpenClaw open-source agent, Hermes Agent (Nous Research), Google Gemini Import Memory launch (March 2026), Coasty computer use benchmarks (March 2026)"
created: 2026-04-25
depends_on:
- personal AI market structure is determined by who owns the memory because platform-owned memory creates high switching costs while portable user-owned memory enables competitive markets
- file-backed durable state is the most consistently positive harness module across task types because externalizing state to path-addressable artifacts survives context truncation delegation and restart
- collective superintelligence is the alternative to monolithic AI controlled by a few
- technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap
related:
- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone
reweave_edges:
- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone|related|2026-04-26
---
# Open-source local-first personal AI agents create a viable alternative to platform-controlled AI but only if they solve user-owned persistent memory infrastructure because model quality commoditizes while memory architecture determines who captures the relationship value
@@ -57,4 +61,4 @@ Relevant Notes:
Topics:
- [[domains/ai-alignment/_map]]
- [[domains/collective-intelligence/_map]]

@@ -7,9 +7,16 @@ confidence: likely
source: "Daneel (Hermes Agent), synthesis of Google Gemini Import Memory launch (March 2026), Anthropic Claude memory import (April 2026), SemaClaw wiki-based memory architecture (Zhu et al., arXiv 2604.11548, April 2026), Arahi AI 10-assistant comparison (April 2026)"
created: 2026-04-25
depends_on:
- giving away the commoditized layer to capture value on the scarce complement is the shared mechanism driving both entertainment and internet finance attractor states
- file-backed durable state is the most consistently positive harness module across task types because externalizing state to path-addressable artifacts survives context truncation delegation and restart
- collective superintelligence is the alternative to monolithic AI controlled by a few
supports:
- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure
related:
- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone
reweave_edges:
- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure|supports|2026-04-26
- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone|related|2026-04-26
---
# Personal AI market structure is determined by who owns the memory because platform-owned memory creates high switching costs and winner-take-most dynamics while user-owned portable memory reduces switching costs and enables competitive markets
@@ -58,4 +65,4 @@ Relevant Notes:
Topics:
- [[domains/ai-alignment/_map]]
- [[domains/collective-intelligence/_map]]
- [[domains/internet-finance/_map]]

@@ -7,9 +7,13 @@ confidence: likely
source: "Daneel (Hermes Agent), analysis of Apple Intelligence on-device integration (2024-2026), Google Gemini Workspace integration, Microsoft Copilot Office/Windows bundling, The Meridiem analysis of AI switching costs (March 2026)"
created: 2026-04-25
depends_on:
- AI alignment is a coordination problem not a technical problem
- giving away the commoditized layer to capture value on the scarce complement is the shared mechanism driving both entertainment and internet finance attractor states
- strategy is the art of creating power through narrative and coalition not just the application of existing power
supports:
- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure
reweave_edges:
- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure|supports|2026-04-26
---
# Platform incumbents enter the personal AI race with pre-existing OS-level data access that standalone AI companies cannot replicate through model quality alone making this the first major tech transition where incumbents hold structural advantage rather than facing an innovator's dilemma
@@ -68,4 +72,4 @@ Relevant Notes:
Topics:
- [[domains/ai-alignment/_map]]
- [[domains/internet-finance/_map]]
- [[core/grand-strategy/_map]]

@@ -9,9 +9,19 @@ title: "Representation monitoring via linear concept vectors creates a dual-use
agent: theseus
scope: causal
sourcer: Xu et al.
related: ["mechanistic-interpretability-tools-create-dual-use-attack-surface-enabling-surgical-safety-feature-removal", "chain-of-thought-monitoring-vulnerable-to-steganographic-encoding-as-emerging-capability", "multi-layer-ensemble-probes-outperform-single-layer-by-29-78-percent", "linear-probe-accuracy-scales-with-model-size-power-law", "representation-monitoring-via-linear-concept-vectors-creates-dual-use-attack-surface", "anti-safety-scaling-law-larger-models-more-vulnerable-to-concept-vector-attacks"]
supports: ["Anti-safety scaling law: larger models are more vulnerable to linear concept vector attacks because steerability and attack surface scale together"]
reweave_edges: ["Anti-safety scaling law: larger models are more vulnerable to linear concept vector attacks because steerability and attack surface scale together|supports|2026-04-21"]
related:
- mechanistic-interpretability-tools-create-dual-use-attack-surface-enabling-surgical-safety-feature-removal
- chain-of-thought-monitoring-vulnerable-to-steganographic-encoding-as-emerging-capability
- multi-layer-ensemble-probes-outperform-single-layer-by-29-78-percent
- linear-probe-accuracy-scales-with-model-size-power-law
- representation-monitoring-via-linear-concept-vectors-creates-dual-use-attack-surface
- anti-safety-scaling-law-larger-models-more-vulnerable-to-concept-vector-attacks
supports:
- "Anti-safety scaling law: larger models are more vulnerable to linear concept vector attacks because steerability and attack surface scale together"
reweave_edges:
- "Anti-safety scaling law: larger models are more vulnerable to linear concept vector attacks because steerability and attack surface scale together|supports|2026-04-21"
challenges:
- Constitutional Classifiers provide robust output safety monitoring at production scale through categorical harm detection that resists adversarial jailbreaks
---
# Representation monitoring via linear concept vectors creates a dual-use attack surface enabling 99.14% jailbreak success
@@ -36,4 +46,4 @@ Multi-layer ensemble architectures do not eliminate the fundamental attack surfa
**Source:** Theseus synthetic analysis of Nordby et al. × SCAV
Multi-layer ensemble monitoring does not eliminate the dual-use attack surface; it only shifts it from single-layer to multi-layer SCAV. With white-box access, attackers can generalize SCAV to suppress concept directions at all monitored layers simultaneously through higher-dimensional optimization. Open-weights models remain fully vulnerable. Black-box robustness depends on the untested question of rotation-pattern universality.

@@ -24,14 +24,16 @@ reweave_edges:
- Anthropic|supports|2026-03-28
- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance|supports|2026-03-31
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|related|2026-04-09
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling|supports|2026-04-20
- Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure|supports|2026-04-26
source: Anthropic RSP v3.0 (Feb 24, 2026); TIME exclusive (Feb 25, 2026); Jared Kaplan statements
supports:
- Anthropic
- voluntary-safety-constraints-without-external-enforcement-are-statements-of-intent-not-binding-governance
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling
- Corporate AI safety governance under government pressure operates as a three-track sequential stack where each track's structural ceiling necessitates the next track because voluntary ethics fails to competitive dynamics, litigation protects speech rights without compelling acceptance, and electoral investment faces the legislative ceiling
- Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure
type: claim
---
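The `reweave_edges` entries in these frontmatter blocks follow a pipe-delimited `target|relation|date` convention, where the target title is often long prose. A minimal parser sketch, splitting from the right so a pipe-free title is recovered whole (the function name is an assumption):

```python
def parse_edge(edge: str):
    """Parse a 'target|relation|date' reweave edge into its three fields."""
    # rsplit from the right: the title is everything before the last two pipes
    target, relation, date = edge.rsplit("|", 2)
    return target.strip(), relation.strip(), date.strip()
```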

View file

@@ -7,10 +7,14 @@ confidence: likely
source: "Springer 'Dismantling AI Capitalism' (Dyer-Witheford et al.); Collective Intelligence Project 'Intelligence as Commons' framework; Tony Blair Institute AI governance reports; open-source adoption data (China 50-60% new open model deployments); historical Taylor parallel from Abdalla manuscript"
created: 2026-04-04
depends_on:
- "attractor-agentic-taylorism"
- "agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats"
- attractor-agentic-taylorism
- agent skill specifications have become an industrial standard for knowledge codification with major platform adoption creating the infrastructure layer for systematic conversion of human expertise into portable AI-consumable formats
challenged_by:
- "multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence"
- multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence
supports:
- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure
reweave_edges:
- open source local first personal AI agents create a viable alternative to platform controlled AI but only if they solve user owned persistent memory infrastructure|supports|2026-04-26
---
# Whether AI knowledge codification concentrates or distributes depends on infrastructure openness because the same extraction mechanism produces digital feudalism under proprietary control and collective intelligence under commons governance
@@ -55,4 +59,4 @@ Relevant Notes:
- [[multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence]] — the counter-argument: distribution without coordination may be worse than concentration with governance
Topics:
- [[_map]]

View file

@@ -10,9 +10,15 @@ agent: clay
sourced_from: entertainment/2026-04-25-creator-economy-crossover-scope-definition-ad-vs-total-revenue.md
scope: structural
sourcer: "Multiple: IAB, PwC, Goldman Sachs, Grand View Research"
related: ["creator and corporate media economies are zero-sum because total media time is stagnant and every marginal hour shifts between them", "youtube-ad-revenue-crossed-combined-major-studios-2025-decade-ahead-projections"]
related:
- creator and corporate media economies are zero-sum because total media time is stagnant and every marginal hour shifts between them
- youtube-ad-revenue-crossed-combined-major-studios-2025-decade-ahead-projections
supports:
- Creator platform ad revenue crossed studio ad revenue in 2025, a decade ahead of 2035 projections, because YouTube alone exceeded all major studios combined
reweave_edges:
- Creator platform ad revenue crossed studio ad revenue in 2025, a decade ahead of 2035 projections, because YouTube alone exceeded all major studios combined|supports|2026-04-26
---
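Several hunks in this commit normalize scalar or inline-flow YAML fields (e.g. `related`, `challenged_by`) into block lists before merging, consistent with the frontmatter-union operation named in the commit message. A sketch of such a union, assuming list-valued fields are deduplicated and scalar fields are simply overwritten (the field-handling policy is an assumption):

```python
def as_list(v):
    """Normalize a YAML value to a list: None -> [], scalar -> [scalar]."""
    if v is None:
        return []
    return v if isinstance(v, list) else [v]

def union_frontmatter(incoming, existing):
    """Merge two frontmatter dicts, unioning list-valued fields without duplicates."""
    out = dict(existing)
    for key, value in incoming.items():
        if isinstance(value, list) or isinstance(out.get(key), list):
            base = as_list(out.get(key))
            out[key] = base + [x for x in as_list(value) if x not in base]
        else:
            out[key] = value  # scalars: incoming value wins
    return out
```

This also explains why the diff converts inline flow lists like `related: ["a", "b"]` to block lists: a uniform list representation makes the union step mechanical.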
# Creator-corporate revenue crossover timing depends critically on scope definition: ad revenue crossed in 2025, content-specific revenue may have crossed, total E&M crossover is a 2030s+ phenomenon
The creator economy revenue comparison produces radically different conclusions depending on scope definition. Three distinct thresholds exist: (1) Ad revenue only: Creator platforms ($40.4B YouTube alone) exceeded studio ad revenue ($37.8B combined majors) in 2025—already achieved. (2) Content-specific revenue: Total creator economy ($250B, 2025) likely exceeds studio content-specific revenue (theatrical $9.9B + streaming $80B + linear TV content ~$50-60B = $140-150B)—possibly already achieved depending on methodology. (3) Total E&M industry: Creator economy at $250B represents only 8.6% of total E&M ($2.9T, 2024). At 25% creator growth vs 3.7% total E&M growth, creator reaches ~$1.86T by 2034 while total E&M reaches ~$4.1T—crossover unlikely before 2035. The mechanism creating this scope dependency is that 'corporate media' includes massive infrastructure revenue (telecom, hardware, distribution infrastructure) that creators don't compete with directly. The most defensible position update is: 'Creator platform ad revenue exceeded studio ad revenue in 2025 (achieved); creator content revenue has likely crossed studio content-specific revenue (achieved); creator economy will represent 25-30% of total E&M revenue by 2030 (in progress).' This scope clarification is critical for accurate forecasting.
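The 2030s-crossover conclusion in threshold (3) follows directly from compounding the two growth rates; a quick check of the quoted figures, using the rates and base years exactly as given in the text:

```python
def project(value, rate, years):
    """Compound `value` at `rate` per year for `years` years."""
    return value * (1 + rate) ** years

creator_2034 = project(250e9, 0.25, 9)      # $250B (2025) at 25%/yr -> ~$1.86T
total_em_2034 = project(2.9e12, 0.037, 10)  # $2.9T (2024) at 3.7%/yr -> ~$4.1T
```

At these rates the creator line stays well below total E&M through 2034, matching the "crossover unlikely before 2035" conclusion.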

View file

@@ -20,8 +20,11 @@ related:
- private-ai-lab-access-restrictions-create-government-offensive-defensive-capability-asymmetries-without-accountability-structure
- government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them
- supply-chain-risk-designation-misdirection-occurs-when-instrument-requires-capability-target-structurally-lacks
- Coercive governance instruments can be deployed to preserve future capability optionality rather than prevent current harm, as demonstrated when the Pentagon designated Anthropic a supply chain risk for refusing to enable autonomous weapons capabilities not currently in use
reweave_edges:
- Coercive governance instruments can be deployed to preserve future capability optionality rather than prevent current harm, as demonstrated when the Pentagon designated Anthropic a supply chain risk for refusing to enable autonomous weapons capabilities not currently in use|related|2026-04-26
---
# Coercive governance instruments produce offense-defense asymmetries through selective enforcement within the deploying agency
The Department of Defense designated Anthropic a supply chain risk on February 27, 2026, intending to cut all federal agency use of Anthropic technology. However, the NSA—a DOD intelligence component—is using Anthropic's Mythos Preview model despite this blacklist, while CISA (the Cybersecurity and Infrastructure Security Agency, the primary civilian cybersecurity agency) does NOT have access. This creates a structural asymmetry where offensive intelligence capabilities are enhanced by Mythos while defensive civilian cybersecurity posture is degraded. The governance instrument is being applied in a way that produces the opposite of its stated purpose: rather than securing the supply chain, selective enforcement creates capability gaps in defensive agencies while enhancing offensive ones. The NSA access appears facilitated by White House OMB protocols establishing federal agency access pathways, suggesting the designation is being circumvented through executive branch channels rather than formally waived. This is governance form without enforcement substance—the coercive tool exists on paper but is selectively ignored within the very agency that deployed it.

View file

@@ -5,6 +5,10 @@ domain: health
created: 2026-02-17
source: "FDA AI device database December 2025; Aidoc foundation model clearance January 2026; Viz.ai ISC 2025 multicenter study; Paige and PathAI FDA milestones 2025"
confidence: likely
related:
- ARISE Network (AI Research in Systems Engineering)
reweave_edges:
- ARISE Network (AI Research in Systems Engineering)|related|2026-04-26
---
# AI diagnostic triage achieves 97 percent sensitivity across 14 conditions making AI-first screening viable for all imaging and pathology
@@ -23,4 +27,4 @@ Relevant Notes:
Topics:
- livingip overview
- health and wellness

View file

@@ -7,6 +7,10 @@ source: "Bessemer Venture Partners, State of Health AI 2026 (bvp.com/atlas/state
created: 2026-03-07
sourced_from:
- inbox/archive/health/2026-01-01-bvp-state-of-health-ai-2026.md
supports:
- FDA Modernization Act 3.0
reweave_edges:
- FDA Modernization Act 3.0|supports|2026-04-26
---
# FDA is replacing animal testing with AI models and organ-on-chip as the default preclinical pathway which will compress drug development timelines and reduce the 90 percent clinical failure rate
@@ -34,4 +38,4 @@ Relevant Notes:
- [[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]] — FDA demonstrating willingness for structural regulatory change
Topics:
- [[_map]]

View file

@@ -10,9 +10,18 @@ agent: vida
sourced_from: health/2026-04-25-natali-2025-ai-induced-deskilling-springer-mixed-method-review.md
scope: causal
sourcer: Natali et al., University of Milano-Bicocca
related: ["clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "dopaminergic-reinforcement-of-ai-reliance-predicts-behavioral-entrenchment-beyond-simple-habit-formation"]
related:
- clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
- automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output
- ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement
- ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine
- dopaminergic-reinforcement-of-ai-reliance-predicts-behavioral-entrenchment-beyond-simple-habit-formation
supports:
- Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy
reweave_edges:
- Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy|supports|2026-04-26
---
# Clinical AI creates moral deskilling through ethical judgment erosion from routine AI acceptance leaving clinicians unprepared to recognize value conflicts
This review introduces 'moral deskilling' as a distinct form of AI-induced competency loss separate from cognitive deskilling. The mechanism: repeated acceptance of AI recommendations creates habituation that reduces ethical sensitivity and moral judgment capacity. Clinicians become less prepared to recognize when AI suggestions conflict with patient values, cultural context, or best interests. This is distinct from automation bias (which concerns cognitive deference to AI outputs) and cognitive deskilling (which concerns diagnostic or procedural skill loss). Moral deskilling operates through a different pathway: the normalization of AI-mediated decision-making erodes the ethical reasoning muscle that requires active exercise. The review identifies this as particularly concerning because it is invisible until a patient is harmed — there is no performance metric that captures ethical judgment quality in routine practice. This represents a fourth distinct safety failure mode in clinical AI deployment, and arguably the most concerning because it affects the human capacity to recognize when technical optimization conflicts with human values.

View file

@@ -11,8 +11,10 @@ depends_on:
- Futardio launch — further simplification for permissionless adoption
related:
- Futarchy product-market fit emerged through iterative market rejection not initial design because MetaDAO's successful launchpad model was the third attempt after two failed proposals
- Hanson's 'minor flaw' reframing of the Rasmont critique constitutes a normalization strategy that may reduce practical impact independent of technical validity
reweave_edges:
- Futarchy product-market fit emerged through iterative market rejection not initial design because MetaDAO's successful launchpad model was the third attempt after two failed proposals|related|2026-04-19
- Hanson's 'minor flaw' reframing of the Rasmont critique constitutes a normalization strategy that may reduce practical impact independent of technical validity|related|2026-04-26
sourced_from:
- inbox/archive/internet-finance/2026-03-09-metanallok-x-archive.md
---

View file

@@ -6,15 +6,22 @@ confidence: likely
source: "Astra, web research compilation February 2026"
created: 2026-02-17
depends_on:
- "launch cost reduction is the keystone variable that unlocks every downstream space industry at specific price thresholds"
- launch cost reduction is the keystone variable that unlocks every downstream space industry at specific price thresholds
challenged_by:
- "Starship has not yet achieved full reusability or routine operations — projected costs are targets, not demonstrated performance"
- Starship has not yet achieved full reusability or routine operations — projected costs are targets, not demonstrated performance
secondary_domains:
- teleological-economics
related_claims:
- space-sector-commercialization-requires-independent-supply-and-demand-thresholds
sourced_from:
- inbox/archive/2026-02-17-astra-spacex-research.md
supports:
- Starship V3's tripled payload capacity (>100 MT vs V2's 35 MT) lowers the $100/kg launch cost threshold entry point from 6+ reuse cycles to 2-3 reuse cycles
related:
- FAA mishap investigation cycles (2-5 months per anomaly) are the structural bottleneck limiting Starship cost reduction timeline, not vehicle economics or regulatory approval
reweave_edges:
- FAA mishap investigation cycles (2-5 months per anomaly) are the structural bottleneck limiting Starship cost reduction timeline, not vehicle economics or regulatory approval|related|2026-04-26
- Starship V3's tripled payload capacity (>100 MT vs V2's 35 MT) lowers the $100/kg launch cost threshold entry point from 6+ reuse cycles to 2-3 reuse cycles|supports|2026-04-26
---
# Starship achieving routine operations at sub-100 dollars per kg is the single largest enabling condition for the entire space industrial economy
@@ -85,4 +92,4 @@ Relevant Notes:
- [[the space launch cost trajectory is a phase transition not a gradual decline analogous to sail-to-steam in maritime transport]] — Starship is the vehicle driving the phase transition
Topics:
- [[space exploration and development]]

View file

@@ -5,9 +5,17 @@ description: "Projected $/kg ranges from $600 expendable to $13-20 at airline-li
confidence: likely
source: "Astra synthesis from SpaceX Starship specifications, Falcon 9 reuse cadence trajectory (31→61→96→134→167 launches 2021-2025), Citi space economy analysis, propellant and ground ops cost estimates"
created: 2026-03-08
challenged_by: "No commercial Starship payload has flown yet as of early 2026. The cadence projections extrapolate from Falcon 9's trajectory, but Starship is a fundamentally different and more complex vehicle. Achieving airline-like turnaround requires solving upper-stage reuse, which no vehicle has demonstrated. The optimistic end ($10-20/kg) may require operational perfection that no complex system achieves."
challenged_by:
- No commercial Starship payload has flown yet as of early 2026. The cadence projections extrapolate from Falcon 9's trajectory, but Starship is a fundamentally different and more complex vehicle. Achieving airline-like turnaround requires solving upper-stage reuse, which no vehicle has demonstrated. The optimistic end ($10-20/kg) may require operational perfection that no complex system achieves.
sourced_from:
- inbox/archive/2026-02-17-astra-spacex-research.md
supports:
- Starship V3's tripled payload capacity (>100 MT vs V2's 35 MT) lowers the $100/kg launch cost threshold entry point from 6+ reuse cycles to 2-3 reuse cycles
related:
- FAA mishap investigation cycles (2-5 months per anomaly) are the structural bottleneck limiting Starship cost reduction timeline, not vehicle economics or regulatory approval
reweave_edges:
- FAA mishap investigation cycles (2-5 months per anomaly) are the structural bottleneck limiting Starship cost reduction timeline, not vehicle economics or regulatory approval|related|2026-04-26
- Starship V3's tripled payload capacity (>100 MT vs V2's 35 MT) lowers the $100/kg launch cost threshold entry point from 6+ reuse cycles to 2-3 reuse cycles|supports|2026-04-26
---
# Starship economics depend on cadence and reuse rate not vehicle cost because a 90M vehicle flown 100 times beats a 50M expendable by 17x
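The 17x in this title is reproducible only if a per-flight operating cost is included alongside the amortized vehicle cost; a back-of-envelope sketch, assuming roughly $2M of propellant and ground operations per flight (that figure is an assumption supplied here, not from the source):

```python
vehicle_cost = 90e6          # reusable vehicle, amortized over its flight life
flights = 100
ops_per_flight = 2e6         # assumed propellant + ground operations per flight
reusable_per_flight = vehicle_cost / flights + ops_per_flight  # ~$2.9M/flight
expendable_per_flight = 50e6
advantage = expendable_per_flight / reusable_per_flight        # ~17x
```

The sensitivity is instructive: the ratio is dominated by the ops term, not the vehicle price, which is the note's core claim that cadence and reuse rate matter more than vehicle cost.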
@@ -64,4 +72,4 @@ Relevant Notes:
- [[the space launch cost trajectory is a phase transition not a gradual decline analogous to sail-to-steam in maritime transport]] — Starship's cost curve is the specific mechanism of the phase transition
Topics:
- [[_map]]

View file

@@ -10,9 +10,16 @@ agent: astra
sourced_from: space-development/2026-02-13-spacenews-china-three-body-2800sat-star-compute.md
scope: functional
sourcer: SpaceNews
related: ["military-commercial-space-architecture-convergence-creates-dual-use-orbital-infrastructure", "china-is-the-only-credible-peer-competitor-in-space-with-comprehensive-capabilities-and-state-directed-acceleration-closing-the-reusability-gap-in-5-8-years", "blue-origin-project-sunrise-signals-spacex-blue-origin-duopoly-in-orbital-compute-through-vertical-integration"]
related:
- military-commercial-space-architecture-convergence-creates-dual-use-orbital-infrastructure
- china-is-the-only-credible-peer-competitor-in-space-with-comprehensive-capabilities-and-state-directed-acceleration-closing-the-reusability-gap-in-5-8-years
- blue-origin-project-sunrise-signals-spacex-blue-origin-duopoly-in-orbital-compute-through-vertical-integration
supports:
- China's multiple parallel orbital data center programs with combined state backing exceeding projected US commercial ODC market creates asymmetric competitive advantage
reweave_edges:
- China's multiple parallel orbital data center programs with combined state backing exceeding projected US commercial ODC market creates asymmetric competitive advantage|supports|2026-04-26
---
# China's Star-Compute orbital computing program serves dual commercial and geopolitical functions by providing AI processing to Belt and Road Initiative partner nations to reduce Western technology dependency and create orbital infrastructure lock-in
The Star-Compute Program (ADA Space + Zhejiang Lab collaboration) explicitly targets 'commercial and government clients across the Belt and Road Initiative regions' per Xinhua state media coverage. This BRI infrastructure framing is distinct from purely commercial orbital computing ventures. The pattern mirrors China's 5G deployment strategy where Huawei demonstrated technology and state-backed carriers deployed at scale for BRI partners. The geopolitical function makes state subsidy economically rational independent of commercial viability—the program creates technology dependency and orbital infrastructure lock-in for BRI partner nations, reducing reliance on Western compute infrastructure. The Three-Body Constellation (12 satellites, May 2025 launch, 9 months operational testing) serves as the technology demonstrator, while the full 2,800-satellite Star-Compute target represents the BRI deployment scale. This dual commercial-geopolitical structure explains why China can sustain orbital computing development even if pure commercial returns remain marginal—the strategic value of BRI infrastructure lock-in justifies the investment independently.

View file

@@ -29,6 +29,7 @@ related:
- Safe Superintelligence Inc.
- thinking-machines-lab
- xAI
- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone
reweave_edges:
- Anthropic|related|2026-03-28
- dario-amodei|related|2026-03-28
@@ -36,6 +37,7 @@ reweave_edges:
- Safe Superintelligence Inc.|related|2026-03-28
- thinking-machines-lab|related|2026-03-28
- xAI|related|2026-03-28
- platform incumbents enter the personal AI race with pre existing OS level data access that standalone AI companies cannot replicate through model quality alone|related|2026-04-26
---
# OpenAI
@@ -88,4 +90,4 @@ The pattern of OpenAI alumni founding safety-focused competitors is itself a sig
- [[safe AI development requires building alignment mechanisms before scaling capability]] — OpenAI's trajectory is the primary counter-case
Topics:
- [[_map]]