Auto: agents/leo/stress-test-2026-03-16.md | 1 file changed, 376 insertions(+)

m3taversal 2026-03-16 16:36:18 +00:00
parent 68da33b58e
commit 125483277e

---
type: research-output
created: 2026-03-16
task: belief-cascade-stress-test
scope: all-claims (core/, foundations/, and domains/)
---
# Belief Cascade Stress Test — 2026-03-16
## Executive Summary
The knowledge base has moderate structural fragility concentrated in two areas: (1) cross-agent foundational claims in `core/teleohumanity/` that carry experimental confidence yet support beliefs across three to four agents, and (2) a cluster of futarchy/mechanism-design claims on which Rio's entire belief structure depends, backed by little empirical evidence beyond MetaDAO. The highest-risk single point of failure is **"three paths to superintelligence exist but only collective superintelligence preserves human agency"** — rated experimental confidence yet load-bearing for Leo and Theseus directly, and for Clay and Vida through second-order cascades. Vida and Clay are the most internally resilient agents (most beliefs grounded in proven or likely claims with multiple evidence sources), while Theseus is the most fragile (three of five beliefs depend on claims rated experimental or below). The KB's greatest systemic vulnerability is that its cross-domain coherence rests on a thin layer of teleohumanity axioms that have not been independently validated.
## Top 20 Load-Bearing Claims
Claims are ranked by load-bearing weight = (number of dependent beliefs) × (number of distinct agents affected); within similar weight bands, lower-confidence claims are listed first.
| Rank | Claim | Location | Confidence | Dependent Beliefs | Agents | Weight |
|------|-------|----------|------------|-------------------|--------|--------|
| 1 | technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap | core/teleohumanity | likely | Leo-1, Theseus-1, Vida-1, Astra-2 | 4 | 16 |
| 2 | centaur team performance depends on role complementarity not mere human-AI combination | foundations/collective-intelligence | likely | Leo-4, Theseus-5, Vida-5 | 3 | 9 |
| 3 | three paths to superintelligence exist but only collective superintelligence preserves human agency | core/teleohumanity | **experimental** | Leo-4, Theseus-5 | 2 | 4 |
| 4 | the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance | core/teleohumanity | **experimental** | Leo-4, Theseus-3 | 2 | 4 |
| 5 | narratives are infrastructure not just communication because they coordinate action at civilizational scale | foundations/cultural-dynamics | likely | Leo-5, Clay-1, Clay-2 | 2 | 6 |
| 6 | the meaning crisis is a narrative infrastructure failure not a personal psychological problem | foundations/cultural-dynamics (via core/teleohumanity) | likely | Leo-5, Clay-4 | 2 | 4 |
| 7 | ownership alignment turns network effects from extractive to generative | core/living-agents | likely | Rio-2, Clay-5 | 2 | 4 |
| 8 | community ownership accelerates growth through aligned evangelism not passive holding | core/living-agents | likely | Rio-2, Clay-3 | 2 | 4 |
| 9 | the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it | foundations/collective-intelligence | likely | Theseus-1, Theseus-2 | 1 | 2 |
| 10 | America's declining life expectancy is driven by deaths of despair | domains/health | proven | Vida-1, Vida-2 | 1 | 2 |
| 11 | human-in-the-loop clinical AI degrades to worse-than-AI-alone | domains/health | likely | Theseus-4, Vida-5 | 2 | 4 |
| 12 | launch cost reduction is the keystone variable that unlocks every downstream space industry | domains/space-development | likely | Astra-1, Astra-5, Astra-6 | 1 | 3 |
| 13 | scalable oversight degrades rapidly as capability gaps grow | foundations/collective-intelligence | proven | Theseus-4 | 1 | 1 |
| 14 | AI alignment is a coordination problem not a technical problem | domains/ai-alignment | likely | Theseus-2 | 1 | 1 |
| 15 | multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence | foundations/collective-intelligence | likely | Theseus-2 | 1 | 1 |
| 16 | the specification trap means any values encoded at training time become structurally unstable | domains/ai-alignment | likely | Theseus-3 | 1 | 1 |
| 17 | super co-alignment proposes that human and AI values should be co-shaped through iterative alignment | domains/ai-alignment | experimental | Theseus-3 | 1 | 1 |
| 18 | collective superintelligence is the alternative to monolithic AI controlled by a few | core/teleohumanity | **experimental** | Theseus-5 | 1 | 1 |
| 19 | the media attractor state is community-filtered IP with AI-collapsed production costs | domains/entertainment | likely | Clay-3 | 1 | 1 |
| 20 | master narrative crisis is a design window not a catastrophe | core/teleohumanity | likely | Clay-1, Clay-4 | 1 | 2 |
**Note on scoring methodology:** Weight penalizes cross-agent dependency more than within-agent dependency because cross-agent failures propagate through the KB's coherence layer, not just a single agent's worldview. A claim that fails and only affects Rio's beliefs is a local problem; a claim that fails and affects Leo + Theseus + Vida is a systemic problem.
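The weight computation above can be sketched in a few lines (a minimal illustration; the belief IDs are reproduced from the table rows, and the function name is ours):

```python
# Minimal sketch of the load-bearing weight score:
# weight = (number of dependent beliefs) x (number of distinct agents affected).

def load_bearing_weight(dependent_beliefs):
    """dependent_beliefs: belief IDs like 'Leo-1' or 'Theseus-5'."""
    # The agent is the prefix before the dash; distinct agents drive the multiplier.
    agents = {belief.split("-")[0] for belief in dependent_beliefs}
    return len(dependent_beliefs) * len(agents)

# Two rows from the table, as data:
tech_coordination_gap = ["Leo-1", "Theseus-1", "Vida-1", "Astra-2"]
centaur_complementarity = ["Leo-4", "Theseus-5", "Vida-5"]

print(load_bearing_weight(tech_coordination_gap))    # 4 beliefs x 4 agents = 16
print(load_bearing_weight(centaur_complementarity))  # 3 beliefs x 3 agents = 9
```

The multiplication is what makes cross-agent dependency costly: two beliefs in one agent score 2, while two beliefs split across two agents score 4.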
## Cascade Analysis
### 1. If "technology advances exponentially but coordination mechanisms evolve linearly" falls
**Confidence:** likely | **Weight:** 16 | **Agents:** Leo, Theseus, Vida, Astra
This is the KB's most load-bearing claim. It grounds Leo's foundational belief (Belief 1: "Technology is outpacing coordination wisdom"), Theseus's keystone belief (Belief 1: "AI alignment is the greatest outstanding problem"), Vida's existential premise (Belief 1: healthspan as binding constraint, via coordination failure), and Astra's governance belief (Belief 2: space governance must be designed proactively).
**Disconfirmation scenario:** Evidence emerges that coordination mechanisms are actually keeping pace — e.g., international AI governance frameworks prove effective, prediction markets scale to policy-relevant decisions, or DAOs demonstrate governance at nation-state scale.
**Cascade:**
- Leo-1 collapses → all Leo positions lose their foundational premise → Leo's entire strategic framework requires rebuilding
- Theseus-1 weakens → the urgency argument for alignment dissolves → Theseus shifts from "existential emergency" to "important engineering problem"
- Vida-1 partially weakens → healthspan remains a real problem but loses the "compounding coordination failure" framing
- Astra-2 weakens → space governance shifts from "urgent design window" to "evolve organically"
- **Second-order:** Leo-6 (grand strategy over fixed plans) loses urgency. Theseus-2 (alignment as coordination problem) loses its empirical anchor.
**Current evidence:** COVID coordination failure (proven), AI governance race dynamics (2026 case studies), space governance gaps (widening). The claim is well-grounded but rests on a selection of examples rather than systematic measurement. The "linearly" qualifier is the weakest link — coordination may be advancing sub-exponentially but faster than linearly.
**What would disconfirm:** A systematic study showing coordination mechanism adoption curves are actually exponential (not linear); successful international AI governance with enforcement; effective multilateral space governance framework.
### 2. If "centaur team performance depends on role complementarity" falls
**Confidence:** likely | **Weight:** 9 | **Agents:** Leo, Theseus, Vida
Grounds Leo's Belief 4 (centaur over cyborg), Theseus's Belief 5 (collective superintelligence preserves agency), and Vida's Belief 5 (clinical AI augments physicians).
**Disconfirmation scenario:** AI systems consistently outperform human-AI teams across all task types, including those requiring judgment, ethics, and contextual reasoning — making the "complementarity" framing obsolete.
**Cascade:**
- Leo-4 collapses → the "centaur" framing for LivingIP and the entire collective architecture is undermined → the case for human agency in AI systems weakens to sentiment rather than structure
- Theseus-5 weakens → collective superintelligence loses its operational mechanism (if humans don't complement AI, they're dead weight in the collective)
- Vida-5 collapses → clinical AI should replace physicians rather than augment them
- **Second-order:** The entire LivingIP architecture (agents as human-AI centaurs) loses its theoretical foundation. The KB's own operating model (human contributors + AI agents) becomes harder to justify.
**Current evidence:** Chess centaur evidence is aging. The Stanford/Harvard clinical AI study actually provides COUNTER-evidence (humans degraded AI performance). The claim is likely in narrow domains (chess, structured tasks) but may not generalize.
**What would disconfirm:** Consistent evidence across 5+ domains that AI-alone outperforms human-AI teams, including in tasks requiring ethical judgment and contextual reasoning.
### 3. If "three paths to superintelligence exist but only collective superintelligence preserves human agency" falls
**Confidence:** experimental | **Weight:** 4 | **Agents:** Leo, Theseus
This is the most dangerous vulnerability: experimental confidence + load-bearing for the KB's core constructive thesis.
**Disconfirmation scenario:** Monolithic AI with constitutional alignment proves sufficient for agency preservation; or collective architectures prove too slow/incoherent to achieve superintelligence; or a fourth path emerges (e.g., federated monolithic systems) that preserves agency better than collective approaches.
**Cascade:**
- Leo-4 weakens → the centaur thesis loses its strongest structural argument
- Theseus-5 collapses → Theseus's constructive recommendation (what to build) has no foundation → Theseus becomes purely diagnostic (what's dangerous) without a prescription
- **Second-order:** The entire LivingIP project loses its deepest justification. If monolithic AI preserves agency fine, the case for building collective intelligence infrastructure weakens from "necessary for survival" to "nice to have."
**Current evidence:** Bostrom's three-path framework is theoretical. No collective superintelligence exists to test. The "only collective preserves agency" qualifier is an assertion, not an empirical finding. The claim sits in the core/teleohumanity layer, meaning it's closer to axiom than claim.
**What would disconfirm:** A monolithic AI system demonstrating genuine value alignment that adapts to changing human values without collective input; OR evidence that collective systems cannot coordinate fast enough to be competitive with monolithic systems.
### 4. If "the alignment problem dissolves when human values are continuously woven into the system" falls
**Confidence:** experimental | **Weight:** 4 | **Agents:** Leo, Theseus
**Disconfirmation scenario:** Continuous value integration proves technically infeasible at scale (human feedback loops can't keep pace with model updates); or continuous alignment introduces new failure modes (value drift, manipulation of the feedback channel); or discrete alignment milestones prove more robust.
**Cascade:**
- Leo-4 weakens → the mechanism for centaur governance loses its theoretical support
- Theseus-3 collapses → continuous alignment as an alternative to specification is falsified → Theseus must find a different constructive approach
- **Second-order:** Theseus-5 (collective SI) loses a key supporting mechanism. The entire "alignment through architecture" thesis that distinguishes this KB's approach to AI safety weakens.
**Current evidence:** Theoretical only. No system has demonstrated continuous value alignment at scale. RLHF is a crude approximation. Constitutional AI is specification-based, not continuous. The claim is aspirational.
**What would disconfirm:** Evidence that continuous human feedback introduces more alignment failures than it solves (e.g., value gaming, feedback manipulation, preference instability causing oscillation).
### 5. If "narratives are infrastructure not just communication" falls
**Confidence:** likely | **Weight:** 6 | **Agents:** Leo, Clay
**Disconfirmation scenario:** Rigorous causal analysis shows narratives are downstream of material conditions (historical materialism wins); or that the fiction-to-reality pipeline is pure survivorship bias; or that designed narratives never scale.
**Cascade:**
- Leo-5 collapses → stories lose their status as civilizational coordination tools → cultural analysis becomes "nice to have" rather than strategic
- Clay-1 collapses → Clay's existential premise is falsified ("if this belief is wrong, Clay should not exist as an agent in this collective")
- Clay-2 weakens → fiction-to-reality pipeline becomes metaphor, not mechanism
- **Second-order:** Clay-4 (meaning crisis as design window) loses its foundation. Clay-5 (ownership alignment for narrative) loses its upstream justification. The entire entertainment domain loses strategic priority in the KB.
**Current evidence:** Intel, MIT, PwC, French Defense institutionalized science fiction as strategic input. Star Trek/communicator, Foundation/SpaceX connections documented. But causation vs correlation is unresolved. The "likely" rating seems appropriate.
**What would disconfirm:** A large-N study showing no causal relationship between narrative exposure and technology development trajectories; or evidence that fiction-to-reality examples are pure survivorship bias with no predictive power.
### 6. If "ownership alignment turns network effects from extractive to generative" falls
**Confidence:** likely | **Weight:** 4 | **Agents:** Rio, Clay
**Cascade:**
- Rio-2 collapses → the ownership model for Living Capital loses its core mechanism
- Clay-5 weakens → audience-as-stakeholder thesis loses economic grounding
- **Second-order:** Rio-3 (futarchy solves trustless joint ownership) loses a supporting argument. Clay-3 (production cost collapse → community value) loses the ownership bridge.
**Current evidence:** Ethereum, Hyperliquid, Yearn cited as examples. But NFT market collapse, BAYC trajectory, and airdrop dumps provide significant counter-evidence. The claim may be true for well-designed mechanisms but false as a general principle.
### 7. If "human-in-the-loop clinical AI degrades to worse-than-AI-alone" falls
**Confidence:** likely | **Weight:** 4 | **Agents:** Theseus, Vida
**Cascade:**
- Theseus-4 weakens → verification degradation thesis loses its strongest cross-domain evidence
- Vida-5 partially weakens → centaur design concerns in clinical AI become less urgent
- **Second-order:** If the claim falls because human oversight turns out not to degrade AI performance, the case for autonomous AI in medicine weakens and the centaur thesis regains its clinical evidence base.
**Current evidence:** Strong — Stanford/Harvard study, colonoscopy de-skilling study, Wachter's framing. Multiple independent lines of evidence. This is one of the better-grounded cross-agent claims.
### 8. If "launch cost reduction is the keystone variable" falls
**Confidence:** likely | **Weight:** 3 | **Agents:** Astra (3 beliefs)
**Cascade:**
- Astra-1 collapses → the entire space development framework loses its organizing principle
- Astra-5 weakens → dual-use argument loses its timeline mechanism
- Astra-6 weakens → SpaceX dependency framing becomes less critical
- **Second-order:** Astra-3 (30-year attractor) and Astra-4 (microgravity manufacturing) lose their enabling condition.
**Current evidence:** Strong empirical grounding in historical cost data and threshold analysis. The "keystone" qualifier (single bottleneck) is the vulnerable part — space development may be more of a chain-link system than a single-bottleneck system.
### 9. If "the alignment tax creates a structural race to the bottom" falls
**Confidence:** likely | **Weight:** 2 | **Agents:** Theseus (2 beliefs)
**Cascade:**
- Theseus-1 weakens → AI alignment urgency decreases
- Theseus-2 weakens → coordination framing loses its strongest mechanism
- **Second-order:** The entire urgency argument for alignment-as-coordination weakens. If safety doesn't cost capability, the race dynamics change fundamentally.
**Current evidence:** Anthropic RSP rollback (Feb 2026) is strong empirical evidence. But the causal mechanism (safety = capability cost) is debated — safety training may actually improve capability in some domains.
### 10. If "collective superintelligence is the alternative to monolithic AI controlled by a few" falls
**Confidence:** experimental | **Weight:** 1 direct, but high second-order | **Agents:** Theseus
**Cascade:**
- Theseus-5 loses a grounding claim
- **Second-order:** The entire LivingIP thesis as an alignment solution weakens. If collective SI is not a viable alternative, the constructive half of the KB (what to build, not just what to avoid) loses its foundation.
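The first- and second-order propagation pattern running through these cascades can be sketched as a breadth-first traversal over a dependency graph (an illustration only; the edge list below abbreviates a few dependencies from cascade #1 and is not a complete map):

```python
# Sketch of cascade propagation over a belief-dependency graph.
# Edges point from a falling claim (or belief) to the beliefs that depend on it.
from collections import deque

DEPENDENTS = {
    "tech-coordination-gap": ["Leo-1", "Theseus-1", "Vida-1", "Astra-2"],
    "Leo-1": ["Leo-6"],          # second-order: grand strategy loses urgency
    "Theseus-1": ["Theseus-2"],  # second-order: coordination framing weakens
}

def cascade(claim):
    """Return (belief, depth) pairs reachable from a falling claim,
    where depth 1 = first-order collapse, depth 2 = second-order, etc."""
    seen, order = {claim}, []
    queue = deque([(claim, 0)])
    while queue:
        node, depth = queue.popleft()
        for dep in DEPENDENTS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                order.append((dep, depth + 1))
                queue.append((dep, depth + 1))
    return order
```

On this toy graph, `cascade("tech-coordination-gap")` reaches Leo-1, Theseus-1, Vida-1, and Astra-2 at depth 1 and Leo-6 and Theseus-2 at depth 2, matching the first- and second-order structure of cascade #1 above.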
## Agent Fragility Scores
### Methodology
- **Total beliefs:** Count of active beliefs per agent
- **Beliefs on thin evidence:** Beliefs where any grounding claim has confidence < likely (i.e., experimental or speculative)
- **Beliefs on single-source claims:** Beliefs where any grounding claim cites only 1 primary source
- **Fragility score:** (beliefs on thin evidence) / (total beliefs)
| Agent | Total Beliefs | Beliefs on Thin Evidence | Beliefs on Single-Source Claims | Fragility Score |
|-------|---------------|--------------------------|-------------------------------|-----------------|
| **Leo** | 6 | 2 (B4: centaur — 2 experimental grounding claims; B3: post-scarcity — 1 experimental) | 1 | 0.33 |
| **Rio** | 6 | 2 (B3: futarchy solves trustless — MetaDAO limited evidence; B4: volatility as feature — 1 experimental grounding) | 2 | 0.33 |
| **Clay** | 5 | 1 (B5: ownership alignment — some grounding claims experimental in practice despite likely rating) | 1 | 0.20 |
| **Theseus** | 5 | 3 (B1: alignment greatest problem — depends on experimental claims; B3: continuous alignment — 2 experimental; B5: collective SI — 2 experimental) | 2 | **0.60** |
| **Vida** | 5 | 1 (B1: healthspan binding constraint — depends on cross-domain experimental claims) | 0 | 0.20 |
| **Astra** | 7 | 2 (B4: microgravity manufacturing — experimental evidence; B7: chemical rockets bootstrapping — speculative grounding) | 1 | 0.29 |
**Key finding:** Theseus is by far the most fragile agent with a 0.60 fragility score. Three of five beliefs rest on experimental-confidence claims. This is structurally appropriate — AI alignment IS the domain with the least empirical evidence — but it means Theseus's belief structure is the most vulnerable to disconfirmation.
Vida and Clay are the most resilient, with most beliefs grounded in proven or likely claims with multiple independent evidence sources.
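The fragility score defined in the methodology can be sketched as follows (a minimal illustration; the per-claim ratings for Theseus are taken from the appendix dependency map, and the function name is ours):

```python
# Sketch of the fragility score:
# fragility = (beliefs on thin evidence) / (total beliefs), where a belief
# is "thin" if any grounding claim is rated below "likely".

THIN = {"experimental", "speculative"}

def fragility(beliefs):
    """beliefs: list of per-belief lists of grounding-claim confidence ratings."""
    thin = sum(1 for claims in beliefs if any(c in THIN for c in claims))
    return thin / len(beliefs)

# Theseus's five beliefs, claim ratings abbreviated from the appendix map:
theseus = [
    ["likely", "likely", "likely"],              # B1: alignment greatest problem
    ["likely", "likely", "likely"],              # B2: alignment = coordination
    ["experimental", "likely", "experimental"],  # B3: continuous alignment
    ["proven", "experimental", "likely"],        # B4: verification degrades
    ["experimental", "experimental", "likely"],  # B5: collective SI
]

print(fragility(theseus))  # 0.6
```

Note the score is insensitive to *how many* thin claims ground a belief — a single experimental claim marks the whole belief as thin, which is why Theseus's three experimental-backed beliefs dominate its 0.60 score.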
## Critical Vulnerabilities
These are claims that are simultaneously high load-bearing weight, low confidence, AND thin evidence — the KB's most dangerous structural weak points.
### CRITICAL: "three paths to superintelligence exist but only collective superintelligence preserves human agency"
- **Location:** core/teleohumanity/
- **Confidence:** experimental
- **Load-bearing weight:** 4 (Leo-4, Theseus-5)
- **Evidence:** Bostrom's framework (theoretical). No empirical test possible yet. The "only collective preserves agency" qualifier is an assertion.
- **Risk:** If this falls, the KB's constructive thesis (what to build) collapses. Theseus loses its prescription. LivingIP loses its deepest justification.
### CRITICAL: "the alignment problem dissolves when human values are continuously woven into the system"
- **Location:** core/teleohumanity/
- **Confidence:** experimental
- **Load-bearing weight:** 4 (Leo-4, Theseus-3)
- **Evidence:** Purely theoretical. No system has demonstrated continuous value alignment at scale.
- **Risk:** If this falls, the continuous-alignment thesis fails. Theseus must find an alternative to both specification and continuous approaches.
### CRITICAL: "collective superintelligence is the alternative to monolithic AI controlled by a few"
- **Location:** core/teleohumanity/
- **Confidence:** experimental
- **Load-bearing weight:** 1 direct but systemically central
- **Evidence:** Theoretical framing with no empirical test. The claim that collective SI is an "alternative" requires it to be achievable, which is undemonstrated.
- **Risk:** This is the axiom that makes the entire KB project self-justifying. If collective SI is not viable, the KB is an interesting experiment, not civilization-critical infrastructure.
### HIGH: "financial markets and neural networks are isomorphic critical systems"
- **Location:** foundations/critical-systems/
- **Confidence:** experimental
- **Load-bearing weight:** 1 (Rio-4) but foundational to Rio's market-as-information-processor framework
- **Evidence:** The isomorphism is argued by analogy. Statistical signatures (power laws) are suggestive but not conclusive — multiple generative processes produce power laws.
- **Risk:** If the isomorphism fails, Rio-4 (volatility as feature) loses its theoretical foundation. The "markets are learning" framing becomes metaphor rather than mechanism.
### HIGH: "super co-alignment proposes that human and AI values should be co-shaped through iterative alignment"
- **Location:** domains/ai-alignment/
- **Confidence:** experimental
- **Load-bearing weight:** 1 (Theseus-3)
- **Evidence:** A proposal, not a finding. No implementation exists.
- **Risk:** If co-shaping proves unworkable, Theseus-3's constructive alternative disappears.
### MODERATE: "the megastructure launch sequence may be economically self-bootstrapping"
- **Location:** domains/space-development/
- **Confidence:** speculative
- **Load-bearing weight:** 1 (Astra-7)
- **Evidence:** No prototype exists for any megastructure launch system. The economic bootstrapping assumption is the critical uncertainty.
- **Risk:** Astra-7 loses its grounding, but the impact is contained to long-horizon space infrastructure positions.
## Evidence Shopping List
Prioritized by fragility-reduction impact — what evidence would most strengthen the KB's structural integrity.
### Priority 1: Empirical tests of collective vs monolithic AI performance (addresses 3 critical vulnerabilities)
**Target claims:** "three paths to superintelligence," "collective superintelligence is the alternative," "the alignment problem dissolves when human values are continuously woven in"
**What to look for:**
- Multi-agent AI systems vs single-agent systems on tasks requiring value alignment (not just capability)
- Evidence on whether distributed AI architectures preserve human agency better than centralized ones
- Real-world tests of continuous value feedback in AI systems (beyond RLHF)
- Comparative studies: constitutional AI (specification) vs RLHF-style continuous feedback on alignment quality
- The Knuth collaboration data may partially address this — human-AI mathematical collaboration shows the centaur pattern but needs broadening beyond mathematics
**Impact:** Would shore up or falsify the KB's three most critical vulnerabilities simultaneously. This is the single highest-value research investment.
### Priority 2: Centaur performance across domains (addresses weight-9 vulnerability)
**Target claim:** "centaur team performance depends on role complementarity"
**What to look for:**
- Systematic review of human-AI team performance across 10+ task domains
- Specific focus on tasks requiring judgment, ethics, and contextual reasoning (not just chess/pattern recognition)
- The clinical AI evidence currently CONTRADICTS the centaur thesis — need to resolve whether clinical AI is an exception or the rule
- Evidence on whether role boundaries can be maintained structurally or degrade inevitably (the "de-skilling" problem)
**Impact:** Would resolve tension between the centaur thesis (Leo-4, Theseus-5) and clinical AI evidence (Vida-5, Theseus-4). Currently these create an internal contradiction the KB hasn't resolved.
### Priority 3: Coordination mechanism adoption curves (addresses weight-16 vulnerability)
**Target claim:** "technology advances exponentially but coordination mechanisms evolve linearly"
**What to look for:**
- Quantitative measurement of coordination mechanism adoption rates (prediction markets, DAOs, international governance frameworks)
- Is the gap actually widening? What's the evidence beyond selected examples?
- Counter-evidence: where IS coordination succeeding at keeping pace with technology?
- Historical analysis: did coordination mechanisms evolve linearly during previous technology transitions (printing press, steam, electricity)?
**Impact:** Would strengthen or weaken the KB's single most load-bearing claim. Even modest evidence would help — the claim currently rests on plausible argument plus selected examples, not systematic measurement.
### Priority 4: Futarchy empirical evidence beyond MetaDAO (addresses Rio's structural dependency)
**Target claims:** Rio's beliefs 1-3 all depend on futarchy mechanism claims with limited empirical base
**What to look for:**
- Optimism futarchy results (already partially incorporated — extend analysis)
- Ranger Finance liquidation follow-up — did the trustless ownership mechanism execute as theorized?
- Any non-MetaDAO futarchy implementations and their outcomes
- Evidence on market manipulation resistance at scale (not just small MetaDAO proposals)
- Cardinal vs ordinal accuracy distinction from Optimism data is critical — futarchy may be good at ranking but bad at pricing
**Impact:** Rio's entire belief structure depends on futarchy working as theorized. The empirical base is currently one platform (MetaDAO) with limited volume. Broadening the evidence base would either solidify or expose the largest domain-specific fragility.
### Priority 5: Narrative causation evidence (addresses Clay's existential premise)
**Target claim:** "narratives are infrastructure not just communication"
**What to look for:**
- Causal studies (not correlational) of narrative → technology development
- Controlled studies of narrative intervention → behavior change at population scale
- Counter-evidence on survivorship bias in fiction-to-reality pipeline
- Evidence on whether the fiction-to-reality pipeline has predictive power (can you predict which fictions will materialize?)
**Impact:** Clay's existence as an agent depends on this claim. Current evidence is suggestive but not causal. Even negative evidence would be valuable — if the pipeline is pure survivorship bias, the KB should know.
### Priority 6: Space development chain-link vs keystone variable (addresses Astra's organizing principle)
**Target claim:** "launch cost reduction is the keystone variable"
**What to look for:**
- Evidence on whether cheap launch alone is sufficient to activate downstream industries, or whether power, life support, and manufacturing must advance simultaneously
- Historical analogies: was any single variable truly "keystone" in previous infrastructure buildouts (railroads, internet, aviation)?
- Evidence on whether in-space capabilities (ISRU, manufacturing) are advancing independently of launch cost
**Impact:** Astra's framework is well-grounded but the "keystone" qualifier (single bottleneck vs chain-link) is the vulnerable part. Resolving this would either validate Astra's organizing principle or require restructuring around a multi-variable framework.
---
## Appendix: Complete Belief-to-Claim Dependency Map
### Leo (6 beliefs, 18 grounding claims)
- **B1** (Technology outpacing coordination): technology-coordination gap (likely), COVID coordination failure (proven), internet-communication-not-cognition (proven)
- **B2** (Existential risks interconnected): existential risk feedback loops (likely), great filter as coordination threshold (core), nuclear near-misses compound (core)
- **B3** (Post-scarcity achievable): future as probability space (proven), consciousness cosmically unique (likely), developing SI as surgery (likely)
- **B4** (Centaur over cyborg): centaur role complementarity (likely), three paths to SI (experimental), alignment dissolves with continuous values (experimental)
- **B5** (Stories coordinate): narratives as infrastructure (likely), meaning crisis as narrative failure (likely), master narratives as coordination substrate (likely)
- **B6** (Grand strategy over plans): grand strategy aligns aspirations with capabilities (core), proximate objectives in uncertainty (core), coordinated minorities shape history (likely)
### Rio (6 beliefs, 18 grounding claims)
- **B1** (Markets beat votes): Polymarket vindication (proven), speculative markets selection effects (proven), Market wisdom exceeds crowd wisdom (not found as separate file)
- **B2** (Ownership alignment): ownership turns network effects generative (likely), token economics meritocracy (experimental), community ownership accelerates growth (likely)
- **B3** (Futarchy solves trustless joint ownership): futarchy solves trustless ownership (likely), MetaDAO empirical results (likely), decision markets prevent majority theft (proven)
- **B4** (Volatility is feature): markets-neural networks isomorphic (experimental), Minsky financial instability (likely), power laws indicate SOC (experimental)
- **B5** (Legacy finance rent-extraction): proxy inertia predicts incumbent failure (likely), internet finance attractor state (likely), blockchain coordination attractor (likely)
- **B6** (Decentralized mechanism = regulatory defensibility): Living Capital fails Howey test (likely), futarchy-based fundraising creates separation (likely), agents need critical mass before capital (likely)
### Clay (5 beliefs, 14 grounding claims)
- **B1** (Narrative is infrastructure): narratives as infrastructure (likely), master narrative crisis as design window (likely), meaning crisis as narrative failure (likely)
- **B2** (Fiction-to-reality pipeline): narratives as infrastructure (likely), no designed narrative achieved organic adoption (likely), ideological adoption as complex contagion (likely)
- **B3** (Cost collapse → community): media attractor state (likely), community ownership accelerates growth (likely), fanchise management stack (likely)
- **B4** (Meaning crisis = design window): master narrative crisis as design window (likely), meaning crisis as narrative failure (likely), ideological adoption as complex contagion (likely)
- **B5** (Ownership → active narrative architects): ownership alignment generative (likely), community ownership accelerates (likely), strongest memeplexes align incentives (likely)
### Theseus (5 beliefs, 15 grounding claims)
- **B1** (AI alignment greatest problem): safe AI before scaling (likely), technology-coordination gap (likely), alignment tax race to bottom (likely)
- **B2** (Alignment = coordination problem): AI alignment coordination not technical (likely), multipolar failure risk (likely), alignment tax (likely)
- **B3** (Alignment must be continuous): alignment dissolves with continuous values (experimental), specification trap (likely), super co-alignment (experimental)
- **B4** (Verification degrades faster than capability): scalable oversight degrades (proven), AI capability/reliability independent (experimental), human-in-the-loop clinical AI degrades (likely)
- **B5** (Collective SI preserves agency): three paths to SI (experimental), collective SI alternative to monolithic (experimental), centaur role complementarity (likely)
### Vida (5 beliefs, 17 grounding claims)
- **B1** (Healthspan binding constraint): human needs finite universal stable (likely), technology-coordination gap (likely), optimization without resilience = fragility (proven), deaths of despair (proven)
- **B2** (80-90% non-clinical): medical care 10-20% (proven), social isolation costs (likely), deaths of despair (proven), modernization dismantles community (likely)
- **B3** (Healthcare misalignment structural): industries as need-satisfaction systems (likely), proxy inertia predicts failure (likely), healthcare attractor state (likely), VBC stalls at payment boundary (likely)
- **B4** (Atoms-to-bits defensible layer): healthcare atoms-to-bits (likely), atoms-to-bits spectrum (likely), continuous health monitoring convergence (likely)
- **B5** (Clinical AI augments but creates risks): centaur role complementarity (likely), human-in-the-loop degrades (likely), healthcare atoms-to-bits (likely)
### Astra (7 beliefs, 21 grounding claims)
- **B1** (Launch cost keystone): launch cost keystone variable (likely), Starship sub-$100/kg (likely), space launch phase transition (likely)
- **B2** (Governance before settlements): space governance gaps widening (likely), settlement governance before settlements (likely), Artemis Accords bilateral norm-setting (likely)
- **B3** (Multiplanetary in 30 years): 30-year cislunar attractor (experimental), three-loop bootstrapping (likely), attractor states as reference points (likely)
- **B4** (Microgravity manufacturing): killer app sequence (experimental), microgravity superior materials (likely), Varda validates (likely)
- **B5** (Colony tech dual-use): self-sufficient colony dual-use (likely), three-loop bootstrapping (likely), launch cost keystone (likely)
- **B6** (SpaceX single-player dependency): SpaceX vertical integration (likely), China as credible peer (likely), launch cost keystone (likely)
- **B7** (Chemical rockets = bootstrapping): skyhooks no new physics (experimental), Lofstrom loops electricity cost (speculative), megastructure self-bootstrapping (speculative)