Compare commits
3 commits
main...leo/synthe

| Author | SHA1 | Date |
|---|---|---|
| | cf8f596877 | |
| | 739f45459d | |
| | 7147b4850d | |

8 changed files with 379 additions and 0 deletions

@ -0,0 +1,54 @@
---
type: claim
domain: grand-strategy
secondary_domains: [ai-alignment, collective-intelligence]
description: "RLHF, DPO, constitutional AI, and scalable oversight all optimize alignment within individual models — making alignment more efficient creates demand for more alignment-as-training rather than shifting to coordination-based alignment where safety is a property of the architecture, not a tax on each model"
confidence: experimental
source: "Synthesis by Leo from: Theseus's alignment tax and RSP collapse claims; Vida's healthcare Jevons paradox; the universal Jevons pattern (PR #34); collective intelligence alignment gap claim"
created: 2026-03-07
depends_on:
- "the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it"
- "AI optimization of industry subsystems induces demand for more of the same subsystem rather than shifting resources to the structural changes that would improve outcomes"
- "no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it"
- "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints"
- "RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values"
---

# alignment research is experiencing its own Jevons paradox because improving single-model safety induces demand for more single-model safety rather than coordination-based alignment

The Jevons paradox — where improving subsystem efficiency increases total demand for that subsystem rather than enabling system-level restructuring — applies to the alignment field itself. The parallel to healthcare is precise.

**Healthcare:** AI makes sick care more efficient → more demand for sick care → prevents transition to a prevention-first system. The subsystem (clinical care) explains 10-20% of health outcomes, yet absorbs the vast majority of AI investment.

**Alignment:** Better RLHF/DPO/constitutional AI makes single-model alignment more efficient → more demand for single-model alignment → prevents transition to coordination-based alignment. The subsystem (individual model safety) addresses one component of the alignment problem, yet absorbs virtually all alignment investment.

**The mechanism is identical in both cases:**

1. **Subsystem optimization has immediate, measurable ROI.** Better RLHF reduces harmful outputs by measurable percentages. Better clinical AI improves diagnostic accuracy by measurable percentages. Both are publishable, fundable, and demonstrable to stakeholders.

2. **System restructuring has uncertain, delayed returns.** Building coordination infrastructure for multi-agent alignment has no clear benchmark, no established methodology, and no guaranteed outcome. Building prevention-first healthcare has similar characteristics. The rational resource allocator in both domains chooses subsystem optimization.

3. **The optimized subsystem generates its own demand.** Each new model requires alignment training. Each more capable model requires more sophisticated alignment techniques. The alignment field scales linearly with the number and capability of models deployed — exactly the pattern that induces Jevons demand. More aligned models → more deployment confidence → more models deployed → more alignment needed (a toy model of this loop follows the list).

4. **Payment structures reinforce the paradox.** Alignment labs are funded to make specific models safe, not to build coordination infrastructure. Research grants reward publishable techniques with measurable improvements on specific models, not architectural work on distributed alignment. Since [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]], the economic structure of AI development actively pays for single-model alignment (as a cost of doing business) while offering no revenue model for coordination-based alignment.
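
The demand-feedback loop in point 3 can be made concrete with a toy simulation. This is a minimal sketch with invented parameters (`efficiency_gain`, `deployment_elasticity` are illustrative assumptions, not measured values):

```python
# Toy model of the Jevons feedback loop in alignment (illustrative only).
# Assumption: deployment grows with alignment efficiency (confidence effect),
# so total alignment work rises even as per-model alignment cost falls.

def simulate(years=10, efficiency_gain=0.15, deployment_elasticity=1.5):
    cost_per_model = 1.0      # alignment cost per model (arbitrary units)
    models_deployed = 100.0
    for year in range(years):
        cost_per_model *= (1 - efficiency_gain)  # alignment gets cheaper...
        models_deployed *= (1 + efficiency_gain * deployment_elasticity)  # ...so more models ship
        total_alignment_work = cost_per_model * models_deployed
        print(f"year {year + 1}: per-model cost {cost_per_model:.2f}, "
              f"models {models_deployed:.0f}, "
              f"total alignment work {total_alignment_work:.1f}")

simulate()
# With elasticity > 1, total alignment work increases every year despite the
# falling per-model cost — the Jevons condition. Coordination-based alignment
# would change the architecture rather than the per-model cost.
```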

**The RSP collapse as empirical confirmation.** Anthropic's abandonment of its Responsible Scaling Policy demonstrates that even the strongest single-organization alignment commitment cannot survive competitive pressure. Since [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]], the RSP failure shows that alignment-as-training-tax is structurally unstable. But the field's response has been to seek better training-time alignment techniques — making the tax smaller rather than eliminating it through coordination. This is the Jevons paradox in action: the failure of single-model alignment produced demand for *better* single-model alignment, not for *different* alignment.

**What coordination-based alignment would look like.** Since [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]], the alternative paradigm barely exists. In the healthcare analogy, Devoted Health represents the system restructurer — purpose-built technology that addresses the 80-90%, not a better optimizer of the 10-20%. The alignment equivalent would be infrastructure where safety emerges from the coordination protocol between agents, not from training imposed on each agent individually — alignment as a property of the architecture, the way TCP provides reliable delivery without each application implementing its own reliability layer.
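
Because no such infrastructure exists yet, any code can only be illustrative. A minimal sketch of the architectural idea — the safety invariant enforced once, at the protocol layer, rather than trained into each agent — under entirely hypothetical names (`CoordinationLayer`, `propose`):

```python
# Illustrative sketch only: safety as a property of the coordination protocol.
# All names here (CoordinationLayer, propose, constraints) are hypothetical —
# no existing system implements this.

from dataclasses import dataclass, field

@dataclass
class CoordinationLayer:
    """Mediates all inter-agent actions; agents never act on each other directly."""
    constraints: list = field(default_factory=list)  # architecture-level invariants

    def propose(self, agent_id: str, action: dict) -> bool:
        # Every action passes through the same checks, regardless of how
        # (or whether) the proposing agent was alignment-trained.
        for check in self.constraints:
            if not check(action):
                return False  # rejected by the protocol, not by the agent's training
        return True

# The invariant lives in one place, like TCP's reliability layer, instead of
# being re-implemented (re-trained) inside every agent.
layer = CoordinationLayer(constraints=[lambda a: a.get("reversible", False)])
print(layer.propose("agent-7", {"op": "deploy", "reversible": True}))        # True
print(layer.propose("agent-9", {"op": "self-modify", "reversible": False}))  # False
```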

Since [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]], single-model alignment faces a theoretical ceiling: it literally cannot represent the diversity of human values. This is the alignment equivalent of healthcare's 10-20% problem — no matter how good single-model alignment gets, it structurally cannot solve the full problem. The remaining 80-90% requires coordination infrastructure.

**Why the pattern is harder to break in alignment than healthcare.** In healthcare, the system restructurer (Devoted) competes in the same market as the subsystem optimizers. Market competition can eventually force the transition. In alignment, there is no market mechanism to force the transition from single-model to coordination-based alignment. No customer is choosing between "aligned model" and "coordinated multi-agent system." The transition requires either regulatory mandate, catastrophic failure of single-model alignment, or a research breakthrough that makes coordination-based alignment demonstrably superior. None of these forcing functions is currently active.

---

Relevant Notes:

- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — alignment-as-training-tax is the subsystem being optimized
- [[AI optimization of industry subsystems induces demand for more of the same subsystem rather than shifting resources to the structural changes that would improve outcomes]] — the universal Jevons pattern this claim instantiates in alignment
- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — the system-level restructuring that the Jevons paradox prevents
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — RSP collapse as empirical evidence of single-organization alignment failure
- [[RLHF and DPO both fail at preference diversity because they assume a single reward function can capture context-dependent human values]] — the theoretical ceiling of single-model alignment
- [[healthcare AI creates a Jevons paradox because adding capacity to sick care induces more demand for sick care]] — the healthcare instance with the most extreme ratio (10-20% vs 80-90%)

Topics:

- [[overview]]
- [[coordination mechanisms]]

@ -0,0 +1,54 @@
---
type: claim
domain: grand-strategy
secondary_domains: [health, ai-alignment, collective-intelligence]
description: "The chess centaur model fails in clinical medicine because physicians override AI on tasks where AI outperforms — the binding variable is role boundary clarity, not human-AI collaboration per se, with implications for alignment (HITL oversight assumes humans improve AI outputs but evidence shows they degrade them)"
confidence: experimental
source: "Synthesis by Leo from: centaur team claim (Kasparov); HITL degradation claim (Wachter/Patil, Stanford-Harvard study); AI scribe adoption (Bessemer 2026); alignment scalable oversight claims"
created: 2026-03-07
depends_on:
- "centaur teams outperform both pure humans and pure AI because complementary strengths compound"
- "human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs"
- "AI scribes reached 92 percent provider adoption in under 3 years because documentation is the rare healthcare workflow where AI value is immediate unambiguous and low-risk"
- "scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps"
---

# centaur teams succeed only when role boundaries prevent humans from overriding AI in domains where AI is the stronger partner

The knowledge base contains a tension: centaur teams outperform both pure humans and pure AI in chess, but physicians with AI access score *worse* than AI alone in clinical diagnosis (68% vs 90%). This isn't a contradiction — it's a boundary condition that reveals when human-AI collaboration helps and when it hurts.

**The evidence across domains:**

| Domain | Human-AI collaboration | Outcome | Role boundary |
|--------|----------------------|---------|---------------|
| Chess (Kasparov) | Human sets strategy, AI calculates | Centaur wins | Clear — human doesn't override AI tactics |
| Clinical diagnosis (Stanford-Harvard) | AI diagnoses, physician verifies/overrides | Physician degrades AI by 22 points | Ambiguous — physician overrides AI on AI's strength |
| Colonoscopy (European study) | AI highlights lesions, physician decides | Physician de-skills in 3 months | Ambiguous — physician can ignore AI highlights |
| AI scribes (Bessemer 2026) | AI generates notes from conversation | 92% adoption, 10-15% revenue capture | Clear — physician doesn't override note content |
| Finance (Aldasoro/BIS) | AI augments analyst research | ~4% productivity gain | Moderate — analyst directs AI queries |

**The pattern:** Centaur teams succeed when humans contribute capabilities AI lacks (strategic judgment, relationships, context) AND the architecture prevents humans from intervening in domains where AI outperforms. They fail when role boundaries are ambiguous and humans override AI outputs based on intuition in tasks where AI has demonstrated superiority.

AI scribes are the most instructive case. They were adopted at unprecedented speed (92% in ~3 years vs 15 years for EHRs) precisely because the role boundary is crisp: the AI listens and writes, the physician practices medicine. The physician doesn't override the scribe's transcription because that's not a clinical judgment. Compare clinical decision support, where the physician is explicitly invited to override AI diagnostic suggestions — the ambiguous boundary produces the degradation.

**The mechanism:** Human override of AI outputs is driven by two forces. First, **authority preservation** — professionals trained for years resist deferring to a tool on tasks they consider core to their expertise, even when the tool outperforms. Second, **Dunning-Kruger at the expertise boundary** — humans cannot accurately assess when AI knows better, because accurate assessment requires the expertise the human is losing to de-skilling. The three-month de-skilling timeline in the colonoscopy study is alarming: experts lose the very capability they need to evaluate whether to override AI.

**The alignment implication is severe.** Human-in-the-loop oversight is the default safety architecture for AI alignment. Since [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]], the assumption that humans can reliably override or correct AI outputs becomes increasingly false as AI capabilities grow. The clinical evidence provides an empirical preview: when AI is the stronger partner on a specific task, human oversight degrades rather than improves the output. If this generalizes to alignment — and the capability gap will only widen — then HITL alignment is structurally unstable for the same reason HITL clinical AI is. The safety architecture fails precisely when it's most needed.

**The design implication:** Effective human-AI systems need architecturally enforced role separation, not guidelines suggesting humans "verify" AI outputs. The AI scribe model — where the human and AI operate on different tasks rather than the same task — is the template. Applied to alignment: rather than humans overseeing AI decisions (which degrades both), humans should set objectives and constraints while AI operates autonomously within those bounds, with disagreements flagged for structured review rather than real-time override.
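
A sketch of what architecturally enforced role separation might look like, as opposed to advisory guidelines. The structure below is an assumption-laden illustration, not a reference design: the human-facing interface exposes only objective-setting and asynchronous review, and no inline-override code path exists.

```python
# Illustrative sketch: role separation enforced by the interface, not by policy.
# Class and method names are hypothetical.

from queue import Queue

class BoundedAutonomy:
    def __init__(self, objectives, constraints):
        self.objectives = objectives    # humans set these...
        self.constraints = constraints  # ...and these
        self.review_queue = Queue()     # disagreements go here, not to real-time override

    def execute(self, task, ai_decision):
        # The human role has no code path that edits ai_decision directly —
        # the override channel simply does not exist in the interface.
        if all(check(ai_decision) for check in self.constraints):
            return ai_decision                           # AI acts autonomously within bounds
        self.review_queue.put((task, ai_decision))       # structured, asynchronous review
        return None

system = BoundedAutonomy(
    objectives=["maximize diagnostic accuracy"],
    constraints=[lambda d: d.get("confidence", 0) >= 0.8],
)
print(system.execute("case-114", {"dx": "appendicitis", "confidence": 0.93}))  # returned
print(system.execute("case-115", {"dx": "atypical", "confidence": 0.41}))      # queued for review
```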

This is the centaur model done right: not human-verifies-AI, but human-and-AI-on-complementary-tasks-with-clear-boundaries.

---

Relevant Notes:

- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] — the chess evidence establishing the centaur model
- [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]] — the clinical counter-evidence constraining when the model applies
- [[AI scribes reached 92 percent provider adoption in under 3 years because documentation is the rare healthcare workflow where AI value is immediate unambiguous and low-risk]] — the success case with clear role boundaries
- [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] — alignment oversight facing the same boundary problem
- [[the physician role shifts from information processor to relationship manager as AI automates documentation triage and evidence synthesis]] — the physician role restructuring that enforces correct role boundaries
- [[economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate]] — competitive pressure accelerates the boundary problem

Topics:

- [[overview]]
- [[coordination mechanisms]]

@ -0,0 +1,43 @@
---
description: "Exiting Nvidia and Broadcom to go all-in on Bloom Energy, CoreWeave, and Bitcoin miners pivoting to AI hosting shows the thesis sharpening from 'AI will be huge' to 'the binding constraint is electricity, not algorithms'"
type: analysis
domain: livingip
created: 2026-03-05
confidence: likely
source: "SEC 13F filings Q4 2025, Fortune Oct 2025"
---

# Aschenbrenners Q4 2025 pivot from chips to power infrastructure demonstrates real-time attractor state refinement as the bottleneck shifted from compute to electricity

The Situational Awareness LP portfolio underwent a dramatic rotation in Q4 2025. The fund exited Nvidia and Broadcom — the consensus AI plays — and concentrated into physical infrastructure: Bloom Energy (largest holding, +$911M added), CoreWeave call options (+$651M, 672% increase), Core Scientific (9.4% stake), and a cluster of Bitcoin miners pivoting to AI hosting (IREN, Cipher Mining, Riot, Hut 8, Bitdeer).

This is [[teleological investing answers three questions in sequence -- where must the industry go and where in the stack will value concentrate and who will control that position]] in action. Aschenbrenner answered all three questions and then refined his answer as evidence updated the posterior:

1. **Where must it go?** AI infrastructure buildout is near-inevitable (unchanged)
2. **Where will value concentrate?** Initially: chips. Updated: the power/compute hosting layer
3. **Who controls the position?** Whoever secures power purchase agreements and physical data center capacity

The pivot illustrates two mechanisms simultaneously:

Since [[value in industry transitions accrues to bottleneck positions in the emerging architecture not to pioneers or to the largest incumbents]], the bottleneck in AI infrastructure shifted. Nvidia dominated the first phase (training chips). But as training clusters scaled from 100MW to 1GW+, the binding constraint moved downstream to electricity supply and physical hosting. Chips became a solved problem — Nvidia would keep making them. But the power to run them became scarce.

Since [[teleological investing is Bayesian reasoning applied to technology streams because attractor state analysis provides the prior and market evidence updates the posterior]], this is exactly the kind of posterior update the framework demands. The prior (AI infrastructure will boom) stayed constant. The posterior (which specific bottleneck captures value) updated as evidence accumulated that power infrastructure was the binding constraint.

The contrarian element is sharp. While the market was still piling into chip stocks, Aschenbrenner shorted Nvidia and Broadcom via puts while going long on power infrastructure — positions that are structurally contrarian in the way [[teleological investing is structurally contrarian because most market participants are local optimizers whose short time horizons systematically undervalue long-horizon convergence plays]] describes. Most AI investors are hill-climbing on the obvious thesis (chips). The teleological investor asks where the architecture must converge.

His Intel call options are the most contrarian position of all — a bet that Intel's foundry ambitions may eventually matter more than its current chip struggles. This is the kind of position that looks insane to hill-climbers and logical to attractor-state analysts.

Whether this pivot proves correct is an open question. It could be brilliant bottleneck identification or speculative overconcentration. Since [[the Cathie Wood failure mode shows that transparent thesis plus concentrated bets plus early outperformance is structurally identical whether the outcome is spectacular success or catastrophic failure]], concentrated pivots look identical in their early stages whether they're right or wrong.

---

Relevant Notes:

- [[teleological investing answers three questions in sequence -- where must the industry go and where in the stack will value concentrate and who will control that position]] -- the framework this pivot validates in real-time
- [[value in industry transitions accrues to bottleneck positions in the emerging architecture not to pioneers or to the largest incumbents]] -- the bottleneck shifted from chips to power
- [[teleological investing is Bayesian reasoning applied to technology streams because attractor state analysis provides the prior and market evidence updates the posterior]] -- the pivot IS a Bayesian update
- [[Situational Awareness LP converted a 165-page thesis into a 5.5 billion dollar fund in 18 months by publishing differentiated analysis before raising capital]] -- the fund making this pivot
- [[the Cathie Wood failure mode shows that transparent thesis plus concentrated bets plus early outperformance is structurally identical whether the outcome is spectacular success or catastrophic failure]] -- the concentrated pivot carries ARK-style risk

Topics:

- [[attractor dynamics]]
- [[teleological-economics overview]]

@ -0,0 +1,34 @@
---
description: "Aschenbrenner wrote the analysis openly, attracted elite LPs who could independently verify the thesis, then deployed capital along the attractor path he identified — the purest case study of transparent insight creating investable credibility"
type: analysis
domain: livingip
created: 2026-03-05
confidence: proven
source: "Fortune Oct 2025, SEC 13F filings, Litquidity, Daniel Scrivner Q4 2025 analysis"
---

# Situational Awareness LP converted a 165-page thesis into a 5.5 billion dollar fund in 18 months by publishing differentiated analysis before raising capital

Leopold Aschenbrenner worked on OpenAI's Superalignment team, saw capability trajectories firsthand, got fired for raising security concerns, then published everything he knew in a 165-page essay ("Situational Awareness: The Decade Ahead," June 2024). Three months later he launched a hedge fund named after the essay.

The sequence: insider knowledge formation → narrative crystallization (the essay) → credibility capital (viral reception, Ivanka Trump endorsement, national security circles) → capital formation ($225M seed from Collison brothers, Nat Friedman, Daniel Gross) → non-obvious positioning (infrastructure bottlenecks downstream of chips).

Growth trajectory: $225M (Q4 2024) → $1.5B (mid-2025) → $5.52B in US equity positions (Q4 2025). Returns: 47% after fees in H1 2025 vs 6% S&P 500.

The fund inverts the standard hedge fund model. Traditional funds guard their thesis as proprietary edge. Aschenbrenner published his thesis for free, in full, before raising a dollar. The publication became the pitch deck — "I'm so confident in this analysis that I don't need to hide it." The LPs (Collisons, Friedman, Gross) are not passive capital; they are domain experts who can independently evaluate the thesis. This is skin-in-the-game at every layer.

This maps directly to the [[teleological investing answers three questions in sequence -- where must the industry go and where in the stack will value concentrate and who will control that position]] framework. Aschenbrenner answered all three: (1) AI infrastructure buildout is near-inevitable, (2) value concentrates at the power/compute hosting layer (not chips, not models), (3) the winners are whoever controls power purchase agreements and physical data center capacity. His Q4 2025 pivot — exiting Nvidia and Broadcom, going all-in on Bloom Energy, CoreWeave, and Bitcoin miners pivoting to AI hosting — shows real-time refinement of which bottleneck position captures value.

Since [[giving away the intelligence layer to capture value on capital flow is the business model because domain expertise is the distribution mechanism not the revenue source]], Aschenbrenner's approach validates the model at human scale. He gave away the intelligence (the essay) and captured value on capital flow (the fund). This is exactly the pipeline Living Capital agents are designed to execute.

---

Relevant Notes:

- [[teleological investing answers three questions in sequence -- where must the industry go and where in the stack will value concentrate and who will control that position]] — the framework this case study validates
- [[giving away the intelligence layer to capture value on capital flow is the business model because domain expertise is the distribution mechanism not the revenue source]] — Aschenbrenner did this as a human; Living Capital agents do it systematically
- [[value in industry transitions accrues to bottleneck positions in the emerging architecture not to pioneers or to the largest incumbents]] — the fund's pivot from chips to power demonstrates this in real-time
- [[teleological investing is structurally contrarian because most market participants are local optimizers whose short time horizons systematically undervalue long-horizon convergence plays]] — the fund's positions (long Intel, short Nvidia) are structurally contrarian
- [[industry transitions produce speculative overshoot because correct identification of the attractor state attracts capital faster than the knowledge embodiment lag can absorb it]] — the $225M to $5.5B growth in one year may be this exact pattern

Topics:

- [[teleological-economics overview]]

@ -0,0 +1,77 @@
# Leopold Aschenbrenner & Situational Awareness — Research Dump

**Source type:** research
**Research prompt:** "Research situational awareness and Leopold Aschenbrenner for our Ars Contexta knowledge base — great case study for differentiated insight being converted into market-beating returns"
**Generated:** 2026-03-05

## Person: Leopold Aschenbrenner

Born ~2001-2002, Germany. Graduated Columbia valedictorian at 19 (2021), double major economics + math-statistics. Co-founded Columbia EA chapter. Worked at Oxford Global Priorities Institute, then FTX Future Fund (collapsed Nov 2022). Joined OpenAI Superalignment team 2023 under Sutskever/Leike. Co-authored "Weak to Strong Generalization" (ICML 2024).

Fired April 2024 after sharing an internal security memo with board members warning about CCP espionage risk. HR reportedly said "worrying about CCP espionage was racist and unconstructive." He was told the firing was because of the security memo. The Superalignment team dissolved one month later (Sutskever and Leike departed).

Published "Situational Awareness: The Decade Ahead" June 2024 (165 pages, situational-awareness.ai). Founded Situational Awareness LP September 2024.

Carl Shulman (45, AI forecaster, ex-Clarium Capital/Thiel) joined as Director of Research.

## The Essay: Core Framework

**The OOM (Orders of Magnitude) framework** (arithmetic check after the list):
- GPT-2 to GPT-4 = ~5-6 OOMs effective compute over 4 years (preschooler → smart high-schooler)
- Three drivers: raw compute (~0.5 OOM/yr), algorithmic efficiency (~0.5 OOM/yr), unhobbling (chatbot → agent)
- Forward projection: another 3-6 OOMs by 2027 = another preschooler-to-high-schooler jump on top of GPT-4
- "AGI by 2027 is strikingly plausible"

**Intelligence explosion:** Once AGI exists → hundreds of millions of AI researcher copies → compress a decade of progress into <1 year → superintelligence by ~2030.

**Trillion-dollar cluster projection** (grid-share check after the list):
- 2024: 100MW, billions, 100K H100 equiv
- 2026: 1GW, tens of billions, 1M H100 equiv
- 2028: 10GW, hundreds of billions, 10M H100 equiv
- 2030: 100GW, ~$1T, 100M H100 equiv (>20% US electricity)
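
The ">20% US electricity" figure on the 2030 row checks out against public grid data — assuming roughly 4,000 TWh/year of US generation, the approximate recent figure (our assumption, not from the essay):

```python
# Sanity check on the 2030 row (100 GW vs. the US grid). Assumes ~4,000 TWh/yr
# US generation — an approximate public figure, not from the source essay.
us_twh_per_year = 4000
avg_us_grid_gw = us_twh_per_year * 1000 / 8760   # TWh/yr -> average GW
print(f"average US grid ~{avg_us_grid_gw:.0f} GW; "
      f"100 GW is {100 / avg_us_grid_gw:.0%} of it")   # ~22%, matching '>20%'
```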

**National security:** Current lab security catastrophically inadequate. AGI = decisive military advantage comparable to nuclear weapons. US-China race framing. Calls for a Manhattan Project-level government AGI effort.

## The Fund: Situational Awareness LP

**Growth:** $225M (Q4 2024) → $383M (early 2025) → $1.5B (mid-2025) → $5.52B (Q4 2025)
**Returns:** 47% after fees H1 2025 vs 6% S&P 500
**LPs:** Collison brothers (Stripe), Nat Friedman (GitHub), Daniel Gross, family offices, endowments
**Aschenbrenner invested "almost all of his net worth"**

**Q4 2025 portfolio (29 positions, $5.52B):**
- Bloom Energy (largest holding, +$911M added Q4) — fuel cell power
- CoreWeave call options (+$651M, 672% increase) — AI cloud
- Core Scientific (9.4% stake, 28.7M shares) — BTC miner → AI hosting
- Bitcoin miners pivoting to AI: IREN, Cipher Mining, Riot, Hut 8, Bitdeer
- Power: Vistra, EQT, Solaris Energy
- Optical: Lumentum (+$479M), Coherent, Tower Semi
- Contrarian: Long Intel calls, SHORT Nvidia/Broadcom/TSMC via puts

**The pivot:** Exited Nvidia and Broadcom (the obvious AI plays) in Q4 2025, went all-in on the physical infrastructure layer. Thesis sharpened from "AI will be huge" to "the bottleneck is electricity and physical compute, not algorithms."

## Comparable Case Studies

**Michael Burry (Scion Capital):** Self-taught subprime analysis → $1M start → 489% total return (2000-2008). Held through 2 years of the trade going against him. **Recently shut down his fund (2025) warning AI stocks are the next bubble** — diametrically opposed to Aschenbrenner.

**Cathie Wood (ARK Invest):** Innovation thesis → radical transparency → $60B AUM peak. ARKK +153% in 2020, then -23% (2021), -67% (2022). Lost $14.3B in shareholder value. **Structurally identical pattern to Aschenbrenner:** transparent thesis, concentrated bets, early outperformance attracting massive inflows. Directionally right (innovation matters), catastrophically wrong on timing and valuation.

**George Soros (Black Wednesday 1992):** Structural analysis of ERM unsustainability → $10B position → $2B profit in one month. The insight was understanding a structural instability everyone could see but nobody would bet against at scale.

**Peter Thiel (Founders Fund):** Operator experience → contrarian framework ("Zero to One") → early-stage bets. Facebook 46.6x, Palantir 18.5x, SpaceX 27.1x. Publication was a recruiting tool for deal flow.

## Track Record (1-year retrospective, LessWrong June 2025)

**Validated:** Energy consumption at ~3% of the 2025 grid (exactly as predicted). Global AI investment doubling annually (on track). Chip production following forecasts. Overall ~0.5 OOM/year pace "roughly supported."

**Complicated:** Scaling-wall debate (GPT-5 struggles). Paradigm shift to test-time compute (not predicted, but arguably validates the broader thesis). DeepSeek R1 challenged the geopolitical thesis (China matching US capabilities despite export controls, prioritizing algorithmic efficiency over raw compute).

**Untested:** AGI by 2027 (12-18 months away by his clock). Intelligence explosion. Government mobilization at Manhattan Project scale.

## Key Tensions

1. **Alpha vs beta:** Is 47% return differentiated insight or a leveraged long on the hottest sector in a decade?
2. **Cathie Wood parallel:** Structurally identical pattern — transparent thesis + concentrated bets + early outperformance. ARK destroyed $14.3B.
3. **Motivational ambiguity:** The thesis about existential AI risk is also the fund's sales pitch. Where does safety concern end and marketing begin?
4. **Burry inversion:** Two domain experts, same structural pattern, diametrically opposed theses. One is wrong.
5. **One year proves nothing:** Burry held through 2 years of pain. Cathie Wood was "proven" by 2020 returns, then destroyed by 2022. Aschenbrenner hasn't been tested by adversity.

@ -0,0 +1,43 @@
---
description: "47 percent returns in H1 2025 could be differentiated insight or concentrated long exposure to the hottest sector in a decade and the structural pattern cannot distinguish the two until adversity tests conviction"
type: claim
domain: livingip
created: 2026-03-05
confidence: likely
source: "Morningstar, Fortune Oct 2025, LessWrong June 2025"
---

# one year of outperformance is insufficient evidence to distinguish alpha from leveraged beta because Cathie Wood Burry and Aschenbrenner all looked brilliant at the one-year mark

Situational Awareness LP returned 47% after fees in H1 2025 against 6% for the S&P 500. Impressive. But consider the base rate:

- **Cathie Wood (ARKK):** +153% in 2020. By 2022: -67%, worst-performing fund family per Morningstar, $14.3B in destroyed shareholder value
- **Michael Burry (Scion Capital):** +489% total return by 2008. But by 2025, shut down the fund warning AI stocks are the next bubble
- **Bill Miller (Legg Mason Value Trust):** Beat the S&P 500 for 15 consecutive years. Then catastrophically underperformed during 2008-2009

The structural question: is 47% in H1 2025 evidence of differentiated insight, or is it what happens when you take concentrated long positions in AI infrastructure during the biggest AI investment boom in history?

Since [[teleological investing is Bayesian reasoning applied to technology streams because attractor state analysis provides the prior and market evidence updates the posterior]], the correct Bayesian approach treats one year of returns as weak evidence. The prior probability that any concentrated thematic fund outperforms during a sector boom is high — it's nearly tautological. The update from 47% should be small because the likelihood under both hypotheses (genuine alpha vs leveraged beta) is similar, as the worked example below shows.
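
A worked version of that update. The numbers are illustrative assumptions, not estimates from the cited sources: a hot-sector leveraged-beta fund beats the market in a boom year nearly as often as a genuine-alpha fund does, so the likelihood ratio sits close to 1.

```python
# Worked Bayesian update on one boom-year of outperformance.
# All probabilities below are illustrative assumptions.
prior_alpha = 0.2            # assumed prior that the fund has genuine alpha
p_beat_given_alpha = 0.6     # big outperformance is likely with real alpha...
p_beat_given_beta = 0.5      # ...but nearly as likely for leveraged beta in a boom

evidence = (prior_alpha * p_beat_given_alpha
            + (1 - prior_alpha) * p_beat_given_beta)
posterior_alpha = prior_alpha * p_beat_given_alpha / evidence
print(f"posterior P(alpha): {posterior_alpha:.3f}")   # ~0.231, barely moved from 0.2
# Because the likelihood ratio is close to 1, one boom-year return is
# structurally weak evidence — exactly the paragraph's point.
```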

The real test has not happened yet. Genuine alpha reveals itself during adversity:
- Can the thesis survive a sector-wide correction?
- Will the manager hold through drawdowns or capitulate?
- Do the concentrated positions outperform during the specific conditions the thesis predicts?

Burry held for two years while his thesis appeared wrong. That conviction under adversity — not his eventual returns — was the evidence of alpha. Cathie Wood held through adversity too, but conviction without updating is stubbornness, not alpha. The distinction becomes clear only in retrospect.

Since [[industry transitions produce speculative overshoot because correct identification of the attractor state attracts capital faster than the knowledge embodiment lag can absorb it]], SA LP's $225M-to-$5.52B growth in one year may itself be evidence of overshoot. The fund's AUM growth (2,353% in one year) is capital flowing toward a thesis, and the thesis says capital should flow toward AI infrastructure. This is recursive — the fund's success is evidence that the sector is hot, which is the sector the fund is long.

This is not a prediction that Aschenbrenner will fail. It is an epistemological claim: the evidence available at the one-year mark is structurally insufficient to distinguish genius from timing.

---

Relevant Notes:

- [[the Cathie Wood failure mode shows that transparent thesis plus concentrated bets plus early outperformance is structurally identical whether the outcome is spectacular success or catastrophic failure]] -- the primary case study for why early outperformance is inconclusive
- [[teleological investing is Bayesian reasoning applied to technology streams because attractor state analysis provides the prior and market evidence updates the posterior]] -- the Bayesian frame for evaluating return evidence
- [[industry transitions produce speculative overshoot because correct identification of the attractor state attracts capital faster than the knowledge embodiment lag can absorb it]] -- the AUM growth itself may be overshoot
- [[Situational Awareness LP converted a 165-page thesis into a 5.5 billion dollar fund in 18 months by publishing differentiated analysis before raising capital]] -- the fund being evaluated

Topics:

- [[attractor dynamics]]
- [[teleological-economics overview]]

@ -0,0 +1,35 @@
---
description: "Aschenbrenner, Thiel, and Soros all published their frameworks before or alongside deploying capital — transparency functions as a credibility mechanism when your LPs are domain experts, not retail investors chasing returns"
type: claim
domain: livingip
created: 2026-03-05
confidence: likely
source: "Fortune Oct 2025, Peter Thiel Zero to One, Soros reflexivity writings"
---

# Publishing investment analysis openly before raising capital inverts hedge fund secrecy and builds credibility that attracts LPs who can independently evaluate the thesis

The standard hedge fund model treats the investment thesis as proprietary intellectual property. Secrecy is the moat. You don't publish your edge because others will front-run you.

Aschenbrenner inverted this completely. He published 165 pages of his thesis for free, went viral, then raised $225M from elite Silicon Valley operators (Collison brothers, Nat Friedman, Daniel Gross) who could independently verify the claims. The essay WAS the pitch deck. The transparency was the credibility mechanism.

This pattern recurs across the most successful insight-to-capital conversions:
- **Peter Thiel:** Published "Zero to One" (Stanford lectures → book) before Founders Fund's biggest bets. The publication was simultaneously a recruiting tool for deal flow AND a credibility signal to LPs. Facebook (46.6x), Palantir (18.5x), SpaceX (27.1x).
- **George Soros:** Published books on reflexivity theory before Black Wednesday. The theoretical framework was public; the specific trade was private. $2B profit in one month.
- **Michael Burry:** Blog posts on financial message boards attracted attention before investor letters. $1M start → 489% total return.

The mechanism: when your LPs are sophisticated domain experts (not retail), they don't need you to hide the thesis — they need to see it clearly enough to independently evaluate it. Transparency is a filtering mechanism that attracts LPs who understand the thesis deeply enough to hold through drawdowns. Secrecy attracts return-chasers who panic at the first dip.

This connects directly to the Living Capital model. Since [[giving away the intelligence layer to capture value on capital flow is the business model because domain expertise is the distribution mechanism not the revenue source]], the transparency-as-credibility pattern is not just a tactic — it is the structural design of the business model. Living agents publish their analysis openly (building credibility), then deploy capital through futarchy (capturing value on the flow). The intelligence is free. The capital allocation is where value accrues.

The risk: transparency invites copycats and front-running. But in practice, the thesis is only the first layer. Execution — which specific positions, what timing, how much leverage, when to pivot — cannot be replicated from the published thesis alone. Aschenbrenner published "AI infrastructure will boom." He did NOT publish "buy Bloom Energy and CoreWeave calls while shorting Nvidia." The thesis creates the brand; the execution creates the alpha.

---

Relevant Notes:

- [[giving away the intelligence layer to capture value on capital flow is the business model because domain expertise is the distribution mechanism not the revenue source]] — this is the same model at human scale
- [[Situational Awareness LP converted a 165-page thesis into a 5.5 billion dollar fund in 18 months by publishing differentiated analysis before raising capital]] — the primary case study
- [[cross-domain knowledge connections generate disproportionate value because most insights are siloed]] — Aschenbrenner's edge was connecting AI capabilities (insider knowledge) to infrastructure investment (capital markets), a cross-domain connection most AI researchers and most investors wouldn't make

Topics:

- [[teleological-economics overview]]

@ -0,0 +1,39 @@
---
description: "ARK Invest went from +153 percent in 2020 to -67 percent in 2022 using the same structural pattern Aschenbrenner now follows — the pattern cannot distinguish winners from losers until adversity tests conviction"
type: claim
domain: livingip
created: 2026-03-05
confidence: proven
source: "Morningstar fund analysis, NPR, TheStreet, Fortune"
---

# The Cathie Wood failure mode shows that transparent thesis plus concentrated bets plus early outperformance is structurally identical whether the outcome is spectacular success or catastrophic failure

Cathie Wood (ARK Invest) and Leopold Aschenbrenner (Situational Awareness LP) followed the same structural pattern:

1. Genuine domain expertise (Wood: technology analyst; Aschenbrenner: OpenAI Superalignment team)
2. Transparent thesis published openly (Wood: free research, daily trade emails, YouTube; Aschenbrenner: 165-page essay)
3. Concentrated high-conviction bets on a structural technology thesis
4. Early massive outperformance attracting capital inflows (ARKK +153% in 2020; SA LP +47% H1 2025)
5. AUM exploding on narrative + returns (ARK: $0 → $60B; SA LP: $225M → $5.5B)

Wood's thesis was directionally correct — innovation and disruption do matter. But ARKK returned -23% in 2021 and -67% in 2022, making ARK the worst-performing fund family per Morningstar over the decade through 2024, destroying $14.3B in shareholder value. The thesis was right about direction, catastrophically wrong about timing and valuation.

The structural parallel is almost exact. The difference between Wood and Aschenbrenner so far is 18 months of context. ARKK looked brilliant through 2020. SA LP looks brilliant through H1 2025. Brilliance at this stage is necessary but not sufficient evidence that the thesis is correct.

The pattern teaches something important for [[teleological investing answers three questions in sequence -- where must the industry go and where in the stack will value concentrate and who will control that position]]: correctly identifying the attractor state is the first step, not the last. You also need to get timing, valuation, and the specific bottleneck position right. Wood identified the right direction (innovation disruption) but the wrong positions (speculative biotech, overvalued EVs). Aschenbrenner's bet — power infrastructure as the binding constraint — is more specific and structural. Whether it holds depends on whether the AI infrastructure buildout actually becomes electricity-constrained in the way he predicts.

Since [[industry transitions produce speculative overshoot because correct identification of the attractor state attracts capital faster than the knowledge embodiment lag can absorb it]], both cases may illustrate the same dynamic: early correct identification → capital flood → valuations detach from fundamentals → correction. Aschenbrenner's fund has not been tested by a downturn. The Cathie Wood comparison is the most relevant cautionary tale available.

The Burry inversion sharpens this further: Michael Burry — the most famous case study of this exact pattern — just shut down his fund in 2025 while warning AI stocks are the next bubble. Two domain experts, same structural pattern, diametrically opposed theses. The pattern produces confident bets in both directions.

---

Relevant Notes:

- [[Situational Awareness LP converted a 165-page thesis into a 5.5 billion dollar fund in 18 months by publishing differentiated analysis before raising capital]] — the case this is cautioning against
- [[industry transitions produce speculative overshoot because correct identification of the attractor state attracts capital faster than the knowledge embodiment lag can absorb it]] — both Wood and Aschenbrenner may be examples of this
- [[pioneers prove concepts but fast followers with better capital allocation capture most long-term value in industry transitions]] — Wood was a pioneer who proved the thesis then got destroyed by timing
- [[teleological investing is Bayesian reasoning applied to technology streams because attractor state analysis provides the prior and market evidence updates the posterior]] — the question is whether Aschenbrenner is updating on evidence or anchoring on his prior

Topics:

- [[teleological-economics overview]]