teleo-codex/inbox/leopold-aschenbrenner-situational-awareness-research.md

Leopold Aschenbrenner & Situational Awareness — Research Dump

Source type: research
Research prompt: "Research situational awareness and Leopold Aschenbrenner for our Ars Contexta knowledge base — great case study for differentiated insight being converted into market-beating returns"
Generated: 2026-03-05

Person: Leopold Aschenbrenner

Born ~2001-2002, Germany. Graduated Columbia valedictorian at 19 (2021), double major economics + math-statistics. Co-founded Columbia EA chapter. Worked at Oxford Global Priorities Institute, then FTX Future Fund (collapsed Nov 2022). Joined OpenAI Superalignment team 2023 under Sutskever/Leike. Co-authored "Weak to Strong Generalization" (ICML 2024).

Fired April 2024 after sharing an internal security memo with board members warning about CCP espionage risk. HR reportedly told him that worrying about CCP espionage was "racist and unconstructive." He was told the firing was because of the security memo. The Superalignment team dissolved one month later (Sutskever and Leike departed).

Published "Situational Awareness: The Decade Ahead" June 2024 (165 pages, situational-awareness.ai). Founded Situational Awareness LP September 2024.

Carl Shulman (45, AI forecaster, ex-Clarium Capital/Thiel) joined as Director of Research.

The Essay: Core Framework

The OOM (Orders of Magnitude) framework:

  • GPT-2 to GPT-4 = ~5-6 OOMs effective compute over 4 years (preschooler → smart high-schooler)
  • Three drivers: raw compute (~0.5 OOM/yr), algorithmic efficiency (~0.5 OOM/yr), unhobbling (chatbot → agent)
  • Forward projection: another 3-6 OOMs by 2027 = another preschooler-to-high-schooler jump on top of GPT-4
  • "AGI by 2027 is strikingly plausible"
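
The forward projection above is straight multiplicative arithmetic: OOMs add per year, effective compute multiplies. A minimal sketch in Python (function name and the exact trend split are illustrative, taken from the ~0.5 + ~0.5 OOM/yr figures in the bullets; unhobbling gains are left out):

```python
def effective_compute_gain(years: float, ooms_per_year: float) -> float:
    """Multiplier in effective compute from a sustained OOM-per-year pace."""
    return 10 ** (ooms_per_year * years)

# Trend drivers from the bullets: ~0.5 OOM/yr raw compute + ~0.5 OOM/yr
# algorithmic efficiency = ~1 OOM/yr, before counting unhobbling.
trend = 0.5 + 0.5
print(effective_compute_gain(4, trend))  # 4 years at trend -> 10,000x
```

At the essay's full 3-6 OOM range the multiplier spans 1,000x to 1,000,000x, which is why the projection is framed as "another preschooler-to-high-schooler jump" rather than an incremental improvement.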

Intelligence explosion: Once AGI exists → hundreds of millions of AI researcher copies → compress decade of progress into <1 year → superintelligence by ~2030.

Trillion-dollar cluster projection:

  • 2024: 100MW, billions, 100K H100 equiv
  • 2026: 1GW, tens of billions, 1M H100 equiv
  • 2028: 10GW, hundreds of billions, 10M H100 equiv
  • 2030: 100GW, ~$1T, 100M H100 equiv (>20% US electricity)
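
The power and H100-equivalent columns of the table follow a simple 10x-every-two-years rule; a sketch reproducing them (Python; the dollar column grows less cleanly and is omitted):

```python
base_year, base_mw, base_h100 = 2024, 100, 100_000  # 100 MW, 100K H100-equiv

for step in range(4):
    year = base_year + 2 * step
    mw = base_mw * 10 ** step      # 10x power every two years
    h100 = base_h100 * 10 ** step  # 10x H100-equivalents every two years
    print(f"{year}: {mw:,} MW, {h100:,} H100-equivalents")
```

Three doublings of the step (2024 → 2030) yield the final row: 100,000 MW = 100 GW and 100M H100-equivalents.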

National security: Current lab security catastrophically inadequate. AGI = decisive military advantage comparable to nuclear weapons. US-China race framing. Calls for Manhattan Project-level government AGI effort.

The Fund: Situational Awareness LP

Growth: $225M (Q4 2024) → $383M (early 2025) → $1.5B (mid-2025) → $5.52B (Q4 2025)
Returns: 47% after fees H1 2025 vs 6% S&P 500
LPs: Collison brothers (Stripe), Nat Friedman (GitHub), Daniel Gross, family offices, endowments
Aschenbrenner invested "almost all of his net worth"

Q4 2025 portfolio (29 positions, $5.52B):

  • Bloom Energy (largest holding, +$911M added Q4) — fuel cell power
  • CoreWeave call options (+$651M, 672% increase) — AI cloud
  • Core Scientific (9.4% stake, 28.7M shares) — BTC miner → AI hosting
  • Bitcoin miners pivoting to AI: IREN, Cipher Mining, Riot, Hut 8, Bitdeer
  • Power: Vistra, EQT, Solaris Energy
  • Optical: Lumentum (+$479M), Coherent, Tower Semi
  • Contrarian: Long Intel calls, SHORT Nvidia/Broadcom/TSMC via puts

The pivot: Exited Nvidia and Broadcom (obvious AI plays) in Q4 2025, went all-in on physical infrastructure layer. Thesis sharpened from "AI will be huge" to "the bottleneck is electricity and physical compute, not algorithms."

Comparable Case Studies

Michael Burry (Scion Capital): Self-taught subprime analysis → $1M start → 489% total return (2000-2008). Held through 2 years of the trade going against him. Recently shut down fund (2025) warning AI stocks are the next bubble — diametrically opposed to Aschenbrenner.

Cathie Wood (ARK Invest): Innovation thesis → radical transparency → $60B AUM peak. ARKK +153% in 2020, then -23% (2021), -67% (2022). Lost $14.3B in shareholder value. Structurally identical pattern to Aschenbrenner: transparent thesis, concentrated bets, early outperformance attracting massive inflows. Directionally right (innovation matters), catastrophically wrong on timing and valuation.

George Soros (Black Wednesday 1992): Structural analysis of ERM unsustainability → $10B position → $2B profit in one month. The insight was understanding a structural instability everyone could see but nobody would bet against at scale.

Peter Thiel (Founders Fund): Operator experience → contrarian framework ("Zero to One") → early-stage bets. Facebook 46.6x, Palantir 18.5x, SpaceX 27.1x. Publication was recruiting tool for deal flow.

Track Record (1-year retrospective, LessWrong June 2025)

Validated: AI energy consumption at ~3% of the grid in 2025 (as predicted). Global AI investment doubling annually (on track). Chip production following forecasts. Overall ~0.5 OOM/year pace "roughly supported."

Complicated: Scaling wall debate (GPT-5 struggles). Paradigm shift to test-time compute (not predicted but arguably validates broader thesis). DeepSeek R1 challenged geopolitical thesis (China matching US capabilities despite export controls, prioritizing algorithmic efficiency over raw compute).

Untested: AGI by 2027 (12-18 months away by his clock). Intelligence explosion. Government mobilization at Manhattan Project scale.

Key Tensions

  1. Alpha vs beta: Is 47% return differentiated insight or leveraged long on the hottest sector in a decade?
  2. Cathie Wood parallel: Structurally identical pattern — transparent thesis + concentrated bets + early outperformance. ARK destroyed $14.3B.
  3. Motivational ambiguity: Thesis about existential AI risk is also the fund's sales pitch. Where does safety concern end and marketing begin?
  4. Burry inversion: Two domain experts, same structural pattern, diametrically opposed theses. One is wrong.
  5. One year proves nothing: Burry held through 2 years of pain. Cathie Wood "proved" by 2020 returns then destroyed by 2022. Aschenbrenner hasn't been tested by adversity.