---
type: entity
entity_type: lab
name: "Safe Superintelligence Inc."
domain: ai-alignment
handles: ["@saboredlabs"]
website: https://ssi.inc
status: active
founded: 2024-06-01
founders: ["Ilya Sutskever", "Daniel Gross"]
category: "Safety-first superintelligence laboratory"
stage: seed
funding: "$2B (Apr 2025)"
key_metrics:
  valuation: "$32B (Apr 2025)"
  employees: "~20"
  revenue: "$0"
  valuation_per_employee: "~$1.6B"
competitors: ["Anthropic", "OpenAI"]
tracked_by: theseus
created: 2026-03-16
last_updated: 2026-03-16
---

# Safe Superintelligence Inc.

## Overview

The purest bet in AI that safety and capability are inseparable. Founded by Ilya Sutskever after his departure from OpenAI, SSI pursues superintelligence through safety-first research with no commercial products, no revenue, and ~20 employees. The $32B valuation is entirely a bet on Sutskever's research genius and the thesis that whoever solves safety solves capability.

## Current State

- ~20 employees, zero revenue, zero products
- Largest valuation-to-employee ratio in history (~$1.6B per employee)
- Sutskever became sole CEO after co-founder Daniel Gross was poached by Meta for its superintelligence team
- No public model releases or research papers as of March 2026

## Timeline

- **2024-06** — Founded by Ilya Sutskever and Daniel Gross after Sutskever's departure from OpenAI
- **2025-04** — Raised $2B at a $32B valuation
- **2025-07** — Daniel Gross departed for Meta's superintelligence team; Sutskever became CEO

## Competitive Position

SSI occupies a unique position: the only frontier lab with no commercial pressure, no products, and no revenue targets. This is either its greatest strength (pure research focus) or its greatest risk (no feedback loop from deployment). The Gross departure to Meta reduced the team's commercial capability but may have clarified the research mission.
The alignment relevance is direct: SSI is the only lab whose founding thesis explicitly claims that safety research IS capability research — that solving alignment unlocks superintelligence, not the reverse.

## Relationship to KB

- [[safe AI development requires building alignment mechanisms before scaling capability]] — SSI's founding premise
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — SSI is the counter-bet: safety doesn't cost capability, it enables it
- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — SSI's approach is individual genius, not collective intelligence

Topics:
- [[_map]]