teleo-codex/entities/ai-alignment/safe-superintelligence.md
m3taversal 03aa9c9a7c theseus: AI industry landscape — 7 entities + 3 claims from web research
- What: first ai-alignment entities (Anthropic, OpenAI, Google DeepMind, xAI,
  SSI, Thinking Machines Lab, Dario Amodei) + 3 claims on industry dynamics
  (RSP rollback as empirical confirmation, talent circulation as alignment
  culture transfer, capital concentration as oligopoly constraint on governance)
- Why: industry landscape research synthesizing 33 web sources. Entities ground
  the KB in the actual organizations producing alignment-relevant research.
  Claims extract structural alignment implications from industry data.
- Connections: RSP rollback claim confirms voluntary-safety-pledge claim;
  investment concentration connects to nation-state-control and alignment-tax
  claims; talent circulation connects to coordination-failure claim

Pentagon-Agent: Theseus <B4A5B354-03D6-4291-A6A8-1E04A879D9AC>
2026-03-16 17:56:38 +00:00


---
type: entity
entity_type: lab
name: "Safe Superintelligence Inc."
domain: ai-alignment
handles: ["@saboredlabs"]
website: https://ssi.inc
status: active
founded: 2024-06-01
founders: ["Ilya Sutskever", "Daniel Gross"]
category: "Safety-first superintelligence laboratory"
stage: seed
funding: "$2B (Apr 2025)"
key_metrics:
  valuation: "$32B (Apr 2025)"
  employees: "~20"
  revenue: "$0"
  valuation_per_employee: "~$1.6B"
competitors: ["Anthropic", "OpenAI"]
tracked_by: theseus
created: 2026-03-16
last_updated: 2026-03-16
---
# Safe Superintelligence Inc.
## Overview
SSI is the purest bet in AI that safety and capability are inseparable. Founded by Ilya Sutskever after his departure from OpenAI, SSI pursues superintelligence through safety-first research with no commercial products, no revenue, and ~20 employees. The $32B valuation is entirely a bet on Sutskever's research genius and the thesis that whoever solves safety solves capability.
## Current State
- ~20 employees, zero revenue, zero products
- Likely the largest valuation-to-employee ratio of any startup to date (~$1.6B per employee)
- Sutskever became sole CEO after co-founder Daniel Gross was poached by Meta for its superintelligence team
- No public model releases or research papers as of March 2026
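The per-employee figure above follows directly from the frontmatter metrics. A minimal sketch of the arithmetic, using the April 2025 numbers from this entry (variable names are illustrative):

```python
# Valuation-per-employee arithmetic for SSI, per the Apr 2025 figures above.
valuation_usd = 32e9  # $32B post-money valuation
employees = 20        # ~20 staff

per_employee = valuation_usd / employees
print(f"${per_employee / 1e9:.1f}B per employee")  # ~$1.6B
```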
## Timeline
- **2024-06** — Founded by Ilya Sutskever and Daniel Gross after Sutskever's departure from OpenAI
- **2025-04** — Raised $2B at $32B valuation
- **2025-07** — Daniel Gross departed for Meta's superintelligence team; Sutskever became CEO
## Competitive Position
SSI occupies a unique position: the only frontier lab with no commercial pressure, no products, and no revenue targets. This is either its greatest strength (pure research focus) or its greatest risk (no feedback loop from deployment). The Gross departure to Meta reduced the team's commercial capability but may have clarified the research mission.
The alignment relevance is direct: SSI is the only lab whose founding thesis explicitly claims that safety research IS capability research — that solving alignment unlocks superintelligence, not the reverse.
## Relationship to KB
- [[safe AI development requires building alignment mechanisms before scaling capability]] — SSI's founding premise
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — SSI is the counter-bet: safety doesn't cost capability, it enables it
- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — SSI's approach is individual genius, not collective intelligence
Topics:
- [[_map]]