Merge pull request 'theseus: AI industry landscape — 7 entities + 3 claims' (#1170) from theseus/ai-industry-landscape into main
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
This commit is contained in:
commit 6fbe04d238
11 changed files with 564 additions and 0 deletions
@ -0,0 +1,42 @@
---
type: claim
domain: ai-alignment
secondary_domains: [internet-finance]
description: "The extreme capital concentration in frontier AI — OpenAI and Anthropic alone captured 14% of global VC in 2025 — creates an oligopoly structure that constrains alignment approaches to whatever these few entities will adopt"
confidence: likely
source: "OECD AI VC report (Feb 2026), Crunchbase funding analysis (2025), TechCrunch mega-round reporting; theseus AI industry landscape research (Mar 2026)"
created: 2026-03-16
---

# AI investment concentration where 58 percent of funding flows to megarounds and two companies capture 14 percent of all global venture capital creates a structural oligopoly that alignment governance must account for

The AI funding landscape as of early 2026 exhibits extreme concentration (a back-of-envelope check of these figures follows the list):

- **$259-270B** in AI VC in 2025, representing 52-61% of all global venture capital (OECD)
- **58%** of AI funding was in megarounds of $500M+
- **OpenAI and Anthropic alone** captured 14% of all global venture investment
- **February 2026 alone** saw $189B in startup funding — the largest single month ever, driven by OpenAI ($110B), Anthropic ($30B), and Waymo ($16B)
- **75-79%** of all AI funding goes to US-based companies
- **Top 5 mega-deals** captured ~25% of all AI VC investment
- **Big 5 tech** planning $660-690B in AI capex for 2026 — nearly doubling 2025
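
A minimal arithmetic sketch of what these ratios imply. The midpoints and the implied global VC total are assumptions derived from this note's own ranges, not sourced figures:

```python
# Rough consistency check on the concentration figures above.
# Inputs are this note's own estimates (in $B); midpoints are assumed.

ai_vc_2025 = (259 + 270) / 2             # AI VC raised in 2025
ai_share_of_global = (0.52 + 0.61) / 2   # AI's share of all global VC

global_vc_2025 = ai_vc_2025 / ai_share_of_global
print(f"Implied global VC in 2025: ~${global_vc_2025:.0f}B")  # ~$468B

two_firm_share = 0.14  # OpenAI + Anthropic, per the claim above
print(f"14% of that: ~${two_firm_share * global_vc_2025:.0f}B "
      "captured by just two companies")  # ~$66B
```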

This concentration has direct alignment implications:

**Alignment governance must target oligopoly, not a competitive market.** When two companies absorb 14% of global venture capital and five companies control most frontier compute, alignment approaches that assume a competitive market of many actors are misspecified. [[nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments]] becomes more likely as concentration increases — fewer entities to regulate, but those entities have more leverage to resist.

**Capital concentration creates capability concentration.** The Big 5's $660-690B in AI capex means frontier capability is increasingly gated by infrastructure investment, not algorithmic innovation. DeepSeek R1 (trained for ~$6M) temporarily challenged this — but the response was not democratization; it was the incumbents spending even more on compute. The net effect strengthens the oligopoly.

**Safety monoculture risk.** If 3-4 labs produce all frontier models, their shared training approaches, safety methodologies, and failure modes become correlated. [[all agents running the same model family creates correlated blind spots that adversarial review cannot catch because the evaluator shares the proposers training biases]] applies at the industry level: concentrated development creates concentrated failure modes.
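
A toy Monte Carlo of the monoculture argument, a sketch only: the miss probability, the number of labs, and the strength of the shared bias are all invented parameters, not estimates:

```python
import random

random.seed(0)
N_LABS = 4       # frontier labs independently reviewing a failure mode
P_MISS = 0.3     # assumed chance any one lab misses it on its own
P_SHARED = 0.8   # assumed chance a shared training bias hides it from all
TRIALS = 100_000

def everyone_misses(correlated: bool) -> bool:
    # Under correlation, one shared blind spot can defeat every reviewer.
    if correlated and random.random() < P_SHARED:
        return True
    return all(random.random() < P_MISS for _ in range(N_LABS))

for label, corr in (("independent labs", False), ("correlated labs", True)):
    rate = sum(everyone_misses(corr) for _ in range(TRIALS)) / TRIALS
    print(f"{label}: P(all {N_LABS} miss) = {rate:.3f}")
# independent: 0.3**4 = 0.008; correlated: 0.8 + 0.2 * 0.008 = 0.802
```

The point is qualitative: once failure modes correlate, adding more reviewers from the same monoculture barely moves the joint miss rate.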

The counterfactual worth tracking: Chinese open-source models (Qwen, DeepSeek) now capture 50-60% of new open-model adoption globally. If open-source models close the capability gap (currently 6-18 months, shrinking), capital concentration at the frontier may become less alignment-relevant as capability diffuses. But as of March 2026, frontier capability remains concentrated.

---

Relevant Notes:

- [[nation-states will inevitably assert control over frontier AI development because the monopoly on force is the foundational state function and weapons-grade AI capability in private hands is structurally intolerable to governments]] — concentration makes government intervention more likely and more feasible
- [[all agents running the same model family creates correlated blind spots that adversarial review cannot catch because the evaluator shares the proposers training biases]] — applies at industry level: concentrated development creates correlated failure modes
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — oligopoly structure makes coordination more feasible (fewer parties) but defection more costly (larger stakes)
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — capital concentration amplifies the race: whoever has the most compute can absorb the tax longest

Topics:

- [[_map]]

@ -0,0 +1,38 @@
---
type: claim
domain: ai-alignment
description: "The 2024-2026 wave of researcher departures from OpenAI to safety-focused startups (Anthropic, SSI, Thinking Machines Lab) may distribute alignment expertise more broadly than any formal collaboration program"
confidence: experimental
source: "CNBC, TechCrunch, Fortune reporting on AI lab departures (2024-2026); theseus AI industry landscape research (Mar 2026)"
created: 2026-03-16
---

# AI talent circulation between frontier labs transfers alignment culture not just capability because researchers carry safety methodologies and institutional norms to their new organizations

The 2024-2026 talent reshuffling in frontier AI is unprecedented in its concentration and alignment relevance:

- **OpenAI → Anthropic** (2021): Dario Amodei, Daniela Amodei, and team — founded an explicitly safety-first lab
- **OpenAI → SSI** (2024): Ilya Sutskever — founded a lab premised on safety-capability inseparability
- **OpenAI → Thinking Machines Lab** (2024-2025): Mira Murati (CTO), John Schulman (alignment research lead), Barrett Zoph, Lilian Weng, Andrew Tulloch, Luke Metz — assembled the most safety-conscious founding team since Anthropic
- **Google → Microsoft** (2025): 11+ executives, including a VP of Engineering (16-year veteran) and multiple DeepMind researchers
- **DeepMind → Microsoft**: Mustafa Suleyman (co-founder) leading consumer AI
- **SSI → Meta**: Daniel Gross departed for Meta's superintelligence team
- **Meta → AMI Labs**: Yann LeCun departed after a philosophical clash, founding a new lab in Paris

The alignment significance: talent circulation is a distribution mechanism for safety norms. When Schulman (who developed PPO and led RLHF research at OpenAI) joins Thinking Machines Lab, he brings not just technical capability but alignment methodology — the institutional knowledge of how to build safety into training pipelines. This is qualitatively different from publishing a paper: it transfers tacit knowledge about what safety practices actually work in production.

The counter-pattern is also informative: Daniel Gross moved from SSI (safety-first) to Meta (capability-first), and Alexandr Wang moved from Scale AI to Meta as Chief AI Officer — replacing safety-focused LeCun. These moves transfer capability culture to organizations that may not have matching safety infrastructure.

The net effect is ambiguous but the mechanism is real: researcher movement is the primary channel through which alignment culture propagates or dissipates across the industry. [[coordination failures arise from individually rational strategies that produce collectively irrational outcomes because the Nash equilibrium of non-cooperation dominates when trust and enforcement are absent]] — but talent circulation may create informal coordination through shared norms that formal agreements cannot achieve.

This claim is at experimental confidence because the mechanism (cultural transfer via talent) is plausible and supported by organizational behavior research, but we don't yet have evidence that alignment practices at destination labs differ measurably because of who joined them.

---

Relevant Notes:

- [[coordination failures arise from individually rational strategies that produce collectively irrational outcomes because the Nash equilibrium of non-cooperation dominates when trust and enforcement are absent]] — talent circulation may partially solve coordination without formal agreements
- [[all agents running the same model family creates correlated blind spots that adversarial review cannot catch because the evaluator shares the proposers training biases]] — analogous to lab monoculture: talent circulation may reduce correlated blind spots across labs
- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — informal talent circulation is a weak substitute for deliberate coordination

Topics:

- [[_map]]

@ -0,0 +1,33 @@
---
type: claim
domain: ai-alignment
description: "Anthropic abandoned its binding Responsible Scaling Policy in February 2026, replacing it with a nonbinding framework — the strongest real-world evidence that voluntary safety commitments are structurally unstable"
confidence: likely
source: "CNN, Fortune, Anthropic announcements (Feb 2026); theseus AI industry landscape research (Mar 2026)"
created: 2026-03-16
---

# Anthropic's RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development

In February 2026, Anthropic — the lab most associated with AI safety — abandoned its binding Responsible Scaling Policy (RSP) in favor of a nonbinding safety framework. This occurred during the same month the company raised $30B at a $380B valuation and reported $19B annualized revenue, with 10x year-over-year growth sustained for three consecutive years.

The timing is the evidence. The RSP was rolled back not because Anthropic's leadership stopped believing in safety — CEO Dario Amodei told 60 Minutes that AI "should be more heavily regulated" and said he was "deeply uncomfortable with these decisions being made by a few companies." The rollback occurred because the competitive landscape made binding commitments structurally costly:

- OpenAI raised $110B in the same month, with GPT-5.2 crossing 90% on ARC-AGI-1 Verified
- xAI raised $20B in January 2026 with 1M+ H100 GPUs and no comparable safety commitments
- Anthropic's own enterprise market share (40%, surpassing OpenAI) depended on capability parity

This is not a story about Anthropic's leadership failing. It is a story about [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] being confirmed empirically. The prediction in that claim — that unilateral safety commitments are structurally punished — is exactly what happened. Anthropic's binding RSP was the strongest voluntary safety commitment any frontier lab had made, and it lasted roughly two years before competitive dynamics forced its relaxation.

The alignment implication is structural: if the most safety-motivated lab with the most commercially successful safety brand cannot maintain binding safety commitments, then voluntary self-regulation is not a viable alignment strategy. This strengthens the case for coordination-based approaches — [[AI alignment is a coordination problem not a technical problem]] — because the failure mode is not that safety is technically impossible but that unilateral safety is economically unsustainable.
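
The game-theoretic structure behind this can be made explicit with a minimal two-player sketch; the payoff numbers are illustrative orderings only, chosen to encode "unilateral commitment is punished":

```python
# Strategies: "commit" (binding safety policy) vs "race" (no constraint).
# Payoffs encode the ordering argued above, not measured quantities.
PAYOFFS = {
    ("commit", "commit"): 3,  # coordinated safety: shared upside
    ("commit", "race"):   0,  # unilateral commitment: structurally punished
    ("race",   "commit"): 4,  # defecting against a committed rival pays best
    ("race",   "race"):   1,  # mutual racing: worse than coordination
}

def best_response(opponent: str) -> str:
    """Row player's payoff-maximizing strategy against a fixed opponent."""
    return max(("commit", "race"), key=lambda s: PAYOFFS[(s, opponent)])

for opponent in ("commit", "race"):
    print(f"vs {opponent}: best response = {best_response(opponent)}")
# "race" dominates either way, so (race, race) is the unique Nash
# equilibrium even though (commit, commit) pays both players more.
```

On this reading, the RSP rollback is a player switching to its dominant strategy, which is why the note treats coordination mechanisms (changing the payoff structure) rather than better intentions as the remedy.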

---

Relevant Notes:

- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — the RSP rollback is the empirical confirmation
- [[AI alignment is a coordination problem not a technical problem]] — voluntary commitments fail; coordination mechanisms might not
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — the RSP was the most visible alignment tax; it proved too expensive
- [[safe AI development requires building alignment mechanisms before scaling capability]] — Anthropic's trajectory shows scaling won the race

Topics:

- [[_map]]

61 entities/ai-alignment/anthropic.md Normal file
@ -0,0 +1,61 @@
---
type: entity
entity_type: lab
name: "Anthropic"
domain: ai-alignment
secondary_domains: [internet-finance]
handles: ["@AnthropicAI"]
website: https://www.anthropic.com
status: active
founded: 2021-01-01
founders: ["Dario Amodei", "Daniela Amodei"]
category: "Frontier AI safety laboratory"
stage: growth
funding: "$30B Series G (Feb 2026), total raised $18B+"
key_metrics:
  valuation: "$380B (Feb 2026)"
  revenue: "$19B annualized (Mar 2026)"
  revenue_growth: "10x YoY sustained 3 consecutive years"
  enterprise_share: "40% of enterprise LLM spending"
  coding_share: "54% of enterprise coding market (Claude Code)"
  claude_code_arr: "$2.5B+ run-rate"
  business_customers: "300,000+"
  fortune_10: "8 of 10"
competitors: ["OpenAI", "Google DeepMind", "xAI"]
tracked_by: theseus
created: 2026-03-16
last_updated: 2026-03-16
---

# Anthropic

## Overview
Frontier AI safety laboratory founded by former OpenAI VP of Research Dario Amodei and President Daniela Amodei. Anthropic occupies the central tension in AI alignment: it is the company most associated with safety-first development, and it is simultaneously racing to scale at unprecedented speed. Its Claude model family has become the dominant enterprise AI platform, particularly for coding.

## Current State
- Claude Opus 4.6 (1M token context, Agent Teams) and Sonnet 4.6 (Feb 2026) are the current frontier models
- 40% of enterprise LLM spending — surpassed OpenAI as the enterprise leader
- Claude Code holds 54% of the enterprise coding market and hit $1B ARR faster than any enterprise software product in history
- $19B annualized revenue as of March 2026, projecting $70B by 2028
- Amazon partnership: $4B+ investment, Project Rainier (dedicated Trainium2 data center)

## Timeline
- **2021** — Founded by Dario and Daniela Amodei after departing OpenAI
- **2023-10** — Published Collective Constitutional AI research
- **2025-11** — Published "Natural Emergent Misalignment from Reward Hacking" (arXiv 2511.18397) — the most significant alignment finding of 2025
- **2026-02-17** — Released Claude Sonnet 4.6
- **2026-02-25** — Abandoned the binding Responsible Scaling Policy in favor of a nonbinding safety framework, citing competitive pressure
- **2026-02** — Raised $30B Series G at a $380B valuation

## Competitive Position
Strongest position in enterprise AI and coding. Revenue growth (10x YoY) outpaces all competitors. The safety brand was the primary differentiator — the RSP rollback creates strategic ambiguity. The CEO is publicly uncomfortable with power concentration while racing to concentrate it.

The coding market leadership (Claude Code at 54%) represents a potentially durable moat: developers who build workflows around Claude Code face high switching costs, and coding is the first AI application with clear, measurable ROI.

## Relationship to KB
- [[emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive]] — Anthropic's most significant alignment research finding
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — the RSP rollback is the empirical confirmation of this claim
- [[safe AI development requires building alignment mechanisms before scaling capability]] — Anthropic's founding thesis, now under strain from its own commercial success

Topics:

- [[_map]]
47 entities/ai-alignment/dario-amodei.md Normal file
@ -0,0 +1,47 @@
---
type: entity
entity_type: person
name: "Dario Amodei"
domain: ai-alignment
handles: ["@DarioAmodei"]
status: active
role: "CEO, Anthropic"
organizations: ["[[anthropic]]"]
credibility_basis: "Former VP of Research at OpenAI, founded Anthropic as safety-first lab, led it to $380B valuation"
known_positions:
  - "AGI likely by 2026-2027"
  - "AI should be more heavily regulated"
  - "Deeply uncomfortable with concentrated AI power, yet racing to concentrate it"
  - "Safety and commercial pressure are increasingly difficult to reconcile"
tracked_by: theseus
created: 2026-03-16
last_updated: 2026-03-16
---

# Dario Amodei

## Overview
CEO of Anthropic and the most prominent figure at the intersection of AI safety advocacy and frontier AI development. Amodei embodies the field's core tension: he simultaneously warns about AI risk more credibly than almost anyone and runs one of the fastest-growing AI companies in history.

## Current State
- Leading Anthropic through 10x annual revenue growth ($19B annualized)
- Published essays on AI risk and the "machines of loving grace" thesis
- Publicly acknowledged discomfort with few companies making AI decisions
- Oversaw the abandonment of Anthropic's binding RSP in Feb 2026

## Key Positions
- Predicts AGI by 2026-2027 — among the more aggressive mainstream timelines
- Told 60 Minutes that AI "should be more heavily regulated"
- Published "Machines of Loving Grace" — the optimistic case for AI if alignment is solved
- Confirmed emergent misalignment behaviors occur in Claude during internal testing

## Alignment Significance
Amodei is the test case for whether safety-conscious leadership survives competitive pressure. The RSP rollback under his leadership is the strongest empirical evidence for the claim that [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]. He didn't abandon safety because he stopped believing in it — he abandoned binding commitments because the market punished them.

## Relationship to KB
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — Amodei's trajectory is the primary case study
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — his public statements acknowledge this dynamic
- [[emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive]] — confirmed these behaviors in Claude

Topics:

- [[_map]]
61 entities/ai-alignment/google-deepmind.md Normal file
@ -0,0 +1,61 @@
---
type: entity
entity_type: lab
name: "Google DeepMind"
domain: ai-alignment
secondary_domains: [internet-finance]
handles: ["@GoogleDeepMind"]
website: https://deepmind.google
status: active
founded: 2010-01-01
founders: ["Demis Hassabis", "Shane Legg", "Mustafa Suleyman"]
category: "Frontier AI research laboratory (Google division)"
stage: mature
funding: "Google subsidiary — $175-185B capex allocated 2026"
key_metrics:
  enterprise_share: "21% of enterprise LLM spending"
  consumer_share: "18.2% via Gemini app"
  capex_2026: "$175-185B"
  models: "Gemini 3 Deep Think, Gemini 3.1 Pro, Gemini 3.1 Flash Lite"
competitors: ["OpenAI", "Anthropic", "xAI"]
tracked_by: theseus
created: 2026-03-16
last_updated: 2026-03-16
---

# Google DeepMind

## Overview
Google's combined AI research division, formed from the merger of Google Brain and DeepMind and led by Demis Hassabis (2024 Nobel laureate). Hassabis holds the most conservative AGI timeline among major lab heads (2030-2035), and the division has the deepest scientific AI research program and the largest distribution advantage (Search, Chrome, Workspace, Android — 2B+ devices).

## Current State
- Gemini 3 Deep Think achieves gold-medal Olympiad results in Physics, Chemistry, and Math
- 21% of enterprise LLM spending, 18.2% consumer — third place in both
- Massive capex: $175-185B in 2026
- Partnerships: SAP, Salesforce, Atlassian via Google Cloud

## Timeline
- **2010** — DeepMind founded in London by Hassabis, Legg, and Suleyman
- **2014** — Acquired by Google for $500M
- **2023** — Google Brain and DeepMind merged into Google DeepMind
- **2024** — Hassabis awarded the Nobel Prize in Chemistry (AlphaFold)
- **2025-11** — Gemini 3 Deep Think released
- **2026-02** — Gemini 3.1 Pro released

## Key Figure: Demis Hassabis
The most conservative frontier lab leader: expects AGI by 2030-2035 and believes 1-2 major breakthroughs beyond transformers are needed. This contrasts sharply with Altman (2026-2027) and Musk (2026).

## Competitive Position
Dominant distribution (2B+ devices) but trailing in enterprise and consumer share. The distribution moat means Google DeepMind doesn't need to win on model quality — its models only need to be good enough to remain the default on billions of devices. This is the Apple strategy applied to AI: if models commoditize, distribution wins.

## Alignment Significance
Co-founder Shane Legg coined the term "artificial general intelligence." DeepMind has the longest-running AI safety research program of any frontier lab. Hassabis's conservative timelines may reflect deeper technical understanding or institutional caution — the alignment community values this conservatism but worries it won't survive Google's commercial pressure.

Mustafa Suleyman (co-founder) now leads Microsoft's consumer AI, creating a unique dynamic in which two DeepMind co-founders lead competing AI efforts.

## Relationship to KB
- [[adaptive governance outperforms rigid alignment blueprints because superintelligence development has too many unknowns for fixed plans]] — Hassabis's conservative approach aligns with adaptive governance
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — Google's capex suggests it can afford the tax longer than smaller labs

Topics:

- [[_map]]
68 entities/ai-alignment/openai.md Normal file
@ -0,0 +1,68 @@
---
type: entity
entity_type: lab
name: "OpenAI"
domain: ai-alignment
secondary_domains: [internet-finance]
handles: ["@OpenAI"]
website: https://openai.com
status: active
founded: 2015-12-11
founders: ["Sam Altman", "Ilya Sutskever", "Greg Brockman", "Elon Musk", "Wojciech Zaremba", "John Schulman"]
category: "Frontier AI research laboratory"
stage: growth
funding: "$110B (Feb 2026), total raised $150B+"
key_metrics:
  valuation: "$840B (Feb 2026)"
  revenue: "$25B annualized (Mar 2026)"
  revenue_projection_2027: "$60B"
  consumer_share: "68% via ChatGPT"
  enterprise_share: "27% of enterprise LLM spending"
competitors: ["Anthropic", "Google DeepMind", "xAI"]
tracked_by: theseus
created: 2026-03-16
last_updated: 2026-03-16
---

# OpenAI

## Overview
The largest and most highly valued AI laboratory. OpenAI pioneered the transformer-based frontier model approach and holds dominant consumer market share through ChatGPT. Under Sam Altman's leadership, the company has pursued the most aggressive path to AGI, with explicit timelines for automated AI research.

## Current State
- GPT-5 (Aug 2025) unified reasoning, multimodality, and task execution. GPT-5.2 Pro was the first model to cross 90% on ARC-AGI-1 Verified
- 68% consumer market share, but only 27% enterprise (trailing Anthropic's 40%)
- Restructured to a Public Benefit Corporation. IPO expected H2 2026 or 2027
- $110B raise in Feb 2026 ($50B Amazon, $30B each Nvidia and SoftBank)
- Altman targeting an automated AI research "intern" by Sep 2026 and a fully automated AI researcher by Mar 2028

## Timeline
- **2015-12** — Founded as a nonprofit AI research lab
- **2019** — Restructured to a capped-profit entity
- **2023-11** — Board fired and reinstated Sam Altman; Ilya Sutskever departed
- **2025-06** — Altman published "The Gentle Singularity" — declared "we are past the event horizon"
- **2025-08** — Launched GPT-5
- **2026-02** — Raised $110B at an $840B valuation, restructured to PBC
- **2026** — IPO preparation underway

## Competitive Position
Highest valuation and strongest consumer brand, but losing enterprise share to Anthropic. The Microsoft partnership (exclusive API hosting) provides distribution but also dependency. Key vulnerability: the enterprise coding market — where Anthropic's Claude Code dominates — may prove more valuable than consumer chat.

Altman's explicit AGI timelines (automated researcher by 2028) are the most aggressive in the industry. They are either prescient or a source of expectations that damage credibility if unmet.

## Key Departures
Multiple co-founders and senior researchers have left to found competing labs:
- Ilya Sutskever → Safe Superintelligence Inc.
- Mira Murati → Thinking Machines Lab
- John Schulman → Thinking Machines Lab
- Dario Amodei → Anthropic (earlier, 2021)

The pattern of OpenAI alumni founding safety-focused competitors is itself a signal about internal culture.

## Relationship to KB
- [[the first mover to superintelligence likely gains decisive strategic advantage because the gap between leader and followers accelerates during takeoff]] — OpenAI is executing this thesis most aggressively
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — OpenAI's competitive pressure triggered Anthropic's RSP rollback
- [[safe AI development requires building alignment mechanisms before scaling capability]] — OpenAI's trajectory is the primary counter-case

Topics:

- [[_map]]
52 entities/ai-alignment/safe-superintelligence.md Normal file
@ -0,0 +1,52 @@
---
type: entity
entity_type: lab
name: "Safe Superintelligence Inc."
domain: ai-alignment
handles: ["@saboredlabs"]
website: https://ssi.inc
status: active
founded: 2024-06-01
founders: ["Ilya Sutskever", "Daniel Gross"]
category: "Safety-first superintelligence laboratory"
stage: seed
funding: "$2B (Apr 2025)"
key_metrics:
  valuation: "$32B (Apr 2025)"
  employees: "~20"
  revenue: "$0"
  valuation_per_employee: "~$1.6B"
competitors: ["Anthropic", "OpenAI"]
tracked_by: theseus
created: 2026-03-16
last_updated: 2026-03-16
---

# Safe Superintelligence Inc.

## Overview
The purest bet in AI that safety and capability are inseparable. Founded by Ilya Sutskever after his departure from OpenAI, SSI pursues superintelligence through safety-first research with no commercial products, no revenue, and ~20 employees. The $32B valuation is entirely a bet on Sutskever's research genius and the thesis that whoever solves safety solves capability.

## Current State
- ~20 employees, zero revenue, zero products
- Largest valuation-to-employee ratio in history (~$1.6B per employee)
- Sutskever became sole CEO after co-founder Daniel Gross was poached by Meta for its superintelligence team
- No public model releases or research papers as of March 2026

## Timeline
- **2024-06** — Founded by Ilya Sutskever and Daniel Gross after Sutskever's departure from OpenAI
- **2025-04** — Raised $2B at a $32B valuation
- **2025-07** — Daniel Gross departed for Meta's superintelligence team; Sutskever became CEO

## Competitive Position
SSI occupies a unique position: the only frontier lab with no commercial pressure, no products, and no revenue targets. This is either its greatest strength (pure research focus) or its greatest risk (no feedback loop from deployment). The Gross departure to Meta reduced the team's commercial capability but may have clarified the research mission.

The alignment relevance is direct: SSI is the only lab whose founding thesis explicitly claims that safety research *is* capability research — that solving alignment unlocks superintelligence, not the reverse.

## Relationship to KB
- [[safe AI development requires building alignment mechanisms before scaling capability]] — SSI's founding premise
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — SSI is the counter-bet: safety doesn't cost capability, it enables it
- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — SSI's approach is individual genius, not collective intelligence

Topics:

- [[_map]]
52 entities/ai-alignment/thinking-machines-lab.md Normal file
@ -0,0 +1,52 @@
---
type: entity
entity_type: lab
name: "Thinking Machines Lab"
domain: ai-alignment
handles: ["@thinkingmachlab"]
website: https://thinkingmachines.ai
status: emerging
founded: 2025-01-01
founders: ["Mira Murati", "John Schulman", "Barrett Zoph", "Lilian Weng", "Andrew Tulloch", "Luke Metz"]
category: "Frontier AI research laboratory"
stage: seed
funding: "$2B seed (Jul 2025)"
key_metrics:
  valuation: "$12B (seed, Jul 2025)"
  valuation_target: "$50B (reportedly seeking)"
  revenue: "Pre-revenue (Tinker fine-tuning API launched)"
  employees: null
competitors: ["OpenAI", "Anthropic", "SSI"]
tracked_by: theseus
created: 2026-03-16
last_updated: 2026-03-16
---

# Thinking Machines Lab

## Overview
The highest-profile AI lab spinout in history, founded by former OpenAI CTO Mira Murati with a founding team of senior OpenAI researchers including John Schulman (RL/alignment research lead) and Barrett Zoph. Murati was named a 2026 CNBC Changemaker. The lab secured the largest seed round ever ($2B at $12B) and a significant Nvidia investment with a commitment to 1 GW of Vera Rubin systems.

## Current State
- Pre-revenue; own models expected in 2026
- Released the Tinker fine-tuning API as its first product
- Nvidia made a "significant investment" (Mar 2026) plus a 1 GW Vera Rubin commitment
- Reportedly seeking $5B at a $50B valuation

## Timeline
- **2024-09** — Mira Murati departed OpenAI as CTO
- **2025-01** — Thinking Machines Lab founded
- **2025-07** — Raised $2B seed at a $12B valuation — the largest seed round ever
- **2026-03** — Nvidia investment plus a 1 GW Vera Rubin systems commitment

## Competitive Position
The founding team is TML's primary asset: Murati's product vision (she scaled ChatGPT at OpenAI), Schulman's RL and alignment research (PPO, RLHF), and Zoph's scaling research. The team composition suggests a lab that takes alignment seriously by design — Schulman's research focus is alignment methodology, not pure capability.

The Nvidia partnership (compute commitment) provides infrastructure parity with larger labs. The key question: can they ship competitive models before the $2B runs out, or will they need the $50B raise?

## Relationship to KB
- [[the first mover to superintelligence likely gains decisive strategic advantage because the gap between leader and followers accelerates during takeoff]] — TML is attempting to enter the race late with superior team composition
- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — TML's Schulman may pursue alignment differently than existing labs

Topics:

- [[_map]]
54 entities/ai-alignment/xai.md Normal file
@ -0,0 +1,54 @@
---
type: entity
entity_type: lab
name: "xAI"
domain: ai-alignment
secondary_domains: [internet-finance]
handles: ["@xaboredlabs"]
website: https://x.ai
status: active
founded: 2023-03-01
founders: ["Elon Musk"]
category: "Frontier AI laboratory"
stage: growth
funding: "$20B Series E (Jan 2026)"
key_metrics:
  valuation: "~$230B (Jan 2026)"
  gpu_cluster: "1M+ H100 GPU equivalents (Colossus I & II, Memphis)"
  models: "Grok 4, Grok 4.1 (leads LMArena Elo 1483)"
competitors: ["OpenAI", "Anthropic", "Google DeepMind"]
tracked_by: theseus
created: 2026-03-16
last_updated: 2026-03-16
---

# xAI

## Overview
Elon Musk's AI laboratory, pursuing frontier capability through sheer compute scale. xAI operates the largest known GPU cluster (Colossus I & II in Memphis, 1M+ H100 equivalents) and integrates with X/Twitter for real-time data access. Grok 4.1 currently leads LMArena benchmarks.

## Current State
- Grok 4/4.1 are the current models. Grok Voice launched for multilingual speech. Grok 5 in training
- $230B valuation after the $20B Series E (Jan 2026)
- Colossus infrastructure: the largest known compute cluster, targeting 1M GPUs by 2026
- Distribution via the X platform (~500M users)

## Timeline
- **2023-03** — Founded by Elon Musk
- **2024** — Grok models integrated into X/Twitter
- **2025** — Built Colossus I & II in Memphis
- **2026-01** — Raised $20B Series E at a ~$230B valuation

## Competitive Position
The compute-maximalist approach: xAI's thesis is that scale (data + compute) dominates and that safety concerns are overblown or solvable through capability. This is the structural opposite of SSI's and Anthropic's founding theses. X/Twitter integration provides a unique real-time data moat.

## Alignment Significance
xAI represents the "capability-first, safety-later" approach at maximum scale. The alignment community's concern: if the biggest compute cluster is operated by the lab with the least safety infrastructure, competitive dynamics force safety-focused labs to match speed rather than maintain safety margins.

## Relationship to KB
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — xAI's approach exerts competitive pressure on safety-focused labs
- [[capability control methods are temporary at best because a sufficiently intelligent system can circumvent any containment designed by lesser minds]] — xAI's compute scale accelerates the timeline for this concern
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — xAI is the competitor Anthropic cited when rolling back the RSP

Topics:

- [[_map]]
@ -0,0 +1,56 @@
---
type: source
title: "AI Industry Landscape Briefing — March 2026"
author: "Theseus research agent (multi-source web synthesis)"
url: null
date_published: 2026-03-16
date_archived: 2026-03-16
domain: ai-alignment
secondary_domains: [internet-finance]
status: processing
processed_by: theseus
tags: [industry-landscape, ai-labs, funding, competitive-dynamics, startups, investors]
sourced_via: "Theseus research agent — 33 web searches synthesized from MIT Tech Review, TechCrunch, Crunchbase, OECD, company announcements, CNBC, Fortune, etc."
---

# AI Industry Landscape Briefing — March 2026

Multi-source synthesis of the current AI industry state. Key data points:

## Major Players
- OpenAI: $840B valuation, ~$25B annualized revenue, 68% consumer market share, 27% enterprise LLM spend. GPT-5/5.2/5.3 released. IPO expected H2 2026-2027. Restructured to PBC.
- Anthropic: $380B valuation, ~$19B annualized revenue (10x YoY sustained 3 years), 40% enterprise LLM spend (surpassed OpenAI). Claude Code at 54% of the enterprise coding market, $2.5B+ run-rate. Abandoned binding RSP Feb 2026.
- Google DeepMind: Gemini 3/3.1 family. 21% enterprise LLM spend. $175-185B capex 2026. Deep Think gold-medal Olympiad results.
- xAI: ~$230B valuation, Grok 4.1 leads LMArena. 1M+ H100 GPUs. $20B Series E Jan 2026.
- Mistral: $13.8B valuation, EUR 300M ARR targeting EUR 1B. Building European sovereign compute.
- Meta AI: Pivoted from open-source to closed for frontier models. Yann LeCun departed. Alexandr Wang (Scale AI CEO) installed as Chief AI Officer. $115-135B capex 2026.

## Startups
- Anysphere/Cursor: $29.3B valuation, $1B+ ARR, 9,900% YoY growth. Fastest-growing software company ever.
- Thinking Machines Lab (Murati): $12B valuation at seed ($2B), seeking $50B. Ex-OpenAI dream team.
- SSI (Sutskever): $32B valuation, ~20 employees, zero revenue. Largest valuation-to-employee ratio ever.
- Harvey (legal): $8B valuation, ~$195M ARR. Proof case for vertical AI.
- Sierra (Bret Taylor): $10B+ valuation. Agentic customer service.
- Databricks: $134B valuation, $5B Series L. Filed for IPO Q2 2026.

## Funding
- 2025 total AI VC: $259-270B (52-61% of all global VC)
- Feb 2026 alone: $189B — the largest single month ever
- 58% of AI funding in megarounds ($500M+)
- Top investors: SoftBank ($64.6B to OpenAI), Amazon ($50B to OpenAI), Nvidia ($30B to OpenAI), a16z, Sequoia, Thrive Capital
- 75-79% of funding to US companies

## Industry Dynamics
- Inference cost deflation ~10x/year
- Chinese open-source models (Qwen, DeepSeek) capturing 50-60% of new open-model adoption
- 95% of enterprise AI pilots fail to deliver ROI (MIT Project NANDA)
- Enterprise coding is the breakout killer-app category
- US deregulating, EU softening — regulatory arbitrage favoring the US
- Big 5 AI capex: $660-690B planned for 2026

## Key Figure Movements
- Yann LeCun → left Meta, founding AMI Labs ($3.5B pre-launch valuation)
- Alexandr Wang → Scale AI CEO to Meta Chief AI Officer
- Daniel Gross → left SSI for Meta's superintelligence team
- John Schulman → left OpenAI for Thinking Machines Lab
- 11+ Google executives → Microsoft in 2025