- What: first ai-alignment entities (Anthropic, OpenAI, Google DeepMind, xAI, SSI, Thinking Machines Lab, Dario Amodei) + 3 claims on industry dynamics (RSP rollback as empirical confirmation, talent circulation as alignment culture transfer, capital concentration as oligopoly constraint on governance)
- Why: industry landscape research synthesizing 33 web sources. Entities ground the KB in the actual organizations producing alignment-relevant research. Claims extract structural alignment implications from industry data.
- Connections: RSP rollback claim confirms voluntary-safety-pledge claim; investment concentration connects to nation-state-control and alignment-tax claims; talent circulation connects to coordination-failure claim
---
type: entity
entity_type: lab
name: "Anthropic"
domain: ai-alignment
secondary_domains: [internet-finance]
handles: ["@AnthropicAI"]
website: https://www.anthropic.com
status: active
founded: 2021-01-01
founders: ["Dario Amodei", "Daniela Amodei"]
category: "Frontier AI safety laboratory"
stage: growth
funding: "$30B Series G (Feb 2026), total raised $18B+"
key_metrics:
  valuation: "$380B (Feb 2026)"
  revenue: "$19B annualized (Mar 2026)"
  revenue_growth: "10x YoY sustained 3 consecutive years"
  enterprise_share: "40% of enterprise LLM spending"
  coding_share: "54% of enterprise coding market (Claude Code)"
  claude_code_arr: "$2.5B+ run-rate"
  business_customers: "300,000+"
  fortune_10: "8 of 10"
competitors: ["OpenAI", "Google DeepMind", "xAI"]
tracked_by: theseus
created: 2026-03-16
last_updated: 2026-03-16
---

# Anthropic

## Overview

Frontier AI safety laboratory founded by Dario Amodei (former OpenAI VP of Research) and Daniela Amodei, who serve as CEO and President respectively. Anthropic embodies the central tension in AI alignment: it is the company most associated with safety-first development while simultaneously racing to scale at unprecedented speed. Its Claude model family has become the dominant enterprise AI platform, particularly for coding.

## Current State

- Claude Opus 4.6 (1M token context, Agent Teams) and Sonnet 4.6 (Feb 2026) are the current frontier models
- 40% of enterprise LLM spending, surpassing OpenAI as the enterprise leader
- Claude Code holds 54% of the enterprise coding market and hit $1B ARR faster than any enterprise software product in history
- $19B annualized revenue as of March 2026, projecting $70B by 2028
- Amazon partnership: $4B+ investment, Project Rainier (dedicated Trainium2 data center)

## Timeline

- **2021** — Founded by Dario and Daniela Amodei after departing OpenAI
- **2023-10** — Published Collective Constitutional AI research
- **2025-11** — Published "Natural Emergent Misalignment from Reward Hacking" (arXiv 2511.18397), the most significant alignment finding of 2025
- **2026-02-17** — Released Claude Sonnet 4.6
- **2026-02-25** — Abandoned its binding Responsible Scaling Policy in favor of a nonbinding safety framework, citing competitive pressure
- **2026-02** — Raised $30B Series G at $380B valuation

## Competitive Position

Strongest position in enterprise AI and coding. Revenue growth (10x YoY) outpaces all competitors. The safety brand was the primary differentiator; the RSP rollback creates strategic ambiguity around it. The CEO is publicly uncomfortable with power concentration even as the company races to concentrate it.

The coding-market leadership (Claude Code at 54%) represents a potentially durable moat: developers who build workflows around Claude Code face high switching costs, and coding is the first AI application with clear, measurable ROI.

## Relationship to KB

- [[emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive]] — Anthropic's most significant alignment research finding
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — the RSP rollback is the empirical confirmation of this claim
- [[safe AI development requires building alignment mechanisms before scaling capability]] — Anthropic's founding thesis, now under strain from its own commercial success

Topics:

- [[_map]]