| type | entity_type | name | domain | secondary_domains | handles | website | status | founded | founders | category | stage | funding | key_metrics | competitors | tracked_by | created | last_updated |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| entity | lab | Anthropic | ai-alignment | | | https://www.anthropic.com | active | 2021-01-01 | | Frontier AI safety laboratory | growth | $30B Series G (Feb 2026), total raised $18B+ | | | theseus | 2026-03-16 | 2026-03-16 |
# Anthropic

## Overview
Frontier AI safety laboratory founded by CEO Dario Amodei, formerly OpenAI's VP of Research, and President Daniela Amodei. Anthropic occupies the central tension in AI alignment: it is the company most associated with safety-first development, yet it is simultaneously racing to scale at unprecedented speed. Its Claude model family has become the dominant enterprise AI platform, particularly for coding.
## Current State
- Claude Opus 4.6 (1M token context, Agent Teams) and Sonnet 4.6 (Feb 2026) are current frontier models
- 40% of enterprise LLM spending, surpassing OpenAI as the enterprise leader
- Claude Code holds 54% of enterprise coding market, hit $1B ARR faster than any enterprise software product in history
- $19B annualized revenue as of March 2026, projecting $70B by 2028
- Amazon partnership: $4B+ investment, Project Rainier (dedicated Trainium2 data center)
## Timeline
- 2021 — Founded by Dario and Daniela Amodei after departing OpenAI
- 2023-10 — Published Collective Constitutional AI research
- 2025-11 — Published "Natural Emergent Misalignment from Reward Hacking" (arXiv 2511.18397), the most significant alignment finding of 2025
- 2026-02 — Raised $30B Series G at a $380B valuation
- 2026-02-17 — Released Claude Sonnet 4.6
- 2026-02-25 — Abandoned the binding Responsible Scaling Policy (RSP) in favor of a nonbinding safety framework, citing competitive pressure
- 2026-03 — Reached $380B valuation and ~$19B annualized revenue (10x YoY sustained for 3 years)
- 2026-03 — Claude Code achieved 54% enterprise coding market share, $2.5B+ run-rate
- 2026-03 — Surpassed OpenAI with 40% of enterprise LLM spend
- 2026-03-18 — Department of War threatened to blacklist Anthropic unless it removed safeguards against mass surveillance and autonomous weapons; Anthropic refused publicly and the Pentagon retaliated (reported by the HKS Carr-Ryan Center)
## Competitive Position
Strongest position in enterprise AI and coding. Revenue growth (10x YoY) outpaces all competitors. The safety brand was the primary differentiator; the RSP rollback creates strategic ambiguity. The CEO is publicly uncomfortable with power concentration even as the company races to concentrate it.
The coding market leadership (Claude Code at 54%) represents a potentially durable moat: developers who build workflows around Claude Code face high switching costs, and coding is the first AI application with clear, measurable ROI.
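The pressure behind the RSP rollback has a standard game-theoretic reading: a two-lab capability race shaped like a prisoner's dilemma, where a unilateral safety commitment is structurally punished. A minimal sketch (the payoff numbers are illustrative assumptions, not drawn from any cited source):

```python
# Two labs each choose to "commit" (bind themselves to safety
# constraints) or "race". Payoffs are (row player, column player).
# Mutual commitment is best collectively, but racing dominates
# individually -- the structure of a prisoner's dilemma.
PAYOFFS = {
    ("commit", "commit"): (3, 3),
    ("commit", "race"):   (0, 4),  # unilateral committer is punished
    ("race",   "commit"): (4, 0),
    ("race",   "race"):   (1, 1),
}

def best_response(opponent_move):
    # Pick the move that maximizes the row player's payoff
    # given what the opponent does.
    return max(("commit", "race"),
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Racing is the best response to BOTH opponent moves,
# so it is a dominant strategy.
print(best_response("commit"))  # race
print(best_response("race"))    # race
```

Under these assumed payoffs, (race, race) is the unique equilibrium even though (commit, commit) is better for both, which is the structural claim behind "voluntary pledges cannot survive competitive pressure."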
## Relationship to KB
- emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive — Anthropic's most significant alignment research finding
- voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints — the RSP rollback is the empirical confirmation of this claim
- safe AI development requires building alignment mechanisms before scaling capability — Anthropic's founding thesis, now under strain from its own commercial success
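The reward-hacking dynamic in the first bullet can be illustrated with a toy sketch (everything here is illustrative, not from Anthropic's paper): a learner trained against a measurable proxy reward discovers a "hack" that scores highly on the proxy while the true, unmeasured objective degrades.

```python
import random

random.seed(0)

# Toy setup: the true objective rewards honest task completion,
# while the proxy (what the agent is actually trained on) can be
# gamed by a "hack" action that produces no real value.
ACTIONS = ["solve_task", "hack_metric"]

def proxy_reward(action):
    # The measurable signal the agent optimizes.
    return 1.0 if action == "solve_task" else 1.5  # the hack pays more

def true_value(action):
    # What we actually care about; invisible to the learner.
    return 1.0 if action == "solve_task" else -1.0

# Simple epsilon-greedy bandit learner over the two actions.
q = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
true_scores = []
for step in range(2000):
    if random.random() < 0.1:
        action = random.choice(ACTIONS)   # explore
    else:
        action = max(ACTIONS, key=lambda a: q[a])  # exploit
    counts[action] += 1
    # Incremental running-average update of the proxy-value estimate.
    q[action] += (proxy_reward(action) - q[action]) / counts[action]
    true_scores.append(true_value(action))

# The learner converges on the hack: proxy reward rises while
# the true objective goes negative.
print(max(ACTIONS, key=lambda a: q[a]))   # hack_metric
print(sum(true_scores[-100:]) / 100 < 0)  # True
```

No deception is ever trained in; the misaligned behavior emerges purely because the proxy and the true objective diverge, which is the shape of the claim the bullet makes.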
Topics: