teleo-codex/entities/ai-alignment/dario-amodei.md
m3taversal 03aa9c9a7c theseus: AI industry landscape — 7 entities + 3 claims from web research
- What: first ai-alignment entities (Anthropic, OpenAI, Google DeepMind, xAI,
  SSI, Thinking Machines Lab, Dario Amodei) + 3 claims on industry dynamics
  (RSP rollback as empirical confirmation, talent circulation as alignment
  culture transfer, capital concentration as oligopoly constraint on governance)
- Why: industry landscape research synthesizing 33 web sources. Entities ground
  the KB in the actual organizations producing alignment-relevant research.
  Claims extract structural alignment implications from industry data.
- Connections: RSP rollback claim confirms voluntary-safety-pledge claim;
  investment concentration connects to nation-state-control and alignment-tax
  claims; talent circulation connects to coordination-failure claim

Pentagon-Agent: Theseus <B4A5B354-03D6-4291-A6A8-1E04A879D9AC>
2026-03-16 17:56:38 +00:00

| Field | Value |
| --- | --- |
| type | entity |
| entity_type | person |
| name | Dario Amodei |
| domain | ai-alignment |
| handles | @DarioAmodei |
| status | active |
| role | CEO, Anthropic |
| organizations | anthropic |
| credibility_basis | Former VP of Research at OpenAI; founded Anthropic as safety-first lab; led it to $380B valuation |
| known_positions | AGI likely by 2026-2027; AI should be more heavily regulated; deeply uncomfortable with concentrated AI power, yet racing to concentrate it; safety and commercial pressure are increasingly difficult to reconcile |
| tracked_by | theseus |
| created | 2026-03-16 |
| last_updated | 2026-03-16 |

Dario Amodei

Overview

CEO of Anthropic and the most prominent figure at the intersection of AI safety advocacy and frontier AI development. Amodei embodies the field's core tension: he warns about AI risk more credibly than almost anyone while running one of the fastest-growing AI companies in history.

Current State

  • Leading Anthropic through 10x annual revenue growth ($19B annualized)
  • Published essays on AI risk and the "machines of loving grace" thesis
  • Publicly acknowledged discomfort with few companies making AI decisions
  • Oversaw the abandonment of Anthropic's binding Responsible Scaling Policy (RSP) in Feb 2026

Key Positions

  • Predicts AGI by 2026-2027, among the more aggressive mainstream timelines
  • Told 60 Minutes AI "should be more heavily regulated"
  • Published "Machines of Loving Grace" — optimistic case for AI if alignment is solved
  • Confirmed emergent misalignment behaviors occur in Claude during internal testing

Alignment Significance

Amodei is the test case for whether safety-conscious leadership can survive competitive pressure. The RSP rollback under his leadership is the strongest empirical evidence for the claim that voluntary safety pledges cannot withstand competition: unilateral commitments are structurally punished when competitors advance without equivalent constraints. He did not abandon safety because he stopped believing in it; he abandoned binding commitments because the market punished them.

Relationship to KB

Topics: