diff --git a/entities/ai-alignment/anthropic.md b/entities/ai-alignment/anthropic.md
index 2e967506..fd4b91bc 100644
--- a/entities/ai-alignment/anthropic.md
+++ b/entities/ai-alignment/anthropic.md
@@ -57,6 +57,7 @@ Frontier AI safety laboratory founded by former OpenAI VP of Research Dario Amod
 - **2026-03-06** — Overhauled Responsible Scaling Policy from 'never train without advance safety guarantees' to conditional delays only when Anthropic leads AND catastrophic risks are significant. Raised $30B at ~$380B valuation with 10x annual revenue growth. Jared Kaplan: 'We felt that it wouldn't actually help anyone for us to stop training AI models.'
 - **2026-02-24** — Released RSP v3.0, replacing unconditional binary safety thresholds with dual-condition escape clauses (pause only if Anthropic leads AND risks are catastrophic). METR partner Chris Painter warned of 'frog-boiling effect' from removing binary thresholds. Raised $30B at ~$380B valuation with 10x annual revenue growth.
 - **2025-02-13** — Signed Memorandum of Understanding with UK AI Security Institute (formerly AI Safety Institute) for collaboration on frontier model safety research, creating formal partnership with government institution that conducts pre-deployment evaluations of Anthropic's models.
+- **2026-02-24** — Published Responsible Scaling Policy v3.0, removing hard capability-threshold pause triggers and replacing them with non-binding 'public goals' and external expert review. Cited insufficiently mature evaluation science and slow government action as primary reasons. External media characterized this as 'dropping hard safety limits.'
 
 ## Competitive Position
 Strongest position in enterprise AI and coding. Revenue growth (10x YoY) outpaces all competitors. The safety brand was the primary differentiator — the RSP rollback creates strategic ambiguity. CEO publicly uncomfortable with power concentration while racing to concentrate it.