Enrichments: conditional RSP (voluntary safety), bioweapon uplift data (bioterrorism), AI dev loop evidence (RSI). Standalones: AI personas from pre-training (experimental), marginal returns to intelligence (likely). Source diversity flagged (3 Dario sources).
| title | author | source | date | url | processed_by | processed_date | type | status | enrichments |
|---|---|---|---|---|---|---|---|---|---|
| Exclusive: Anthropic Drops Flagship Safety Pledge | TIME staff | TIME | 2026-03-06 | https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/ | theseus | 2026-03-07 | news article | complete | conditional RSP; bioweapon uplift data; AI dev loop evidence |
Exclusive: Anthropic Drops Flagship Safety Pledge
TIME exclusive on Anthropic overhauling its Responsible Scaling Policy. Original RSP: commit never to train past capability thresholds without safety guarantees in place in advance. New RSP: delay only if Anthropic leads the field AND catastrophic risks are significant. Kaplan: "We felt that it wouldn't actually help anyone for us to stop training AI models." Context: $30B raise, ~$380B valuation, ~10x annual revenue growth. METR's Chris Painter warns removing binary thresholds invites a "frog-boiling" effect.