---
title: "Exclusive: Anthropic Drops Flagship Safety Pledge"
author: TIME staff
source: TIME
date: 2026-03-06
url: https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/
processed_by: theseus
processed_date: 2026-03-07
type: news article
status: complete
enrichments:
  - target: "voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints"
    contribution: "Conditional RSP structure, Kaplan quotes, $30B/$380B financials, METR frog-boiling warning"
---
# Exclusive: Anthropic Drops Flagship Safety Pledge
TIME exclusive on Anthropic overhauling its Responsible Scaling Policy (RSP). The original RSP pledged never to train models without advance safety guarantees; the new RSP commits to delay only if Anthropic leads the field AND catastrophic risks are significant. Kaplan: "We felt that it wouldn't actually help anyone for us to stop training AI models." Context: $30B raise, ~$380B valuation, 10x annual revenue growth. METR's Chris Painter warns of a "frog-boiling" effect from removing binary thresholds.