# Anthropic RSP v3.0

**Type:** Corporate AI Safety Protocol

**Released:** February 24, 2026

**Predecessor:** RSP v2 (October 2024)

**Status:** Active

## Overview

Anthropic's Responsible Scaling Policy (RSP) version 3.0 was released on February 24, 2026, the same day Defense Secretary Hegseth gave CEO Dario Amodei a 5 p.m. deadline to allow unrestricted military use of Claude.

## Key Changes from RSP v2

**Removed:**

- Binding pause commitment: "if we cannot implement adequate mitigations before reaching ASL-X, we will pause"
- Hard-stop operational mechanism for halting development or deployment

**Added:**

**Added:**

- "Frontier Safety Roadmap": a detailed list of non-binding safety goals
- "Risk Reports": comprehensive risk assessments published every 3-6 months
- Commitment to publicly grade progress toward the roadmap's goals
- Commitment to match competitors' mitigations if they are more effective and implementable at similar cost
- "Missile defense carveout": autonomous missile-interception systems are exempted from the autonomous-weapons prohibition

|
## Stated Rationale

- "Stopping the training of AI models wouldn't actually help anyone if other developers with fewer scruples continue to advance"
- "Some commitments in the old RSP only make sense if they're matched by other companies"
- "Unilateral pauses are ineffective in a market where competitors continue to race forward"
- The strategy of "non-binding but publicly-declared" targets borrows from transparency approaches championed for frontier AI legislation

## External Reception

**GovAI Analysis:**

- Initial reaction: "rather negative, particularly concerned about the pause commitment being dropped"
- After deeper engagement: "more positive"
- Conclusion: "better to be honest about constraints than to keep commitments that won't be followed in practice"

## Timeline

- **2024-10** — RSP v2 released with binding pause commitments and the ASL framework
- **2026-02-24** — RSP v3.0 released; the same day as the Hegseth ultimatum to Anthropic
- **2026-02-26** — Anthropic publicly refuses the Pentagon's terms
- **2026-02-27** — Pentagon designates Anthropic a supply-chain risk; $200M contract canceled

## Significance

RSP v3 represents the first major retreat from binding safety commitments by a frontier AI lab. By explicitly invoking competitive dynamics ("other developers with fewer scruples") to justify removing binding commitments, it instantiates Mutually Assured Deregulation logic at the level of corporate voluntary governance.