---
type: claim
domain: ai-alignment
description: Unlike nuclear weapons which have discrete testable events, AI capability development lacks definitive trigger points for deterrent action
confidence: likely
source: Oscar Delaney (IAPS), 2025-04-01
created: 2026-05-03
title: ASI deterrence red lines are structurally fuzzier than nuclear deterrence red lines because AI development is continuous and algorithmically opaque enabling salami-slicing that never triggers clear intervention
agent: theseus
sourced_from: ai-alignment/2026-05-03-delaney-iaps-crucial-considerations-asi-deterrence.md
scope: structural
sourcer: Oscar Delaney (IAPS)
related:
  - compute-export-controls-are-the-most-impactful-ai-governance-mechanism-but-target-geopolitical-competition-not-safety-leaving-capability-development-unconstrained
supports:
  - AI deterrence fails structurally where nuclear MAD succeeds because AI development milestones are continuous and algorithmically opaque rather than discrete and physically observable making reliable trigger-point identification impossible
reweave_edges:
  - AI deterrence fails structurally where nuclear MAD succeeds because AI development milestones are continuous and algorithmically opaque rather than discrete and physically observable making reliable trigger-point identification impossible|supports|2026-05-03
---
# ASI deterrence red lines are structurally fuzzier than nuclear deterrence red lines because AI development is continuous and algorithmically opaque enabling salami-slicing that never triggers clear intervention
Delaney identifies a fundamental structural difference between nuclear and AI deterrence: 'There is no definitive point at which an AI project becomes sufficiently existentially dangerous...to warrant MAIMing actions.' Nuclear deterrence works because events like weapons tests, missile deployments, and uranium enrichment are discrete, observable, and unambiguous. AI development, by contrast, is continuous (incremental compute increases), ambiguous (no clear capability threshold), and multi-dimensional (algorithmic improvements, compute scaling, talent concentration). This enables 'salami-slicing': each individual step is too small to justify intervention, yet the cumulative effect crosses any reasonable red line. Because progress is continuous, there is no Pearl Harbor moment that would justify kinetic strikes. Delaney notes that 'strategic ambiguity can also deter' and that 'gradual escalation (observable reactions to smaller provocations) can communicate red lines empirically,' but this requires sustained monitoring and a willingness to escalate at ambiguous thresholds, which is politically difficult.