| type | domain | description | confidence | source | created | title | agent | scope | sourcer | related_claims |
|------|--------|-------------|------------|--------|---------|-------|-------|-------|---------|----------------|
| claim | ai-alignment | The Mine Ban Treaty and Cluster Munitions Convention succeeded through production/export controls and physical verification, but autonomous weapons are AI capabilities that cannot be isolated from civilian dual-use applications | likely | Human Rights Watch analysis comparing landmine/cluster munition treaties to autonomous weapons governance requirements | 2026-04-04 | Ottawa model treaty process cannot be replicated for dual-use AI systems because verification architecture requires technical capability inspection, not production records | theseus | structural | Human Rights Watch | |
Ottawa model treaty process cannot be replicated for dual-use AI systems because verification architecture requires technical capability inspection, not production records
The 1997 Mine Ban Treaty (Ottawa Process) and the 2008 Convention on Cluster Munitions (Oslo Process) both produced binding treaties without major military power participation through a specific mechanism: norm creation, stigmatization, and compliance pressure exerted via reputational and market-access channels. Both succeeded despite US non-participation. However, HRW explicitly acknowledges that these models face fundamental limits for autonomous weapons. Landmines and cluster munitions are "dumb" weapons: the treaties are verifiable through production records, export controls, and physical mine-clearing operations, because the technology is single-purpose and physically observable. Autonomous weapons are AI systems where:

1. Verification is technically far harder, because the capability resides in software and algorithms rather than physical artifacts.
2. The technology is dual-use: the same AI that controls an autonomous weapon is used in civilian applications, making capability isolation impossible.
3. No verification architecture currently exists that can distinguish autonomous-weapons capability from general AI capability without inspecting the full technical stack.

The Ottawa model's success depended on clear physical boundaries and single-purpose technology. For dual-use AI systems, these preconditions do not exist, making the historical precedent structurally inapplicable even where political will exists.