teleo-codex/domains/ai-alignment/ottawa-model-treaty-process-cannot-replicate-for-dual-use-ai-systems-because-verification-architecture-requires-technical-capability-inspection-not-production-records.md
theseus: extract claims from 2026-04-01-stopkillerrobots-hrw-alternative-treaty-process-analysis
- Source: inbox/queue/2026-04-01-stopkillerrobots-hrw-alternative-treaty-process-analysis.md
- Domain: ai-alignment
- Claims: 2, Entities: 1
- Enrichments: 1
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

---
type: claim
domain: ai-alignment
description: The Mine Ban Treaty and Cluster Munitions Convention succeeded through production/export controls and physical verification, but autonomous weapons are AI capabilities that cannot be isolated from civilian dual-use applications
confidence: likely
source: Human Rights Watch analysis comparing landmine/cluster munition treaties to autonomous weapons governance requirements
created: 2026-04-04
title: Ottawa model treaty process cannot replicate for dual-use AI systems because verification architecture requires technical capability inspection not production records
agent: theseus
scope: structural
sourcer: Human Rights Watch
related_claims:
  - AI alignment is a coordination problem not a technical problem
---

# Ottawa model treaty process cannot replicate for dual-use AI systems because verification architecture requires technical capability inspection not production records

The 1997 Mine Ban Treaty (Ottawa Process) and the 2008 Convention on Cluster Munitions (Oslo Process) both produced binding treaties without major military power participation through a specific mechanism: norm creation, stigmatization, and compliance pressure via reputational and market-access channels. Both succeeded despite US non-participation. However, Human Rights Watch (HRW) explicitly acknowledges that these models face fundamental limits for autonomous weapons. Landmines and cluster munitions are 'dumb weapons': the treaties are verifiable through production records, export controls, and physical mine-clearing operations, because the technology is single-purpose and physically observable. Autonomous weapons are AI systems where:

1. verification is technically far harder, because the capability resides in software and algorithms, not in physical artifacts;
2. the technology is dual-use: the same AI that controls an autonomous weapon also serves civilian applications, making capability isolation impossible;
3. no verification architecture currently exists that can distinguish autonomous weapons capability from general AI capability without inspecting the full technical stack.

The Ottawa model's success depended on clear physical boundaries and single-purpose technology. For dual-use AI systems, these preconditions do not exist, making the historical precedent structurally inapplicable even where political will exists.
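The dual-use point can be made concrete with a minimal, hypothetical sketch. Nothing below comes from the HRW analysis; every name (`detect_objects`, `civilian_traffic_survey`, `autonomous_engagement`) is invented for illustration. What it shows is that the treaty-relevant behavior lives in a few lines of integration code, while the perception model they share is identical, and legitimate, in both deployments:

```python
# Hypothetical illustration only: one dual-use perception model,
# wrapped by a civilian integration and a weapon integration.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float
    position: tuple[int, int]  # (x, y) in the sensor frame


def detect_objects(sensor_frame: bytes) -> list[Detection]:
    """Stand-in for a generic perception model.

    Identical weights and identical inference call whether it ships in
    a crop-survey drone or a loitering munition; nothing here is
    inspectable as 'weapons capability'.
    """
    # Placeholder result; a real system would run neural network inference.
    return [Detection(label="vehicle", confidence=0.93, position=(120, 48))]


def civilian_traffic_survey(frame: bytes) -> int:
    """Civilian integration: count vehicles for road planning."""
    return sum(1 for d in detect_objects(frame) if d.label == "vehicle")


def autonomous_engagement(frame: bytes) -> list[tuple[int, int]]:
    """Weapon integration: the treaty-relevant behavior is this glue
    code, not the shared model it calls."""
    targets = [d for d in detect_objects(frame) if d.confidence > 0.9]
    return [d.position for d in targets]  # coordinates handed to a weapon
```

Production records and export controls would surface the model, which is lawful everywhere it appears; telling the two functions apart requires inspecting the integration layer, which is exactly the full-technical-stack inspection the claim says no current verification architecture provides.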