---
type: claim
domain: grand-strategy
description: Strategic utility differentiation reveals that not all military AI is equally intractable for governance — physical compliance demonstrability for stockpile-countable weapons combined with declining strategic exclusivity creates viable pathway for category-specific treaties
confidence: experimental
source: Leo (synthesis from US Army Project Convergence, DARPA programs, CCW GGE documentation, CNAS autonomous weapons reports, HRW 'Losing Humanity' 2012)
created: 2026-03-31
attribution:
  extractor: sourcer
  handle: leo
  handle_context: Leo (synthesis from US Army Project Convergence, DARPA programs, CCW GGE documentation, CNAS autonomous weapons reports, HRW 'Losing Humanity' 2012)
related:
  - the legislative ceiling on military ai governance is conditional not absolute cwc proves binding governance without carveouts is achievable but requires three currently absent conditions
supports:
  - Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered force by explicitly excluding national security, defense applications, and making private sector obligations optional
reweave_edges:
  - Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered force by explicitly excluding national security, defense applications, and making private sector obligations optional|supports|2026-04-04
---

AI weapons governance tractability stratifies by strategic utility — high-utility targeting AI faces firm legislative ceiling while medium-utility loitering munitions and autonomous naval mines follow Ottawa Treaty path where stigmatization plus low strategic exclusivity enables binding instruments outside CCW

The legislative ceiling analysis treated military AI governance as uniform, but strategic utility varies dramatically across weapons categories. High-utility AI (targeting assistance, ISR, C2, CBRN delivery, offensive cyber) is universally assessed by the P5 as essential to near-peer competition — the US NDS 2022 calls AI 'transformative,' China's 2019 strategy centers on 'intelligent warfare,' and Russia invests heavily in unmanned systems. These categories have near-zero compliance demonstrability (ISR AI is software inside classified infrastructure; targeting AI runs on the same hardware as non-weapons AI), so the legislative ceiling holds firmly for them.

Medium-utility categories tell a different story. Loitering munitions (Shahed, Switchblade, ZALA Lancet) provide real advantages but are increasingly commoditized — Shahed-136 technology is available to non-state actors (Houthis, Hezbollah), eroding strategic exclusivity. Autonomous naval mines are functionally analogous to anti-personnel landmines: passive weapons with autonomous proximity activation, not targeted decision-making. Counter-UAS systems are defensive and geographically fixed.

Crucially, these medium-utility categories have MEDIUM compliance demonstrability: loitering munition stockpiles are discrete physical objects that could be destroyed and reported (analogous to landmines under the Ottawa Treaty), and naval mines are physical objects with manageable stockpile inventories. This creates the conditions for an Ottawa Treaty path: (a) a triggering event activates stigmatization, AND (b) a middle-power champion makes the procedural break (convening outside the CCW, where the P5 can block).
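
To make the decision logic explicit, here is a minimal sketch of the stratification as a predicate. It is purely illustrative: the names (`Level`, `WeaponCategory`, `ottawa_path_viable`) are hypothetical and not part of any existing teleo-codex tooling.

```python
# Illustrative sketch only: Level, WeaponCategory, and ottawa_path_viable
# are hypothetical names, not an existing API.
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class WeaponCategory:
    name: str
    strategic_utility: Level            # P5 assessment of near-peer value
    compliance_demonstrability: Level   # can stockpiles be counted and destroyed?
    stigmatization_trigger: bool        # condition (a): a triggering event
    middle_power_champion: bool         # condition (b): convening outside the CCW


def ottawa_path_viable(cat: WeaponCategory) -> bool:
    """A category follows the Ottawa Treaty path when strategic utility is
    medium, stockpiles are physically demonstrable, and both activation
    conditions hold."""
    return (
        cat.strategic_utility is Level.MEDIUM
        and cat.compliance_demonstrability.value >= Level.MEDIUM.value
        and cat.stigmatization_trigger
        and cat.middle_power_champion
    )


# High-utility targeting AI: near-zero demonstrability, the ceiling holds.
targeting_ai = WeaponCategory("targeting AI", Level.HIGH, Level.LOW, False, False)
# Autonomous naval mines: the note's most direct Ottawa Treaty analogy.
naval_mines = WeaponCategory(
    "autonomous naval mines", Level.MEDIUM, Level.MEDIUM, True, True
)

assert not ottawa_path_viable(targeting_ai)
assert ottawa_path_viable(naval_mines)
```

On this encoding, targeting AI fails the utility test before the activation conditions are even consulted, which is the ceiling holding; autonomous naval mines clear every gate, which is why the note treats them as the first tractable case.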

The naval mines parallel is particularly striking: autonomous seabed systems that detect and attack passing vessels are nearly identical to anti-personnel landmines in governance terms — discrete physical objects, stockpile-countable, deployable in theater, with civilian shipping as the harm analog to civilian populations in mined territory. This may be the FIRST tractable case for a LAWS-specific binding instrument precisely because the Ottawa Treaty analogy is so direct.

The stratification matters because it reveals where governance investment produces the highest marginal return. The CCW GGE's 'meaningful human control' framing covers all LAWS without discriminating among categories, creating political deadlock because major powers correctly note that applying it to targeting AI means unacceptable operational friction. A stratified approach would: (1) start with Category 2 (medium-utility) binding instruments (loitering munition stockpile destruction; autonomous naval mines), (2) apply 'meaningful human control' only to the lethal targeting decision, not the entire autonomous operation, and (3) use the Ottawa Treaty procedural model — bypass the CCW, find willing states, and let the P5 self-exclude rather than block.

This is more tractable than a blanket LAWS ban because it isolates the categories with the lowest P5 strategic utility, offers compliance demonstrability for physical stockpiles, has the Ottawa Treaty as normative precedent, and requires only a triggering event plus a middle-power champion — not verification technology that does not exist for software-defined systems.


Relevant Notes:

- the legislative ceiling on military ai governance is conditional not absolute cwc proves binding governance without carveouts is achievable but requires three currently absent conditions
- Binding international AI governance achieves legal form through scope stratification — the Council of Europe AI Framework Convention entered force by explicitly excluding national security, defense applications, and making private sector obligations optional

Topics: