| type | domain | description | confidence | source | created | attribution | related | reweave_edges |
|---|---|---|---|---|---|---|---|---|
| claim | grand-strategy | CCW GGE's 11-year failure to define 'fully autonomous weapons' reflects deliberate preservation of military programs rather than technical difficulty | experimental | CCW GGE deliberations 2014-2025, US LOAC compliance standards | 2026-03-31 | | | |
Definitional ambiguity in autonomous weapons governance is a strategic interest, not a bureaucratic failure, because major powers preserve their programs through vague thresholds
The CCW Group of Governmental Experts (GGE) on LAWS has met for 11 years (2014-2025) without agreeing on a working definition of 'fully autonomous weapons' or 'meaningful human control.' This is not bureaucratic paralysis but strategic interest.

The ICBL did not need to define 'landmine' with precision because the object was physical, concrete, and identifiable. The Campaign to Stop Killer Robots (CS-KR), by contrast, must define where the line falls between human-directed targeting assistance and fully autonomous lethal decision-making. The US Law of Armed Conflict (LOAC) compliance standard for autonomous weapons is deliberately vague: it requires 'human judgment somewhere in the system' without specifying what judgment, exercised at what point.

Major powers (US, Russia, China, India, Israel, South Korea) favor non-binding guidelines over a binding treaty precisely because definitional ambiguity preserves their development programs. At the 2024 CCW Review Conference, 164 states participated; Austria, Mexico, and 50+ other states favored a binding treaty; major powers blocked progress. This is not a coordination failure in the sense of an inability to agree. It is successful coordination by major powers to maintain strategic ambiguity. The definitional paralysis is the mechanism through which the legislative ceiling operates: without clear thresholds, compliance is unverifiable and programs continue.
Additional Evidence (extend)
Source: 2026-03-31-leo-ai-weapons-strategic-utility-differentiation-governance-pathway | Added: 2026-03-31
The CCW GGE's 'meaningful human control' framing covers all LAWS without distinguishing by category, which is politically problematic: major powers correctly point out that applying it to targeting AI would impose unacceptable operational friction. The definitional debate has deadlocked because the framing does not discriminate between tractable and intractable cases. A stratified approach would apply 'meaningful human control' only to the lethal targeting decision (not to the entire autonomous operation) and would start with medium-utility categories where P5 resistance is weakest. The CCW GGE appears to work exclusively on general standards rather than category-differentiated approaches, which may reflect strategic actors' preference for keeping the debate at the level where blocking is easiest.
Relevant Notes:
- the-legislative-ceiling-on-military-ai-governance-is-conditional-not-absolute-cwc-proves-binding-governance-without-carveouts-is-achievable-but-requires-three-currently-absent-conditions
- verification-mechanism-is-the-critical-enabler-that-distinguishes-binding-in-practice-from-binding-in-text-arms-control-the-bwc-cwc-comparison-establishes-verification-feasibility-as-load-bearing
Topics: