diff --git a/inbox/queue/2026-03-31-leo-campaign-stop-killer-robots-ai-weapons-stigmatization-trajectory.md b/inbox/queue/2026-03-31-leo-campaign-stop-killer-robots-ai-weapons-stigmatization-trajectory.md
deleted file mode 100644
index 644ed832..00000000
--- a/inbox/queue/2026-03-31-leo-campaign-stop-killer-robots-ai-weapons-stigmatization-trajectory.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-type: source
-title: "Campaign to Stop Killer Robots (CS-KR) — Pre-Treaty ICBL Infrastructure Analog Without the Triggering Event"
-author: "Leo (KB synthesis from CS-KR public record, CCW GGE deliberations 2014-2025)"
-url: https://www.stopkillerrobots.org/
-date: 2026-03-31
-domain: grand-strategy
-secondary_domains: [ai-alignment, mechanisms]
-format: synthesis
-status: processed
-priority: high
-tags: [campaign-stop-killer-robots, cs-kr, laws, autonomous-weapons, lethal-autonomous-weapons-systems, stigmatization, normative-campaign, icbl-analog, triggering-event, ccw-gge, meaningful-human-control, ai-weapons-governance, three-condition-framework, ottawa-treaty-path, legislative-ceiling]
-flagged_for_theseus: ["CS-KR's 'meaningful human control' framing overlaps with Theseus's AI alignment domain — does the threshold of 'meaningful human control' connect to alignment concepts like corrigibility or oversight preservation? If yes, the governance framing and the alignment framing may converge on the same technical requirement."]
-flagged_for_clay: ["The triggering-event gap (CS-KR has infrastructure but no activation event) is a narrative infrastructure problem. What visual/narrative infrastructure would need to exist for an AI weapons civilian casualty event to generate ICBL-scale normative response?
This is the Princess Diana analog question for Clay."]
-processed_by: leo
-processed_date: 2026-03-31
-claims_extracted: ["ai-weapons-stigmatization-campaign-has-normative-infrastructure-without-triggering-event-creating-icbl-phase-equivalent-waiting-for-activation.md", "definitional-ambiguity-in-autonomous-weapons-governance-is-strategic-interest-not-bureaucratic-failure-because-major-powers-preserve-programs-through-vague-thresholds.md"]
-enrichments_applied: ["the-legislative-ceiling-on-military-ai-governance-is-conditional-not-absolute-cwc-proves-binding-governance-without-carveouts-is-achievable-but-requires-three-currently-absent-conditions.md"]
-extraction_model: "anthropic/claude-sonnet-4.5"
----
-
-## Content
-
-The Campaign to Stop Killer Robots (CS-KR) is the direct structural analog to the International Campaign to Ban Landmines (ICBL) — the NGO coalition that drove the Ottawa Treaty. Assessing its trajectory reveals the current state of AI weapons stigmatization infrastructure and the key missing component.
-
-**CS-KR founding and structure:**
-- Founded April 2013 by NGO coalition including Human Rights Watch, Article 36, PAX, Amnesty International
-- Now ~270 member organizations across 70+ countries (ICBL peaked at ~1,400 NGOs, but CS-KR has comparable geographic reach)
-- Call for action: negotiation of "a new international treaty that would prohibit fully autonomous weapons"
-- Normative threshold: "meaningful human control" over lethal targeting decisions
-
-**CCW GGE on LAWS (parallel formal process):**
-- Convention on Certain Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapons Systems
-- Informal CCW expert meetings on LAWS began in 2014; the GGE was formally established in 2016 and has met annually since
-- Key milestones:
-  - 2019: Adopted 11 Guiding Principles on LAWS (non-binding; acknowledged "meaningful human control" concept)
-  - 2021: Endorsed Guiding Principles again; no progress toward binding instrument
-  - 2023: Adopted "Recommendations" — first formal recommendations; but still non-binding
-  - 2024: CCW Review Conference; 164 states; Austria, Mexico, 50+ states favor binding treaty; US, Russia, China, India, Israel, South Korea favor non-binding guidelines only
-  - 11 years of deliberations; zero binding commitments
-
-**Structural parallel to ICBL (1992-1997 phase):**
-The ICBL was founded in 1992 and achieved the Ottawa Treaty in 1997 — five years. CS-KR was founded in 2013; it's now 13 years later with no binding treaty. The ICBL needed three components: (1) normative infrastructure (present in CS-KR); (2) triggering event (present for ICBL — post-Cold War conflict civilian casualties; ABSENT for CS-KR); (3) middle-power champion moment (present for ICBL — Axworthy's Ottawa process; ABSENT for CS-KR — Austria has been most active but has not made the procedural break).
-
-**Why the triggering event hasn't occurred:**
-- Russia's Shahed drone strikes on Ukrainian infrastructure (2022-2024) are the nearest candidate: unmanned systems striking civilian targets, documented casualties, widely covered
-- Why Shahed didn't trigger ICBL-scale response: (a) Shahed drones are semi-autonomous with pre-programmed targeting, not real-time AI decision-making — autonomy is not attributable in the "machine decided to kill" sense; (b) the Ukraine conflict has normalized drone warfare rather than stigmatizing it; (c) both sides are using drones — stigmatization requires a clear aggressor
-- The triggering event needs: clear AI decision-attribution + civilian mass casualties + non-mutual deployment (one side victimizing the other) + Western media visibility + emotional anchor figure (Princess Diana equivalent)
-
-**The definitional paralysis problem:**
-- ICBL didn't need to define "landmine" with precision — the object was physical, concrete, identifiable
-- CS-KR must define "fully autonomous weapons" — where is the line between human-directed targeting assistance and fully autonomous lethal decision-making?
-- CCW GGE has spent 11 years without agreeing on a working definition
-- Major powers' interest: definitional ambiguity preserves their programs.
The US LOAC (Law of Armed Conflict) compliance standard for autonomous weapons is deliberately vague — enough "human judgment somewhere in the system" without specifying what judgment at what point
-- This is not bureaucratic failure; it's strategic interest actively maintaining ambiguity
-
-**Middle-power champion assessment:**
-- Austria: most active; convened Vienna Conference on LAWS (2024); has called for binding instrument
-- New Zealand, Ireland, Costa Rica, Mexico: active supporters but without diplomatic leverage
-- The Axworthy parallel would require a senior government figure willing to convene outside CCW — invite willing states to finalize a treaty and let major powers self-exclude
-- No evidence this political moment has been identified; Austrian diplomacy remains within CCW machinery
-
----
-
-## Agent Notes
-
-**Why this matters:** CS-KR's 13-year trajectory reveals the AI weapons stigmatization campaign is in the "normative infrastructure present, triggering event absent" phase — comparable to the ICBL circa 1994-1995 (two to three years before Ottawa). The campaign is NOT stalled in the sense of losing momentum; it's waiting for the activation component.
-
-**What surprised me:** The CCW GGE's 11-year failure to produce a binding instrument is often framed as evidence that AI weapons governance is impossible. But the ICBL bypassed the consensus-bound CCW and Conference on Disarmament machinery (the exact equivalent of today's GGE impasse) to achieve the Ottawa Treaty through the stand-alone Ottawa Process. The CCW GGE failure may be an ARGUMENT FOR a venue bypass, not evidence of permanent impossibility.
-
-**What I expected but didn't find:** Clear evidence of a middle-power government leader willing to attempt the Axworthy procedural break (convening outside CCW machinery). Austria is the closest, but they're still working within CCW. The Axworthy moment hasn't been identified or attempted.
-
-**KB connections:**
-- [[narratives are infrastructure not just communication because they coordinate action at civilizational scale]] — CS-KR IS the narrative infrastructure; the missing component is the triggering event that activates it
-- the meaning crisis is a narrative infrastructure failure not a personal psychological problem — the "who decides when AI kills" question is a narrative infrastructure problem at civilizational scale
-- Ottawa Treaty analysis (today's first archive) — CS-KR has Component 1 (infrastructure) but lacks Components 2 and 3
-
-**Extraction hints:**
-1. STANDALONE CLAIM: Campaign to Stop Killer Robots as ICBL-phase-equivalent — normative infrastructure present; triggering event absent; middle-power champion moment not yet identified. This is a stage-assessment claim, not a pessimistic claim — the infrastructure makes the treaty possible when the event occurs. Grand-strategy domain. Confidence: experimental.
-2. ENRICHMENT: Triggering-event architecture claim (Candidate 3 from research-2026-03-31.md) — CS-KR + CCW GGE trajectory is the empirical basis for the three-component sequential architecture (infrastructure → triggering event → champion moment).
-
-**Context:** CS-KR is primarily a policy/advocacy organization; its annual reports document coalition growth and CCW GGE progress. Key academic analysis: Mark Gubrud (IEEE), Kenneth Payne "I, Warbot" (2021). CCW GGE Meeting Reports available at https://www.un.org/disarmament/the-convention-on-certain-conventional-weapons/
-
-## Curator Notes (structured handoff for extractor)
-PRIMARY CONNECTION: Legislative ceiling claim (Sessions 2026-03-27 through 2026-03-30) + Ottawa Treaty analysis (today's first archive)
-WHY ARCHIVED: CS-KR trajectory reveals the AI weapons stigmatization campaign is in the "infrastructure present, triggering event absent" phase.
This provides the empirical basis for the triggering-event architecture claim and positions the legislative ceiling as event-dependent, not permanently structural.
-EXTRACTION HINT: Extract together with the Ottawa Treaty archive and the three-condition framework revision. The CS-KR trajectory is the empirical grounding for the "infrastructure without activation" stage assessment. Flag to Clay for narrative infrastructure implications.
-
-
-## Key Facts
-- CS-KR founded April 2013 by Human Rights Watch, Article 36, PAX, Amnesty International
-- CS-KR now has ~270 member organizations across 70+ countries
-- Informal CCW meetings on LAWS began 2014; the GGE was formally established 2016, with annual meetings since
-- CCW GGE adopted 11 Guiding Principles on LAWS in 2019 (non-binding)
-- CCW GGE adopted Recommendations in 2023 (non-binding)
-- 2024 CCW Review Conference: 164 states participated; Austria, Mexico, 50+ states favor binding treaty; US, Russia, China, India, Israel, South Korea favor non-binding guidelines
-- ICBL was founded 1992 and achieved Ottawa Treaty in 1997 (5 years); CS-KR founded 2013, now 13 years without binding treaty
-- Russia's Shahed drone strikes on Ukrainian infrastructure (2022-2024) are nearest candidate triggering event but failed to activate ICBL-scale response
diff --git a/inbox/queue/2026-03-31-leo-ukraine-shahed-near-miss-triggering-event-analysis.md b/inbox/queue/2026-03-31-leo-ukraine-shahed-near-miss-triggering-event-analysis.md
deleted file mode 100644
index 8db52b38..00000000
--- a/inbox/queue/2026-03-31-leo-ukraine-shahed-near-miss-triggering-event-analysis.md
+++ /dev/null
@@ -1,101 +0,0 @@
----
-type: source
-title: "Ukraine/Shahed Near-Miss Analysis — Why Loitering Munition Civilian Casualties Haven't Generated ICBL-Scale Normative Response"
-author: "Leo (KB synthesis from public documentation of Shahed-136/131 deployments, ACLED/UN data on Ukrainian civilian casualties 2022-2025)"
-url: https://archive/synthesis
-date: 2026-03-31
-domain: grand-strategy
-secondary_domains: [ai-alignment, mechanisms]
-format: synthesis
-status: null-result
-priority: medium
-tags: [ukraine, shahed-drones, loitering-munitions, triggering-event, near-miss, normative-shift, attribution-problem, civilian-casualties, weapons-stigmatization, autonomous-weapons, icbl-analog, narrative-infrastructure, normalization, ai-weapons-governance]
-processed_by: leo
-processed_date: 2026-03-31
-extraction_model: "anthropic/claude-sonnet-4.5"
-extraction_notes: "LLM returned 0 claims, 0 rejected by validator"
----
-
-## Content
-
-The Shahed-136/131 drone campaign (Iranian-designed, Russian-deployed) against Ukrainian civilian infrastructure (2022-present) is the most extensive documented use of armed autonomous-adjacent systems against civilian targets in the current conflict period. Assessing why it hasn't triggered ICBL-scale normative response reveals the specific preconditions the triggering event must meet.
-
-**The Shahed campaign — scale and civilian impact:**
-- Shahed-136 ("Geran-2" in Russian designation): delta-wing loitering munition with a ~40-50 kg warhead; GPS/INS navigation; flies to pre-programmed coordinates, then dives onto the target
-- Deployed by Russia against Ukrainian civilian infrastructure from September 2022: power grid (thermal stations, substations), water infrastructure, apartment buildings
-- Scale: Ukraine Ministry of Defense reports intercepting 6,000+ Shahed drones (2022-2024); thousands reached targets
-- Civilian casualties: UN OHCHR documented hundreds of civilian deaths directly attributed to Shahed strikes; thousands of injuries; millions affected by power outages during winter
-- Geographic scope: attacks reached Kyiv, Odessa, Kharkiv, and other civilian areas far from the front line
-
-**Why it hasn't triggered an ICBL-scale normative shift — five failure modes:**
-
-**Failure Mode 1 — Attribution problem (the most fundamental):**
-The Shahed-136 uses GPS/INS navigation to a pre-programmed target coordinate.
It does not use real-time AI targeting decisions, face recognition, object classification, or dynamic targeting. The "autonomous" element is navigation, not target selection. Attribution of "the AI decided to kill this civilian" is not available because the targeting decision was made by humans when the coordinates were programmed.
-
-For the CS-KR "meaningful human control" framing to apply, the weapon must make a lethal targeting decision in real-time without human input. The Shahed fails this test. It is functionally closer to a guided missile than a LAWS.
-
-Implication: The triggering event for AI weapons stigmatization CANNOT be a current-generation Shahed. It requires a higher-autonomy system that makes real-time target identification and engagement decisions.
-
-**Failure Mode 2 — Normalization effect:**
-Ukraine is deploying Ukrainian-developed drones (including loitering munitions) against Russian positions and, increasingly, against Russian territory. Both sides are using autonomous-adjacent systems. Stigmatization requires asymmetric deployment — one side using a weapon against defenseless civilians without the other side having the same capability. Mutual use normalizes. The ICBL succeeded partly because "landmines" were associated with post-conflict proliferation in civilian zones, not mutual military use in a peer conflict.
-
-**Failure Mode 3 — Infrastructure targeting and indirect harm:**
-Most Shahed civilian casualties are indirect: power outages cause hypothermia, medical equipment failure, inability to maintain water treatment. The direct link between drone strike and civilian death is often mediated by infrastructure failure, not direct physical harm. The ICBL's emotional power came from direct, visible harm — a child who lost a limb to a mine is a specific, identifiable victim with a photograph. The Shahed's civilian harm is real but distributed and indirect, harder to anchor emotionally.
-
-**Failure Mode 4 — Conflict framing dominates weapons framing:**
-Coverage of Ukraine is organized around "Russian aggression vs. Ukrainian resistance" rather than "autonomous weapons vs. civilians." The weapons framing is submerged in the conflict framing. For CS-KR's narrative to activate, the autonomous weapon must be the subject of the story, not merely an element of a larger conflict story. This requires either a non-war setting (peacetime deployment or police use) or a conflict where the weapon is so novel and its autonomy so distinctive that it becomes the story.
-
-**Failure Mode 5 — Missing anchor figure:**
-Princess Diana's Angola visit worked because Diana's extraordinary cultural standing made the landmine issue unavoidable in Western media. She brought personal embodiment to an abstract weapons policy issue. No equivalent figure has personally engaged with autonomous weapons civilian casualties in a way that generates comparable media saturation. The absence of the high-status emotional anchor is not just a media strategy gap — it reflects the "narrative pre-event infrastructure" failure discussed in the triggering-event architecture analysis.
-
-**What this reveals about the triggering event requirements:**
-
-For the triggering event to generate ICBL-scale response, it needs:
-1. **Autonomous targeting attribution:** The AI system makes the targeting decision in real-time (not pre-programmed GPS coordinates). This requires a more advanced autonomous system than current Shahed-class weapons.
-2. **Asymmetric deployment:** Used by one side against civilians who have no equivalent capability — probably requires non-state actor deployment or authoritarian government deployment against own population.
-3. **Direct, visible harm:** The civilian casualty is directly and physically attributable to the drone's decision — a specific person, killed by a specific decision the AI made, documented with specific evidence.
-4.
**Narrative anchor figure:** Either a cultural figure of Diana's standing, or the victim themselves becomes a recognized individual (requires Western media context and a specific, identifiable human story).
-5. **Non-conflict setting OR non-mutual use:** The weapon is either used in a non-war context (police drone, border control AI) or in an asymmetric war where the deploying side has no military justification framing available.
-
-**Prediction for the triggering event:**
-The first credible candidate is NOT in the Ukraine conflict. More likely candidates:
-- A counter-terrorism or border-control autonomous drone system misidentifying and killing civilians in a context where the Western media can cover it freely
-- An authoritarian government using AI-enabled targeting against an identifiable ethnic minority in a context with international documentation access
-- A commercially-available modified autonomous drone used by a non-state actor for targeted political assassination in a Western country
-
-The Shahed campaign is evidence that even large-scale drone warfare against civilians can be insufficient to trigger the normative shift if the five failure mode criteria aren't met.
-
----
-
-## Agent Notes
-
-**Why this matters:** The Ukraine/Shahed analysis is the most concrete recent test of whether the triggering event conditions have been approached. All five failure modes are instructive — they specify what the triggering event MUST include that the Shahed campaign lacked. This is more useful than abstract criteria.
-
-**What surprised me:** The attribution problem is deeper than I expected. The gap between "loitering munition with GPS navigation" and "AI autonomous targeting system making real-time decisions" is the key failure. This implies the triggering event will require MORE advanced AI weapons than currently deployed — which pushes the timeline forward but also clarifies what to watch for.
-
-**What I expected but didn't find:** Evidence that the Ukraine conflict has substantially advanced the CS-KR normative campaign. It appears not to have — CS-KR's political progress in 2023-2024 is not notably accelerated relative to 2019-2022. The Shahed campaign has raised awareness of loitering munitions but has NOT been framed as "autonomous weapons" in mainstream coverage.
-
-**KB connections:**
-- CS-KR trajectory analysis (today's second archive) — the triggering event gap assessment
-- Triggering-event architecture (today's third archive) — the five failure modes provide specific content for the "what the triggering event requires" section
-- Strategic utility differentiation (today's fourth archive) — Shahed-class weapons are Category 2 (medium strategic utility), which is exactly the category the Ottawa Treaty path applies to; but the triggering event hasn't occurred for this category
-
-**Extraction hints:**
-1. ENRICHMENT: Triggering-event architecture claim — the five failure modes (attribution, normalization, indirect harm, conflict framing, anchor figure) add specific empirical content to the abstract three-component architecture. Inline the Ukraine/Shahed analysis as supporting evidence.
-2. Not a standalone claim — this is an enrichment of the triggering-event architecture and the CS-KR assessment.
-
-**Context:** UN OHCHR "Ukraine: Report on the Human Rights Situation" (various 2022-2025 reports). ACLED conflict data. ISW (Institute for the Study of War) Shahed usage tracking. Center for Naval Analyses "Shahed Drone Assessment" (2023). PAX report on autonomous weapons in Ukraine (2024).
-
-## Curator Notes (structured handoff for extractor)
-PRIMARY CONNECTION: Triggering-event architecture archive (today's third archive) — provides the empirical content for the abstract criteria
-WHY ARCHIVED: Ukraine/Shahed is the most important recent near-miss test case for the triggering event hypothesis.
The five failure modes are analytically precise and inform what to watch for as next-generation AI weapons are deployed.
-EXTRACTION HINT: Extract as ENRICHMENT to the triggering-event architecture claim, not standalone. The five failure modes belong in the body of that claim as inline evidence.
-
-
-## Key Facts
-- Shahed-136 is a delta-wing loitering munition with a ~40-50 kg warhead using GPS/INS navigation
-- Russia deployed Shahed drones against Ukrainian civilian infrastructure from September 2022
-- Ukraine Ministry of Defense reports intercepting 6,000+ Shahed drones between 2022 and 2024
-- UN OHCHR documented hundreds of civilian deaths directly attributed to Shahed strikes
-- Shahed strikes targeted power grid, water infrastructure, and apartment buildings in Kyiv, Odessa, Kharkiv
-- Most Shahed civilian casualties are indirect, through infrastructure failure rather than direct physical harm