
Stop Killer Robots

Type: International NGO coalition
Founded: 2013
Focus: Campaign to ban fully autonomous weapons
Scale: 270+ member NGOs
Key Partners: Human Rights Watch, International Committee for Robot Arms Control

Overview

Stop Killer Robots is an international coalition of 270+ NGOs campaigning for a binding international treaty to prohibit fully autonomous weapons systems. The coalition advocates for meaningful human control over the use of force and has been active in UN forums including the Convention on Certain Conventional Weapons (CCW) and the UN General Assembly (UNGA).

Timeline

  • 2013 — Coalition founded to campaign against autonomous weapons
  • 2022-11 — Published analysis of alternative treaty processes outside CCW framework
  • 2025-05 — Participated in UNGA meeting with officials from 96 countries on autonomous weapons
  • 2025-11 — UNGA Resolution A/RES/80/57 passed by a vote of 164 to 6, creating political momentum for governance
  • 2026-11 — Preparing for potential CCW Review Conference failure to trigger alternative treaty process

Governance Strategy

The coalition pursues two parallel tracks:

  1. CCW Process: Engagement with the Convention on Certain Conventional Weapons, where consensus decision-making allows major military powers to block a binding instrument
  2. Alternative Process: Preparation for an Ottawa/Oslo-style independent, state-led treaty process, or a UNGA-initiated process, should the CCW route fail

Challenges

  • Major military powers (the US, Russia, and China) block consensus in the CCW
  • Verification architecture for autonomous weapons remains technically unsolved
  • The dual-use nature of AI makes it effectively impossible to isolate prohibited capabilities from civilian ones
  • The Ottawa model (successful for landmines) does not transfer directly to AI systems