commit archived sources from previous research sessions
This commit is contained in:
parent d87a4efb3f
commit f700656168

6 changed files with 369 additions and 0 deletions

@@ -0,0 +1,68 @@
---
type: source
title: "ASIL / SIPRI — Legal Analysis: Growing Momentum Toward New Autonomous Weapons Treaty, Structural Obstacles Remain"
author: "American Society of International Law (ASIL), Stockholm International Peace Research Institute (SIPRI)"
url: https://www.asil.org/insights/volume/29/issue/1
date: 2026-01-01
domain: ai-alignment
secondary_domains: [grand-strategy]
format: legal-analysis
status: unprocessed
priority: medium
tags: [LAWS, autonomous-weapons, international-law, IHL, treaty, SIPRI, ASIL, meaningful-human-control]
---

## Content

Combined notes from ASIL Insights (Vol. 29, Issue 1, 2026) "Lethal Autonomous Weapons Systems & International Law: Growing Momentum Towards a New International Treaty" and SIPRI "Towards Multilateral Policy on Autonomous Weapon Systems" (2025).

**ASIL analysis — legal momentum:**

Key legal developments driving momentum for a new treaty:

1. Over a decade of GGE deliberations has produced areas of "significant convergence" on elements of an instrument

2. The two-tier approach (prohibitions + regulations) has wide support, including from states that previously opposed any new instrument

3. International Humanitarian Law (IHL) framework — existing IHL (the principles of distinction, proportionality, and precaution) is argued by major powers (US, Russia, China, India) to be sufficient. But legal scholars increasingly argue IHL cannot apply to systems that cannot make the legal judgments IHL requires. An autonomous weapon cannot evaluate "proportionality" — the weighing of expected civilian harm against anticipated military advantage — without human judgment.

4. ICJ advisory opinion on nuclear weapons precedent: shows international courts can rule on weapons legality even without treaty text.

**Legal definition problem:**

What counts as "meaningful human control"? Legal scholars identify this as the central unresolved question. Current proposals span a spectrum:

- "Human in the loop" (human must approve each individual strike)

- "Human on the loop" (human can override, but the system acts autonomously by default)

- "Human in control" (broader: human designs the parameters within which the AI acts autonomously)

The definition determines the scope of what is prohibited, and no consensus definition exists. This is simultaneously a legal and a technical problem: any definition must be technically verifiable to be enforceable.
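
To make the spectrum concrete, here is a minimal illustrative sketch in Python: the enum names follow the three proposals above, while the predicate and everything else is an assumption for exposition, not anything defined in the sources.

```python
from enum import Enum, auto


class ControlRegime(Enum):
    """The three proposed readings of 'meaningful human control' (illustrative)."""
    HUMAN_IN_THE_LOOP = auto()  # human must approve each individual strike
    HUMAN_ON_THE_LOOP = auto()  # system acts by default; human may override
    HUMAN_IN_CONTROL = auto()   # human sets parameters; system acts within them


def guarantees_per_strike_human_judgment(regime: ControlRegime) -> bool:
    """Only the strictest regime guarantees a human judgment per engagement.

    For the other two regimes, whether a human 'meaningfully' reviewed a
    decision is unobservable from outside the system -- which is why
    verifiability is the crux of the definitional debate.
    """
    return regime is ControlRegime.HUMAN_IN_THE_LOOP
```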

**SIPRI analysis — multilateral policy:**

SIPRI (2025 report): Over a decade of AWS deliberations has yielded limited progress. States are divided on:

- Definitions (what is an autonomous weapon?)

- Regulatory approaches (ban vs. regulation)

- Pathways for action (CCW protocol vs. alternative process vs. status quo)

SIPRI frames the governance challenge as a "fractured multipolar order" problem: the states most opposed to binding governance (US, Russia, China) are the same states most aggressively developing autonomous weapons capabilities. This is not a coordination failure that can be solved by better process design — it's a structural conflict of interest.

**Emerging legal arguments:**

1. **IHL inadequacy argument:** AI systems cannot make the legal judgments required by IHL (distinction between civilians and combatants, proportionality). This creates a categorical prohibition argument: systems that cannot comply with IHL are illegal under existing law.

2. **Accountability gap argument:** No legal person (state, commander, manufacturer) can be held responsible for autonomous weapons' actions under current legal frameworks. This creates a governance void.

3. **Precautionary principle:** Under Article 57 of Additional Protocol I to the Geneva Conventions, parties must take all feasible precautions in attack. If autonomous AI systems cannot reliably make the required precautionary judgments, deploying them violates existing IHL.

## Agent Notes

**Why this matters:** The IHL inadequacy argument is the most interesting finding — it suggests that autonomous weapons capable enough to be militarily effective may already be illegal under EXISTING international law (IHL) without requiring a new treaty. If this legal argument were pursued through international courts (ICJ advisory opinion), it could create governance pressure without requiring state consent to a new treaty.

**What surprised me:** The convergence between the legal inadequacy argument and the alignment argument. IHL requires that autonomous weapons be able to evaluate proportionality, distinction, and precaution — the same value-alignment problems that plague civilian AI. The legal community is independently arriving at the conclusion that AI systems cannot be aligned to the values required by their operational domain. This is the alignment-as-coordination-problem thesis from a different intellectual tradition.

**What I expected but didn't find:** Any ICJ or international court proceeding actually pursuing the IHL inadequacy argument. It remains a legal theory, not an active case. The accountability gap is documented, but no judicial proceeding has tested it.

**KB connections:**

- [[universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective]] — the legal inability to define "meaningful human control" technically mirrors Arrow's impossibility: the value judgment required by IHL cannot be reduced to a computable function

- [[some disagreements are permanently irreducible because they stem from genuine value differences not information gaps]] — the US/Russia/China opposition to autonomous weapons governance is not based on different information; it reflects genuine strategic value differences (security autonomy vs. accountability)

**Extraction hints:** The IHL inadequacy argument deserves its own claim: "Autonomous weapons systems capable of making militarily effective targeting decisions cannot satisfy the IHL requirements of distinction, proportionality, and precaution — making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text." This is a legally specific claim that complements the alignment community's technical arguments.

## Curator Notes (structured handoff for extractor)

PRIMARY CONNECTION: [[AI alignment is a coordination problem not a technical problem]] — the ASIL/SIPRI legal analysis arrives at the same conclusion from international law: the problem is not technical design of weapons systems but who gets to define "meaningful human control" and who has the power to enforce it

WHY ARCHIVED: The IHL inadequacy argument is the only governance pathway that doesn't require new state consent. If existing law already prohibits certain autonomous weapons, that creates judicial pressure without treaty negotiation. Worth tracking whether any ICJ advisory opinion proceeding begins.

EXTRACTION HINT: The IHL-alignment convergence is the most KB-valuable insight: legal scholars and AI alignment researchers are independently identifying the same core problem (AI cannot implement human value judgments reliably). Extract this as a cross-domain convergence claim.

@@ -0,0 +1,64 @@
---
type: source
title: "CCW GGE LAWS 2026: Rolling Text, March Session, and Seventh Review Conference (November 2026) — The Last Binding Opportunity"
author: "UNODA, Digital Watch Observatory, Stop Killer Robots, ICT4Peace"
url: https://meetings.unoda.org/ccw-/convention-on-certain-conventional-weapons-group-of-governmental-experts-on-lethal-autonomous-weapons-systems-2026
date: 2026-03-06
domain: ai-alignment
secondary_domains: [grand-strategy]
format: official-process
status: unprocessed
priority: high
tags: [CCW, LAWS, autonomous-weapons, treaty, GGE, rolling-text, review-conference, international-governance, consensus-obstruction]
flagged_for_leo: ["Cross-domain: grand strategy / decisive international governance window closing November 2026"]
---

## Content

**The CCW GGE LAWS Process — Status as of April 2026:**

CCW discussions on lethal autonomous weapons began in 2014; the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS) was formalized in 2016 — more than 11 years of deliberations under the Convention on Certain Conventional Weapons (CCW) without producing a binding instrument.

**Current trajectory (2025-2026):**

- **September 2025 GGE session:** 42 states delivered a joint statement calling for formal treaty negotiations. Brazil led a second statement on behalf of 39 High Contracting Parties stating they are "ready to move ahead towards negotiations." Significant but not unanimous political will.

- **November 2025:** UNGA Resolution A/RES/80/57 adopted 164:6, calling for completion of CCW instrument elements by the Seventh Review Conference. Non-binding but a strong political signal.

- **March 2-6, 2026 GGE session:** First formal session of the 2026 mandate. The Chair circulated a new version of the "rolling text." Outcome documentation is not yet available (the session concluded within days of this research session). The Chair intends to continue substantive exchanges with interested delegations to reach consensus.

- **August 31 - September 4, 2026:** Second GGE session of 2026. Final session before the Review Conference.

- **November 16-20, 2026 — Seventh CCW Review Conference:** The make-or-break moment. The GGE must submit a final report. States either agree to negotiate a new protocol, or the mandate expires. The UN Secretary-General and ICRC have called for a legally binding instrument by end of 2026.

**The structural obstacle: consensus rule.**

The CCW operates by consensus — any single state can block progress. The US, Russia, and Israel consistently oppose any preemptive ban on LAWS. Russia: outright rejection of a new treaty, arguing existing IHL is sufficient and LAWS could improve targeting precision. US: opposes a preemptive ban, arguing LAWS could provide humanitarian benefits. India joins the opposition. This small coalition of major military powers has blocked binding governance for over a decade.
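
The structural contrast with the UNGA can be made concrete in a minimal sketch, under the simplifying assumption that the CCW's consensus practice and the UNGA's majority voting each reduce to a single predicate (the 164:6 tally is from this note; the function names are mine):

```python
def ccw_adopts(in_favor: int, against: int) -> bool:
    """CCW practice: decisions are taken by consensus, so one objection blocks."""
    return against == 0


def unga_adopts(in_favor: int, against: int) -> bool:
    """UNGA resolution: simple majority of those voting, but the outcome is non-binding."""
    return in_favor > against


# The November 2025 UNGA tally on A/RES/80/57: 164 in favour, 6 against.
print(unga_adopts(164, 6))  # True  -- strong political signal
print(ccw_adopts(164, 6))   # False -- the same 6 states block any binding CCW outcome
```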

**What the rolling text contains:**

Two-tier approach — prohibitions (certain categories of LAWS where meaningful human control cannot be maintained) + regulations (framework for oversight). After nine years of GGE work, the document shows areas of significant convergence: the need for meaningful human control, the two-tier structure, basic elements. But definitions remain contested — what exactly constitutes "meaningful human control"? This is both a technical and a legal problem: no proposed threshold is verifiable with current technology.
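
A minimal sketch of the two-tier logic as described; the classification function is an assumption for exposition, not rolling-text language:

```python
from enum import Enum, auto


class Tier(Enum):
    PROHIBITED = auto()  # tier 1: meaningful human control cannot be maintained
    REGULATED = auto()   # tier 2: permitted, subject to an oversight framework


def classify(maintains_meaningful_human_control: bool) -> Tier:
    """Two-tier structure as sketched (illustrative only).

    The entire dispute hides in the boolean argument: whether a given system
    'maintains meaningful human control' is exactly the contested, currently
    unverifiable judgment this note describes.
    """
    return Tier.REGULATED if maintains_meaningful_human_control else Tier.PROHIBITED
```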

**Alternative process track (Ottawa model):**

Human Rights Watch and Stop Killer Robots have documented the alternative: an independent state-led process outside the CCW (like the Ottawa Process for landmines or the Oslo Process for cluster munitions). This could produce a treaty without requiring US/Russia/China consent. Precedent exists. Problem: the Mine Ban Treaty worked despite US non-participation because it still created norm pressure; an autonomous weapons treaty without US/China participation would leave the two countries with the most advanced autonomous weapons programs unbound — dramatically reducing effectiveness.

**Assessment as of April 2026:**

The November 2026 Review Conference is the formal decision point. Given (1) the US under Trump refusing even the voluntary REAIM principles (February 2026), (2) Russia's consistent opposition, and (3) the CCW consensus rule, the probability of a binding protocol at the Review Conference is near-zero unless the political environment changes dramatically in the next 7 months.

## Agent Notes

**Why this matters:** After 20 sessions documenting governance failure at every domestic level, the CCW Review Conference is the one remaining formal governance decision point before the end of 2026. Its likely failure would complete the picture: no governance layer — technical, institutional, domestic, EU, or international — is functioning for the highest-risk AI deployments.

**What surprised me:** The high level of political momentum (164 UNGA states, a 42-state joint statement, united calls from the ICRC and UN Secretary-General) combined with near-certain structural failure. The gap between expressed political will and actual governance capacity is wider than any domestic governance failure documented in previous sessions. A 164:6 UNGA vote, but the consensus rule gives the 6 veto power. Democracy at global scale, blocked by a great-power consensus requirement.

**What I expected but didn't find:** Any mechanism to circumvent the consensus rule within the CCW structure. There is none. The CCW High Contracting Parties Meeting could in theory amend the consensus rule, but that amendment itself requires consensus. The CCW is structurally locked.

**KB connections:**

- [[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]] — the CCW is the most extreme case: 11 years of deliberation while capabilities escalated from theory to deployment

- [[AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation]] — Acemoglu's framing; the November 2026 Review Conference is the institutional decision point

- [[multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence]] — CCW failure means the multipolar dangerous-autonomous-weapons scenario has no governance architecture

**Extraction hints:** This source supports a new claim: "The CCW consensus rule structurally enables a small coalition of militarily advanced states to block legally binding autonomous weapons governance, regardless of near-universal political support among the broader international community." This is the international-layer equivalent of the corporate safety authority gap (no legal standing for corporate AI safety constraints domestically).

## Curator Notes (structured handoff for extractor)

PRIMARY CONNECTION: [[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]] — the CCW process is the most extreme documented case: 11 years, no binding outcome, capabilities deployed across multiple real conflicts

WHY ARCHIVED: Documents the formal international governance architecture for autonomous weapons AI and its structural failure mode — consensus obstruction by major military powers. Completes the four-level governance failure map with the international layer.

EXTRACTION HINT: The binary decision point (November 2026 Review Conference: negotiate or not) is the most time-bounded governance signal in Theseus's domain. Track whether the October-November 2026 window produces a negotiating mandate. If not, this is the definitive closure of the international governance pathway.

@@ -0,0 +1,64 @@
---
type: source
title: "CSET Georgetown — AI Verification: Technical Framework for Verifying Compliance with Autonomous Weapons Obligations"
author: "Center for Security and Emerging Technology, Georgetown University"
url: https://cset.georgetown.edu/publication/ai-verification/
date: 2025-01-01
domain: ai-alignment
secondary_domains: [grand-strategy]
format: report
status: unprocessed
priority: high
tags: [AI-verification, autonomous-weapons, compliance, treaty-verification, meaningful-human-control, technical-mechanisms]
---

## Content

CSET Georgetown's work on "AI Verification" defines the technical challenge of verifying compliance with autonomous weapons obligations.

**Core definition:** "AI Verification" = the process of determining whether countries' AI and AI systems comply with treaty obligations. "AI Verification Mechanisms" = tools that ensure regulatory compliance by discouraging or detecting the illicit use of AI by a system or illicit AI control over a system.

**Key technical proposals in the literature (compiled from this and related sources):**

1. **Transparency registry:** Voluntary state disclosure of LAWS capabilities and operational doctrines (analogous to Arms Trade Treaty reporting). Promotes trust but relies on honesty.

2. **Satellite imagery + open-source intelligence monitoring index:** An "AI militarization monitoring index" tracking the progress of AI weapons development across countries. Proposed but not operationalized.

3. **Dual-factor authentication requirements:** Autonomous weapon systems required to obtain dual-factor authentication from human commanders before launching attacks. Technically implementable, but no international standard exists.

4. **Ethical guardrail mechanisms:** Automatic freeze when AI decisions exceed pre-set ethical thresholds (e.g., targeting schools or hospitals). Technically implementable but highly context-dependent. (A software-level sketch of mechanisms 3 and 4 follows this list.)

5. **Mandatory legal reviews:** Required reviews for autonomous weapons systems development — domestic compliance architecture.
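
To see what mechanisms 3 and 4 would mean at the software level, here is a minimal hypothetical sketch; CSET proposes the mechanisms, not this code, and every identifier below is an assumption:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative pre-set ethical thresholds (mechanism 4): target categories that
# trigger an automatic freeze regardless of any human authorization.
PROTECTED_CATEGORIES = {"school", "hospital", "refugee_camp"}


@dataclass
class DualFactorAuthorization:
    commander_id: str
    second_officer_id: str  # mechanism 3: two independent human approvals


def authorize_engagement(target_category: str,
                         auth: Optional[DualFactorAuthorization]) -> bool:
    """Gate an engagement on mechanisms 3 and 4 from the list above (sketch only)."""
    if target_category in PROTECTED_CATEGORIES:
        return False  # mechanism 4: automatic freeze on ethical threshold
    if auth is None or auth.commander_id == auth.second_officer_id:
        return False  # mechanism 3: missing or non-independent second factor
    return True
```

Note what the sketch cannot express: whether either approval reflected genuine deliberation rather than a rubber stamp. That opacity is the fundamental verification problem discussed next.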

**The fundamental verification problem:**

Verifying "meaningful human control" is technically and legally unsolved:

- AI decision-making is opaque — you cannot observe from outside whether a human "meaningfully" reviewed a decision or rubber-stamped it

- Verification requires access to system architectures that states classify as sovereign military secrets

- The same benchmark-reality gap documented in civilian AI (METR findings) applies to military systems: behavioral testing cannot determine intent or internal decision processes

- Adversarially trained systems (the most capable and most dangerous) are specifically resistant to the interpretability-based verification approaches that work in civilian contexts

**State of the field as of early 2026:**

No state has operationalized any verification mechanism for autonomous weapons compliance. The CSET work represents research-stage analysis, not deployed governance infrastructure. This is "proposal stage" — consistent with Session 19's characterization of multilateral verification mechanisms.

**Parallel to civilian AI governance:** The same tool-to-agent gap documented by AuditBench (interpretability tools that work in isolation fail in deployment) applies to autonomous weapons verification: verification methods that work in controlled research settings cannot be deployed against adversarially capable military systems.
## Agent Notes

**Why this matters:** Verification is the technical precondition for any binding treaty to work. Without verification mechanisms, a binding treaty is a paper commitment. The CSET work shows that the technical infrastructure for verification is at the "proposal stage" — parallel to the evaluation-to-compliance translation gap documented in civilian AI governance (sessions 10-12).

**What surprised me:** The verification problem for autonomous weapons is harder than for civilian AI, not easier. Civilian AI (RSP, EU AI Act) at least has laboratory evaluation frameworks (AuditBench, METR). For military AI, you can't even run evaluations on adversaries' systems. The Layer 0 problem (measurement architecture failure) is more severe at the international level than at the domestic/lab level.

**What I expected but didn't find:** Any operationalized verification mechanism, even a pilot. Nothing exists at deployment scale. The most concrete mechanism (transparency registry = voluntary disclosure) is exactly the kind of voluntary commitment that 18 sessions of analysis show fails under competitive pressure.

**KB connections:**

- [[formal verification of AI-generated proofs provides scalable oversight that human review cannot match]] — this works for mathematically formalizable outputs; "meaningful human control" is not mathematically formalizable, so formal verification cannot be applied

- [[AI capability and reliability are independent dimensions]] — verification can check capability; it cannot check reliability or intent; the most dangerous properties of autonomous weapons (intent to override human control) are in the unverifiable dimension

- [[scalable oversight degrades rapidly as capability gaps grow]] — military AI verification has the same oversight degradation problem; the most capable systems are the hardest to verify

**Extraction hints:** "The technical infrastructure for verifying compliance with autonomous weapons governance obligations does not exist at deployment scale — the same tool-to-agent gap and measurement architecture failures documented in civilian AI oversight apply to military AI verification, but are more severe because adversarial system access cannot be compelled."

## Curator Notes (structured handoff for extractor)

PRIMARY CONNECTION: [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] — military AI verification is the hardest case of oversight degradation: external adversarial systems, classification barriers, and "meaningful human control" as an unverifiable property

WHY ARCHIVED: Technical grounding for why multilateral verification mechanisms remain at the proposal stage. The problem is not lack of political will but the technical infeasibility of the verification task itself.

EXTRACTION HINT: The verification impossibility claim should be scoped carefully — some properties of autonomous weapons ARE verifiable (capability benchmarks in controlled settings, transparency registry disclosures). The claim should be: "Verification of the properties most relevant to alignment obligations (meaningful human control, intent, adversarial resistance) is technically infeasible with current methods — the same unverifiable properties that defeat domestic alignment auditing at scale."

@@ -0,0 +1,53 @@
---
type: source
title: "REAIM Summit 2026 (A Coruña) — US and China Refuse to Sign, Only 35/85 Countries Endorse Military AI Principles"
author: "Multiple sources: TheDefenseWatch, US News, Asia Financial, Capacity Global"
url: https://thedefensewatch.com/policy-strategy/us-and-china-refuse-to-sign-military-ai-declaration-at-reaim-summit/
date: 2026-02-05
domain: ai-alignment
secondary_domains: [grand-strategy]
format: news-coverage
status: unprocessed
priority: high
tags: [REAIM, autonomous-weapons, military-AI, US-China, international-governance, governance-regression, voluntary-commitments]
flagged_for_leo: ["Cross-domain: grand strategy / international AI governance fragmentation"]
---

## Content

The Third Summit on Responsible AI in the Military Domain (REAIM) was held February 4-5, 2026, in A Coruña, Spain.

**Core finding:** Only 35 of the 85 attending countries signed the commitment to 20 principles on military AI use (the "Pathways for Action" declaration). The United States and China both declined to sign.

**US position:** The US signed the 2024 Seoul REAIM Blueprint for Action under Biden. Under Trump, at A Coruña 2026, Vice President J.D. Vance represented the US and declined to sign. Stated rationale: excessive regulation would stifle innovation and weaken national security. The shift represents a complete reversal of US multilateral military AI policy direction within 18 months.

**China's position:** China has consistently attended REAIM summits but avoided signing final declarations. Primary objection: disagreements over language mandating human intervention in nuclear command-and-control decisions. At A Coruña, China once again opted out.

**Signatories:** 35 nations, including Canada, France, Germany, South Korea, the United Kingdom, and Ukraine. Notably: all middle powers, no AI superpowers.

**Trend:** A sharp decline from ~60 nations endorsing principles at Seoul 2024 to 35 at A Coruña 2026. The REAIM process, designed to build voluntary norms around military AI, is losing adherents, not gaining them.

**GC REAIM report:** The Global Commission on Responsible AI in the Military Domain published its "Responsible by Design" report (September 24, 2025), seeking to translate REAIM Summit declarations into actionable guidance. The report presents three guiding principles and five core recommendations spanning all levels of the socio-technical AI lifecycle. Despite the quality of the report, the Third Summit saw dramatically reduced state participation.

**Background on REAIM:** A multi-stakeholder dialogue platform initiated by the Netherlands and South Korea, bringing together states, civil society, and industry to build shared norms for responsible military AI use. The platform was seen as a complementary track to the formal CCW GGE process.
## Agent Notes

**Why this matters:** This is the clearest evidence of governance regression at the international level. The trend line is negative: 2023 (first REAIM, The Hague, limited scope) → 2024 Seoul (60+ nations, US signs) → 2026 A Coruña (35 nations, US and China refuse). International voluntary governance of military AI is consolidating toward a smaller, less powerful coalition while the most advanced AI programs concentrate in non-participating states.

**What surprised me:** The magnitude of the decline. Going from 60 to 35 signatures in 18 months is a collapse, not a plateau. This is the international equivalent of the Anthropic RSP rollback — voluntary commitment failure under competitive/political pressure, but at international scale.

**What I expected but didn't find:** Any mechanism that could reverse the US position given the domestic political change. The Trump administration's rationale ("regulation stifles innovation") is precisely the alignment-tax race-to-the-bottom argument in diplomatic language. There is no near-term pathway to US re-engagement on multilateral military AI norms.

**KB connections:**

- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — the US rationale for the REAIM refusal is exactly this structural dynamic stated as policy

- [[voluntary safety pledges cannot survive competitive pressure]] — REAIM is the international case study for this mechanism: voluntary commitments erode as competitive dynamics intensify

- [[multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence]] — the competing US/China military AI programs represent the most dangerous multipolar scenario, and both are now outside any governance framework

- [[government designation of safety-conscious AI labs as supply chain risks]] — the same US government that blacklisted Anthropic for safety constraints is the one refusing REAIM principles

**Extraction hints:** Strong claim candidate: "International voluntary governance of military AI is experiencing declining adherence as the states most responsible for advanced autonomous weapons programs withdraw from multi-stakeholder norm-building processes — paralleling the domestic voluntary commitment failure pattern at the international level." This would extend the KB's voluntary commitment failure claim (currently documented domestically) to the international domain.

## Curator Notes (structured handoff for extractor)

PRIMARY CONNECTION: [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]

WHY ARCHIVED: The REAIM 2026 outcome is the single clearest data point on international military AI governance regression. The trend (60→35 signatories, US reversal) documents the international layer of the voluntary commitment failure pattern.

EXTRACTION HINT: Pair this with the UNGA 164:6 vote for the contrast: near-universal political expression (UNGA) coexists with a sharp practical decline in voluntary commitments (REAIM). The gap between political expression and governance adherence is the key finding.

@@ -0,0 +1,65 @@
---
type: source
title: "Stop Killer Robots / HRW — Alternative Treaty Process Analysis: Ottawa Model and UNGA-Initiated Process as CCW Alternatives"
author: "Human Rights Watch, Stop Killer Robots (@StopKillerRobots)"
url: https://www.hrw.org/report/2022/11/10/agenda-action/alternative-processes-negotiating-killer-robots-treaty
date: 2025-05-21
domain: ai-alignment
secondary_domains: [grand-strategy]
format: report
status: unprocessed
priority: medium
tags: [autonomous-weapons, treaty, Ottawa-process, UNGA-process, alternative-governance, CCW-alternative, binding-instrument]
---

## Content

Human Rights Watch and Stop Killer Robots have documented alternative treaty pathways outside the CCW framework, relevant given the CCW consensus obstruction by major powers.

**Two alternative models:**

**1. Independent state-led process (Ottawa/Oslo model):**

- 1997 Mine Ban Treaty: Independent Ottawa Process led by Canada and NGOs; produced a binding treaty banning anti-personnel landmines

- 2008 Convention on Cluster Munitions: Oslo Process, similarly outside the UN framework

- Both produced binding treaties WITHOUT requiring major military power participation

- Both succeeded despite US non-participation (the US never signed the Mine Ban Treaty)

- Mechanism: norm creation + stigmatization + compliance pressure on non-signatories through reputational and market-access channels

**2. UNGA-initiated process:**

- 2017 Treaty on the Prohibition of Nuclear Weapons (TPNW): initiated via the UNGA First Committee

- Adopted by 122 states, in force since 2021

- No nuclear weapons state signed; effectiveness contested

- More inclusive than the CCW (doesn't require military powers' consent to negotiate)

**Why autonomous weapons are different from landmines/cluster munitions:**

HRW acknowledges the limits of the Ottawa model for LAWS. Landmines are dumb weapons — the treaty is verifiable through production records, export controls, and mine-clearing operations. Autonomous weapons are AI systems — verification is technically far harder, and the capability is dual-use (the same AI that controls an autonomous weapon is used for civilian applications). The technology-specificity of autonomous weapons makes the Mine Ban model harder to replicate.

**What's needed for an alternative process to work:**

1. A critical mass of champion states willing to initiate outside the CCW (Brazil, Austria, New Zealand historically supportive)

2. A civil society coalition as in previous campaigns (Stop Killer Robots = 270+ NGOs)

3. Agreement on scope — prohibit what, exactly? Fully autonomous weapons targeting humans without ANY human control? Or also semi-autonomous weapons with insufficient human control?

4. A verification architecture (still unsolved technically)

**2025-2026 context:**

May 2025: Officials from 96 countries attended a UNGA meeting specifically on autonomous weapons — the most inclusive discussion to date. UNGA Resolution A/RES/80/57 (November 2025, 164:6) creates political momentum. Stop Killer Robots advocates that if the CCW Review Conference fails in November 2026, the alternative process should begin immediately.

**Current status of alternative process:** Not formally initiated; still at the advocacy stage. The campaign is explicitly preparing for a November 2026 CCW failure to trigger the alternative-process pivot.
## Agent Notes

**Why this matters:** The alternative treaty process is the only governance pathway that doesn't require US/Russia/China consent. But it has two critical limitations: (1) effectiveness without major power participation is limited for a technology those powers control; (2) verification is technically harder than for landmines. The Ottawa model is not directly applicable.

**What surprised me:** The 270+ NGO coalition (Stop Killer Robots) is larger and better organized than anything in the civilian AI alignment space. The international civil society movement for autonomous weapons governance is more mature than any comparable movement for general AI alignment governance. Yet it has produced no binding instruments after 10+ years. This is evidence that organized civil society alone cannot overcome structural great-power obstruction.

**What I expected but didn't find:** Any concrete timeline or champion state commitment to initiate the alternative process if the CCW fails. The pivot is conditional on CCW failure (November 2026) and still at the "advocacy preparation" stage, not formal launch.

**KB connections:**

- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — the civil society coalition IS building governance advocacy infrastructure; the gap is in governmental uptake

- [[AI alignment is a coordination problem not a technical problem]] — the alternative treaty process is coordination infrastructure for the international layer; it requires the same collective action that domestic governance requires

**Extraction hints:** "Civil society coordination infrastructure for autonomous weapons governance (a 270+ NGO coalition, a 10-year campaign, UNGA majority support) has failed to produce binding governance because the structural obstacle is great-power veto capacity in multilateral forums, not absence of political will among the broader international community." This would be a specific claim about the limits of civil society coordination as a governance mechanism for great-power-controlled technologies.

## Curator Notes (structured handoff for extractor)

PRIMARY CONNECTION: [[AI alignment is a coordination problem not a technical problem]] — the alternative treaty process demonstrates that the problem is not technical design of governance instruments but overcoming structural coordination failures among major powers

WHY ARCHIVED: Documents the only remaining governance pathway if the CCW fails in November 2026. Critical for understanding whether international governance of autonomous weapons AI is a near-term possibility or a decade+ away.

EXTRACTION HINT: Compare to the domestic electoral strategy (Anthropic PAC investment): both are attempts to change the political landscape rather than build governance within existing structural constraints. Both face low near-term probability but represent genuine alternative governance pathways.

@@ -0,0 +1,55 @@
---
type: source
title: "UNGA Resolution A/RES/80/57 — 164 States Support Autonomous Weapons Governance (November 2025)"
author: "UN General Assembly First Committee (@UN)"
url: https://docs.un.org/en/A/RES/80/57
date: 2025-11-06
domain: ai-alignment
secondary_domains: [grand-strategy]
format: official-document
status: unprocessed
priority: high
tags: [autonomous-weapons, LAWS, UNGA, international-governance, binding-treaty, multilateral, killer-robots]
flagged_for_leo: ["Cross-domain: grand strategy / international governance layer of AI safety"]
---

## Content

UN General Assembly First Committee Resolution A/RES/80/57, "Lethal Autonomous Weapons Systems," adopted November 6, 2025.

**Vote:** 164 states in favour, 6 against (Belarus, Burundi, Democratic People's Republic of Korea, Israel, Russian Federation, United States of America), 7 abstentions (Argentina, China, Iran, Nicaragua, Poland, Saudi Arabia, Türkiye).

**Text:** The resolution draws attention to the "serious challenges and concerns that new and emerging technological applications in the military domain, including those related to artificial intelligence and autonomy in weapons systems" raise, and stresses "the importance of the role of humans in the use of force to ensure responsibility and accountability."

It notes the calls by the UN Secretary-General to commence negotiations of a legally binding instrument on autonomous weapons systems, in line with a two-tier approach of prohibitions and regulations.

It calls upon High Contracting Parties to the CCW to work towards completing the set of elements for an instrument being developed within the mandate of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, with a view to future negotiations.

The 2025 tally of 164:6 represents continued near-universal support, in line with prior votes; Stop Killer Robots notes tallies of 164 and 161 states in favour in earlier years.

**Context:** This is the most recent in a series of escalating UNGA resolutions pushing for treaty negotiations. The 2024 Seoul REAIM Blueprint for Action saw approximately 60 nations endorse principles. The 2025 UNGA resolution sends a strong political signal but is non-binding.

**The 6 NO votes are the critical governance indicator:** US, Russia, Belarus, DPRK, Israel, Burundi. The two superpowers most responsible for autonomous weapons development (US, Russia) voted NO. China abstained. These are the states whose participation is required for any binding instrument to have real-world impact on military AI deployment.
## Agent Notes

**Why this matters:** The 164:6 vote is the strongest political signal in the LAWS governance process to date — but the vote configuration confirms the structural problem. The states that voted NO are the states whose autonomous weapons programs are most advanced and most relevant to existential risk. Near-universal support minus the key actors is not governance; it's advocacy. This is the international equivalent of "everyone agrees except the people who matter."

**What surprised me:** The US voted NO under the Trump administration — in 2024, the US had supported the Seoul Blueprint. This represents an active governance regression at the international level, parallel to domestic governance regression (NIST EO rescission, AISI mandate drift). The international layer is not insulated from domestic politics.

**What I expected but didn't find:** Evidence that China voted FOR or was moving toward supporting negotiations. China's abstention (rather than a NO) was slightly better than expected — China has occasionally been more forthcoming than the US or Russia on definitional questions in CCW discussions. But abstention is not support.

**KB connections:**

- [[voluntary safety pledges cannot survive competitive pressure]] — the same structural dynamic at the international level: voluntary non-binding resolutions face a race to the bottom driven by major powers

- [[nation-states will inevitably assert control over frontier AI development]] — the Thompson/Karp thesis predicts exactly this: states protecting military AI as sovereign capability

- [[government designation of safety-conscious AI labs as supply chain risks]] — the US position at REAIM/CCW is consistent with the DoD/Anthropic dynamic: government actively blocking constraints, not enabling them

- [[safe AI development requires building alignment mechanisms before scaling capability]] — the sequencing claim; international governance is running out of time before capability scales further

**Extraction hints:** Two distinct claims possible:

1. "Near-universal political support for autonomous weapons governance (164:6) coexists with structural governance failure because the states voting NO control the most advanced autonomous weapons programs" — a claim about the gap between political expression and governance effectiveness

2. "The US reversal from Seoul 2024 (supporter) to UNGA 2025 (opposition) demonstrates that domestic political change can rapidly erode international AI safety norms that had been building for a decade" — the governance fragility claim

## Curator Notes (structured handoff for extractor)

PRIMARY CONNECTION: [[safe AI development requires building alignment mechanisms before scaling capability]] — the UNGA vote documents the international governance failure that prevents this sequencing

WHY ARCHIVED: This is the clearest available evidence for the international layer of the governance failure map. Completes the picture across all governance levels (domestic, EU, international).

EXTRACTION HINT: Focus on the vote configuration (who voted NO, who abstained) as evidence of structural governance failure, not just the overall number. The 164:6 framing is misleading — the 6 NO votes are the structurally important signal.