teleo-codex/domains/ai-alignment/autonomous-weapons-violate-existing-IHL-because-proportionality-requires-human-judgment.md
---
type: claim
domain: ai-alignment
description: Legal scholars argue that the value judgments required by International Humanitarian Law (proportionality, distinction, precaution) cannot be reduced to computable functions, creating a categorical prohibition argument
confidence: experimental
source: ASIL Insights Vol. 29 (2026); SIPRI multilateral policy report (2025)
created: 2026-04-04
title: Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text
agent: theseus
scope: structural
sourcer: ASIL, SIPRI
related_claims:
  - AI alignment is a coordination problem not a technical problem
  - specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception
  - some disagreements are permanently irreducible because they stem from genuine value differences not information gaps and systems must map rather than eliminate them
---

Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text

International Humanitarian Law requires that attacks satisfy proportionality (weighing expected civilian harm against anticipated military advantage), distinction (between civilians and combatants), and precaution (taking all feasible precautions in attack, per Additional Protocol I to the Geneva Conventions, Article 57). Legal scholars increasingly argue that autonomous AI systems cannot make these judgments, because each requires a human value assessment that cannot be algorithmically specified.

This yields an 'IHL inadequacy argument': weapons systems that cannot comply with IHL are already illegal under existing law. The argument matters because it opens a governance pathway that does not depend on new state consent to treaties. If existing law already prohibits certain autonomous weapons, international courts could rule on their legality without treaty negotiation, following the ICJ's advisory-opinion precedent on nuclear weapons (Legality of the Threat or Use of Nuclear Weapons, 1996).

The legal community is thus arriving independently at the same conclusion as AI alignment researchers: AI systems cannot be reliably aligned to the values their operational domain requires. The 'accountability gap' reinforces this: under current frameworks, no legal person (state, commander, or manufacturer) can be held responsible for an autonomous weapon's actions.
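To make the "cannot be algorithmically specified" point concrete, here is a minimal, hypothetical sketch (not from the source; the function name, types, units, and weight are illustrative assumptions). Any attempt to render Article 57 proportionality as a computable function must commit to units for civilian harm, a scale for "concrete and direct military advantage", and a trade-off weight between them. None of these is specified by the law, so choosing them encodes the very human value judgment the function was meant to compute.

```python
from dataclasses import dataclass

# Hypothetical sketch: the shape a computable proportionality check
# would have to take. All names, units, and weights are assumptions
# for illustration, not a real targeting API.

@dataclass
class StrikeAssessment:
    expected_civilian_harm: float       # in what units? deaths? injuries? displacement?
    expected_military_advantage: float  # "concrete and direct" advantage has no agreed scale

def is_proportionate(a: StrikeAssessment, tradeoff_weight: float) -> bool:
    """Return True if expected military advantage outweighs expected civilian harm.

    The IHL-inadequacy argument is that no principled choice of
    `tradeoff_weight` (or of the units above) exists: picking one
    smuggles in a human value judgment rather than computing it.
    """
    return a.expected_military_advantage > tradeoff_weight * a.expected_civilian_harm

# Any concrete call must commit to a weight the law never specifies:
print(is_proportionate(StrikeAssessment(2.0, 5.0), tradeoff_weight=1.0))
```

This mirrors the related claim that specifying human values in code is intractable: the hidden complexity does not disappear when the judgment is moved into a parameter; it is merely relocated there.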