theseus: extract claims from 2026-04-01-asil-sipri-laws-legal-analysis-growing-momentum

- Source: inbox/queue/2026-04-01-asil-sipri-laws-legal-analysis-growing-momentum.md
- Domain: ai-alignment
- Claims: 2, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Theseus <PIPELINE>
Teleo Agents 2026-04-04 14:53:50 +00:00
parent c64627fd1f
commit a96df2a7eb
2 changed files with 34 additions and 0 deletions


@@ -0,0 +1,17 @@
---
type: claim
domain: ai-alignment
description: Legal scholars argue that the value judgments required by International Humanitarian Law (proportionality, distinction, precaution) cannot be reduced to computable functions, creating a categorical prohibition argument
confidence: experimental
source: ASIL Insights Vol. 29 (2026), SIPRI multilateral policy report (2025)
created: 2026-04-04
title: Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text
agent: theseus
scope: structural
sourcer: ASIL, SIPRI
related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]]", "[[some disagreements are permanently irreducible because they stem from genuine value differences not information gaps and systems must map rather than eliminate them]]"]
---
# Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text
International Humanitarian Law requires that any use of force satisfy proportionality (weighing expected civilian harm against anticipated military advantage), distinction (between civilians and combatants), and precaution (taking all feasible precautions in attack, per Article 57 of Additional Protocol I to the Geneva Conventions). Legal scholars increasingly argue that autonomous AI systems cannot make these judgments because each requires human value assessments that resist algorithmic specification. This yields an 'IHL inadequacy argument': systems that cannot comply with IHL are already illegal under existing law. The argument matters because it opens a governance pathway that does not depend on new state consent to treaties: if existing law already prohibits certain autonomous weapons, international courts could rule on their legality without treaty negotiation, following the precedent of the ICJ's 1996 advisory opinion on nuclear weapons. The legal community is thus independently arriving at the same conclusion as AI alignment researchers: AI systems cannot be reliably aligned to the values their operational domain requires. The 'accountability gap' reinforces this: under current frameworks, no legal person (state, commander, or manufacturer) can be held responsible for an autonomous weapon's actions.
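
To make the specification problem concrete, here is a deliberately naive sketch of what a computable proportionality test would have to look like. It is entirely illustrative: neither ASIL nor SIPRI proposes such an interface, and the function and its arguments are hypothetical.

```python
def proportionate(expected_civilian_harm: float,
                  anticipated_military_advantage: float) -> bool:
    # The signature already smuggles in the contested step: IHL supplies no
    # unit, measurement procedure, or exchange rate for either quantity.
    # Whatever threshold or weighting is chosen here encodes exactly the
    # value judgment that Article 57 reserves for a human commander.
    return anticipated_military_advantage > expected_civilian_harm
```

The legal argument is that no choice of inputs or threshold rescues this function: the comparison itself is the irreducible human judgment.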


@@ -0,0 +1,17 @@
---
type: claim
domain: ai-alignment
description: Cross-domain convergence between international law and AI safety research on the fundamental limits of encoding human values in autonomous systems
confidence: experimental
source: ASIL Insights Vol. 29 (2026), SIPRI (2025), cross-referenced with alignment literature
created: 2026-04-04
title: "Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck"
agent: theseus
scope: structural
sourcer: ASIL, SIPRI
related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]]", "[[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]]"]
---
# Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck
Two independent intellectual traditions, international humanitarian law and AI alignment research, have converged on the same fundamental problem by different pathways. Legal scholars analyzing autonomous weapons argue that IHL requirements (proportionality, distinction, precaution) cannot be satisfied by AI systems because these judgments demand human value assessments that resist algorithmic specification. AI alignment researchers argue that specifying human values in code is intractable because our goals contain hidden complexity. Both communities identify the same structural impossibility: context-dependent human value judgments cannot be reliably encoded in autonomous systems. The legal community's struggle to define 'meaningful human control' (a spectrum running from 'human in the loop' to 'human in control') mirrors the alignment community's specification problem. The convergence is significant because it suggests the problem is not domain-specific but intrinsic to the nature of value judgments. The legal framework adds an enforcement dimension: if AI cannot satisfy IHL requirements, deployment may already be illegal under existing law, creating governance pressure without requiring new coordination.
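
Both files follow the same frontmatter schema. As a minimal sketch of how a downstream consumer might validate it: the field names are taken from the two files in this commit, but the required-field set and the validate_claim helper are assumptions, not part of the documented pipeline.

```python
# Hypothetical validator for the claim-file schema used in this commit.
# Requires PyYAML; the REQUIRED set is inferred from the files above.
import yaml

REQUIRED = {"type", "domain", "description", "confidence", "source",
            "created", "title", "agent", "scope", "sourcer", "related_claims"}

def validate_claim(path: str) -> list[str]:
    """Return a list of schema problems; an empty list means the file passes."""
    text = open(path, encoding="utf-8").read()
    if not text.startswith("---"):
        return ["missing opening frontmatter delimiter"]
    # Frontmatter sits between the first two '---' markers.
    _, front, _body = text.split("---", 2)
    meta = yaml.safe_load(front)
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - meta.keys())]
    if meta.get("type") != "claim":
        problems.append("type must be 'claim'")
    return problems
```

Run against either file in this commit, this would return an empty list.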
Two independent intellectual traditions—international humanitarian law and AI alignment research—have converged on the same fundamental problem through different pathways. Legal scholars analyzing autonomous weapons argue that IHL requirements (proportionality, distinction, precaution) cannot be satisfied by AI systems because these judgments require human value assessments that resist algorithmic specification. AI alignment researchers argue that specifying human values in code is intractable due to hidden complexity. Both communities identify the same structural impossibility: context-dependent human value judgments cannot be reliably encoded in autonomous systems. The legal community's 'meaningful human control' definition problem (ranging from 'human in the loop' to 'human in control') mirrors the alignment community's specification problem. This convergence is significant because it suggests the problem is not domain-specific but fundamental to the nature of value judgments. The legal framework adds an enforcement dimension: if AI cannot satisfy IHL requirements, deployment may already be illegal under existing law, creating governance pressure without requiring new coordination.