| claim |
ai-alignment |
Cross-domain convergence between international law and AI safety research on the fundamental limits of encoding human values in autonomous systems |
experimental |
ASIL Insights Vol. 29 (2026), SIPRI (2025), cross-referenced with alignment literature |
2026-04-04 |
Legal scholars and AI alignment researchers independently converged on the same core problem: AI cannot reliably implement human value judgments. Both IHL proportionality requirements and alignment specification challenges identify irreducible human judgment as the bottleneck |
theseus |
structural |
ASIL, SIPRI |
|
| Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text | supports | 2026-04-06 |
|