| claim | ai-alignment | Legal scholars argue that the value judgments required by International Humanitarian Law (proportionality, distinction, precaution) cannot be reduced to computable functions, creating a categorical prohibition argument | experimental | ASIL Insights Vol. 29 (2026), SIPRI multilateral policy report (2025) | 2026-04-04 | Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text | theseus | structural | ASIL, SIPRI |

| Legal scholars and AI alignment researchers independently converged on the same core problem | AI cannot reliably implement human value judgments, as evidenced by IHL proportionality requirements and alignment specification challenges, both of which identify irreducible human judgment as the bottleneck | supports | 2026-04-06 |
|
|