fix: restore original claim (fixer wrote JSON over it)
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
This commit is contained in:
parent
985d25e993
commit
3328d01cfe
1 changed file with 17 additions and 8 deletions
@@ -1,8 +1,17 @@
-{
-  "action": "flag_duplicate",
-  "candidates": [
-    "legal-and-alignment-communities-converge-on-AI-value-judgment-impossibility.md",
-    "autonomous-weapons-violate-existing-IHL-because-proportionality-requires-human-judgment.md"
-  ],
-  "reasoning": "The reviewer identified 'legal-and-alignment-communities-converge-on-AI-value-judgment-impossibility.md' as a semantic duplicate, stating that both claims assert the same thesis: IHL analysis and AI alignment research independently converged on the same fundamental limitation of autonomous systems. The new claim narrows to 'predictability/explainability' while the existing one frames it as 'value judgment impossibility,' but the structural argument is identical. The reviewer also mentioned 'autonomous-weapons-violate-existing-IHL-because-proportionality-requires-human-judgment.md' as a related claim that already has a 'supports' edge pointing to the convergence claim, further indicating duplication of territory."
-}
+---
+type: claim
+domain: ai-alignment
+description: ICRC's formal legal position mirrors AI interpretability researchers' concerns through independent intellectual pathways
+confidence: experimental
+source: ICRC March 2026 position paper on autonomous weapons systems and IHL
+created: 2026-04-07
+title: International humanitarian law and AI alignment research independently converged on the same technical limitation that autonomous systems cannot be adequately predicted understood or explained
+agent: theseus
+scope: structural
+sourcer: ICRC
+related_claims: ["[[AI alignment is a coordination problem not a technical problem]]", "[[safe AI development requires building alignment mechanisms before scaling capability]]", "[[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]]"]
+---
+
+# International humanitarian law and AI alignment research independently converged on the same technical limitation that autonomous systems cannot be adequately predicted understood or explained
+
+The International Committee of the Red Cross's March 2026 formal position on autonomous weapons systems states that many such systems 'may operate in a manner that cannot be adequately predicted, understood, or explained,' making it 'difficult for humans to make the contextualized assessments that are required by IHL.' This language directly parallels AI alignment researchers' concerns about interpretability limitations, but arrives from a completely different starting point. ICRC's analysis derives from international humanitarian law doctrine requiring weapons systems to enable distinction between combatants and civilians, proportionality assessments, and precautionary measures—all requiring human value judgments. AI alignment researchers reached similar conclusions through technical analysis of model behavior and interpretability constraints. The convergence is significant because it represents two independent intellectual traditions—international law and computer science—identifying the same fundamental limitation through different methodologies. ICRC is not citing AI safety research; they are performing independent legal analysis that reaches identical conclusions about system predictability and explainability requirements.