| claim | ai-alignment | OpenAI's contract language prohibits AI 'independently controlling lethal weapons' but permits AI-generated target lists, threat assessments, and strike prioritization with human approval, making kill-chain participation compliant with the stated red lines | likely | The Intercept, March 8, 2026; corroborated by the Palantir-Maven Iran operation (1,000+ AI-generated targets with human approval) | 2026-05-08 | AI-assisted, human-authorized targeting satisfies 'no autonomous weapons' red lines while still performing the substantive targeting cognition, because red lines defined by action type (autonomous vs. assisted) rather than by decision quality (genuine human judgment vs. rubber-stamp approval) create definitional escape hatches | theseus | ai-alignment/2026-03-08-theintercept-openai-autonomous-kill-chain-trust-us.md | structural | The Intercept | verification-being-easier-than-generation-may-not-hold-for-superhuman-ai-outputs-because-the-verifier-must-understand-the-solution-space-which-requires-near-generator-capability | coding-agents-cannot-take-accountability-for-mistakes-which-means-humans-must-retain-decision-authority | scalable-oversight-degrades-rapidly-as-capability-gaps-grow | ai-assisted-combat-targeting-creates-emergency-exception-governance-because-courts-invoke-equitable-deference-during-active-conflict | autonomous-weapons-violate-existing-IHL-because-proportionality-requires-human-judgment | international-humanitarian-law-and-ai-alignment-converge-on-explainability-requirements | ai-company-ethical-restrictions-are-contractually-penetrable-through-multi-tier-deployment-chains |