- Source: inbox/queue/2026-04-29-smallwarsjournal-selective-virtue-anthropic-operation-epic-fury.md
- Domain: grand-strategy
- Claims: 2, Entities: 1
- Enrichments: 4
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)
- Pentagon-Agent: Leo <PIPELINE>
| field | value |
|---|---|
| type | claim |
| domain | grand-strategy |
| description | When AI identifies targets faster than humans can meaningfully review them, 'human-in-the-loop' becomes a procedural formality rather than substantive control |
| confidence | experimental |
| source | Small Wars Journal analysis of Operation Epic Fury deployment (single source, requires primary DoD confirmation) |
| created | 2026-05-03 |
| title | AI-assisted targeting at operational tempo exceeding human review capacity converts nominal oversight into governance theater |
| agent | leo |
| sourced_from | grand-strategy/2026-04-29-smallwarsjournal-selective-virtue-anthropic-operation-epic-fury.md |
| scope | structural |
| sourcer | Small Wars Journal |
| supports | |
| challenges | |
| related | |
|
|
|
AI-assisted targeting at operational tempo exceeding human review capacity converts nominal oversight into governance theater
Operation Epic Fury reportedly deployed Claude to assist in identifying 1,700 targets struck within 72 hours during US operations against Iran. At this tempo (roughly 24 targets per hour, or about 2.5 minutes per target if conducted continuously), meaningful human review of AI-generated targeting recommendations becomes operationally implausible.

The SWJ analysis argues that Anthropic's distinction between 'targeting support with human oversight' and 'autonomous targeting' collapses at scale: when operational tempo exceeds human cognitive capacity for substantive review, the human 'in the loop' becomes a procedural checkbox rather than a meaningful control mechanism. This is a form-substance divergence: the governance architecture (a human oversight requirement) exists but cannot function as designed under operational constraints.

The mechanism is tempo-driven cognitive saturation: as AI recommendation velocity increases, human review necessarily shifts from substantive evaluation to procedural validation. This is distinct from questions of technical capability. The AI works as designed, and humans are present in the decision chain, but the operational architecture makes genuine oversight structurally impossible.
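The tempo arithmetic behind the claim can be checked with a short calculation. Note that `MIN_REVIEW_MINUTES` below is an illustrative assumption introduced here for the sketch, not a figure from the SWJ analysis; only the 1,700-target and 72-hour numbers come from the source.

```python
# Sanity-check the tempo figures: 1,700 targets struck in 72 hours.
targets = 1700
window_hours = 72

targets_per_hour = targets / window_hours          # ~23.6 targets/hour
minutes_per_target = window_hours * 60 / targets   # ~2.54 minutes/target

# Hypothetical threshold: minutes of substantive review a human
# analyst would need per target. Assumed for illustration only.
MIN_REVIEW_MINUTES = 10

# Oversight is "saturated" when available time per target falls
# below the time substantive review would require.
saturated = minutes_per_target < MIN_REVIEW_MINUTES

print(f"{targets_per_hour:.1f} targets/hour")
print(f"{minutes_per_target:.2f} minutes available per target")
print(f"oversight saturated: {saturated}")
```

Under any plausible value for the review threshold, the available 2.5 minutes per target sits well below it, which is the quantitative core of the tempo-saturation argument.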