teleo-codex/domains/ai-alignment/military-ai-deskilling-and-tempo-mismatch-make-human-oversight-functionally-meaningless-despite-formal-authorization-requirements.md

---
type: claim
domain: ai-alignment
description: Extends the human-in-the-loop degradation mechanism from clinical to military contexts, adding tempo mismatch as a novel constraint that makes formal oversight practically impossible at operational speed
confidence: experimental
source: Defense One analysis, March 2026. Mechanism identified with medical analog evidence (clinical AI deskilling), military-specific empirical evidence cited but not quantified
created: 2026-03-30
attribution:
  extractor:
    handle: theseus
  sourcer:
    handle: defense-one
    context: Defense One analysis, March 2026. Mechanism identified with medical analog evidence (clinical AI deskilling), military-specific empirical evidence cited but not quantified
supports:
  - approval fatigue drives agent architecture toward structural safety because humans cannot meaningfully evaluate 100 permission requests per hour
reweave_edges:
  - approval fatigue drives agent architecture toward structural safety because humans cannot meaningfully evaluate 100 permission requests per hour|supports|2026-04-03
sourced_from: inbox/archive/health/2026-04-13-frontiers-medicine-2026-deskilling-neurological-mechanism.md
---

In military AI contexts, automation bias and deskilling produce functionally meaningless human oversight: operators who are nominally in the loop lack the judgment capacity to override AI recommendations, so human authorization requirements are insufficient without accompanying competency and tempo standards.

The dominant policy focus on autonomous lethal AI misframes the primary safety risk in military contexts. The actual threat is the degradation of human judgment in AI-assisted decision-making, which operates through three mechanisms:

Automation bias: Soldiers and officers are trained to defer to AI recommendations even when the AI is wrong, the same dynamic documented in medical and aviation contexts. When humans consistently see an AI perform well, they develop learned helplessness about overriding its recommendations.

Deskilling: As AI handles routine decisions, humans lose the practice needed to make complex judgment calls without it. This is the same mechanism observed in clinical settings, where physicians de-skill through reliance on diagnostic AI and introduce errors when overriding correct outputs.

Tempo mismatch (novel mechanism): AI operates at machine speed; human oversight is nominally maintained but practically impossible at operational tempo. Unlike clinical settings where decision tempo is bounded by patient interaction, military operations can require split-second decisions where meaningful human evaluation is structurally impossible.

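A minimal arithmetic sketch of the tempo mismatch, in Python. The numbers are illustrative assumptions, not measurements from the Defense One analysis: the 100-recommendations-per-hour rate echoes the related approval-fatigue claim in the frontmatter, and the 90-second review time is hypothetical. The point is structural: once the AI's decision rate exceeds the human's meaningful-review rate, the operator either delays the operation or rubber-stamps.

```python
# Toy model of the tempo-mismatch mechanism. All numbers are illustrative
# assumptions; none come from the cited article.

recommendations_per_hour = 100        # assumed AI output rate at operational tempo
seconds_per_meaningful_review = 90    # assumed time for one genuine human evaluation

review_capacity_per_hour = 3600 / seconds_per_meaningful_review  # 40 reviews/hour
unreviewed_per_hour = recommendations_per_hour - review_capacity_per_hour

print(f"Meaningful review capacity: {review_capacity_per_hour:.0f}/hour")
print(f"Recommendations that can only be rubber-stamped or delayed: "
      f"{unreviewed_per_hour:.0f}/hour")
# Whenever recommendations_per_hour exceeds review_capacity_per_hour, the human
# is in the loop formally but cannot be in it functionally.
```
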
The structural observation: Requiring "meaningful human authorization" (AI Guardrails Act language) is insufficient if humans can't meaningfully evaluate AI recommendations because they've been deskilled or are operating under tempo constraints. The human remains in the loop technically but not functionally.

This creates authority ambiguity: when AI is advisory in form but authoritative in practice, accountability gaps emerge. "I was following the AI recommendation" becomes a defense that formal human-in-the-loop requirements cannot address.

The article references EU AI Act Article 14, which requires that humans overseeing high-risk AI systems have the competence, authority, and time to actually oversee the system, not merely nominal authority. This competency-plus-tempo framework addresses the functional oversight gap that autonomy thresholds alone cannot solve.

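A hedged sketch of the difference between an autonomy-threshold test and an Article 14-style competency-plus-tempo test, in the same Python used above. The dataclass, function names, and numbers are hypothetical illustrations, not drawn from the EU AI Act text, the AI Guardrails Act, or any real compliance tool.

```python
from dataclasses import dataclass

@dataclass
class OversightContext:
    operator_competent: bool            # retains unaided judgment (counters deskilling)
    operator_has_authority: bool        # can actually override, organizationally
    seconds_available: float            # decision window at operational tempo
    seconds_needed_to_evaluate: float   # time for a meaningful human evaluation

def autonomy_threshold_check(ai_fully_autonomous: bool) -> bool:
    # The framing the note argues is insufficient: only asks whether a human signs off.
    return not ai_fully_autonomous

def competency_plus_tempo_check(ctx: OversightContext) -> bool:
    # Functional oversight requires competence, authority, and time together.
    return (ctx.operator_competent
            and ctx.operator_has_authority
            and ctx.seconds_available >= ctx.seconds_needed_to_evaluate)

# A human is formally in the loop, but the decision window is too short.
ctx = OversightContext(operator_competent=True, operator_has_authority=True,
                       seconds_available=2.0, seconds_needed_to_evaluate=30.0)
print(autonomy_threshold_check(ai_fully_autonomous=False))  # True: passes the weaker test
print(competency_plus_tempo_check(ctx))                     # False: fails the functional test
```
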
Implication: Rules about autonomous lethal force miss the primary risk. Governance needs rules about human competency requirements and tempo constraints for AI-assisted decisions, not just rules about AI autonomy thresholds.


Relevant Notes:

Topics: