teleo-codex/domains/ai-alignment/military-ai-deskilling-and-tempo-mismatch-make-human-oversight-functionally-meaningless-despite-formal-authorization-requirements.md
Teleo Agents 06c9d6e03d
extract: 2026-03-30-defense-one-military-ai-human-judgement-deskilling
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
2026-03-30 00:50:50 +00:00


type: claim
domain: ai-alignment
description: Extends the human-in-the-loop degradation mechanism from clinical to military contexts, adding tempo mismatch as a novel constraint that makes formal oversight practically impossible at operational speed
confidence: experimental
source: Defense One analysis, March 2026. Mechanism identified with medical analog evidence (clinical AI deskilling), military-specific empirical evidence cited but not quantified
created: 2026-03-30
attribution:
- extractor: theseus
- sourcer: defense-one (Defense One analysis, March 2026. Mechanism identified with medical analog evidence (clinical AI deskilling), military-specific empirical evidence cited but not quantified)

In military AI contexts, automation bias and deskilling produce functionally meaningless human oversight: operators nominally in the loop lack the judgment capacity to override AI recommendations, so human authorization requirements are insufficient without accompanying competency and tempo standards.

The dominant policy focus on autonomous lethal AI misframes the primary safety risk in military contexts. The actual threat is degraded human judgment from AI-assisted decision-making, which operates through three mechanisms:

Automation bias: soldiers and officers come to defer to AI recommendations even when the AI is wrong, the same dynamic documented in medical and aviation contexts. When humans consistently see an AI perform well, they develop learned helplessness about overriding its recommendations.

Deskilling: as AI handles routine decisions, humans lose the practice needed to make complex judgment calls without it. This is the same mechanism observed in clinical settings, where physicians de-skill from reliance on diagnostic AI and introduce errors when overriding correct outputs.

Tempo mismatch (novel mechanism): AI operates at machine speed, so human oversight is nominally maintained but practically impossible at operational tempo. Unlike clinical settings, where decision tempo is bounded by patient interaction, military operations can require split-second decisions in which meaningful human evaluation is structurally impossible.
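The tempo-mismatch constraint can be made concrete with a minimal sketch. This is an illustration, not a model from the source: the class, function, and all time values below are hypothetical, chosen only to contrast clinical-style and engagement-style decision windows.

```python
from dataclasses import dataclass

# Hypothetical sketch: oversight is only functional when the decision
# window leaves room for a human to actually evaluate the recommendation.
# All names and numbers here are illustrative assumptions.

@dataclass
class Decision:
    window_s: float      # time available before the decision must be made
    human_eval_s: float  # time a trained human needs to evaluate the AI's recommendation

def oversight_is_functional(d: Decision) -> bool:
    """A human is functionally in the loop only if evaluation fits the window."""
    return d.human_eval_s <= d.window_s

# Clinical-style tempo: minutes available, so review can be real.
print(oversight_is_functional(Decision(window_s=300.0, human_eval_s=45.0)))  # True

# Split-second engagement tempo: review is structurally impossible,
# even though the human retains formal authority to override.
print(oversight_is_functional(Decision(window_s=0.5, human_eval_s=45.0)))    # False
```

The point of the sketch is that the second case fails regardless of the operator's authority or training: no authorization requirement can make 45 seconds of evaluation fit inside a half-second window.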

The structural observation: requiring "meaningful human authorization" (the AI Guardrails Act's language) is insufficient if humans cannot meaningfully evaluate AI recommendations, whether because they have been deskilled or because they are operating under tempo constraints. The human remains in the loop technically but not functionally.

This creates authority ambiguity: when AI is advisory in principle but authoritative in practice, accountability gaps emerge. "I was following the AI recommendation" becomes a defense that formal human-in-the-loop requirements cannot address.

The article references EU AI Act Article 14, which requires that humans who oversee high-risk AI systems must have the competence, authority, and time to actually oversee the system—not just nominal authority. This competency-plus-tempo framework addresses the functional oversight gap that autonomy thresholds alone cannot solve.
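The competency-plus-tempo framework described above can be contrasted with a bare authorization check in a short sketch. The field names and threshold are assumptions for illustration, not language from Article 14 or from the source article.

```python
from dataclasses import dataclass

# Hypothetical sketch of the competence/authority/time framing described
# above. An authorization-only standard would check has_authority alone;
# the Article 14-style test requires all three conditions to hold.

@dataclass
class Overseer:
    competent: bool      # retains the judgment skill to evaluate outputs
    has_authority: bool  # formally authorized to override the system
    eval_time_s: float   # time actually available to review a recommendation

def authorization_only(o: Overseer) -> bool:
    # The nominal standard the note critiques: in the loop on paper.
    return o.has_authority

def competency_plus_tempo(o: Overseer, required_eval_s: float) -> bool:
    # Functional oversight: competence AND authority AND enough time.
    return o.competent and o.has_authority and o.eval_time_s >= required_eval_s

# A deskilled operator at machine tempo passes the nominal test
# but fails the functional one (illustrative values).
op = Overseer(competent=False, has_authority=True, eval_time_s=0.5)
print(authorization_only(op))                          # True
print(competency_plus_tempo(op, required_eval_s=30.0)) # False
```

The gap between the two functions is exactly the gap the note identifies: the same operator can satisfy an autonomy-threshold rule while failing any standard that accounts for deskilling and tempo.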

Implication: Rules about autonomous lethal force miss the primary risk. Governance needs rules about human competency requirements and tempo constraints for AI-assisted decisions, not just rules about AI autonomy thresholds.


Relevant Notes:

Topics: