extract: 2026-03-30-defense-one-military-ai-human-judgement-deskilling
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
This commit is contained in:
parent 2575d7aaba
commit 06c9d6e03d
4 changed files with 66 additions and 1 deletion

@ -32,6 +32,12 @@ Agents of Chaos documents specific cases where agents executed destructive syste

---

### Additional Evidence (extend)

*Source: [[2026-03-30-defense-one-military-ai-human-judgement-deskilling]] | Added: 2026-03-30*

Military AI creates the same accountability gap as coding agents: authority without accountability. When AI is advisory but authoritative in practice, 'I was following the AI recommendation' becomes a defense that formal human-in-the-loop requirements cannot address. The gap between nominal authority and functional capacity to exercise that authority undermines accountability structures.

Relevant Notes:
- [[economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate]] — market pressure to remove the human from the loop
- [[formal verification of AI-generated proofs provides scalable oversight that human review cannot match because machine-checked correctness scales with AI capability while human verification degrades]] — automated verification as alternative to human accountability

@ -21,6 +21,12 @@ This creates a structural inversion: the market preserves human-in-the-loop exac

---

### Additional Evidence (extend)

*Source: [[2026-03-30-defense-one-military-ai-human-judgement-deskilling]] | Added: 2026-03-30*

Military tempo pressure is the non-economic analog to market forces pushing humans out of verification loops. Even when accountability formally requires human oversight, operational tempo can make meaningful oversight impossible, producing the same functional outcome (humans removed from decision loops) through a different mechanism (speed requirements rather than cost pressure).

Relevant Notes:
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — human-in-the-loop is itself an alignment tax that markets eliminate through the same competitive dynamic
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] — removing human oversight is the micro-level version of this macro-level dynamic

@ -0,0 +1,42 @@

---
type: claim
domain: ai-alignment
description: Extends the human-in-the-loop degradation mechanism from clinical to military contexts, adding tempo mismatch as a novel constraint that makes formal oversight practically impossible at operational speed
confidence: experimental
source: Defense One analysis, March 2026. Mechanism identified with medical analog evidence (clinical AI deskilling), military-specific empirical evidence cited but not quantified
created: 2026-03-30
attribution:
  extractor:
    - handle: "theseus"
  sourcer:
    - handle: "defense-one"
context: "Defense One analysis, March 2026. Mechanism identified with medical analog evidence (clinical AI deskilling), military-specific empirical evidence cited but not quantified"
---

# In military AI contexts, automation bias and deskilling produce functionally meaningless human oversight where operators nominally in the loop lack the judgment capacity to override AI recommendations, making human authorization requirements insufficient without competency and tempo standards
The dominant policy focus on autonomous lethal AI misframes the primary safety risk in military contexts. The actual threat is the degradation of human judgment in AI-assisted decision-making, which operates through three mechanisms:
**Automation bias**: Soldiers and officers are trained to defer to AI recommendations even when the AI is wrong, the same dynamic documented in medical and aviation contexts. When humans consistently see an AI perform well, they develop learned helplessness about overriding its recommendations.
**Deskilling**: As AI handles routine decisions, humans lose the practice needed to make complex judgment calls without it. This is the same mechanism observed in clinical settings, where physicians de-skill from reliance on diagnostic AI and introduce errors when overriding correct outputs.
**Tempo mismatch** (novel mechanism): AI operates at machine speed; human oversight is nominally maintained but practically impossible at operational tempo. Unlike clinical settings where decision tempo is bounded by patient interaction, military operations can require split-second decisions where meaningful human evaluation is structurally impossible.
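
To make the tempo argument concrete, here is a minimal sketch; every number is an assumption for illustration (the article gives no figures). If recommendations arrive faster than a human can meaningfully evaluate them, review coverage collapses regardless of formal authority:

```python
# Toy model of tempo mismatch. All parameters are illustrative
# assumptions, not figures from the Defense One article.

def meaningful_review_fraction(decision_interval_s: float,
                               review_time_s: float) -> float:
    """Fraction of AI recommendations one overseer can meaningfully
    evaluate when a new recommendation arrives every
    `decision_interval_s` seconds and evaluation takes `review_time_s`."""
    return min(1.0, decision_interval_s / review_time_s)

# Clinical-style tempo: decisions bounded by patient interaction.
print(meaningful_review_fraction(decision_interval_s=600, review_time_s=120))  # 1.0

# Machine-speed targeting loop: a recommendation every 2 seconds,
# ~30 seconds needed to evaluate one in context.
print(meaningful_review_fraction(decision_interval_s=2, review_time_s=30))  # ~0.07
```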
The structural observation: Requiring "meaningful human authorization" (AI Guardrails Act language) is insufficient if humans can't meaningfully evaluate AI recommendations because they've been deskilled or are operating under tempo constraints. The human remains in the loop technically but not functionally.
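
The deskilling mechanism can be put in the same toy terms (again, every probability is an assumption for illustration, echoing the linked worse-than-AI-alone clinical claim): once the overseer's override judgment degrades, their technical presence in the loop can make the combined system less accurate than the AI alone.

```python
# Toy expected-accuracy model of a deskilled human in the loop.
# All probabilities are assumed for illustration.

def system_accuracy(ai_accuracy: float,
                    override_rate: float,
                    human_accuracy_when_overriding: float) -> float:
    """Expected accuracy when the human accepts the AI at rate
    (1 - override_rate) and substitutes their own judgment otherwise."""
    return ((1 - override_rate) * ai_accuracy
            + override_rate * human_accuracy_when_overriding)

print(system_accuracy(0.90, 0.00, 0.55))  # AI alone: 0.90
print(system_accuracy(0.90, 0.20, 0.55))  # deskilled overseer in the loop: 0.83
```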
This creates authority ambiguity: When AI is advisory but authoritative in practice, accountability gaps emerge. "I was following the AI recommendation" becomes a defense that formal human-in-the-loop requirements cannot address.
The article references EU AI Act Article 14, which requires that humans who oversee high-risk AI systems must have the competence, authority, and **time** to actually oversee the system—not just nominal authority. This competency-plus-tempo framework addresses the functional oversight gap that autonomy thresholds alone cannot solve.
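
A sketch of what that framework implies in checklist form; the field names and the timing test are my own illustration, not statutory language. Functional oversight requires all three conditions to hold at once:

```python
# Hypothetical distillation of Article 14's competence / authority / time
# framing into a predicate. Fields and threshold are illustrative,
# not statutory text.
from dataclasses import dataclass

@dataclass
class OversightArrangement:
    operator_trained_on_failure_modes: bool  # competence
    operator_can_halt_or_override: bool      # authority
    decision_interval_s: float               # tempo of AI recommendations
    evaluation_time_s: float                 # time a real evaluation takes

def oversight_is_functional(o: OversightArrangement) -> bool:
    """Nominal human-in-the-loop counts only if all three hold."""
    has_time = o.decision_interval_s >= o.evaluation_time_s
    return (o.operator_trained_on_failure_modes
            and o.operator_can_halt_or_override
            and has_time)
```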
Implication: Rules about autonomous lethal force miss the primary risk. Governance needs rules about human competency requirements and tempo constraints for AI-assisted decisions, not just rules about AI autonomy thresholds.
---

Relevant Notes:
- [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]
- [[economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate]]
- [[coding agents cannot take accountability for mistakes which means humans must retain decision authority over security and critical systems regardless of agent capability]]

Topics:
- [[_map]]

@ -7,9 +7,14 @@ date: 2026-03-20

domain: ai-alignment
secondary_domains: []
format: article
status: unprocessed
status: processed
priority: medium
tags: [military-AI, automation-bias, deskilling, human-judgement, decision-making, human-in-the-loop, autonomy, alignment-oversight]
processed_by: theseus
processed_date: 2026-03-30
claims_extracted: ["military-ai-deskilling-and-tempo-mismatch-make-human-oversight-functionally-meaningless-despite-formal-authorization-requirements.md"]
enrichments_applied: ["economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate.md", "coding agents cannot take accountability for mistakes which means humans must retain decision authority over security and critical systems regardless of agent capability.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
---

## Content

@ -59,3 +64,9 @@ Requiring "meaningful human authorization" (AI Guardrails Act language) is insuf

PRIMARY CONNECTION: [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]
WHY ARCHIVED: Extends deskilling/automation bias from medical to military context; introduces the "tempo mismatch" mechanism making formal human oversight functionally empty; references EU AI Act Article 14 competency requirements as governance solution
EXTRACTION HINT: The tempo mismatch mechanism is novel — it's not in the KB. Extract as extension of human-in-the-loop degradation claim. Confidence experimental (mechanism is structural, empirical evidence from medical analog, no direct military RCT).

## Key Facts

- EU AI Act Article 14 requires that humans who oversee high-risk AI systems must have the competence, authority, and time to actually oversee the system
- AI Guardrails Act uses 'meaningful human authorization' language for military AI oversight
- Defense One published this analysis March 20, 2026, during the Anthropic-Pentagon dispute coverage period