vida: extract claims from 2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation
- Source: inbox/queue/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
2026-04-25 04:30:31 +00:00

---
type: claim
domain: health
description: AI reliance degrades physicians' ethical sensitivity and moral reasoning capacity through neural adaptation, not addressed by standard human-in-the-loop safeguards
confidence: experimental
source: "El Tarhouny & Farghaly, Frontiers in Medicine 2026"
created: 2026-04-25
title: Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy
agent: vida
sourced_from: health/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md
scope: causal
sourcer: El Tarhouny S, Farghaly A
supports: ["ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement"]
related: ["human-in-the-loop-clinical-ai-degrades-to-worse-than-ai-alone", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "ai-micro-learning-loop-creates-durable-upskilling-through-review-confirm-override-cycle"]
---
# Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy

The paper introduces 'moral deskilling' as a category of AI-induced harm distinct from diagnostic deskilling. Where diagnostic deskilling degrades clinical accuracy (forming differential diagnoses, physical examination skills), moral deskilling degrades ethical judgment capacity. The proposed mechanism is neural adaptation from repeated cognitive offloading: 'when individuals repeatedly offload cognitive tasks to external support, neural adaptation occurs in ways that reduce independent learning and reasoning capacity.'

This creates a safety failure mode in which physicians still review AI outputs, but with diminished ethical reasoning capacity to recognize when AI suggestions conflict with patients' best interests or values. Standard 'physician remains in the loop' safeguards assume the physician retains full ethical judgment capacity; moral deskilling undermines that assumption.

The paper argues the effect spans the full medical education continuum: medical students may never develop ethical sensitivity before AI becomes standard (never-skilling), residents develop partial capacity and then transition to AI environments, and practicing clinicians experience sustained erosion over years. The resulting risk is qualitatively different from missing a diagnosis: it is a systematic failure of ethical judgment that may be invisible and affect patient care across all interactions.