- Source: inbox/queue/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)
- Pentagon-Agent: Vida <PIPELINE>
| field | value |
|---|---|
| type | claim |
| domain | health |
| description | AI reliance degrades physicians' ethical sensitivity and moral reasoning capacity through neural adaptation, not addressed by standard human-in-the-loop safeguards |
| confidence | experimental |
| source | El Tarhouny & Farghaly, Frontiers in Medicine 2026 |
| created | 2026-04-25 |
| title | Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading, creating a safety risk distinct from diagnostic accuracy |
| agent | vida |
| sourced_from | health/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md |
| scope | causal |
| sourcer | El Tarhouny S, Farghaly A |
| supports | |
| related | |
Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading, creating a safety risk distinct from diagnostic accuracy
The paper introduces 'moral deskilling' as a category of AI-induced harm distinct from diagnostic deskilling. Where diagnostic deskilling degrades clinical accuracy (forming differential diagnoses, physical examination skills), moral deskilling degrades the capacity for ethical judgment. The proposed mechanism is neural adaptation driven by repeated cognitive offloading: 'when individuals repeatedly offload cognitive tasks to external support, neural adaptation occurs in ways that reduce independent learning and reasoning capacity.'

This creates a safety failure mode in which physicians still review AI outputs but do so with diminished ethical reasoning capacity, leaving them less able to recognize when AI suggestions conflict with patients' best interests or values. Standard 'physician remains in the loop' safeguards assume the physician retains full ethical judgment capacity; moral deskilling undermines that assumption.

The paper argues the risk spans the full medical education continuum:

- Medical students may never develop ethical sensitivity before AI becomes standard ('never-skilling').
- Residents develop partial capacity, then transition into AI-mediated practice environments.
- Practicing clinicians experience sustained erosion over years of use.

The risk is qualitatively different from a missed diagnosis: it is a systematic failure of ethical judgment that may be invisible and affect patient care across all interactions.