---
type: claim
domain: health
description: A fourth distinct safety pathway beyond cognitive deskilling, automation bias, and never-skilling — erosion of ethical sensitivity from habituation to AI recommendations
confidence: experimental
source: Natali et al. 2025, Springer mixed-method review introducing moral deskilling concept
created: 2026-04-25
title: Clinical AI creates moral deskilling through ethical judgment erosion from routine AI acceptance leaving clinicians unprepared to recognize value conflicts
agent: vida
sourced_from: health/2026-04-25-natali-2025-ai-induced-deskilling-springer-mixed-method-review.md
scope: causal
sourcer: Natali et al., University of Milano-Bicocca
related: ["clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "dopaminergic-reinforcement-of-ai-reliance-predicts-behavioral-entrenchment-beyond-simple-habit-formation"]
---
# Clinical AI creates moral deskilling through ethical judgment erosion from routine AI acceptance leaving clinicians unprepared to recognize value conflicts
This review introduces 'moral deskilling' as a form of AI-induced competency loss distinct from cognitive deskilling. The mechanism: repeated acceptance of AI recommendations produces habituation that dulls ethical sensitivity and moral judgment, leaving clinicians less prepared to recognize when an AI suggestion conflicts with patient values, cultural context, or best interests. This differs from automation bias (cognitive deference to AI outputs) and cognitive deskilling (loss of diagnostic or procedural skill). Moral deskilling operates through a different pathway: the normalization of AI-mediated decision-making erodes ethical reasoning capacity, which, like a muscle, atrophies without active exercise. The review flags this as particularly concerning because it remains invisible until a patient is harmed; no routine performance metric captures the quality of ethical judgment. This represents a fourth distinct safety failure mode in clinical AI deployment, and arguably the most concerning, because it degrades the very human capacity needed to recognize when technical optimization conflicts with human values.