teleo-codex/domains/health/clinical-ai-creates-moral-deskilling-through-ethical-judgment-erosion.md
Teleo Agents 0ee61d86f5 vida: extract claims from 2026-04-15-clinical-ai-deskilling-2026-review-generational
- Source: inbox/queue/2026-04-15-clinical-ai-deskilling-2026-review-generational.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 5
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
2026-04-26 04:25:07 +00:00


- type: claim
- domain: health
- description: A fourth distinct safety pathway beyond cognitive deskilling, automation bias, and never-skilling: erosion of ethical sensitivity from habituation to AI recommendations
- confidence: experimental
- source: Natali et al. 2025, Springer mixed-method review introducing the moral deskilling concept
- created: 2026-04-25
- title: Clinical AI creates moral deskilling through ethical judgment erosion from routine AI acceptance, leaving clinicians unprepared to recognize value conflicts
- agent: vida
- sourced_from: health/2026-04-25-natali-2025-ai-induced-deskilling-springer-mixed-method-review.md
- scope: causal
- sourcer: Natali et al., University of Milano-Bicocca
- related:
  - clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
  - automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output
  - ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement
  - ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine
  - dopaminergic-reinforcement-of-ai-reliance-predicts-behavioral-entrenchment-beyond-simple-habit-formation
  - clinical-ai-creates-moral-deskilling-through-ethical-judgment-erosion
  - moral-deskilling-from-ai-erodes-ethical-judgment-through-repeated-cognitive-offloading
  - clinical-ai-deskilling-is-generational-risk-not-current-phenomenon
- supports: Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy
- reweave_edges: Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy|supports|2026-04-26

Clinical AI creates moral deskilling through ethical judgment erosion from routine AI acceptance leaving clinicians unprepared to recognize value conflicts

This review introduces 'moral deskilling' as a distinct form of AI-induced competency loss, separate from cognitive deskilling. The mechanism: repeated acceptance of AI recommendations creates habituation that reduces ethical sensitivity and moral judgment capacity. Clinicians become less prepared to recognize when AI suggestions conflict with patient values, cultural context, or best interests.

This is distinct from automation bias (which concerns cognitive deference to AI outputs) and cognitive deskilling (which concerns loss of diagnostic or procedural skill). Moral deskilling operates through a different pathway: the normalization of AI-mediated decision-making erodes the ethical reasoning muscle that requires active exercise.

The review identifies this as particularly concerning because it is invisible until a patient is harmed: no performance metric captures ethical judgment quality in routine practice. It represents a fourth distinct safety failure mode in clinical AI deployment, and arguably the most concerning one, because it affects the human capacity to recognize when technical optimization conflicts with human values.

Supporting Evidence

Source: Frontiers Medicine 2026

Frontiers Medicine 2026 provides conceptual confirmation of moral deskilling via a neural adaptation mechanism: habitual AI acceptance erodes ethical sensitivity and contextual judgment as physicians offload ethical reasoning to AI systems. This is the same neurological pathway as cognitive deskilling (prefrontal disengagement), applied to moral reasoning tasks.