vida: extract claims from 2026-04-15-clinical-ai-deskilling-2026-review-generational
- Source: inbox/queue/2026-04-15-clinical-ai-deskilling-2026-review-generational.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 5
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
Parent: fe1ab793ba · Commit: c3c25ae862
5 changed files with 56 additions and 14 deletions
@@ -10,18 +10,17 @@ agent: vida
sourced_from: health/2026-04-25-natali-2025-ai-induced-deskilling-springer-mixed-method-review.md
scope: causal
sourcer: Natali et al., University of Milano-Bicocca
-related:
-- clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
-- automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output
-- ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement
-- ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine
-- dopaminergic-reinforcement-of-ai-reliance-predicts-behavioral-entrenchment-beyond-simple-habit-formation
-supports:
-- Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy
-reweave_edges:
-- Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy|supports|2026-04-26
+related: ["clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "dopaminergic-reinforcement-of-ai-reliance-predicts-behavioral-entrenchment-beyond-simple-habit-formation", "clinical-ai-creates-moral-deskilling-through-ethical-judgment-erosion", "moral-deskilling-from-ai-erodes-ethical-judgment-through-repeated-cognitive-offloading", "clinical-ai-deskilling-is-generational-risk-not-current-phenomenon"]
+supports: ["Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy"]
+reweave_edges: ["Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy|supports|2026-04-26"]
---
# Clinical AI creates moral deskilling through ethical judgment erosion from routine AI acceptance leaving clinicians unprepared to recognize value conflicts
This review introduces 'moral deskilling' as a distinct form of AI-induced competency loss separate from cognitive deskilling. The mechanism: repeated acceptance of AI recommendations creates habituation that reduces ethical sensitivity and moral judgment capacity. Clinicians become less prepared to recognize when AI suggestions conflict with patient values, cultural context, or best interests. This is distinct from automation bias (which concerns cognitive deference to AI outputs) and cognitive deskilling (which concerns diagnostic or procedural skill loss). Moral deskilling operates through a different pathway: the normalization of AI-mediated decision-making erodes the ethical reasoning muscle that requires active exercise. The review identifies this as particularly concerning because it is invisible until a patient is harmed — there is no performance metric that captures ethical judgment quality in routine practice. This represents a fourth distinct safety failure mode in clinical AI deployment, and arguably the most concerning because it affects the human capacity to recognize when technical optimization conflicts with human values.
## Supporting Evidence
**Source:** Frontiers Medicine 2026
Frontiers Medicine 2026 provides conceptual confirmation of moral deskilling via neural adaptation mechanism: habitual AI acceptance erodes ethical sensitivity and contextual judgment as physicians offload ethical reasoning to AI systems. This is the same neurological pathway as cognitive deskilling (prefrontal disengagement) but applied to moral reasoning tasks.
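The `reweave_edges` entries in the frontmatter above pack a target claim, a relation, and a date into one pipe-delimited string (`target|supports|2026-04-26`). A minimal Python sketch of parsing that shape; `ReweaveEdge` and `parse_reweave_edge` are illustrative names, not pipeline code — the only assumption carried over from the frontmatter is the three-field pipe format:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ReweaveEdge:
    target: str    # title or slug of the claim the edge points at
    relation: str  # e.g. "supports"
    created: date  # date the edge was woven


def parse_reweave_edge(raw: str) -> ReweaveEdge:
    # Split on the last two pipes only, so a literal '|' inside the
    # target text cannot shift the relation and date fields.
    target, relation, created = raw.rsplit("|", 2)
    return ReweaveEdge(target, relation, date.fromisoformat(created))


edge = parse_reweave_edge(
    "Moral deskilling from AI erodes ethical judgment through repeated "
    "cognitive offloading creating a safety risk distinct from diagnostic "
    "accuracy|supports|2026-04-26"
)
# edge.relation == "supports"; edge.created == date(2026, 4, 26)
```

Splitting from the right matters here because edge targets are free-text claim titles, not slugs, and so cannot be assumed pipe-free.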
@@ -11,9 +11,23 @@ sourced_from: health/2026-04-25-arise-state-of-clinical-ai-2026-report.md
scope: structural
sourcer: ARISE Network (Stanford-Harvard)
supports: ["never-skilling-affects-trainees-while-deskilling-affects-experienced-physicians-creating-distinct-population-risks"]
-related: ["clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-affects-trainees-while-deskilling-affects-experienced-physicians-creating-distinct-population-risks", "ai-cervical-cytology-screening-creates-never-skilling-through-routine-case-reduction", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "never-skilling-distinct-from-deskilling-affects-trainees-not-experienced-physicians"]
+related: ["clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-affects-trainees-while-deskilling-affects-experienced-physicians-creating-distinct-population-risks", "ai-cervical-cytology-screening-creates-never-skilling-through-routine-case-reduction", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "never-skilling-distinct-from-deskilling-affects-trainees-not-experienced-physicians", "clinical-ai-deskilling-is-generational-risk-not-current-phenomenon", "clinical-ai-upskilling-requires-deliberate-educational-design-not-passive-exposure"]
---
# Clinical AI deskilling is a generational risk affecting future trainees rather than current practitioners because experienced clinicians retain pre-AI skill foundations while new trainees face never-skilling in AI-saturated environments
The ARISE 2026 report, synthesizing 2025 clinical AI research, documents a critical temporal distinction in deskilling risk. Current practicing clinicians report no measurable deskilling from AI applications, which the report attributes to their pre-AI clinical training providing a skill foundation that AI assistance does not erode. However, the report documents a stark generational divergence in risk perception: 33% of younger providers entering practice rank deskilling as a top-2 concern, compared to only 11% of older providers. This threefold difference reflects the structural reality that younger clinicians entering AI-integrated training environments face 'never-skilling' risk: they may never develop the clinical judgment skills that current practitioners acquired before AI assistance became ubiquitous. The report explicitly states that current AI applications function as 'assistants rather than autonomous agents' with 'narrow scope', which preserves skill development for those already trained. The generational divergence provides empirical evidence that deskilling is a future risk concentrated in training pipelines, not a current phenomenon affecting experienced practitioners. This temporal scoping matters because it shifts the intervention point from retraining current clinicians to redesigning medical education for AI-native environments.
## Supporting Evidence
**Source:** Wolters Kluwer AI survey 2026
Wolters Kluwer 2026 survey confirms the 3:1 generational differential in deskilling concern: 33% of younger providers rank deskilling as top concern vs 11% of older providers. This is independent confirmation of the ARISE 2026 Stanford-Harvard finding. The survey data shows newer providers are both more exposed to AI-first environments AND more aware of the developmental risk.
## Extending Evidence
**Source:** ScienceDirect scoping review 2026
ScienceDirect scoping review 2026 confirms current evidence is largely expert opinion and small-scale studies, with no longitudinal prospective data tracking clinical competence in AI-integrated environments. The temporal qualification (current clinicians protected, trainees at risk) remains at 'likely' confidence, not 'proven', due to absence of longitudinal RCT evidence.
@@ -0,0 +1,19 @@
---
type: claim
domain: health
description: "Operational protocol for resident training that addresses never-skilling without eliminating AI assistance by enforcing sequence: human reasoning generation first, then AI as second opinion"
confidence: experimental
source: PMC 2026 resident supervision study; Frontiers Medicine 2026
created: 2026-04-26
title: Clinical AI human-first reasoning prevents never-skilling through pedagogical sequencing where trainees generate differential diagnoses before AI consultation
agent: vida
sourced_from: health/2026-04-15-clinical-ai-deskilling-2026-review-generational.md
scope: functional
sourcer: PMC / Frontiers Medicine
supports: ["clinical-ai-upskilling-requires-deliberate-educational-design-not-passive-exposure"]
related: ["optional-use-ai-deployment-preserves-independent-clinical-judgment-preventing-automation-bias-pathway", "clinical-ai-upskilling-requires-deliberate-educational-design-not-passive-exposure", "never-skilling-affects-trainees-while-deskilling-affects-experienced-physicians-creating-distinct-population-risks", "ai-induced-upskilling-inhibition-prevents-skill-acquisition-in-trainees-through-routine-case-reduction", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "clinical-ai-deskilling-is-generational-risk-not-current-phenomenon"]
---
# Clinical AI human-first reasoning prevents never-skilling through pedagogical sequencing where trainees generate differential diagnoses before AI consultation
The resident supervision study (PMC 2026) identifies a specific pedagogical intervention to prevent never-skilling: residents must generate their own differential diagnosis before consulting AI. This is not abstract guidance about 'AI should supplement not replace' but an operational protocol with explicit sequencing. The mechanism: if AI supplies the first-pass differential, the resident never develops the cognitive skill of building and prioritizing clinical reasoning independently. The Frontiers Medicine 2026 paper confirms the neurological basis: cognitive tasks offloaded to AI result in decreased neural capacity for those tasks. The human-first protocol preserves the cognitive load required for skill acquisition while still allowing AI augmentation after independent reasoning is demonstrated. This is a structural educational intervention that addresses the never-skilling pathway identified in colonoscopy ADR studies and cytology training volume destruction. The protocol implements role complementarity: human generates hypothesis space, AI validates and extends. Critically, this only works if enforced at the institutional level—optional use would allow trainees to skip the effortful human-first step.
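The sequencing constraint described above — the resident must commit an independent differential before any AI output is visible — can be made concrete as a small gating state machine. A hypothetical sketch only; the class and method names are illustrative and do not come from the PMC study:

```python
class HumanFirstCase:
    """Enforces the human-first protocol: the resident must record
    their own differential before the AI consult is unlocked."""

    def __init__(self, case_id: str):
        self.case_id = case_id
        self.resident_differential: list[str] | None = None
        self.ai_differential: list[str] | None = None

    def submit_differential(self, diagnoses: list[str]) -> None:
        # The effortful human-first step: an empty differential
        # would defeat the pedagogical purpose, so reject it.
        if not diagnoses:
            raise ValueError("differential must be non-empty")
        self.resident_differential = diagnoses

    def consult_ai(self) -> list[str]:
        # Institutional enforcement point: AI output stays locked
        # until independent reasoning has been recorded.
        if self.resident_differential is None:
            raise PermissionError(
                "AI consult locked: submit your own differential first"
            )
        self.ai_differential = self._run_ai_model()
        return self.ai_differential

    def _run_ai_model(self) -> list[str]:
        # Placeholder for the actual decision-support call.
        return ["ai-suggested-diagnosis"]
```

The design choice worth noting is that the gate lives in the system, not in guidance to the trainee: as the claim argues, optional use would let residents skip the effortful step, so the lock must be structural.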
@@ -11,9 +11,16 @@ sourced_from: health/2026-04-22-oettl-2026-ai-deskilling-to-upskilling-orthopedi
scope: structural
sourcer: Oettl et al., Journal of Experimental Orthopaedics
supports: ["cytology-lab-consolidation-creates-never-skilling-pathway-through-80-percent-training-volume-destruction"]
-related: ["clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "cytology-lab-consolidation-creates-never-skilling-pathway-through-80-percent-training-volume-destruction", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine"]
+related: ["clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "cytology-lab-consolidation-creates-never-skilling-pathway-through-80-percent-training-volume-destruction", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "never-skilling-affects-trainees-while-deskilling-affects-experienced-physicians-creating-distinct-population-risks", "never-skilling-distinct-from-deskilling-affects-trainees-not-experienced-physicians", "clinical-ai-deskilling-is-generational-risk-not-current-phenomenon"]
---
# Never-skilling affects trainees while deskilling affects experienced physicians creating distinct population risks with different intervention requirements
Oettl et al. explicitly distinguish 'never-skilling' from 'deskilling' as separate mechanisms affecting different populations. Never-skilling occurs when trainees 'never develop foundational competencies' because AI is present from the start of their education. Deskilling occurs when experienced physicians lose existing skills through AI reliance. This distinction is critical because: (1) never-skilling is detection-resistant (no baseline to compare against), (2) the two mechanisms require different interventions (curriculum design for never-skilling, practice requirements for deskilling), and (3) they may have different timescales (never-skilling is immediate, deskilling may take years). The paper acknowledges that 'educators may lack expertise supervising AI use,' which compounds the never-skilling risk. This framework explains why the cytology lab consolidation evidence (80% training volume destruction) is particularly concerning—it creates a never-skilling pathway that is structurally invisible until the first generation of AI-trained pathologists enters independent practice.
## Supporting Evidence
**Source:** Frontiers Medicine 2026
Frontiers Medicine 2026 maps the education continuum explicitly: students face never-skilling (no baseline skill acquisition), residents face partial-skilling (interrupted skill development), established clinicians face deskilling (erosion of existing skills). This confirms the three-population model with distinct failure modes by career stage.
@@ -7,10 +7,13 @@ date: 2026-04-15
domain: health
secondary_domains: [ai-alignment]
format: literature-review
-status: unprocessed
+status: processed
+processed_by: vida
+processed_date: 2026-04-26
priority: high
tags: [clinical-ai, deskilling, never-skilling, medical-training, residency, generational-risk, automation-bias, AI-safety]
flagged_for_theseus: ["moral deskilling as alignment failure mode — AI shaping human ethical judgment through habituation at scale"]
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content