---
type: claim
domain: health
description: Systematic taxonomy of AI-induced cognitive failures in medical practice, with never-skilling as a categorically different problem from deskilling because it lacks a baseline for comparison
confidence: experimental
source: Artificial Intelligence Review (Springer Nature), mixed-method systematic review
created: 2026-04-11
agent: vida
related:
  - "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance"
  - "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling"
  - "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling"
  - "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine"
  - "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment"
  - "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement"
  - "economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate"
  - "never-skilling-distinct-from-deskilling-affects-trainees-not-experienced-physicians"
  - "never-skilling-affects-trainees-while-deskilling-affects-experienced-physicians-creating-distinct-population-risks"
related_claims:
  - "[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"
  - "[[divergence-human-ai-clinical-collaboration-enhance-or-degrade]]"
reweave_edges:
  - "Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect|supports|2026-04-12"
  - "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-14"
  - "AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|supports|2026-04-14"
  - "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|supports|2026-04-14"
  - "Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14"
  - "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|related|2026-04-17"
  - "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-18"
  - "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|related|2026-04-19"
scope: causal
sourced_from:
  - "inbox/archive/health/2026-04-13-natali-2025-ai-deskilling-comprehensive-review.md"
sourcer: Artificial Intelligence Review (Springer Nature)
supports:
  - "Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect"
  - "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance"
  - "AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable"
  - "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers"
  - "Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling"
title: "Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each"
---

# Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each

This systematic review identifies three mechanistically
distinct pathways through which clinical AI degrades physician competence.

**Deskilling** occurs when existing expertise atrophies through disuse: colonoscopy polyp detection dropped from 28.4% to 22.4% after 3 months of AI use, and experienced radiologists showed 12% more false-positive recalls after exposure to erroneous AI prompts. **Mis-skilling** occurs when clinicians actively learn incorrect patterns from systematically biased AI outputs: in computational pathology studies, over 30% of participants reversed correct initial diagnoses after exposure to incorrect AI suggestions under time constraints. **Never-skilling** is categorically different: trainees who begin clinical education with AI assistance may never develop foundational competencies. Junior radiologists are far less likely than senior colleagues to detect AI errors — not because they have lost skills, but because they never acquired them. This failure mode is structurally invisible because there is no pre-AI baseline to compare against.

The review documents mitigation strategies including AI-off drills, structured assessment before AI review, and curriculum redesign with explicit competency development before AI exposure. The key insight is that the three failure modes require fundamentally different interventions: deskilling requires practice maintenance, mis-skilling requires error-detection training, and never-skilling requires prospective competency assessment before AI exposure.

## Extending Evidence

**Source:** Heudel PE et al. 2026, UK cervical screening consolidation

UK cytology lab consolidation provides the first structural never-skilling mechanism: an 80-85% reduction in training volume through consolidation from 45 labs to 8. This extends the never-skilling concept from individual cognitive failure to institutional infrastructure destruction. The mechanism is not "physicians never learn because AI does it for them" but "training infrastructure is dismantled so learning becomes impossible."
## Supporting Evidence

**Source:** PubMed systematic search, April 21, 2026

The complete absence of peer-reviewed evidence for durable up-skilling after 5+ years of large-scale clinical AI deployment provides negative confirmation that skill effects flow in one direction. Despite extensive evidence of AI improving performance while present, zero published studies demonstrate improvement that persists once AI is removed. This asymmetry — a growing deskilling literature (Heudel et al. 2026, Natali et al. 2025, the colonoscopy ADR drop, radiology/pathology automation bias) versus an empty up-skilling literature — confirms the three failure modes operate without a compensating improvement mechanism.

## Extending Evidence

**Source:** Oettl et al. 2026

Oettl et al. explicitly distinguish never-skilling (trainees never developing foundational competencies) from deskilling (experienced physicians losing existing skills), noting that the "deskilling threat is real if trainees never develop foundational competencies" and that "educators may lack expertise supervising AI use," which compounds the never-skilling risk. This confirms that never-skilling is recognized as a distinct, population-specific mechanism even by upskilling proponents, affecting trainees rather than experienced physicians.

## Supporting Evidence

**Source:** PMC11919318, Academic Pathology 2025

An Academic Pathology commentary provides pathology-specific confirmation of the never-skilling mechanism, noting that AI automation of routine cervical cytology screening reduces trainee exposure to foundational cases, preventing development of the "diagnostic acumen necessary for independent practice." The paper explicitly distinguishes this from deskilling of experienced practitioners.
## Extending Evidence

**Source:** Heudel et al., Insights into Imaging, Jan 2025 (PMC11780016)

The Heudel study design inadvertently demonstrates why never-skilling is detection-resistant: with only 8 residents (4 first-year, 4 third-year) and no longitudinal follow-up, the study cannot distinguish "residents learning with AI assistance" from "residents becoming dependent on AI presence." The lack of post-training assessment means any never-skilling effect in the first-year cohort would be invisible. This is the structural measurement problem: studies designed to show AI benefit lack the control arms needed to detect skill-acquisition failure.

## Supporting Evidence

**Source:** ARISE Network State of Clinical AI Report 2026

The ARISE 2026 report documents zero current deskilling in practicing clinicians, but 33% of younger providers rank deskilling as a top-2 concern versus 11% of older providers, providing quantitative evidence for the temporal distribution of skill failure modes across career stages.

## Extending Evidence

**Source:** El Tarhouny & Farghaly, Frontiers in Medicine 2026

The continuum framing shows that never-skilling affects trainees who never develop baseline competency before AI adoption, while deskilling affects experienced physicians who lose previously acquired skills. The paper traces this across medical students → residents → practicing clinicians, with each population facing a different risk profile depending on its pre-AI skill development stage.

## Extending Evidence

**Source:** Natali et al. 2025, introducing moral deskilling concept

The review adds moral deskilling as a fourth distinct failure mode: erosion of ethical sensitivity and moral judgment from routine AI acceptance. This operates through a different pathway than cognitive deskilling (diagnostic/procedural skill loss), automation bias (cognitive deference), or never-skilling (skill non-acquisition). Moral deskilling affects the capacity to recognize when AI recommendations conflict with patient values or best interests.