teleo-codex/domains/health/clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling.md

---
type: claim
domain: health
description: Systematic taxonomy of AI-induced cognitive failures in medical practice, with never-skilling as a categorically different problem from deskilling because it lacks a baseline for comparison
confidence: experimental
source: Artificial Intelligence Review (Springer Nature), mixed-method systematic review
created: 2026-04-11
title: "Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each"
agent: vida
scope: causal
sourcer: Artificial Intelligence Review (Springer Nature)
related_claims:
  - human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs
supports:
  - "Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect"
  - "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance"
  - "AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable"
  - "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers"
  - "Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling"
reweave_edges:
  - "Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect|supports|2026-04-12"
  - "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-14"
  - "AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|supports|2026-04-14"
  - "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|supports|2026-04-14"
  - "Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14"
---

# Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each

This systematic review identifies three mechanistically distinct pathways through which clinical AI degrades physician competence.

Deskilling occurs when existing expertise atrophies through disuse: colonoscopy polyp detection dropped from 28.4% to 22.4% after three months of AI use, and experienced radiologists showed 12% more false-positive recalls after exposure to erroneous AI prompts.

Mis-skilling occurs when clinicians actively learn incorrect patterns from systematically biased AI outputs: in computational pathology studies, over 30% of participants reversed correct initial diagnoses after exposure to incorrect AI suggestions under time constraints.

Never-skilling is categorically different: trainees who begin clinical education with AI assistance may never develop foundational competencies at all. Junior radiologists are far less likely than senior colleagues to detect AI errors — not because they have lost skills, but because they never acquired them. This failure mode is structurally invisible because there is no pre-AI baseline to compare against.

The review documents mitigation strategies including AI-off drills, structured assessment with pre-AI review, and curriculum redesign with explicit competency development before AI exposure. The key insight is that the three failure modes require fundamentally different interventions: deskilling requires practice maintenance, mis-skilling requires error-detection training, and never-skilling requires prospective competency assessment before AI exposure.
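The mode-to-mitigation mapping above, and why never-skilling resists the usual detection method, can be made concrete in a small sketch. This is purely illustrative — the class, field names, and mitigation labels are my own shorthand for the review's taxonomy, not an artifact of the source:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FailureMode:
    name: str
    mechanism: str
    # Can this mode be detected by comparing performance against a
    # pre-AI baseline? Never-skilling cannot: no baseline ever existed.
    detectable_against_baseline: bool
    mitigation: str

TAXONOMY = [
    FailureMode("deskilling",
                "existing expertise atrophies through disuse",
                True,
                "practice maintenance (e.g. AI-off drills)"),
    FailureMode("mis-skilling",
                "AI errors actively learned as correct patterns",
                True,
                "error-detection training"),
    FailureMode("never-skilling",
                "foundational competence never acquired",
                False,
                "prospective competency assessment before AI exposure"),
]

def invisible_to_baseline_comparison(taxonomy):
    """Failure modes that retrospective before/after comparison misses."""
    return [m.name for m in taxonomy if not m.detectable_against_baseline]
```

Running `invisible_to_baseline_comparison(TAXONOMY)` returns only `["never-skilling"]`, which is the structural point: retrospective audits can surface deskilling and mis-skilling, but never-skilling is only observable if competence is measured prospectively, before AI is introduced.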