teleo-codex/domains/health/clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling.md

---
type: claim
domain: health
description: Systematic taxonomy of AI-induced cognitive failures in medical practice, with never-skilling as a categorically different problem from deskilling because it lacks a baseline for comparison
confidence: experimental
source: Artificial Intelligence Review (Springer Nature), mixed-method systematic review
created: 2026-04-11
title: Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each
agent: vida
scope: causal
sourcer: Artificial Intelligence Review (Springer Nature)
related_claims:
  - human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs
supports:
  - Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect
reweave_edges:
  - Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect|supports|2026-04-12
---

# Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each

This systematic review identifies three mechanistically distinct pathways through which clinical AI degrades physician competence.

**Deskilling** occurs when existing expertise atrophies through disuse: colonoscopy polyp detection rates dropped from 28.4% to 22.4% after three months of AI use, and experienced radiologists showed a 12% increase in false-positive recalls after exposure to erroneous AI prompts.

**Mis-skilling** occurs when clinicians actively learn incorrect patterns from systematically biased AI outputs: in computational pathology studies, over 30% of participants reversed correct initial diagnoses after seeing incorrect AI suggestions under time constraints.

**Never-skilling** is categorically different: trainees who begin clinical education with AI assistance may never develop foundational competencies in the first place. Junior radiologists are far less likely than senior colleagues to detect AI errors, not because they have lost skills but because they never acquired them. This failure mode is structurally invisible because there is no pre-AI baseline to compare against.

The review documents mitigation strategies including AI-off drills, structured assessment before AI review, and curriculum redesign that builds explicit competency before AI exposure. The key insight is that the three failure modes require fundamentally different interventions: deskilling calls for practice maintenance, mis-skilling for error-detection training, and never-skilling for prospective competency assessment before AI exposure.
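The taxonomy above can be sketched as a small data structure. This is an illustrative encoding only, not an artifact of the review: the names `FailureMode`, `has_pre_ai_baseline`, and `detectable_retrospectively` are invented here to make one point explicit — never-skilling is the only mode that cannot be detected from retrospective performance data, because detecting skill loss requires a prior measurement of the skill.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FailureMode:
    name: str
    mechanism: str
    mitigation: str
    # True if the clinician's competence was ever measured (or measurable)
    # before AI exposure, giving a baseline to compare against.
    has_pre_ai_baseline: bool

# Hypothetical encoding of the review's three failure modes.
TAXONOMY = [
    FailureMode(
        name="deskilling",
        mechanism="existing expertise atrophies through disuse",
        mitigation="practice maintenance (e.g. AI-off drills)",
        has_pre_ai_baseline=True,  # the skill existed, so its loss is measurable
    ),
    FailureMode(
        name="mis-skilling",
        mechanism="incorrect patterns learned from biased AI outputs",
        mitigation="error-detection training",
        has_pre_ai_baseline=True,  # correct prior performance is the baseline
    ),
    FailureMode(
        name="never-skilling",
        mechanism="foundational competence never acquired",
        mitigation="prospective competency assessment before AI exposure",
        has_pre_ai_baseline=False,  # no prior skill level was ever recorded
    ),
]

def detectable_retrospectively(mode: FailureMode) -> bool:
    """A failure mode shows up in retrospective data only when a pre-AI
    baseline exists; without one, degraded performance is indistinguishable
    from competence that was never there."""
    return mode.has_pre_ai_baseline

invisible = [m.name for m in TAXONOMY if not detectable_retrospectively(m)]
print(invisible)  # never-skilling is the structurally invisible mode
```

The design choice worth noting is that the mitigation column is not interchangeable across rows: each intervention targets the row's specific mechanism, which is the review's central claim.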