teleo-codex/inbox/queue/2025-08-xx-springer-clinical-ai-deskilling-misskilling-neverskilling-mixed-method-review.md
vida: research session 2026-04-11 — 10 sources archived
Pentagon-Agent: Vida <HEADLESS>
2026-04-11 04:15:50 +00:00


type: source
title: "AI-Induced Deskilling in Medicine: Mixed-Method Review and Three-Pathway Model (Deskilling, Mis-Skilling, Never-Skilling)"
author: Artificial Intelligence Review (Springer Nature)
url: https://link.springer.com/article/10.1007/s10462-025-11352-1
date: 2025-08-01
domain: health
secondary_domains:
  - ai-alignment
format: research-paper
status: unprocessed
priority: high
tags:
  - clinical-AI
  - deskilling
  - automation-bias
  - medical-training
  - never-skilling
  - mis-skilling
  - physician
  - safety
flagged_for_theseus: "Three-pathway deskilling model extends KB's existing automation bias framework; 'never-skilling' is a novel category not yet in KB"

Content

Mixed-method systematic review examining AI-induced deskilling in medical practice. It identifies three distinct cognitive failure pathways that emerge when AI is introduced into clinical workflows:

1. Deskilling — Existing expertise is actively lost through disuse. AI automates tasks that physicians previously performed manually; without practice, those manual skills atrophy. Examples: in colonoscopy polyp detection, adenoma detection rate (ADR) fell from 28.4% to 22.4% when AI assistance was switched off after three months of use; experienced radiologists showed a 12% increase in false-positive recalls after exposure to erroneous AI prompts.

2. Mis-skilling — Clinicians adopt AI errors as correct. When AI produces systematically biased outputs (e.g., undertreating Black patients, hallucinated diagnoses) and physicians incorporate them into practice, they actively learn wrong patterns. In computational pathology, more than 30% of participants reversed correct initial diagnoses after exposure to incorrect AI suggestions under time pressure.

3. Never-skilling — Trainees who begin clinical education with AI assistance may never develop foundational competencies. Junior radiologists are far less likely than senior colleagues to detect AI errors — not because they've lost skills, but because they never acquired them. This is categorically different from deskilling: you cannot lose what you never had.
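The three pathways differ along two observable axes: whether a pre-AI competency baseline ever existed, and whether the clinician has absorbed AI errors into practice. A minimal sketch of that decision logic, with names and boolean inputs that are my own simplifications rather than anything from the paper (real classification would rest on competency assessments, not flags):

```python
from enum import Enum

class FailureMode(Enum):
    """The paper's three AI-induced skill failure pathways."""
    DESKILLING = "deskilling"          # baseline competence existed, then eroded
    MIS_SKILLING = "mis-skilling"      # AI errors adopted as correct practice
    NEVER_SKILLING = "never-skilling"  # foundational competence never acquired
    NONE = "none"

def classify_failure(had_pre_ai_baseline: bool,
                     competent_now: bool,
                     adopted_ai_errors: bool) -> FailureMode:
    """Illustrative classifier for the three-pathway taxonomy (hypothetical)."""
    if adopted_ai_errors:
        # Mis-skilling: wrong patterns were actively learned from the AI.
        return FailureMode.MIS_SKILLING
    if competent_now:
        return FailureMode.NONE
    # Not competent: distinguish skill lost through disuse from skill
    # never acquired -- "you cannot lose what you never had".
    if had_pre_ai_baseline:
        return FailureMode.DESKILLING
    return FailureMode.NEVER_SKILLING
```

Note that the last branch is only decidable if a pre-AI baseline was recorded, which is exactly why never-skilling is hard to detect in practice.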

Mitigation strategies documented:

  • Manual practice maintenance ("AI-off drills") — regular case handling without AI
  • Human-in-the-loop with reasoning documentation: clinicians annotate accept/modify/reject with rationale
  • Structured pre-AI assessment: clinicians record their clinical reasoning before viewing the AI output
  • Curriculum redesign: explicit competency development before AI exposure
  • Tandem reading protocols: human-AI disagreement triggers more detailed review
  • Tracking AI performance vs. human performance on current clinical data
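Several of these mitigations reduce to an auditable record of each human-AI interaction. A minimal sketch of what such a record and one derived metric might look like; the schema, field names, and thresholds are illustrative assumptions, not anything specified in the review:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIReviewRecord:
    """One clinician decision on one AI output (hypothetical schema)."""
    case_id: str
    clinician_id: str
    ai_finding: str
    decision: str                            # "accept" | "modify" | "reject"
    rationale: str                           # required: documents the reasoning
    pre_ai_impression: Optional[str] = None  # structured assessment recorded
                                             # before the AI output was viewed
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def disagreement_rate(records: list[AIReviewRecord]) -> float:
    """Fraction of cases where the clinician did not simply accept the AI.

    A rate drifting toward zero over time is one possible proxy signal of
    growing automation reliance, flagging a cohort for AI-off drills.
    """
    if not records:
        return 0.0
    overridden = sum(1 for r in records if r.decision != "accept")
    return overridden / len(records)
```

Under a tandem reading protocol, any record where `pre_ai_impression` conflicts with `ai_finding` would additionally trigger the more detailed review described above.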

Key framing: "AI can either erode or enhance medical expertise depending entirely on the choices we make in how we design the tools and how we train our clinicians."

Agent Notes

Why this matters: The KB has an existing claim about human-in-the-loop clinical AI degradation and physician deskilling (with colonoscopy RCT evidence from Session 20), but this paper provides a systematic taxonomy that is conceptually richer. The "never-skilling" category is novel and particularly alarming: it's structurally different from deskilling because it's invisible — you don't notice declining competence that was never acquired. This has specific implications for how medical AI should be evaluated for safety.

What surprised me: The framing of never-skilling as categorically different from deskilling. Deskilling is detectable through comparison to a baseline; never-skilling has no baseline to compare against. A trainee who never develops unassisted colonoscopy skill will look identical to a trained colonoscopist who has deskilled — but the remediation for each is different.

What I expected but didn't find: More concrete evidence from health systems that have actually implemented skill-preserving workflows at scale (as opposed to proposed frameworks). The mitigation literature is mostly prescriptive, not empirical.

KB connections:

Extraction hints:

  • Update/extend claim human-in-the-loop clinical AI degrades to include three-pathway taxonomy (deskilling, mis-skilling, never-skilling)
  • New claim candidate: "Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each"
  • New claim candidate: "Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect"

Context: Published alongside a surge of deskilling evidence in 2025 (Lancet Gastroenterology colonoscopy study, Lancet commentary, multiple radiology papers). The three-pathway model is emerging as the field's consensus framework for thinking about AI and clinical competence.

Curator Notes

PRIMARY CONNECTION: human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs

WHY ARCHIVED: Provides systematic taxonomy of three distinct AI-induced failure modes in clinical practice, with "never-skilling" as a genuinely novel category not in the KB

EXTRACTION HINT: Focus on the never-skilling concept — it's the most novel and alarming. The three-pathway taxonomy is worth formalizing as a distinct claim that updates the existing deskilling claim