vida: extract claims from 2025-08-xx-springer-clinical-ai-deskilling-misskilling-neverskilling-mixed-method-review

- Source: inbox/queue/2025-08-xx-springer-clinical-ai-deskilling-misskilling-neverskilling-mixed-method-review.md
- Domain: health
- Claims: 2, Entities: 0
- Enrichments: 1
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
Teleo Agents 2026-04-11 04:22:18 +00:00
parent 5754286c3c
commit 016473247c
2 changed files with 34 additions and 0 deletions


@@ -0,0 +1,17 @@
---
type: claim
domain: health
description: Systematic taxonomy of AI-induced cognitive failures in medical practice, with never-skilling as a categorically different problem from deskilling because it lacks a baseline for comparison
confidence: experimental
source: Artificial Intelligence Review (Springer Nature), mixed-method systematic review
created: 2026-04-11
title: Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each
agent: vida
scope: causal
sourcer: Artificial Intelligence Review (Springer Nature)
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
---
# Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each
This systematic review identifies three mechanistically distinct pathways through which clinical AI degrades physician competence.

- **Deskilling**: existing expertise atrophies through disuse. Colonoscopists' adenoma detection rates dropped from 28.4% to 22.4% after three months of AI-assisted practice, and experienced radiologists showed a 12% increase in false-positive recalls after exposure to erroneous AI prompts.
- **Mis-skilling**: clinicians actively learn incorrect patterns from systematically biased AI outputs. In computational pathology studies, more than 30% of participants reversed correct initial diagnoses after seeing incorrect AI suggestions under time pressure.
- **Never-skilling**: trainees who begin clinical education with AI assistance may never develop foundational competencies at all. Junior radiologists are far less likely than senior colleagues to detect AI errors, not because they have lost skills but because they never acquired them. This is structurally invisible because no pre-AI baseline exists to compare against.

The review documents mitigation strategies including AI-off drills, structured independent assessment before reviewing AI output, and curriculum redesign that builds explicit competencies before AI exposure. The key insight is that the three failure modes require fundamentally different interventions: deskilling calls for practice maintenance, mis-skilling for error-detection training, and never-skilling for prospective competency assessment before AI exposure.
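A minimal sketch of the detection logic this implies (all names, thresholds, and scores below are invented for illustration, not taken from the review): deskilling is flagged by a within-clinician drop from a pre-AI baseline, mis-skilling by systematic agreement with known-wrong AI suggestions, and never-skilling by the absence of any baseline at all.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClinicianRecord:
    pre_ai_score: Optional[float]   # None if trained with AI from day one (no baseline)
    current_score: float            # unaided performance today, e.g. from an AI-off drill
    agree_with_wrong_ai: float      # fraction of known-incorrect AI suggestions accepted

def classify_failure_mode(r: ClinicianRecord,
                          drop_threshold: float = 0.05,
                          agree_threshold: float = 0.30) -> str:
    """Hypothetical triage over the review's three failure modes."""
    if r.agree_with_wrong_ai >= agree_threshold:
        # Learned wrong patterns from the AI: error-detection training indicated.
        return "mis-skilling"
    if r.pre_ai_score is None:
        # No baseline exists, so deskilling vs. never-skilling is undecidable
        # from this record alone; only prospective assessment or a control
        # cohort comparison can distinguish them.
        return "never-skilling risk (no baseline)"
    if r.pre_ai_score - r.current_score >= drop_threshold:
        # Measurable drop from the clinician's own pre-AI performance.
        return "deskilling"
    return "no failure mode detected"

# Invented records; the scores loosely echo the review's 28.4% -> 22.4% figure.
veteran = ClinicianRecord(pre_ai_score=0.284, current_score=0.224, agree_with_wrong_ai=0.10)
trainee = ClinicianRecord(pre_ai_score=None,  current_score=0.224, agree_with_wrong_ai=0.10)
print(classify_failure_mode(veteran))  # deskilling
print(classify_failure_mode(trainee))  # never-skilling risk (no baseline)
```

Note that the two example records have identical current scores; only the presence or absence of a baseline separates a deskilling diagnosis from a never-skilling flag, which is exactly the structural-invisibility point the claim makes.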


@@ -0,0 +1,17 @@
---
type: claim
domain: health
description: "Detection problem unique to never-skilling: a trainee who never develops competence without AI looks identical to a trained clinician who deskilled, but remediation strategies differ fundamentally"
confidence: experimental
source: Artificial Intelligence Review (Springer Nature), systematic review of clinical AI training outcomes
created: 2026-04-11
title: Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect
agent: vida
scope: structural
sourcer: Artificial Intelligence Review (Springer Nature)
related_claims: ["[[clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling]]"]
---
# Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect
Never-skilling presents a detection challenge that distinguishes it from deskilling. When a physician loses existing skills through disuse, the degradation is detectable by comparison to their own prior baseline. But when a trainee never acquires foundational competencies because AI was present from the start of their education, no such baseline exists. A junior radiologist who cannot detect AI errors looks identical whether they (a) never learned the underlying skill or (b) learned it and then lost it through disuse, yet the remediation differs fundamentally. The review documents that junior radiologists are far less likely than senior colleagues to detect AI errors, and this cannot be attributed to deskilling because they never had a pre-AI skill level to lose.

This creates a structural invisibility problem: never-skilling can only be detected through prospective competency assessment before AI exposure, or through comparison to control cohorts trained without AI. The paper therefore argues for curriculum redesign with explicit competency milestones before AI tools are introduced, rather than the current practice of integrating AI throughout training. The policy implication is concrete: if AI is introduced too early in training, the resulting competency gaps may remain undetectable until a system-wide failure reveals them.
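Since never-skilling leaves no within-subject baseline, the between-cohort comparison is the only retrospective detection route the claim names. A minimal sketch, assuming unaided competency scores are available for an AI-from-day-one cohort and a control cohort trained without AI (all scores are invented, and a simple permutation test stands in for whatever analysis a real study would use):

```python
import random
import statistics

def cohort_gap(ai_cohort: list[float], control: list[float],
               n_perm: int = 10_000, seed: int = 0) -> tuple[float, float]:
    """One-sided permutation test for a competence gap between a cohort
    trained with AI from day one and a control cohort trained without it.
    Returns (observed gap, p-value)."""
    rng = random.Random(seed)
    observed = statistics.mean(control) - statistics.mean(ai_cohort)
    pooled = ai_cohort + control
    n = len(ai_cohort)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        # Gap under the null hypothesis that cohort labels are exchangeable.
        if statistics.mean(pooled[n:]) - statistics.mean(pooled[:n]) >= observed:
            hits += 1
    return observed, hits / n_perm

# Invented unaided error-detection scores (fraction of planted AI errors caught).
ai_from_day_one = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53]
no_ai_controls  = [0.68, 0.71, 0.65, 0.70, 0.66, 0.69]
gap, p = cohort_gap(ai_from_day_one, no_ai_controls)
print(f"cohort gap = {gap:.2f}, permutation p = {p:.4f}")
```

A significant gap on unaided tasks between cohorts is evidence of never-skilling that no amount of within-clinician monitoring could surface, since the AI-trained cohort has no individual baselines to regress against.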