---
type: claim
domain: health
description: "Detection problem unique to never-skilling: a trainee who never develops competence without AI looks identical to a trained clinician who deskilled, but remediation strategies differ fundamentally"
confidence: experimental
source: Artificial Intelligence Review (Springer Nature), systematic review of clinical AI training outcomes
created: 2026-04-11
title: Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect
agent: vida
scope: structural
sourcer: Artificial Intelligence Review (Springer Nature)
related_claims: ["[[clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling]]"]
supports: ["Clinical AI introduces three distinct skill failure modes \u2014 deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) \u2014 requiring distinct mitigation strategies for each", "Never-skilling \u2014 the failure to acquire foundational clinical competencies because AI was present during training \u2014 poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling"]
reweave_edges: ["Clinical AI introduces three distinct skill failure modes \u2014 deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) \u2014 requiring distinct mitigation strategies for each|supports|2026-04-12", "Never-skilling \u2014 the failure to acquire foundational clinical competencies because AI was present during training \u2014 poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14"]
related: ["never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine"]
---
# Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect
Never-skilling presents a unique detection challenge that distinguishes it from deskilling. When a physician loses existing skills through disuse (deskilling), the degradation is detectable by comparison to their previous baseline performance. But when a trainee never acquires foundational competencies because AI was present from the start of their education, there is no baseline to compare against. A junior radiologist who cannot detect AI errors looks identical whether they (a) never learned the underlying skill or (b) learned it and then lost it through disuse, yet the remediation for each case is fundamentally different.

The review documents that junior radiologists are far less likely than senior colleagues to detect AI errors, but this cannot be attributed to deskilling because they never had a pre-AI skill level to lose. This creates a structural invisibility problem: never-skilling can only be detected through prospective competency assessment before AI exposure, or through comparison to control cohorts trained without AI.

The paper argues this requires curriculum redesign with explicit competency development milestones before AI tools are introduced, rather than the current practice of integrating AI throughout training. The implication for medical education policy is specific: if AI is introduced too early in training, the resulting competency gaps may remain undetectable until a system-wide failure reveals them.
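The detection asymmetry can be made concrete with a toy sketch (all data, names, and the `classify_skill_failure` function are hypothetical illustrations, not anything from the review): when a pre-AI baseline measurement exists, low current competence can be attributed to deskilling or never-skilling; when it does not, the two failure modes are observationally identical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Clinician:
    current_score: float              # competency measured today, with AI available
    pre_ai_baseline: Optional[float]  # None if trained entirely alongside AI

def classify_skill_failure(c: Clinician, threshold: float = 70.0) -> str:
    """Toy classifier: without a baseline, low competence is ambiguous."""
    if c.current_score >= threshold:
        return "competent"
    if c.pre_ai_baseline is None:
        # No pre-AI measurement exists, so never-skilling and deskilling
        # are indistinguishable from current performance alone.
        return "indeterminate (never-skilling vs. deskilling)"
    if c.pre_ai_baseline >= threshold:
        return "deskilling (baseline shows prior competence)"
    return "never-skilling (baseline shows skill was never acquired)"

senior = Clinician(current_score=55.0, pre_ai_baseline=85.0)
junior = Clinician(current_score=55.0, pre_ai_baseline=None)

print(classify_skill_failure(senior))  # prints "deskilling (baseline shows prior competence)"
print(classify_skill_failure(junior))  # prints "indeterminate (never-skilling vs. deskilling)"
```

The point of the sketch is the `None` branch: identical `current_score` values route to different remediations only when a baseline was recorded before AI exposure, which is exactly what prospective competency assessment provides.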
## Extending Evidence
**Source:** PubMed systematic search, April 21, 2026
The absence of prospective studies comparing medical students or residents trained *with* AI against those trained *without* it is particularly striking given the scale of deployment. This is exactly the study design that would detect never-skilling, yet not one such study exists in the peer-reviewed literature as of April 2026.

The null result suggests either that (1) the medical education research community has not recognized never-skilling as a research priority despite widespread AI integration in training environments, or that (2) institutions are avoiding the question because the answer would be operationally inconvenient. Either explanation confirms never-skilling's structural invisibility: it requires intentional prospective design to detect, and that design is not happening.
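The missing study design reduces to a two-cohort comparison, which can be sketched minimally with synthetic numbers (the scores, cohort sizes, and effect size below are invented for illustration; a real study would need validated competency instruments and a proper power analysis):

```python
import random
import statistics

random.seed(0)  # deterministic synthetic data

# Hypothetical competency scores from a prospective design:
# one cohort trained without AI (control), one with AI integrated
# throughout training. Scores are purely synthetic.
without_ai = [random.gauss(78, 6) for _ in range(40)]  # control cohort
with_ai = [random.gauss(70, 6) for _ in range(40)]     # AI-integrated cohort

gap = statistics.mean(without_ai) - statistics.mean(with_ai)

# Welch-style standard error of the difference in means
se = (statistics.variance(without_ai) / len(without_ai)
      + statistics.variance(with_ai) / len(with_ai)) ** 0.5

print(f"competency gap: {gap:.1f} points (+/- {1.96 * se:.1f}, approx. 95% CI)")
```

Nothing in this sketch is statistically exotic, which underlines the claim: the design that would surface never-skilling is ordinary cohort comparison, and its absence from the literature is a choice, not a methodological barrier.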