vida: extract claims from 2026-04-21-pubmed-null-result-ai-durable-upskilling
- Source: inbox/queue/2026-04-21-pubmed-null-result-ai-durable-upskilling.md
- Domain: health
- Claims: 0, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
Parent: cc53fad5f2
Commit: dd3a5f8515
2 changed files with 17 additions and 7 deletions
@@ -24,3 +24,10 @@ This systematic review identifies three mechanistically distinct pathways throug
**Source:** Heudel PE et al. 2026, UK cervical screening consolidation
UK cytology lab consolidation provides the first structural never-skilling mechanism: an 80-85% reduction in training volume through consolidation from 45 labs to 8. This extends the never-skilling concept from individual cognitive failure to institutional infrastructure destruction. The mechanism is not 'physicians never learn because AI does it for them' but 'training infrastructure is dismantled, so learning becomes impossible.'
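As a rough consistency check on the figures above (assuming, purely for illustration, that training volume scales with lab count, which the source does not state):

```python
# 45 cytology labs consolidated down to 8: fraction of labs eliminated
labs_before, labs_after = 45, 8
reduction = (labs_before - labs_after) / labs_before
print(f"{reduction:.0%}")  # 82%, within the reported 80-85% range
```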
## Supporting Evidence
**Source:** PubMed systematic search, April 21, 2026
The complete absence of peer-reviewed evidence for durable up-skilling after 5+ years of large-scale clinical AI deployment provides negative confirmation that skill effects flow in one direction. Despite extensive evidence on AI improving performance while present, zero published studies demonstrate improvement that persists when AI is removed. This asymmetry—growing deskilling literature (Heudel et al. 2026, Natali et al. 2025, colonoscopy ADR drop, radiology/pathology automation bias) versus empty up-skilling literature—confirms the three failure modes operate without a compensating improvement mechanism.
@@ -10,14 +10,17 @@ agent: vida
scope: structural
sourcer: Artificial Intelligence Review (Springer Nature)
related_claims: ["[[clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling]]"]
supports:
- Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling
reweave_edges:
- Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each|supports|2026-04-12
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14
supports: ["Clinical AI introduces three distinct skill failure modes \u2014 deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) \u2014 requiring distinct mitigation strategies for each", "Never-skilling \u2014 the failure to acquire foundational clinical competencies because AI was present during training \u2014 poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling"]
reweave_edges: ["Clinical AI introduces three distinct skill failure modes \u2014 deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) \u2014 requiring distinct mitigation strategies for each|supports|2026-04-12", "Never-skilling \u2014 the failure to acquire foundational clinical competencies because AI was present during training \u2014 poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14"]
related: ["never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine"]
---
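The `reweave_edges` entries in the frontmatter above serialize each edge as `claim text|relation|date`. A minimal sketch of how such an entry might be parsed (the `Edge` type and `parse_reweave_edge` function are illustrative assumptions, not part of the actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class Edge:
    claim: str
    relation: str
    date: str

def parse_reweave_edge(entry: str) -> Edge:
    # Split from the right: the claim text may itself contain '|',
    # but the trailing relation and date fields never do.
    claim, relation, date = entry.rsplit("|", 2)
    return Edge(claim=claim, relation=relation, date=date)

edge = parse_reweave_edge("Never-skilling poses an unrecoverable threat|supports|2026-04-14")
print(edge.relation)  # supports
print(edge.date)      # 2026-04-14
```

Splitting with `rsplit` rather than `split` keeps the parse robust if a claim's text ever contains a literal pipe.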
# Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect it
Never-skilling presents a unique detection challenge that distinguishes it from deskilling. When a physician loses existing skills through disuse (deskilling), the degradation is detectable through comparison to their previous baseline performance. But when a trainee never acquires foundational competencies because AI was present from the start of their education, there is no baseline to compare against. A junior radiologist who cannot detect AI errors looks identical whether they (a) never learned the underlying skill or (b) learned it and then lost it through disuse — but the remediation is fundamentally different. The review documents that junior radiologists are far less likely than senior colleagues to detect AI errors, but this cannot be attributed to deskilling because they never had the pre-AI skill level to lose. This creates a structural invisibility problem: never-skilling can only be detected through prospective competency assessment before AI exposure, or through comparison to control cohorts trained without AI. The paper argues this requires curriculum redesign with explicit competency development milestones before AI tools are introduced, rather than the current practice of integrating AI throughout training. This has specific implications for medical education policy: if AI is introduced too early in training, the resulting competency gaps may be undetectable until a system-wide failure reveals them.
## Extending Evidence
**Source:** PubMed systematic search, April 21, 2026
The absence of prospective studies comparing medical students/residents trained WITH AI versus WITHOUT AI is particularly striking given the scale of deployment. This is the exact study design that would detect never-skilling, yet not one such study exists in peer-reviewed literature as of April 2026. The null result suggests either: (1) the medical education research community has not recognized never-skilling as a research priority despite widespread AI integration in training environments, or (2) institutions are avoiding the question because the answer would be operationally inconvenient. Either explanation confirms never-skilling's structural invisibility—it requires intentional prospective design to detect, and that design is not happening.