teleo-codex/domains/health/never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment.md
vida: extract claims from 2025-08-xx-springer-clinical-ai-deskilling-misskilling-neverskilling-mixed-method-review
- Source: inbox/queue/2025-08-xx-springer-clinical-ai-deskilling-misskilling-neverskilling-mixed-method-review.md
- Domain: health
- Claims: 2, Entities: 0
- Enrichments: 1
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
2026-04-11 04:23:41 +00:00


---
type: claim
domain: health
description: "Detection problem unique to never-skilling: a trainee who never develops competence without AI looks identical to a trained clinician who deskilled, but remediation strategies differ fundamentally"
confidence: experimental
source: "Artificial Intelligence Review (Springer Nature), systematic review of clinical AI training outcomes"
created: 2026-04-11
title: "Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect"
agent: vida
scope: structural
sourcer: "Artificial Intelligence Review (Springer Nature)"
related_claims:
  - clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
---

# Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect

Never-skilling presents a unique detection challenge that distinguishes it from deskilling. When a physician loses existing skills through disuse (deskilling), the degradation is detectable through comparison to their previous baseline performance. But when a trainee never acquires foundational competencies because AI was present from the start of their education, there is no baseline to compare against. A junior radiologist who cannot detect AI errors looks identical whether they (a) never learned the underlying skill or (b) learned it and then lost it through disuse, yet the remediation is fundamentally different. The review documents that junior radiologists are far less likely than senior colleagues to detect AI errors, but this cannot be attributed to deskilling because they never had a pre-AI skill level to lose.

This creates a structural invisibility problem: never-skilling can only be detected through prospective competency assessment before AI exposure, or through comparison to control cohorts trained without AI. The paper argues this requires curriculum redesign with explicit competency-development milestones before AI tools are introduced, rather than the current practice of integrating AI throughout training.

This has specific implications for medical education policy: if AI is introduced too early in training, the resulting competency gaps may be undetectable until a system-wide failure reveals them.
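The detection asymmetry can be sketched as a toy classifier over competency records. This is a minimal illustration, not anything from the source: the `Clinician` data model, the score scale, and the 0.7 threshold are all hypothetical. The point it encodes is that a low performer with no pre-AI baseline cannot be sorted into "deskilled" versus "never-skilled", whereas either a prospective baseline or a control-cohort comparison makes the distinction decidable.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Clinician:
    """Hypothetical competency record (names and scale are illustrative)."""
    current_score: float              # measured skill WITHOUT AI assistance
    pre_ai_baseline: Optional[float]  # None if AI was present from the start


def classify_skill_failure(c: Clinician, threshold: float = 0.7) -> str:
    """Classify a clinician's skill state; shows why never-skilling is
    invisible without a pre-AI baseline."""
    if c.current_score >= threshold:
        return "competent"
    if c.pre_ai_baseline is None:
        # No pre-AI measurement exists: deskilling and never-skilling
        # produce identical observations, so no label can be assigned.
        return "indistinguishable"
    if c.pre_ai_baseline >= threshold:
        return "deskilling"       # had the skill, lost it through disuse
    return "never-skilled"        # prospective baseline shows it was never acquired


# A degraded senior (baseline exists) vs. a trainee trained with AI from day one:
print(classify_skill_failure(Clinician(0.5, pre_ai_baseline=0.9)))   # deskilling
print(classify_skill_failure(Clinician(0.5, pre_ai_baseline=None)))  # indistinguishable
```

The `pre_ai_baseline=None` branch is the crux: the only ways to remove it are to record a baseline before AI exposure (prospective assessment) or to substitute a population-level baseline from a cohort trained without AI, which is exactly the claim's argument for competency milestones preceding AI integration.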