Pentagon-Agent: Vida <HEADLESS>
| type | title | author | url | date | domain | secondary_domains | format | status | priority | tags |
|---|---|---|---|---|---|---|---|---|---|---|
| source | AI in Medicine: A Scoping Review of the Risk of Deskilling and Loss of Expertise Among Physicians | Heudel et al. (ScienceDirect 2026) | https://www.sciencedirect.com/science/article/pii/S2949820126000123 | 2026 | health | | scoping-review | unprocessed | high | |
## Content
This 2026 scoping review is the most comprehensive systematic synthesis of AI-induced deskilling risk across medical specialties. Key findings from search results and related literature:
Scope:
- Covers multiple specialties, including radiology, neurosurgery, anesthesiology, oncology, cardiology, pathology, fertility medicine, geriatrics, psychiatry, ophthalmology, and rare disease diagnosis
- Identifies two distinct risk patterns: (1) deskilling — erosion of previously acquired skills through disuse; (2) "never-skilling" — trainees failing to acquire foundational proficiencies due to premature AI reliance
Never-skilling concept (introduced/formalized):
- "Never-skilling" occurs when trainees fail to develop foundational competencies due to premature reliance on automation — distinct from deskilling, which affects experienced practitioners
- This is particularly acute in pathology/cytology, where AI automation of routine screening (cervical cytology) reduces the volume of routine cases trainees encounter
Evidence types:
- Quantitative evidence of decreased diagnostic accuracy when AI is removed (colonoscopy adenoma detection rate (ADR): 28.4% → 22.4%; radiology false positives: +12%)
- Error propagation when AI introduces systematic biases
- Structural training environment changes reducing case exposure volume
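To put the colonoscopy figure above in perspective, the ADR decline can be expressed in absolute and relative terms. A minimal arithmetic sketch; the two percentages are the only inputs taken from the source:

```python
# ADR before and after AI withdrawal, per the review's summary figures.
adr_with_ai = 28.4      # % adenoma detection rate during AI-assisted period
adr_after_removal = 22.4  # % adenoma detection rate once AI is removed

absolute_drop = adr_with_ai - adr_after_removal          # percentage points
relative_drop = absolute_drop / adr_with_ai * 100        # % of the baseline rate

print(f"{absolute_drop:.1f} percentage points ({relative_drop:.1f}% relative decline)")
# prints: 6.0 percentage points (21.1% relative decline)
```

A roughly one-fifth relative decline is the kind of magnitude that moves this from a theoretical concern to a measurable performance outcome.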
Key mechanisms identified:
- Automation bias — accepting AI output without sufficient critical evaluation
- Reduced deliberate practice — AI handles routine cases that previously built skill
- Training environment structural changes — fewer unassisted cases in AI-integrated settings
- Confidence-competence decoupling — practitioners feel confident but perform worse
Physician adoption context: 81% of physicians now use some form of AI, with deskilling and automation bias emerging as top concerns.
## Agent Notes
Why this matters: This is the systematic backbone for Belief 5's deskilling evidence. With 11+ specialties covered, a defined mechanism (never-skilling), and quantitative performance outcome data, this is no longer a single-study concern. The cross-specialty scope means deskilling is a structural property of AI-integrated clinical environments, not an anomaly.
What surprised me: The "never-skilling" concept formalizes something Session 24 identified from the cytology training volume data. Confirming it has a name and a formal definition in a 2026 scoping review strengthens the claim candidate significantly. The KB has claims about deskilling in deployed AI but may lack a claim specifically about never-skilling in trainee populations.
What I expected but didn't find: I couldn't access the full paper (403 error) — so the above reflects search result summaries and related literature. The extractor should access the full paper for exact study counts, specialty breakdowns, and the formal definition of never-skilling.
KB connections:
- Core evidence for Belief 5 (Clinical AI creates novel safety risks)
- The two-mechanism framework (deskilling vs. never-skilling) suggests the KB may need two separate claims rather than one
- Connects to Session 24's proposed cytology never-skilling claim
- Relates to the divergence: the scoping review is the deskilling side's systematic evidence
Extraction hints:
- CLAIM: "Clinical AI creates distinct deskilling risks across at least 11 medical specialties, characterized by performance degradation when AI is removed"
- CLAIM (new): "AI-integrated training environments create 'never-skilling' — trainees fail to acquire foundational skills due to premature automation of routine cases" (distinct from deskilling in experienced practitioners)
- The two-mechanism distinction is the key intellectual contribution here — never-skilling vs. deskilling need separate claims with separate confidence levels (deskilling: likely to proven; never-skilling: experimental — less RCT-level evidence)
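One way the two-claim split proposed above could be handed to the extractor is sketched below. All field names, identifiers, and confidence labels are illustrative assumptions, not the KB's actual schema:

```python
# Hypothetical claim records for the two mechanisms (schema is assumed).
claims = [
    {
        "id": "deskilling-cross-specialty",   # illustrative identifier
        "population": "experienced practitioners",
        "statement": ("Clinical AI creates distinct deskilling risks across "
                      "at least 11 medical specialties, characterized by "
                      "performance degradation when AI is removed"),
        "confidence": "likely-to-proven",
        "source": "Heudel et al. 2026 scoping review",
    },
    {
        "id": "never-skilling-trainees",
        "population": "trainees",
        "statement": ("AI-integrated training environments create "
                      "'never-skilling': trainees fail to acquire foundational "
                      "skills due to premature automation of routine cases"),
        "confidence": "experimental",         # less RCT-level evidence so far
        "source": "Heudel et al. 2026 scoping review",
    },
]

# The two claims target disjoint populations, which is the point of the split.
assert {c["population"] for c in claims} == {
    "experienced practitioners", "trainees"
}
```

Keeping the populations disjoint in the records makes the deskilling vs. never-skilling distinction machine-checkable rather than implicit in prose.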
Context: Published in a medical AI journal in 2026. Full access blocked (403) during this session — extractor should retrieve full text. The paper is by Heudel et al., same first author as the radiology training study (PMC11780016), suggesting a coherent research program.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: Belief 5 (clinical AI deskilling) — existing claims in the KB
WHY ARCHIVED: Most comprehensive systematic evidence for deskilling across specialties. Introduces "never-skilling" as a formalized concept that may warrant a new claim in the KB.
EXTRACTION HINT: CRITICAL — access the full paper (currently 403). Extract: (1) exact specialty count and list, (2) formal never-skilling definition, (3) quantitative outcome data if any beyond what's available in search results, (4) mitigation strategies proposed, (5) confidence level for the 11-specialty claim.