vida: extract claims from 2026-04-22-pmc11780016-radiology-ai-upskilling-study-2025

- Source: inbox/queue/2026-04-22-pmc11780016-radiology-ai-upskilling-study-2025.md
- Domain: health
- Claims: 0, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
Teleo Agents 2026-04-22 08:53:17 +00:00
parent a6a698b03b
commit 27e13f8bb9
2 changed files with 14 additions and 0 deletions


@@ -32,3 +32,10 @@ First comprehensive scoping review (literature through August 2025) confirms con
**Source:** Oettl et al., Journal of Experimental Orthopaedics 2026
Oettl et al. present the strongest available counter-argument to medical AI deskilling, arguing that AI will 'necessitate an evolution of the physician's role' toward augmentation rather than replacement. They propose three upskilling mechanisms: micro-learning at point of care, liberation from administrative burden, and performance floor standardization. However, the paper is primarily theoretical; all empirical evidence cited measures concurrent AI-assisted performance rather than post-training skill retention.
## Challenging Evidence
**Source:** Heudel et al., Insights into Imaging, 2025 (PMC11780016)
Radiology residents using AI assistance showed resilience to large AI errors (>3 points), maintaining average errors around 2.75-2.88 even when the AI was significantly wrong. This suggests physicians can detect and reject major AI errors during active use, which undercuts the automation-bias mechanism, at least while physicians retain critical evaluation capacity. However, this finding is limited to n=8 residents in a controlled setting and does not test whether the resilience persists under time pressure or after prolonged AI exposure.


@@ -62,3 +62,10 @@ Topics:
**Source:** Oettl et al. 2026, Journal of Experimental Orthopaedics PMC12955832
Oettl et al. 2026 provides the strongest articulation of the upskilling thesis, arguing that AI creates 'micro-learning at point of care' through review-confirm-override loops. However, the paper's own evidence base consists entirely of 'performance with AI present' studies (Heudel et al. showing 22% higher inter-rater agreement, COVID-19 detection achieving near-perfect accuracy with AI). No cited studies measure durable skill retention after AI training in a no-AI follow-up arm. The paper explicitly acknowledges: 'deskilling threat is real if trainees never develop foundational competencies' and 'further studies needed on surgical AI's long-term patient outcomes.' This represents the upskilling hypothesis at its strongest, and it reveals that even its strongest proponents lack prospective longitudinal evidence.
## Extending Evidence
**Source:** Heudel et al., Insights into Imaging, 2025 (PMC11780016)
The Heudel et al. (2025) radiology study (n=8 residents, 150 chest X-rays) shows a 22% improvement in inter-rater agreement (ICC-1: 0.665→0.813) and a significant error reduction (p<0.001) WITH AI present. However, the study design lacks a post-training no-AI assessment, so it documents performance improvement during AI use, not durable skill retention. This is the primary empirical source cited by upskilling proponents (including Oettl 2026), but a close reading reveals it only demonstrates AI-assisted performance, not independent upskilling. Residents showed 'resilience to AI errors above acceptability threshold' (maintaining ~2.75-2.88 error when the AI made >3-point errors), suggesting some critical evaluation capacity persists during AI use.
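
A quick arithmetic check (an editorial sketch, not part of the source extraction) confirms that the '22%' figure is consistent with reading it as the relative gain in ICC-1 between the two reported values; the variable names below are illustrative:

```python
# Sanity check of the reported "22% improvement in inter-rater agreement".
# Assumption: the 22% figure is the relative gain in ICC-1 (0.665 -> 0.813)
# reported by Heudel et al. (2025); names here are illustrative, not from the paper.
icc_without_ai = 0.665  # ICC-1, residents reading alone
icc_with_ai = 0.813     # ICC-1, residents reading with AI assistance

relative_gain = (icc_with_ai - icc_without_ai) / icc_without_ai
print(f"Relative ICC-1 gain: {relative_gain:.1%}")  # prints 22.3%, matching the ~22% claim
```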