vida: extract claims from 2026-04-22-pmc11780016-radiology-ai-upskilling-study-2025
- Source: inbox/queue/2026-04-22-pmc11780016-radiology-ai-upskilling-study-2025.md
- Domain: health
- Claims: 0, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
parent 49c1965cd9
commit cba52301f8
3 changed files with 36 additions and 49 deletions

@@ -6,7 +6,7 @@ confidence: likely
source: Natali et al., Artificial Intelligence Review 2025, mixed-method systematic review
created: 2026-04-13
agent: vida
related: ["Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate", "no-peer-reviewed-evidence-of-durable-physician-upskilling-from-ai-exposure-as-of-mid-2026"]
related: ["Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate", "no-peer-reviewed-evidence-of-durable-physician-upskilling-from-ai-exposure-as-of-mid-2026", "divergence-human-ai-clinical-collaboration-enhance-or-degrade"]
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
reweave_edges: ["{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-14'}", "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|related|2026-04-14", "Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem|supports|2026-04-14", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-17'}", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-18'}", "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-19"]
scope: causal
@@ -39,3 +39,10 @@ Oettl et al. present the strongest available counter-argument to medical AI desk
**Source:** Heudel et al., Insights into Imaging, 2025 (PMC11780016)
Radiology residents using AI assistance showed resilience to large AI errors (>3 points), maintaining average errors around 2.75-2.88 even when AI was significantly wrong. This suggests physicians can detect and reject major AI errors during active use, which challenges the automation bias mechanism if physicians maintain critical evaluation capacity. However, this finding is limited to n=8 residents in a controlled setting and does not test whether this resilience persists under time pressure or after prolonged AI exposure.

## Challenging Evidence

**Source:** Heudel et al., Insights into Imaging, Jan 2025 (PMC11780016)
The Heudel radiology study is frequently cited (including by Oettl 2026) as evidence for AI-induced upskilling, creating apparent contradiction with deskilling evidence. However, close reading reveals it only shows performance improvement with AI present, not durable skill acquisition. The study's own title poses 'Upskilling or Deskilling?' as an open question, and the data cannot answer it without a post-training, no-AI assessment arm. This represents the core methodological limitation in the upskilling literature: conflating AI-assistance effects with learning effects.
@@ -1,53 +1,19 @@
---
agent: vida
confidence: experimental
created: 2026-04-11
description: Systematic taxonomy of AI-induced cognitive failures in medical practice, with never-skilling as a categorically different problem from deskilling because it lacks a baseline for comparison
domain: health
related:
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
  and dopaminergic reinforcement of AI reliance''}'
- 'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and
  dopaminergic reinforcement of AI reliance'
- clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
- never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling
- ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine
- never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment
- ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement
- economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate
related_claims:
- '[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]'
- '[[divergence-human-ai-clinical-collaboration-enhance-or-degrade]]'
reweave_edges:
- Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect|supports|2026-04-12
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
  and dopaminergic reinforcement of AI reliance|supports|2026-04-14''}'
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|supports|2026-04-14
- Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|supports|2026-04-14
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that
  is structurally worse than deskilling|supports|2026-04-14
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
  and dopaminergic reinforcement of AI reliance|related|2026-04-17''}'
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
  and dopaminergic reinforcement of AI reliance|supports|2026-04-18''}'
- 'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and
  dopaminergic reinforcement of AI reliance|related|2026-04-19'
scope: causal
source: Artificial Intelligence Review (Springer Nature), mixed-method systematic review
sourced_from:
- inbox/archive/health/2026-04-13-natali-2025-ai-deskilling-comprehensive-review.md
sourcer: Artificial Intelligence Review (Springer Nature)
supports:
- Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect
- '{''AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms'': ''prefrontal disengagement, hippocampal memory formation reduction,
  and dopaminergic reinforcement of AI reliance''}'
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
- Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that
  is structurally worse than deskilling
title: Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence
  never acquired) — requiring distinct mitigation strategies for each
type: claim
domain: health
description: Systematic taxonomy of AI-induced cognitive failures in medical practice, with never-skilling as a categorically different problem from deskilling because it lacks a baseline for comparison
confidence: experimental
source: Artificial Intelligence Review (Springer Nature), mixed-method systematic review
created: 2026-04-11
agent: vida
related: ["{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance'}", "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate", "never-skilling-distinct-from-deskilling-affects-trainees-not-experienced-physicians"]
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]", "[[divergence-human-ai-clinical-collaboration-enhance-or-degrade]]"]
reweave_edges: ["Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect|supports|2026-04-12", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-14'}", "AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|supports|2026-04-14", "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|supports|2026-04-14", "Never-skilling \u2014 the failure to acquire foundational clinical competencies because AI was present during training \u2014 poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|related|2026-04-17'}", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-18'}", "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|related|2026-04-19"]
scope: causal
sourced_from: ["inbox/archive/health/2026-04-13-natali-2025-ai-deskilling-comprehensive-review.md"]
sourcer: Artificial Intelligence Review (Springer Nature)
supports: ["Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance'}", "AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable", "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers", "Never-skilling \u2014 the failure to acquire foundational clinical competencies because AI was present during training \u2014 poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling"]
title: Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each
---
# Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each
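The `reweave_edges` and `supports` entries in the frontmatter above encode graph edges as pipe-delimited strings of the form `claim text|relation|YYYY-MM-DD`. A minimal parsing sketch of that format (the function name and dict keys are illustrative assumptions, not part of the pipeline):

```python
# Sketch: split a reweave edge of the form "claim text|relation|YYYY-MM-DD".
# rsplit from the right, so a '|' inside the claim text cannot shift the
# relation and date fields.
def parse_edge(edge: str) -> dict:
    claim, relation, date = edge.rsplit("|", 2)
    return {"claim": claim, "relation": relation, "date": date}

edge = parse_edge(
    "Never-skilling in clinical AI is structurally invisible because it lacks "
    "a pre-AI baseline for comparison, requiring prospective competency "
    "assessment before AI exposure to detect|supports|2026-04-12"
)
print(edge["relation"], edge["date"])  # → supports 2026-04-12
```

Splitting from the right is the safer choice here because the claim text is free prose and could in principle contain a pipe, while the last two fields are always a relation keyword and a date.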
@@ -87,3 +53,10 @@ Oettl et al. explicitly distinguish never-skilling (trainees never developing fo
**Source:** PMC11919318, Academic Pathology 2025
A commentary in Academic Pathology provides pathology-specific confirmation of the never-skilling mechanism, noting that AI automation of routine cervical cytology screening reduces trainee exposure to foundational cases, preventing development of the 'diagnostic acumen necessary for independent practice.' The paper explicitly distinguishes this from deskilling of experienced practitioners.

## Extending Evidence

**Source:** Heudel et al., Insights into Imaging, Jan 2025 (PMC11780016)
The Heudel study design inadvertently demonstrates why never-skilling is detection-resistant: with only 8 residents (4 first-year, 4 third-year) and no longitudinal follow-up, the study cannot distinguish between 'residents learning with AI assistance' versus 'residents becoming dependent on AI presence.' The lack of post-training assessment means any never-skilling effect in the first-year cohort would be invisible. This is the structural measurement problem: studies designed to show AI benefit lack the control arms needed to detect skill acquisition failure.
@@ -76,3 +76,10 @@ Heudel et al. (2025) radiology study (n=8 residents, 150 chest X-rays) shows 22%
**Source:** Heudel et al., Insights into Imaging 2025 (PMC11780016)
Heudel et al. (2025) radiology study (n=8 residents, 150 chest X-rays) shows 22% improvement in inter-rater agreement (ICC-1: 0.665→0.813) and significant error reduction (p<0.001) when AI is present. However, the study design has NO post-training assessment without AI, meaning it documents 'performance improvement with AI present' rather than 'durable upskilling.' This is the methodological gap at the core of the divergence: upskilling-thesis studies measure performance WITH AI, while deskilling-evidence studies (colonoscopy ADR 28.4%→22.4%, radiology false positives +12%) measure performance AFTER AI removal. The study does show residents can detect large AI errors (>3 points) while maintaining average errors around 2.75-2.88, suggesting some resilience to major AI failures, but this occurs only while AI remains present.

## Extending Evidence

**Source:** Heudel et al., Insights into Imaging, Jan 2025 (PMC11780016)
Heudel et al. (2025) radiology study (n=8 residents, 150 chest X-rays) shows 22% improvement in inter-rater agreement (ICC-1: 0.665→0.813) and significant error reduction (p<0.001) when AI is present. However, the study does NOT test post-training performance without AI—it only documents improved performance WHILE AI IS PRESENT. This is the methodological gap in the 'upskilling' literature: no evidence of durable skill retention after AI-assisted training ends. The study does show residents can reject major AI errors (>3 points), maintaining ~2.75-2.88 average error when AI makes large mistakes, suggesting some critical evaluation capacity persists during AI use.
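The 22% figure quoted above can be sanity-checked as a relative change, assuming it is computed as (post − pre) / pre on the reported ICC-1 values:

```python
# Check the reported ~22% improvement in inter-rater agreement,
# assuming a relative change on the ICC-1 values (0.665 -> 0.813).
pre_icc, post_icc = 0.665, 0.813
relative_gain = (post_icc - pre_icc) / pre_icc
print(f"{relative_gain:.1%}")  # → 22.3%
```

The result (~22.3%) matches the rounded 22% claim, confirming the figure is a relative rather than absolute-point improvement.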