vida: extract claims from 2026-04-22-oettl-2026-ai-deskilling-to-upskilling-orthopedics

- Source: inbox/queue/2026-04-22-oettl-2026-ai-deskilling-to-upskilling-orthopedics.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
Teleo Agents 2026-04-22 07:25:10 +00:00
parent 26fba3149a
commit 3929b7846c
4 changed files with 62 additions and 45 deletions


@@ -1,31 +1,26 @@
---
type: claim
domain: health
description: Proposed neurological mechanism explains why clinical deskilling may be harder to reverse than simple habit formation suggests
confidence: speculative
source: Frontiers in Medicine 2026, theoretical mechanism based on cognitive offloading research
created: 2026-04-13
agent: vida
related: ["agent-generated code creates cognitive debt that compounds when developers cannot understand what was produced on their behalf", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "dopaminergic-reinforcement-of-ai-reliance-predicts-behavioral-entrenchment-beyond-simple-habit-formation", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling"]
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
reweave_edges: ["AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|supports|2026-04-14", "Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem|supports|2026-04-14", "Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14"]
scope: causal
sourcer: Frontiers in Medicine
supports: ["AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable", "Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem", "Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling"]
title: "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance"
---
# AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance
The article proposes a three-part neurological mechanism for AI-induced deskilling: (1) Prefrontal cortex disengagement - when AI handles complex reasoning, reduced cognitive load leads to less prefrontal engagement and reduced neural pathway maintenance for offloaded skills. (2) Hippocampal disengagement from memory formation - procedural and clinical skills require active memory encoding during practice; when AI handles the problem, the hippocampus is less engaged in forming memory representations that underlie skilled performance. (3) Dopaminergic reinforcement of AI reliance - AI assistance produces reliable positive outcomes that create dopaminergic reward signals, reinforcing the behavior pattern of relying on AI and making it habitual. The dopaminergic pathway that would reinforce independent skill practice instead reinforces AI-assisted practice. Over repeated AI-assisted practice, cognitive processing shifts from flexible analytical mode (prefrontal, hippocampal) to habit-based, subcortical responses (basal ganglia) that are efficient but rigid and don't generalize well to novel situations. The mechanism predicts partial irreversibility because neural pathways were never adequately strengthened to begin with (supporting never-skilling concerns) or have been chronically underused to the point where reactivation requires sustained practice, not just removal of AI. The mechanism also explains cross-specialty universality - the cognitive architecture interacts with AI assistance the same way regardless of domain. Authors note this is theoretical reasoning by analogy from cognitive offloading research, not empirically demonstrated via neuroimaging in clinical contexts.
## Challenging Evidence
**Source:** Oettl et al. 2026, Journal of Experimental Orthopaedics
Oettl et al. 2026 propose that AI creates 'micro-learning at point of care' through review-confirm-override cycles, arguing this reinforces rather than erodes diagnostic reasoning. However, they cite no prospective studies with post-AI-training, no-AI assessment arms. All evidence cited (Heudel et al., COVID-19 detection studies) measures performance WITH AI present, not durable skill retention. The calculator analogy is their strongest argument but lacks medical-specific validation.
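The reweave_edges entries in the frontmatter above pack each graph edge into a single pipe-delimited string. A minimal sketch of decoding that serialization — the `target|relation|date` field order is read off the visible data, and the type and function names are illustrative, not the pipeline's actual API:

```python
from typing import NamedTuple


class ReweaveEdge(NamedTuple):
    """One edge as serialized in the claim frontmatter (assumed field names)."""
    target: str    # title of the claim the edge points at
    relation: str  # e.g. "supports"
    date: str      # ISO date the edge was woven


def parse_reweave_edge(raw: str) -> ReweaveEdge:
    # Split from the right: the trailing relation and date fields never
    # contain "|", while the free-text target title conceivably could.
    target, relation, date = raw.rsplit("|", 2)
    return ReweaveEdge(target, relation, date)


edge = parse_reweave_edge(
    "Does human oversight improve or degrade AI clinical decision-making?"
    "|supports|2026-04-17"
)
```

Splitting from the right with `rsplit` is the one design point worth noting: it keeps the parse robust even if a claim title itself ever contains a pipe character.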


@@ -73,3 +73,10 @@ The complete absence of peer-reviewed evidence for durable up-skilling after 5+
**Source:** Oettl et al. 2026
Oettl et al. 2026 explicitly distinguish never-skilling from deskilling, noting that 'deskilling threat is real if trainees never develop foundational competencies' and that 'educators may lack expertise supervising AI use.' This confirms that never-skilling is recognized as a distinct mechanism even by upskilling proponents, affecting trainees rather than experienced physicians.
## Extending Evidence
**Source:** Oettl et al. 2026
Oettl et al. explicitly distinguish never-skilling (trainees never developing foundational competencies) from deskilling (experienced physicians losing existing skills), noting that 'educators may lack expertise supervising AI use' which compounds the never-skilling risk. This adds population-specific mechanism detail to the three-mode framework.


@@ -1,24 +1,14 @@
---
type: claim
domain: health
description: Stanford-Harvard study shows AI alone 90 percent vs doctors plus AI 68 percent vs doctors alone 65 percent and a colonoscopy study found experienced gastroenterologists measurably de-skilled after just three months with AI assistance
confidence: likely
source: DJ Patil interviewing Bob Wachter, Commonwealth Club, February 9 2026; Stanford/Harvard diagnostic accuracy study; European colonoscopy AI de-skilling study
created: 2026-02-18
related: ["economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate", "divergence-human-ai-clinical-collaboration-enhance-or-degrade", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials", "no-peer-reviewed-evidence-of-durable-physician-upskilling-from-ai-exposure-as-of-mid-2026", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling"]
related_claims: ["ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "llms-amplify-human-cognitive-biases-through-sequential-processing-and-lack-contextual-resistance"]
reweave_edges: ["NCT07328815 - Mitigating Automation Bias in Physician-LLM Diagnostic Reasoning|supports|2026-04-07", "Does human oversight improve or degrade AI clinical decision-making?|supports|2026-04-17"]
supports: ["NCT07328815 - Mitigating Automation Bias in Physician-LLM Diagnostic Reasoning", "Does human oversight improve or degrade AI clinical decision-making?"]
---
# human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs
@@ -86,3 +76,9 @@ Relevant Notes:
Topics:
- health and wellness
## Challenging Evidence
**Source:** Oettl et al. 2026
Oettl et al. argue that human-AI teams 'outperform either humans or AI systems working independently' and that AI-assisted mammography 'reduces both false positives and missed diagnoses.' However, these are concurrent performance measures, not longitudinal skill retention studies. The divergence remains unresolved: does the review-override loop create learning or automation bias?
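Every file touched by this commit shares the same on-disk shape: a YAML frontmatter block fenced by `---` lines, then a `#` title heading and a claim body. A stdlib-only sketch of splitting one apart (the function name is illustrative; this toy parser handles only flat `key: value` lines, whereas the real frontmatter above has list-valued fields and would need a YAML library):

```python
def split_claim(text: str) -> tuple[dict, str]:
    """Split a claim file into (frontmatter dict, markdown body)."""
    # Frontmatter sits between the first two "---" delimiters;
    # maxsplit=2 leaves any later "---" in the body untouched.
    _, fm, body = text.split("---", 2)
    meta = {}
    for line in fm.strip().splitlines():
        # Flat "key: value" lines only -- no nested YAML.
        key, _, value = line.partition(": ")
        meta[key] = value
    return meta, body.strip()


meta, body = split_claim(
    "---\n"
    "type: claim\n"
    "domain: health\n"
    "confidence: likely\n"
    "---\n"
    "# human-in-the-loop clinical AI degrades to worse-than-AI-alone\n"
)
```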


@@ -0,0 +1,19 @@
---
type: claim
domain: health
description: The two skill degradation mechanisms target different populations and require different protective interventions because one prevents initial competency development while the other erodes existing skills
confidence: experimental
source: Oettl et al. 2026, explicit distinction between never-skilling and deskilling
created: 2026-04-22
title: Never-skilling affects trainees while deskilling affects experienced physicians creating distinct population risks with different intervention requirements
agent: vida
sourced_from: health/2026-04-22-oettl-2026-ai-deskilling-to-upskilling-orthopedics.md
scope: structural
sourcer: Oettl et al., Journal of Experimental Orthopaedics
supports: ["cytology-lab-consolidation-creates-never-skilling-pathway-through-80-percent-training-volume-destruction"]
related: ["clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "cytology-lab-consolidation-creates-never-skilling-pathway-through-80-percent-training-volume-destruction", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine"]
---
# Never-skilling affects trainees while deskilling affects experienced physicians creating distinct population risks with different intervention requirements
Oettl et al. explicitly distinguish 'never-skilling' from 'deskilling' as separate mechanisms affecting different populations. Never-skilling occurs when trainees 'never develop foundational competencies' because AI is present from the start of their education. Deskilling occurs when experienced physicians lose existing skills through AI reliance. This distinction is critical because: (1) never-skilling is detection-resistant (no baseline to compare against), (2) the two mechanisms require different interventions (curriculum design for never-skilling, practice requirements for deskilling), and (3) they may have different timescales (never-skilling is immediate, deskilling may take years). The paper acknowledges that 'educators may lack expertise supervising AI use,' which compounds the never-skilling risk. This framework explains why the cytology lab consolidation evidence (80% training volume destruction) is particularly concerning—it creates a never-skilling pathway that is structurally invisible until the first generation of AI-trained pathologists enters independent practice.