diff --git a/domains/health/ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine.md b/domains/health/ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine.md index 06ee43160..97792323e 100644 --- a/domains/health/ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine.md +++ b/domains/health/ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine.md @@ -6,7 +6,7 @@ confidence: likely source: Natali et al., Artificial Intelligence Review 2025, mixed-method systematic review created: 2026-04-13 agent: vida -related: ["Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate", "no-peer-reviewed-evidence-of-durable-physician-upskilling-from-ai-exposure-as-of-mid-2026", "divergence-human-ai-clinical-collaboration-enhance-or-degrade"] +related: ["Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", 
"clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate", "no-peer-reviewed-evidence-of-durable-physician-upskilling-from-ai-exposure-as-of-mid-2026", "divergence-human-ai-clinical-collaboration-enhance-or-degrade", "ai-micro-learning-loop-creates-durable-upskilling-through-review-confirm-override-cycle"] related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"] reweave_edges: ["{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-14'}", "Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|related|2026-04-14", "Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem|supports|2026-04-14", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and 
dopaminergic reinforcement of AI reliance|supports|2026-04-17'}", "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-18'}", "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-19"] scope: causal @@ -46,3 +46,10 @@ Radiology residents using AI assistance showed resilience to large AI errors (>3 **Source:** Heudel et al., Insights into Imaging, Jan 2025 (PMC11780016) The Heudel radiology study is frequently cited (including by Oettl 2026) as evidence for AI-induced upskilling, creating apparent contradiction with deskilling evidence. However, close reading reveals it only shows performance improvement with AI present, not durable skill acquisition. The study's own title poses 'Upskilling or Deskilling?' as an open question, and the data cannot answer it without a post-training, no-AI assessment arm. This represents the core methodological limitation in the upskilling literature: conflating AI-assistance effects with learning effects. + + +## Extending Evidence + +**Source:** El Tarhouny & Farghaly, Frontiers in Medicine 2026 + +Deskilling affects the full medical education continuum with distinct risk profiles: medical students face never-skilling (never developing independent reasoning before AI becomes standard), residents face partial-skilling (developing incomplete skills then transitioning to AI environments), and practicing clinicians face sustained deskilling from years of AI reliance. The paper defines deskilling as 'the gradual erosion of independent clinical reasoning skills, together with crucial elements of clinical competence.' 
diff --git a/domains/health/clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling.md b/domains/health/clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling.md index c0b4531a4..67ab47b03 100644 --- a/domains/health/clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling.md +++ b/domains/health/clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling.md @@ -67,3 +67,10 @@ The Heudel study design inadvertently demonstrates why never-skilling is detecti **Source:** ARISE Network State of Clinical AI Report 2026 ARISE 2026 report documents zero current deskilling in practicing clinicians but 33% of younger providers rank deskilling as top-2 concern versus 11% of older providers, providing quantitative evidence for the temporal distribution of skill failure modes across career stages + + +## Extending Evidence + +**Source:** El Tarhouny & Farghaly, Frontiers in Medicine 2026 + +The continuum framing shows that never-skilling affects trainees who never develop baseline competency before AI adoption, while deskilling affects experienced physicians who lose previously acquired skills. The paper traces this across medical students → residents → practicing clinicians, with each population facing a different risk profile based on its pre-AI skill development stage. 
diff --git a/domains/health/moral-deskilling-from-ai-erodes-ethical-judgment-through-repeated-cognitive-offloading.md b/domains/health/moral-deskilling-from-ai-erodes-ethical-judgment-through-repeated-cognitive-offloading.md new file mode 100644 index 000000000..abfc376f2 --- /dev/null +++ b/domains/health/moral-deskilling-from-ai-erodes-ethical-judgment-through-repeated-cognitive-offloading.md @@ -0,0 +1,19 @@ +--- +type: claim +domain: health +description: AI reliance degrades physicians' ethical sensitivity and moral reasoning capacity through neural adaptation, not addressed by standard human-in-the-loop safeguards +confidence: experimental +source: "El Tarhouny & Farghaly, Frontiers in Medicine 2026" +created: 2026-04-25 +title: Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy +agent: vida +sourced_from: health/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md +scope: causal +sourcer: El Tarhouny S, Farghaly A +supports: ["ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement"] +related: ["human-in-the-loop-clinical-ai-degrades-to-worse-than-ai-alone", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "ai-micro-learning-loop-creates-durable-upskilling-through-review-confirm-override-cycle"] +--- + +# Moral deskilling from AI erodes ethical judgment through repeated cognitive offloading creating a safety risk distinct from diagnostic accuracy + +The paper introduces 'moral deskilling' as a distinct category of AI-induced harm separate from diagnostic deskilling. 
While diagnostic deskilling affects clinical accuracy (forming differential diagnoses, physical examination skills), moral deskilling affects ethical judgment capacity. The mechanism is neural adaptation from repeated cognitive offloading: 'when individuals repeatedly offload cognitive tasks to external support, neural adaptation occurs in ways that reduce independent learning and reasoning capacity.' This creates a safety failure mode in which physicians still review AI outputs but lack the full ethical reasoning capacity needed to recognize when AI suggestions conflict with patients' best interests or values. Standard 'physician remains in the loop' safeguards assume the physician retains full ethical judgment capacity, but moral deskilling undermines this assumption. The paper argues this affects the full medical education continuum: medical students may never develop ethical sensitivity before AI becomes standard (never-skilling), residents develop partial capacity and then transition to AI environments, and practicing clinicians experience sustained erosion over years. The risk is qualitatively different from missing a diagnosis: it is a systematic failure of ethical judgment that may be invisible and may affect patient care across all interactions. 
diff --git a/inbox/queue/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md b/inbox/archive/health/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md similarity index 98% rename from inbox/queue/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md rename to inbox/archive/health/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md index fb36daa60..a96918fe8 100644 --- a/inbox/queue/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md +++ b/inbox/archive/health/2026-04-25-frontiers-2026-deskilling-dilemma-brain-over-automation.md @@ -7,9 +7,12 @@ date: 2026-01-01 domain: health secondary_domains: [ai-alignment] format: review -status: unprocessed +status: processed +processed_by: vida +processed_date: 2026-04-25 priority: medium tags: [clinical-ai, deskilling, moral-deskilling, diagnostic-deskilling, automation, medical-education, clinical-reasoning] +extraction_model: "anthropic/claude-sonnet-4.5" --- ## Content