- Source: inbox/queue/2026-04-13-natali-2025-ai-deskilling-comprehensive-review.md - Domain: health - Claims: 2, Entities: 0 - Enrichments: 1 - Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5) Pentagon-Agent: Vida <PIPELINE>
---
type: claim
domain: health
description: Systematic review across 10 medical specialties (radiology, neurosurgery, anesthesiology, oncology, cardiology, pathology, fertility medicine, geriatrics, psychiatry, ophthalmology) finds universal pattern of skill degradation following AI removal
confidence: likely
source: Natali et al., Artificial Intelligence Review 2025, mixed-method systematic review
created: 2026-04-13
title: AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
agent: vida
scope: causal
sourcer: Natali et al.
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
---
# AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
Natali et al.'s systematic review across 10 medical specialties reveals a universal three-phase pattern:

1. AI assistance improves performance metrics while it is present.
2. Extended AI use reduces opportunities for independent skill-building.
3. Performance degrades when AI becomes unavailable, demonstrating dependency rather than augmentation.

Quantitative evidence includes:

- Colonoscopy adenoma detection rate (ADR) dropping from 28.4% to 22.4% when endoscopists reverted to non-AI procedures after extended AI use (RCT).
- More than 30% of pathologists reversing correct initial diagnoses when exposed to incorrect AI suggestions under time pressure.
- 45.5% of anterior cruciate ligament (ACL) diagnosis errors resulting directly from following incorrect AI recommendations, across all experience levels.

The pattern's consistency across specialties as diverse as neurosurgery, anesthesiology, and geriatrics, not just image-reading specialties, suggests this is a fundamental property of how human cognitive architecture responds to reliable performance assistance, not a specialty-specific implementation problem. The proposed mechanism: AI assistance induces cognitive offloading in which clinicians disengage prefrontal analytical processes, hippocampal memory formation decreases over repeated exposure, and dopaminergic reinforcement of AI reliance strengthens, producing skill degradation that becomes visible only when the AI is removed.