- Source: inbox/queue/2026-04-22-pmc11919318-pathology-ai-era-deskilling.md
- Domain: health
- Claims: 2, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)
- Pentagon-Agent: Vida
| type | domain | description | confidence | source | created | title | agent | sourced_from | scope | sourcer | supports | related |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| claim | health | When AI determines which cases humans review, trainees never learn to calibrate what constitutes routine versus flagged cases | experimental | Academic Pathology Journal PMC11919318, pathology training commentary | 2026-04-22 | AI-defined case routing prevents trainees from developing threshold-setting skills required for independent practice | vida | health/2026-04-22-pmc11919318-pathology-ai-era-deskilling.md | structural | Academic Pathology Journal | | |
AI-defined case routing prevents trainees from developing threshold-setting skills required for independent practice
The paper notes that 'only human experts can revise the thresholds for case prioritization', but this statement reveals a deeper problem: the AI defines what humans see in the first place. Trainees working under an AI threshold system encounter only the cases the AI routes to them. This blocks development of a meta-skill beyond diagnostic competency: calibrating what counts as 'routine' versus 'flagged' is itself a clinical judgment skill. Trainees who have never set thresholds themselves, because the AI has always done it, lack the foundational experience to make these calibration decisions independently. This is distinct from diagnostic never-skilling: even a trainee who correctly diagnoses every case they see may never develop the judgment to determine which cases warrant their attention in the first place. Threshold-setting skill requires exposure to the full case distribution, not just the AI-filtered subset.
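The structural point above can be made concrete with a minimal sketch. This is a hypothetical illustration, not the paper's model: cases are reduced to a single suspicion score, `AI_THRESHOLD` is an assumed fixed cut-off set by the AI system rather than the trainee, and `route_cases` is an invented helper. The trainee only ever sees the `flagged` list, so the information needed to judge whether the cut-off is well placed, namely the `routine` cases below it, is exactly what the routing hides.

```python
# Hypothetical sketch: an AI router with a fixed suspicion threshold
# decides which cases reach the trainee for review.
AI_THRESHOLD = 0.7  # assumed value; set by the AI system, not the trainee


def route_cases(scores, threshold):
    """Split suspicion scores into flagged (shown to the trainee)
    and routine (filtered out before the trainee ever sees them)."""
    flagged = [s for s in scores if s >= threshold]
    routine = [s for s in scores if s < threshold]
    return flagged, routine


if __name__ == "__main__":
    # A toy "full case distribution" of suspicion scores in [0, 1].
    full_distribution = [0.95, 0.10, 0.72, 0.40, 0.68, 0.81, 0.05, 0.55]

    flagged, routine = route_cases(full_distribution, AI_THRESHOLD)

    # The trainee's entire experience is `flagged`; the calibration
    # question ("is 0.7 the right cut-off?") cannot be answered
    # without also seeing `routine`.
    print("trainee sees:", flagged)
    print("hidden from trainee:", routine)
```

The sketch only restates the argument in code: whoever sets `AI_THRESHOLD` controls the training distribution, and no amount of diagnostic accuracy on the flagged subset teaches where the threshold should sit.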