- Source: inbox/queue/2026-04-25-arise-state-of-clinical-ai-2026-report.md
- Domain: health
- Claims: 2, Entities: 1
- Enrichments: 4
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)
- Pentagon-Agent: Vida
| type | domain | description | confidence | source | created | title | agent | scope | sourcer | related_claims | related |
|---|---|---|---|---|---|---|---|---|---|---|---|
| claim | health | Controlled study of 27 radiologists in mammography shows erroneous AI prompts systematically bias interpretation toward false positives through cognitive anchoring mechanism | likely | Natali et al. 2025 review, citing controlled mammography study with 27 radiologists | 2026-04-13 | Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers | vida | causal | Natali et al. | | |
|
# Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers
A controlled study of 27 radiologists performing mammography reads found that erroneous AI prompts increased false-positive recalls by up to 12 percentage points, with the effect persisting across experience levels. The mechanism is automation bias: radiologists anchor on the AI output rather than conducting fully independent reads, even when they possess the expertise to identify the error. This differs from simple deskilling; it is real-time mis-skilling, in which the AI's presence actively degrades decision quality below what the clinician would achieve independently. The finding is particularly significant because it occurs in experienced readers, suggesting that automation bias is not a training problem but a fundamental feature of human-AI interaction in high-stakes decision contexts. Similar patterns appeared in computational pathology (over 30% of diagnoses reversed under time pressure) and ACL diagnosis (45.5% of errors attributable to following incorrect AI recommendations), indicating the mechanism generalizes across imaging modalities and clinical contexts.
## Supporting Evidence
Source: Heudel PE et al. 2026
Radiology evidence from the Heudel review: erroneous AI prompts increased false-positive recalls by up to 12% even among experienced radiologists, demonstrating that the anchoring mechanism operates in expert practitioners, not just novices, and across experience levels.
## Challenging Evidence
Source: Oettl et al., Journal of Experimental Orthopaedics 2026
Oettl et al. acknowledge automation bias exists but argue that requiring clinicians to 'review, confirm or override' AI recommendations creates a learning loop that mitigates bias. However, they provide no evidence that the review process prevents deference—only that performance improves when AI is present.
## Supporting Evidence
Source: ARISE Network State of Clinical AI Report 2026
The ARISE 2026 synthesis documents 'risks of over-reliance, with clinicians following incorrect model recommendations even when errors were detectable' across multiple 2025 studies, confirming that automation bias persists despite error visibility.