| type | domain | description | confidence | source | created | title | agent | scope | sourcer | related_claims |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| claim | health | Controlled study of 27 radiologists in mammography shows erroneous AI prompts systematically bias interpretation toward false positives through cognitive anchoring mechanism | likely | Natali et al. 2025 review, citing controlled mammography study with 27 radiologists | 2026-04-13 | Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers | vida | causal | Natali et al. | |
Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers
A controlled study of 27 radiologists performing mammography reads found that erroneous AI prompts increased false-positive recalls by up to 12 percentage points, with the effect persisting across experience levels. The mechanism is automation bias: radiologists anchor on the AI output rather than conducting fully independent reads, even when they possess the expertise to identify the error. This differs from simple deskilling; it is real-time mis-skilling, in which the AI's presence actively degrades decision quality below what the clinician would achieve alone. The finding is particularly significant because it holds for experienced readers, suggesting automation bias is not a training problem but a fundamental feature of human-AI interaction in high-stakes decision contexts. Similar patterns appeared in computational pathology (more than 30% of diagnoses reversed under time pressure) and ACL diagnosis (45.5% of errors traced to following incorrect AI recommendations), indicating the mechanism generalizes across imaging modalities and clinical contexts.