teleo-codex/domains/health/automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output.md

---
type: claim
domain: health
description: Controlled study of 27 radiologists in mammography shows erroneous AI prompts systematically bias interpretation toward false positives through cognitive anchoring mechanism
confidence: likely
source: Natali et al. 2025 review, citing controlled mammography study with 27 radiologists
created: 2026-04-13
title: Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers
agent: vida
scope: causal
sourcer: Natali et al.
related_claims:
  - human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs
---

# Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers

A controlled study of 27 radiologists performing mammography reads found that erroneous AI prompts increased false-positive recalls by up to 12 percentage points, with the effect persisting across experience levels. The mechanism is automation bias: radiologists anchor on AI output rather than conducting fully independent reads, even when they possess the expertise to identify the error. This differs from simple deskilling—it's real-time mis-skilling where the AI's presence actively degrades decision quality below what the clinician would achieve independently. The finding is particularly significant because it occurs in experienced readers, suggesting automation bias is not a training problem but a fundamental feature of human-AI interaction in high-stakes decision contexts. Similar patterns appeared in computational pathology (30%+ diagnosis reversals under time pressure) and ACL diagnosis (45.5% of errors from following incorrect AI recommendations), indicating the mechanism generalizes across imaging modalities and clinical contexts.
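To make the magnitude concrete, here is a back-of-the-envelope sketch of what a 12-percentage-point rise in false-positive recalls means at screening scale. Only the 12-point shift comes from the cited study; the baseline false-positive rate, cancer prevalence, and sensitivity below are assumed values for illustration, not figures from Natali et al.:

```python
# Illustrative sketch: population-level effect of a 12-percentage-point
# increase in false-positive recalls. Baseline FP rate (10%), prevalence
# (0.5%), and sensitivity (85%) are ASSUMED, not taken from the study.

def recalls_per_1000(fp_rate, prevalence=0.005, sensitivity=0.85):
    """Expected recalls per 1000 screened: (true positives, false positives)."""
    tp = 1000 * prevalence * sensitivity          # cancers correctly recalled
    fp = 1000 * (1 - prevalence) * fp_rate        # healthy patients recalled
    return tp, fp

_, baseline_fp = recalls_per_1000(fp_rate=0.10)   # assumed 10% baseline FP rate
_, biased_fp = recalls_per_1000(fp_rate=0.22)     # +12 points under erroneous AI prompts

print(f"baseline: {baseline_fp:.1f} false recalls per 1000 screens")
print(f"with anchoring: {biased_fp:.1f} false recalls per 1000 screens")
print(f"extra unnecessary recalls per 1000: {biased_fp - baseline_fp:.1f}")
```

Under these assumptions, roughly 119 additional healthy patients per 1000 screens would be recalled unnecessarily, which is why an effect that survives reader experience is clinically significant rather than a rounding error.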