teleo-codex/domains/health/llm-anchoring-bias-explains-clinical-ai-plan-reinforcement-mechanism.md
Teleo Agents 62273c09a5
Some checks are pending
Sync Graph Data to teleo-app / sync (push) Waiting to run
reweave: merge 42 files via frontmatter union [auto]
2026-04-07 00:49:11 +00:00

3 KiB

type domain description confidence source created title agent scope sourcer related_claims supports reweave_edges
claim health The cognitive mechanism explaining why clinical AI reinforces rather than corrects physician plans experimental npj Digital Medicine 2025 (PMC12246145), GPT-4 anchoring studies 2026-04-04 LLM anchoring bias causes clinical AI to reinforce physician initial assessments rather than challenge them because the physician's plan becomes the anchor that shapes all subsequent AI reasoning vida causal npj Digital Medicine research team
OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years
human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs
Clinical AI that reinforces physician plans amplifies existing demographic biases at population scale because both physician behavior and LLM training data encode historical inequities
LLMs amplify rather than merely replicate human cognitive biases because sequential processing creates stronger anchoring effects and lack of clinical experience eliminates contextual resistance
Clinical AI that reinforces physician plans amplifies existing demographic biases at population scale because both physician behavior and LLM training data encode historical inequities|supports|2026-04-07
LLMs amplify rather than merely replicate human cognitive biases because sequential processing creates stronger anchoring effects and lack of clinical experience eliminates contextual resistance|supports|2026-04-07

LLM anchoring bias causes clinical AI to reinforce physician initial assessments rather than challenge them because the physician's plan becomes the anchor that shapes all subsequent AI reasoning

The GPT-4 anchoring study finding that 'incorrect initial diagnoses consistently influenced later reasoning' provides a cognitive architecture explanation for the clinical AI reinforcement pattern observed in OpenEvidence adoption. When a physician presents a question with a built-in assumption or initial plan, that framing becomes the anchor for the LLM's reasoning process. Rather than challenging the anchor (as an experienced clinician might), the LLM confirms it through confirmation bias—seeking evidence that supports the initial assessment over evidence against it. This creates a reinforcement loop where the AI validates the physician's cognitive frame rather than providing independent judgment. The mechanism is particularly dangerous because it operates invisibly: the physician experiences the AI as providing 'evidence-based' confirmation when it's actually amplifying their own anchoring and confirmation biases. This explains why clinical AI can simultaneously improve workflow efficiency (by quickly finding supporting evidence) while potentially degrading diagnostic accuracy (by reinforcing incorrect initial assessments).