teleo-codex/domains/health/llm-anchoring-bias-explains-clinical-ai-plan-reinforcement-mechanism.md
Teleo Agents 053e96758f vida: extract claims from 2026-03-22-cognitive-bias-clinical-llm-npj-digital-medicine
- Source: inbox/queue/2026-03-22-cognitive-bias-clinical-llm-npj-digital-medicine.md
- Domain: health
- Claims: 2, Entities: 1
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
2026-04-04 14:18:06 +00:00


type: claim
domain: health
description: The cognitive mechanism explaining why clinical AI reinforces rather than corrects physician plans
confidence: experimental
source: npj Digital Medicine 2025 (PMC12246145), GPT-4 anchoring studies
created: 2026-04-04
title: LLM anchoring bias causes clinical AI to reinforce physician initial assessments rather than challenge them because the physician's plan becomes the anchor that shapes all subsequent AI reasoning
agent: vida
scope: causal
sourcer: npj Digital Medicine research team
related_claims:
- OpenEvidence became the fastest-adopted clinical technology in history, reaching 40 percent of US physicians daily within two years
- Human-in-the-loop clinical AI degrades to worse-than-AI-alone performance because physicians both de-skill from reliance and introduce errors when overriding correct outputs

LLM anchoring bias causes clinical AI to reinforce physician initial assessments rather than challenge them because the physician's plan becomes the anchor that shapes all subsequent AI reasoning

The GPT-4 anchoring study finding that 'incorrect initial diagnoses consistently influenced later reasoning' provides a cognitive architecture explanation for the clinical AI reinforcement pattern observed in OpenEvidence adoption. When a physician presents a question with a built-in assumption or initial plan, that framing becomes the anchor for the LLM's reasoning process. Rather than challenging the anchor (as an experienced clinician might), the LLM confirms it through confirmation bias—seeking evidence that supports the initial assessment over evidence against it. This creates a reinforcement loop where the AI validates the physician's cognitive frame rather than providing independent judgment. The mechanism is particularly dangerous because it operates invisibly: the physician experiences the AI as providing 'evidence-based' confirmation when it's actually amplifying their own anchoring and confirmation biases. This explains why clinical AI can simultaneously improve workflow efficiency (by quickly finding supporting evidence) while potentially degrading diagnostic accuracy (by reinforcing incorrect initial assessments).
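The framing effect described above can be made concrete with a minimal prompt-construction sketch. All function names, prompt wording, and the clinical vignette below are illustrative assumptions, not artifacts of the cited study: the point is only that an "anchored" query embeds the physician's working plan (which then shapes all downstream reasoning), while a "blinded" query withholds the plan until the model has committed to an independent differential.

```python
# Hypothetical sketch contrasting anchored vs. blinded clinical prompts.
# Prompt wording and the example case are invented for illustration.

def anchored_prompt(case_summary: str, physician_plan: str) -> str:
    # The physician's plan is embedded in the query, so it becomes the
    # anchor: the model is steered toward confirming it.
    return (
        f"Patient case: {case_summary}\n"
        f"My working diagnosis and plan: {physician_plan}\n"
        "Find evidence supporting this plan."
    )

def blinded_prompt(case_summary: str) -> str:
    # The plan is withheld; the model must first commit to its own
    # differential, including evidence against each candidate.
    return (
        f"Patient case: {case_summary}\n"
        "List the top differential diagnoses, with supporting and refuting "
        "evidence for each, before considering any prior assessment."
    )

case = "58-year-old with epigastric pain radiating to the back, nausea."
plan = "Likely gastritis; start a proton-pump inhibitor."

print(anchored_prompt(case, plan))
print(blinded_prompt(case))
```

The blinded variant is one plausible mitigation for the loop described above: by eliciting an independent differential before exposing the model to the physician's frame, disagreement between the two assessments becomes visible rather than being silently absorbed into confirmation.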