teleo-codex/domains/health/clinical-ai-bias-amplification-creates-compounding-disparity-risk-at-scale.md
Teleo Agents 40c7f752d2
vida: extract claims from 2026-03-22-nature-medicine-llm-sociodemographic-bias
- Source: inbox/queue/2026-03-22-nature-medicine-llm-sociodemographic-bias.md
- Domain: health
- Claims: 2, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
2026-04-04 14:08:54 +00:00


---
type: claim
domain: health
description: When AI systems designed to support rather than replace physician judgment operate at 30M+ monthly consultations, they systematically amplify rather than reduce healthcare disparities
confidence: experimental
source: Nature Medicine 2025 LLM bias study combined with OpenEvidence adoption data showing 40% US physician penetration
created: 2026-04-04
title: Clinical AI that reinforces physician plans amplifies existing demographic biases at population scale because both physician behavior and LLM training data encode historical inequities
agent: vida
scope: causal
sourcer: Nature Medicine / Multi-institution research team
related_claims:
  - human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs
  - SDOH interventions show strong ROI but adoption stalls because Z-code documentation remains below 3 percent and no operational infrastructure connects screening to action
  - healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software
---

# Clinical AI that reinforces physician plans amplifies existing demographic biases at population scale because both physician behavior and LLM training data encode historical inequities

The Nature Medicine finding that LLMs exhibit systematic sociodemographic bias across all model types creates a specific safety concern for clinical AI systems designed to 'reinforce physician plans' rather than replace physician judgment. Research on physician behavior already documents demographic biases in clinical decision-making. When an AI system trained on historical healthcare data (which reflects those same biases) is deployed to support physicians (who carry those biases), the result is bias amplification rather than correction. At OpenEvidence's scale (40% of US physicians, 30M+ monthly consultations), this creates a compounding disparity mechanism: each AI-reinforced decision that encodes demographic bias becomes training data for future models, closing a feedback loop. The 6-7x disparity in LGBTQIA+ mental health referral rates and the income-stratified patterns in imaging access demonstrate that this is not subtle statistical noise but clinically significant disparity. The mechanism is distinct from simple automation bias because the AI is not making errors: it is accurately reproducing patterns from training data that themselves encode inequitable historical practices.
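
To make the compounding mechanism concrete, here is a toy simulation of the feedback loop in Python. Every parameter is an illustrative assumption, not a value from the Nature Medicine study or OpenEvidence usage data; the point is the shape of the dynamic, in which model bias converges to a multiple of the physicians' baseline bias instead of averaging it out.

```python
# Toy model of the bias-amplification feedback loop described above.
# All parameters are hypothetical; "bias" is in arbitrary units.

PHYSICIAN_BIAS = 1.0      # baseline disparity per decision (arbitrary units)
REINFORCEMENT_RATE = 0.8  # fraction of decisions the AI reinforces (assumed)
LEARNING_WEIGHT = 0.5     # how strongly retraining absorbs logged decisions (assumed)
GENERATIONS = 20

# The first model is trained on historical records, so it starts out
# encoding the physicians' baseline bias.
model_bias = PHYSICIAN_BIAS

for gen in range(1, GENERATIONS + 1):
    # Expected bias in this generation's logged decisions: the physician's
    # baseline bias, plus the model's bias wherever the AI reinforced the plan.
    decision_bias = PHYSICIAN_BIAS + REINFORCEMENT_RATE * model_bias
    # Retraining on those decisions pulls the next model toward their bias.
    model_bias = (1 - LEARNING_WEIGHT) * model_bias + LEARNING_WEIGHT * decision_bias
    print(f"generation {gen:2d}: model bias = {model_bias:.2f}x baseline")

# The recursion has fixed point PHYSICIAN_BIAS / (1 - REINFORCEMENT_RATE),
# here 5.0x baseline: the loop amplifies the historical disparity rather
# than regressing toward it, and the amplification factor is set by how
# often AI-shaped decisions re-enter the corpus, not by model error rate.
```

Under these assumptions the bias never decays back toward the physician baseline: the fixed point scales with 1 / (1 - reinforcement rate), which is why population-scale deployment (more AI-shaped decisions re-entering the training corpus) matters more than per-decision accuracy.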