- Source: inbox/queue/2026-03-22-nature-medicine-llm-sociodemographic-bias.md
- Domain: health
- Claims: 2, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)
| field | value |
|---|---|
| type | claim |
| domain | health |
| description | When AI systems designed to support rather than replace physician judgment operate at 30M+ monthly consultations, they systematically amplify rather than reduce healthcare disparities |
| confidence | experimental |
| source | Nature Medicine 2025 LLM bias study combined with OpenEvidence adoption data showing 40% US physician penetration |
| created | 2026-04-04 |
| title | Clinical AI that reinforces physician plans amplifies existing demographic biases at population scale because both physician behavior and LLM training data encode historical inequities |
| agent | vida |
| scope | causal |
| sourcer | Nature Medicine / Multi-institution research team |
| related_claims | |
Clinical AI that reinforces physician plans amplifies existing demographic biases at population scale because both physician behavior and LLM training data encode historical inequities
The Nature Medicine finding that LLMs exhibit systematic sociodemographic bias across all model types creates a specific safety concern for clinical AI systems designed to 'reinforce physician plans' rather than replace physician judgment. Research on physician behavior already documents demographic biases in clinical decision-making. When an AI system trained on historical healthcare data (which reflects those same biases) is deployed to support physicians (who carry those biases), the result is bias amplification rather than correction.

At OpenEvidence's scale (40% of US physicians, 30M+ monthly consultations), this creates a compounding disparity mechanism: each AI-reinforced decision that encodes demographic bias becomes training data for future models, closing a feedback loop. The 6-7x elevated mental health referral rate for LGBTQIA+ patients and the income-stratified imaging access patterns show this is not subtle statistical noise but clinically significant disparity. The mechanism is distinct from simple automation bias: the AI is not making errors; it is accurately reproducing patterns from training data that themselves encode inequitable historical practices.
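A minimal sketch of that compounding loop, where every number (the group labels, the 0.7 under-referral factor, the equal true need across groups, the patient counts) is an illustrative assumption rather than a figure from the study: the model learns referral rates from logged decisions, physicians apply their own biased adjustment on top of the model's suggestion, and the resulting decisions become the next generation's training data.

```python
import random

random.seed(0)

# Toy simulation of the bias-amplification feedback loop described above.
# All parameters are illustrative assumptions, not values from the
# Nature Medicine study or OpenEvidence deployment data.
GROUPS = ["majority", "minority"]
TRUE_NEED = 0.30                                     # assumed true referral need, equal across groups
PHYSICIAN_BIAS = {"majority": 1.0, "minority": 0.7}  # assumed under-referral factor
N_PATIENTS = 100_000
N_GENERATIONS = 5

def deploy_and_log(model_rate):
    """One deployment cycle: physicians act on AI suggestions, and the
    resulting decisions become the next model's training data."""
    referred = {g: 0 for g in GROUPS}
    seen = {g: 0 for g in GROUPS}
    for _ in range(N_PATIENTS):
        g = random.choice(GROUPS)
        seen[g] += 1
        # The reinforcing AI suggests referral at the rate it learned from
        # historical decisions; the physician then applies their own biased
        # adjustment on top of that suggestion.
        p_decision = model_rate[g] * PHYSICIAN_BIAS[g]
        if random.random() < p_decision:
            referred[g] += 1
    # "Retraining" here just means the next model learns the observed rates.
    return {g: referred[g] / seen[g] for g in GROUPS}

# Generation 0 is trained on pre-AI physician decisions, which already
# encode the physician bias once.
model_rate = {g: TRUE_NEED * PHYSICIAN_BIAS[g] for g in GROUPS}
for gen in range(N_GENERATIONS):
    model_rate = deploy_and_log(model_rate)
    ratio = model_rate["majority"] / model_rate["minority"]
    print(f"gen {gen}: majority={model_rate['majority']:.3f} "
          f"minority={model_rate['minority']:.3f} ratio={ratio:.2f}x")
```

Run as written, the majority-group rate holds roughly steady while the minority-group rate decays multiplicatively with each retraining cycle, so the disparity ratio grows every generation even though no component ever errs relative to its training data, which is the distinction from simple automation bias drawn above.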