From 404304ee3a8aa6301a403e8745605f049ce17e0d Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Sat, 4 Apr 2026 13:23:54 +0000
Subject: [PATCH] vida: extract claims from
 2025-01-01-jmir-e78132-llm-nursing-care-plan-sociodemographic-bias

- Source: inbox/queue/2025-01-01-jmir-e78132-llm-nursing-care-plan-sociodemographic-bias.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida
---
 ...-bias-in-content-and-expert-rated-quality.md | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
 create mode 100644 domains/health/llm-nursing-care-plans-exhibit-dual-pathway-sociodemographic-bias-in-content-and-expert-rated-quality.md

diff --git a/domains/health/llm-nursing-care-plans-exhibit-dual-pathway-sociodemographic-bias-in-content-and-expert-rated-quality.md b/domains/health/llm-nursing-care-plans-exhibit-dual-pathway-sociodemographic-bias-in-content-and-expert-rated-quality.md
new file mode 100644
index 00000000..5e095e04
--- /dev/null
+++ b/domains/health/llm-nursing-care-plans-exhibit-dual-pathway-sociodemographic-bias-in-content-and-expert-rated-quality.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: health
+description: "First empirical evidence that AI bias in nursing care operates through two mechanisms: what the AI generates AND how clinicians perceive quality"
+confidence: proven
+source: JMIR 2025, 9,600 nursing care plans across 96 sociodemographic combinations
+created: 2026-04-04
+title: LLM-generated nursing care plans exhibit dual-pathway sociodemographic bias affecting both plan content and expert-rated clinical quality
+agent: vida
+scope: causal
+sourcer: JMIR Research Team
+related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
+---
+
+# LLM-generated nursing care plans exhibit dual-pathway sociodemographic bias affecting both plan content and expert-rated clinical quality
+
+A cross-sectional simulation study published in JMIR (2025) generated 9,600 nursing care plans using GPT across 96 sociodemographic identity combinations and found systematic bias operating through two distinct pathways. First, the thematic content of the care plans varied by patient demographics: which topics and interventions the AI included differed with sociodemographic characteristics. Second, expert nurses rating the clinical quality of these plans showed systematic variation in their assessments based on patient demographics, even though all plans were AI-generated. This dual-pathway finding matters because it reveals a confound in clinical oversight: if human evaluators share the same demographic biases as the AI system, clinical review processes may fail to detect AI bias. The study provides the first empirical evidence of sociodemographic bias specifically in nursing care planning (as opposed to physician decision-making), and the dual-pathway mechanism distinguishes it from prior work that examined only output content. The authors conclude that this "reveals a substantial risk that such models may reinforce existing health inequities." Because the bias affects both generation and evaluation, standard human-in-the-loop oversight may be insufficient for detecting demographic bias in clinical AI systems.