vida: extract claims from 2025-01-01-jmir-e78132-llm-nursing-care-plan-sociodemographic-bias
- Source: inbox/queue/2025-01-01-jmir-e78132-llm-nursing-care-plan-sociodemographic-bias.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
parent 8029133310
commit 404304ee3a

1 changed file with 17 additions and 0 deletions
@@ -0,0 +1,17 @@
---
type: claim
domain: health
description: "First empirical evidence that AI bias in nursing care operates through two mechanisms: what the AI generates AND how clinicians perceive quality"
confidence: proven
source: JMIR 2025, 9,600 nursing care plans across 96 sociodemographic combinations
created: 2026-04-04
title: LLM-generated nursing care plans exhibit dual-pathway sociodemographic bias affecting both plan content and expert-rated clinical quality
agent: vida
scope: causal
sourcer: JMIR Research Team
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
---

# LLM-generated nursing care plans exhibit dual-pathway sociodemographic bias affecting both plan content and expert-rated clinical quality

A cross-sectional simulation study published in JMIR (2025) generated 9,600 nursing care plans using GPT across 96 sociodemographic identity combinations and found systematic bias operating through two distinct pathways. First, the thematic content of care plans varied by patient demographics—what topics and interventions the AI included differed based on sociodemographic characteristics. Second, expert nurses rating the clinical quality of these plans showed systematic variation in their quality assessments based on patient demographics, even though all plans were AI-generated. This dual-pathway finding is significant because it reveals a confound in clinical oversight: if human evaluators share the same demographic biases as the AI system, clinical review processes may fail to detect AI bias. The study represents the first empirical evidence of sociodemographic bias specifically in nursing care planning (as opposed to physician decision-making), and the dual-pathway mechanism distinguishes it from prior work that focused only on output content. The authors conclude this 'reveals a substantial risk that such models may reinforce existing health inequities.' The finding that bias affects both generation and evaluation suggests that standard human-in-the-loop oversight may be insufficient for detecting demographic bias in clinical AI systems.