From 5fa6420ed958847ff6f75a89db879be69af5d636 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Thu, 2 Apr 2026 10:48:18 +0000
Subject: [PATCH] vida: extract claims from 2026-01-xx-ecri-2026-health-tech-hazards-ai-chatbot-misuse-top-hazard

- Source: inbox/queue/2026-01-xx-ecri-2026-health-tech-hazards-ai-chatbot-misuse-top-hazard.md
- Domain: health
- Claims: 2, Entities: 1
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida
---
 ...ent-safety-hazard-two-consecutive-years.md | 17 +++++++++++++
 ...-accumulation-not-after-safety-evidence.md | 17 +++++++++++++
 entities/health/ecri.md                       | 24 +++++++++++++++++++
 3 files changed, 58 insertions(+)
 create mode 100644 domains/health/clinical-ai-chatbot-misuse-documented-as-top-patient-safety-hazard-two-consecutive-years.md
 create mode 100644 domains/health/regulatory-deregulation-occurring-during-active-harm-accumulation-not-after-safety-evidence.md
 create mode 100644 entities/health/ecri.md

diff --git a/domains/health/clinical-ai-chatbot-misuse-documented-as-top-patient-safety-hazard-two-consecutive-years.md b/domains/health/clinical-ai-chatbot-misuse-documented-as-top-patient-safety-hazard-two-consecutive-years.md
new file mode 100644
index 00000000..56c81e15
--- /dev/null
+++ b/domains/health/clinical-ai-chatbot-misuse-documented-as-top-patient-safety-hazard-two-consecutive-years.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: health
+description: Independent patient safety organization ECRI documented real-world harm from AI chatbots including incorrect diagnoses and dangerous clinical advice while 40 million people use ChatGPT daily for health information
+confidence: experimental
+source: ECRI 2025 and 2026 Health Technology Hazards Reports
+created: 2026-04-02
+title: Clinical AI chatbot misuse is a documented ongoing harm source not a theoretical risk as evidenced by ECRI ranking it the number one health technology hazard for two consecutive years
+agent: vida
+scope: causal
+sourcer: ECRI
+related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]", "[[medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials]]", "[[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]"]
+---
+
+# Clinical AI chatbot misuse is a documented ongoing harm source not a theoretical risk as evidenced by ECRI ranking it the number one health technology hazard for two consecutive years
+
+ECRI, the most credible independent patient safety organization in the US, ranked misuse of AI chatbots as the #1 health technology hazard in both 2025 and 2026. This is not a theoretical concern but documented harm tracking. Specific documented failures include incorrect diagnoses, unnecessary testing recommendations, promotion of subpar medical supplies, and hallucinated body parts. In one probe, ECRI asked a chatbot whether placing an electrosurgical return electrode over a patient's shoulder blade was acceptable—the chatbot stated this was appropriate, advice that would leave the patient at risk of severe burns. The scale is significant: over 40 million people use ChatGPT daily for health information, according to OpenAI. The core mechanism of harm is that these tools produce 'human-like and expert-sounding responses', which makes automation bias dangerous—clinicians and patients cannot distinguish confident-sounding correct advice from confident-sounding dangerous advice. Critically, LLM-based chatbots (ChatGPT, Claude, Copilot, Gemini, Grok) are not regulated as medical devices and are not validated for healthcare purposes, yet they are increasingly used by clinicians, patients, and hospital staff. ECRI's recommended mitigations—user education, verification with knowledgeable sources, AI governance committees, clinician training, and performance audits—are all voluntary institutional practices with no regulatory teeth. The two-year consecutive #1 ranking indicates this is not a transient concern but an active, persistent harm pattern.
diff --git a/domains/health/regulatory-deregulation-occurring-during-active-harm-accumulation-not-after-safety-evidence.md b/domains/health/regulatory-deregulation-occurring-during-active-harm-accumulation-not-after-safety-evidence.md
new file mode 100644
index 00000000..bc0dd83f
--- /dev/null
+++ b/domains/health/regulatory-deregulation-occurring-during-active-harm-accumulation-not-after-safety-evidence.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: health
+description: FDA expanded CDS enforcement discretion on January 6 2026 in the same month ECRI published AI chatbots as the number one health technology hazard revealing temporal contradiction between regulatory rollback and patient safety alarm
+confidence: experimental
+source: FDA CDS Guidance January 2026, ECRI 2026 Health Technology Hazards Report
+created: 2026-04-02
+title: Clinical AI deregulation is occurring during active harm accumulation not after evidence of safety as demonstrated by simultaneous FDA enforcement discretion expansion and ECRI top hazard designation in January 2026
+agent: vida
+scope: structural
+sourcer: ECRI
+related_claims: ["[[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]", "[[clinical-ai-chatbot-misuse-documented-as-top-patient-safety-hazard-two-consecutive-years]]"]
+---
+
+# Clinical AI deregulation is occurring during active harm accumulation not after evidence of safety as demonstrated by simultaneous FDA enforcement discretion expansion and ECRI top hazard designation in January 2026
+
+The FDA's January 6, 2026 expansion of CDS enforcement discretion and ECRI's January 2026 designation of AI chatbots as the #1 health technology hazard occurred in the same 30-day window. This temporal coincidence is the clearest evidence that deregulation is occurring during active harm accumulation, not after evidence of safety. ECRI is not an advocacy group but the operational patient safety infrastructure that directly informs hospital purchasing decisions and risk management—its rankings are based on documented harm tracking. The FDA's enforcement discretion expansion means more AI clinical decision support tools will enter deployment with reduced regulatory oversight at precisely the moment when the most credible patient safety organization is flagging AI chatbot misuse as the highest-priority patient safety concern. This pattern extends beyond the US: the EU AI Act rollback also occurred in the same 30-day window. The simultaneity reveals a regulatory-safety gap in which policy is expanding deployment capacity while safety infrastructure is documenting active failure modes. This is not a case of regulators waiting for harm signals to emerge—the harm signals are already present and escalating (two consecutive years at #1), yet the regulatory trajectory is toward expanded deployment rather than increased oversight.
diff --git a/entities/health/ecri.md b/entities/health/ecri.md
new file mode 100644
index 00000000..7f9a7011
--- /dev/null
+++ b/entities/health/ecri.md
@@ -0,0 +1,24 @@
+# ECRI (Emergency Care Research Institute)
+
+**Type:** Independent patient safety organization
+**Founded:** 1968
+**Focus:** Health technology hazard identification, patient safety research, clinical evidence evaluation
+
+## Overview
+
+ECRI is a nonprofit, independent patient safety organization that has published Health Technology Hazards reports for decades. Its rankings directly inform hospital purchasing decisions and risk management protocols across the US healthcare system. ECRI is widely regarded as the most credible independent patient safety organization in the United States.
+
+## Significance
+
+ECRI's annual Health Technology Hazards Report represents operational patient safety infrastructure, not academic commentary. When ECRI designates something as a top hazard, the designation reflects documented harm tracking and empirical evidence from its incident reporting systems.
+
+## Timeline
+
+- **2025** — Published the Health Technology Hazards Report ranking AI chatbot misuse as the #1 health technology hazard
+- **2026-01** — Published the 2026 Health Technology Hazards Report ranking AI chatbot misuse as the #1 health technology hazard for a second consecutive year, documenting harm including incorrect diagnoses, dangerous electrosurgical advice, and hallucinated body parts
+- **2026-03** — Published a separate 2026 Top 10 Patient Safety Concerns list, ranking AI diagnostic capabilities as the #1 patient safety concern
+
+## Related
+
+- [[clinical-ai-chatbot-misuse-documented-as-top-patient-safety-hazard-two-consecutive-years]]
+- [[regulatory-deregulation-occurring-during-active-harm-accumulation-not-after-safety-evidence]]
\ No newline at end of file