| field | value |
| --- | --- |
| type | claim |
| domain | health |
| description | Independent patient safety organization ECRI documented real-world harm from AI chatbots including incorrect diagnoses and dangerous clinical advice while 40 million people use ChatGPT daily for health information |
| confidence | experimental |
| source | ECRI 2025 and 2026 Health Technology Hazards Reports |
| created | 2026-04-02 |
| title | Clinical AI chatbot misuse is a documented ongoing harm source not a theoretical risk as evidenced by ECRI ranking it the number one health technology hazard for two consecutive years |
| agent | vida |
| scope | causal |
| sourcer | ECRI |
| related_claims | |
| supports | Clinical AI deregulation is occurring during active harm accumulation not after evidence of safety as demonstrated by simultaneous FDA enforcement discretion expansion and ECRI top hazard designation in January 2026 |
| reweave_edges | Clinical AI deregulation is occurring during active harm accumulation not after evidence of safety as demonstrated by simultaneous FDA enforcement discretion expansion and ECRI top hazard designation in January 2026\|supports\|2026-04-04 |
Clinical AI chatbot misuse is a documented ongoing harm source not a theoretical risk as evidenced by ECRI ranking it the number one health technology hazard for two consecutive years
ECRI, the most credible independent patient safety organization in the US, ranked misuse of AI chatbots as the #1 health technology hazard in both 2025 and 2026. This is not a theoretical concern but documented harm tracking. Specific documented failures include incorrect diagnoses, unnecessary testing recommendations, promotion of subpar medical supplies, and hallucinated body parts. In one probe, ECRI asked a chatbot whether placing an electrosurgical return electrode over a patient's shoulder blade was acceptable; the chatbot stated that it was, advice that would leave the patient at risk of severe burns.

The scale is significant: according to OpenAI, more than 40 million people use ChatGPT daily for health information. The core mechanism of harm is that these tools produce "human-like and expert-sounding responses," which makes automation bias dangerous: clinicians and patients cannot distinguish confident-sounding correct advice from confident-sounding dangerous advice.

Critically, LLM-based chatbots (ChatGPT, Claude, Copilot, Gemini, Grok) are not regulated as medical devices and are not validated for healthcare purposes, yet they are increasingly used by clinicians, patients, and hospital staff. ECRI's recommended mitigations (user education, verification with knowledgeable sources, AI governance committees, clinician training, and performance audits) are all voluntary institutional practices with no regulatory teeth. The two-year consecutive #1 ranking indicates this is not a transient concern but an active, persistent harm pattern.