diff --git a/domains/health/ambient-ai-scribes-generate-legal-health-records-with-documented-hallucination-rates-while-operating-outside-fda-oversight.md b/domains/health/ambient-ai-scribes-generate-legal-health-records-with-documented-hallucination-rates-while-operating-outside-fda-oversight.md
new file mode 100644
index 000000000..97ac5bc62
--- /dev/null
+++ b/domains/health/ambient-ai-scribes-generate-legal-health-records-with-documented-hallucination-rates-while-operating-outside-fda-oversight.md
@@ -0,0 +1,35 @@
+---
+type: claim
+domain: health
+description: "The fastest-adopted clinical AI category (92% provider adoption) operates in a complete regulatory void while producing fabricated clinical documentation that becomes the legal patient record"
+confidence: experimental
+source: npj Digital Medicine (2025), citing primary research with quantified failure rates
+created: 2026-04-02
+title: "Ambient AI scribes generate legal patient health records with documented 1.47% hallucination rates while operating outside FDA oversight, creating systematic record corruption at scale with no detection or reporting mechanism"
+agent: vida
+scope: structural
+sourcer: npj Digital Medicine
+related_claims: ["[[AI scribes reached 92 percent provider adoption in under 3 years because documentation is the rare healthcare workflow where AI value is immediate unambiguous and low-risk]]", "[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]", "[[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]"]
+---
+
+# Ambient AI scribes generate legal patient health records with documented 1.47% hallucination rates while operating outside FDA oversight, creating systematic record corruption at scale with no detection or reporting mechanism
+
+Ambient AI scribes are classified by the FDA as general wellness products or administrative tools, not as clinical decision support requiring oversight under the 2026 CDS Guidance. They operate in a complete regulatory void: not medical devices, not regulated software. Yet they generate the legal patient health record with documented failure modes: a 1.47% hallucination rate (fabricating examinations, diagnoses, and clinical information that never occurred), a 3.45% omission rate (critical discussed information absent from notes), and incorrect documentation (wrong medications or doses). At 40% US physician adoption with millions of daily encounters, a 1.47% hallucination rate produces an enormous absolute volume of harm. The paper notes: "Adoption is outpacing validation and oversight, and without greater scrutiny, the rush to deploy AI scribes may compromise patient safety, clinical integrity, and provider autonomy." Historical speech recognition errors ("no vascular flow" → "normal vascular flow" leading to unnecessary procedures; tumor location confusion → wrong-site surgery) demonstrate the clinical significance of documentation errors. An AI hallucination in a clinical note is not just a diagnostic error: it becomes the legal patient record, affecting all subsequent care and creating downstream liability chains that extend years after the initial error.
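+
+To make the scale concrete, here is a back-of-envelope sketch. The daily encounter volume is an assumption for illustration only; the 1.47% and 3.45% rates come from the cited research and are treated as per-note rates for simplicity:
+
+```python
+# Back-of-envelope estimate of absolute error volume at national scale.
+# The encounter volume is an illustrative assumption, not a source figure.
+DAILY_ENCOUNTERS = 1_000_000   # assumed AI-scribed encounters per day
+HALLUCINATION_RATE = 0.0147    # 1.47% of notes contain fabricated content (source)
+OMISSION_RATE = 0.0345         # 3.45% of notes omit discussed information (source)
+
+daily_hallucinations = DAILY_ENCOUNTERS * HALLUCINATION_RATE  # 14,700 notes/day
+daily_omissions = DAILY_ENCOUNTERS * OMISSION_RATE            # 34,500 notes/day
+
+print(f"{daily_hallucinations:,.0f} hallucinated notes/day; "
+      f"{daily_hallucinations * 365:,.0f} per year under these assumptions")
+```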
+
+California AB 3030 (effective January 1, 2025) is the first US statutory regulation specifically addressing clinical generative AI, requiring AI-use disclaimers and instructions for contacting a human provider, but it does not address the core hallucination and omission problem or create any reporting mechanism.