From 87ce090e3bc2b1e08eaa7a39593467cb935b3667 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Thu, 2 Apr 2026 10:49:07 +0000
Subject: [PATCH] vida: extract claims from 2026-xx-jco-oncology-practice-liability-risks-ambient-ai-clinical-workflows

- Source: inbox/queue/2026-xx-jco-oncology-practice-liability-risks-ambient-ai-clinical-workflows.md
- Domain: health
- Claims: 2, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida
---
 ...-liability-exposure-outside-fda-oversight.md | 17 +++++++++++++++++
 ...tapping-litigation-for-consent-violations.md | 17 +++++++++++++++++
 2 files changed, 34 insertions(+)
 create mode 100644 domains/health/ambient-ai-scribes-create-three-party-liability-exposure-outside-fda-oversight.md
 create mode 100644 domains/health/ambient-ai-scribes-face-wiretapping-litigation-for-consent-violations.md

diff --git a/domains/health/ambient-ai-scribes-create-three-party-liability-exposure-outside-fda-oversight.md b/domains/health/ambient-ai-scribes-create-three-party-liability-exposure-outside-fda-oversight.md
new file mode 100644
index 00000000..f1cf60b6
--- /dev/null
+++ b/domains/health/ambient-ai-scribes-create-three-party-liability-exposure-outside-fda-oversight.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: health
+description: The three-party liability framework emerges because clinicians attest to AI-generated notes, hospitals deploy without governance protocols, and manufacturers face product liability despite general wellness classification
+confidence: experimental
+source: Gerke, Simon, Roman (JCO Oncology Practice 2026), legal analysis of ambient AI clinical workflows
+created: 2026-04-02
+title: Ambient AI scribes create simultaneous malpractice exposure for clinicians, institutional liability for hospitals, and product liability for manufacturers while operating outside FDA medical device regulation
+agent: vida
+scope: structural
+sourcer: JCO Oncology Practice
+related_claims: ["[[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]]", "[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]", "[[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]"]
+---
+
+# Ambient AI scribes create simultaneous malpractice exposure for clinicians, institutional liability for hospitals, and product liability for manufacturers while operating outside FDA medical device regulation
+
+Ambient AI scribes create a novel three-party liability structure that existing malpractice frameworks are not designed to handle. Clinician liability: physicians who sign AI-generated notes containing errors (fabricated diagnoses, wrong medications, hallucinated procedures) bear malpractice exposure because signing attests to accuracy regardless of how the note was generated. Hospital liability: institutions that deploy ambient scribes without instructing clinicians on potential error types, establishing review protocols, or informing patients of AI use face institutional liability for inadequate AI governance. Manufacturer liability: makers of AI scribes face product liability for documented failure modes (hallucinations, omissions) despite FDA classification as general wellness/administrative tools rather than medical devices. The critical gap: FDA's non-medical-device classification does not immunize manufacturers from product liability, yet it also provides no regulatory framework for safety standards. This creates simultaneous exposure across three parties with no established legal mechanism to allocate liability cleanly. The authors (from Memorial Sloan Kettering, University of Illinois Law, and Northeastern Law) frame this as an emerging liability reckoning, not a theoretical concern. Speech recognition systems have already caused documented patient harm: "erroneously documenting no vascular flow instead of normal vascular flow" triggered unnecessary procedures, and confusion over tumor location led to surgery on the wrong site. The liability exposure is live and unresolved.
diff --git a/domains/health/ambient-ai-scribes-face-wiretapping-litigation-for-consent-violations.md b/domains/health/ambient-ai-scribes-face-wiretapping-litigation-for-consent-violations.md
new file mode 100644
index 00000000..df47b4ff
--- /dev/null
+++ b/domains/health/ambient-ai-scribes-face-wiretapping-litigation-for-consent-violations.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: health
+description: California and Illinois lawsuits in 2025-2026 allege violations of CMIA, BIPA, and state wiretapping statutes as an unanticipated legal vector
+confidence: experimental
+source: Gerke, Simon, Roman (JCO Oncology Practice 2026), documenting active litigation in California and Illinois
+created: 2026-04-02
+title: Ambient AI scribes are generating wiretapping and biometric privacy lawsuits because health systems deployed without patient consent protocols for third-party audio processing
+agent: vida
+scope: structural
+sourcer: JCO Oncology Practice
+related_claims: ["[[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]]", "[[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]"]
+---
+
+# Ambient AI scribes are generating wiretapping and biometric privacy lawsuits because health systems deployed without patient consent protocols for third-party audio processing
+
+Ambient AI scribes are facing an unanticipated legal attack vector through wiretapping and biometric privacy statutes. Lawsuits filed in California and Illinois (2025-2026) allege that health systems used ambient scribing without patients' informed consent, potentially violating California's Confidentiality of Medical Information Act (CMIA), the Illinois Biometric Information Privacy Act (BIPA), and state wiretapping statutes, because third-party vendors process the audio recordings. The legal theory: ambient scribes record patient-clinician conversations and transmit the audio to external AI processors, which constitutes wiretapping if patients have not explicitly consented to third-party recording. This is distinct from the malpractice liability framework: it is a privacy/consent violation that creates institutional exposure regardless of whether the AI generates accurate notes. The timing is significant. Kaiser Permanente announced clinician access to ambient documentation scribes in August 2024, the first major health system deployment at scale, and multiple major systems have since deployed. The lawsuits emerged 12-18 months after initial large-scale deployment, suggesting this is the leading edge of the litigation. The authors note this creates institutional liability for hospitals that deployed without establishing patient consent protocols, a governance failure distinct from the clinical accuracy question. This represents a second, independent legal vector beyond malpractice: privacy law applied to AI-mediated clinical workflows.