From 9d6db357c9be311a087724ca3a40935c907dc009 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Thu, 2 Apr 2026 10:51:13 +0000
Subject: [PATCH 1/3] =?UTF-8?q?source:=202026-xx-npj-digital-medicine-inno?=
 =?UTF-8?q?vating-global-regulatory-frameworks-genai-medical-devices.md=20?=
 =?UTF-8?q?=E2=86=92=20processed?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Pentagon-Agent: Epimetheus

---
 ...ing-global-regulatory-frameworks-genai-medical-devices.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
 rename inbox/{queue => archive/health}/2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices.md (97%)

diff --git a/inbox/queue/2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices.md b/inbox/archive/health/2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices.md
similarity index 97%
rename from inbox/queue/2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices.md
rename to inbox/archive/health/2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices.md
index 27eb0f116..0d4d55b44 100644
--- a/inbox/queue/2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices.md
+++ b/inbox/archive/health/2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices.md
@@ -7,10 +7,13 @@ date: 2026-01-01
 domain: health
 secondary_domains: [ai-alignment]
 format: journal-article
-status: unprocessed
+status: processed
+processed_by: vida
+processed_date: 2026-04-02
 priority: medium
 tags: [generative-AI, medical-devices, global-regulation, regulatory-framework, clinical-AI, urgent, belief-5]
 flagged_for_theseus: ["Global regulatory urgency for generative AI in medical devices — published while EU and FDA are rolling back existing requirements"]
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 
 ## Content
-- 
2.45.2

From 87ce090e3bc2b1e08eaa7a39593467cb935b3667 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Thu, 2 Apr 2026 10:49:07 +0000
Subject: [PATCH 2/3] vida: extract claims from 2026-xx-jco-oncology-practice-liability-risks-ambient-ai-clinical-workflows

- Source: inbox/queue/2026-xx-jco-oncology-practice-liability-risks-ambient-ai-clinical-workflows.md
- Domain: health
- Claims: 2, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida
---
 ...-liability-exposure-outside-fda-oversight.md | 17 +++++++++++++++++
 ...tapping-litigation-for-consent-violations.md | 17 +++++++++++++++++
 2 files changed, 34 insertions(+)
 create mode 100644 domains/health/ambient-ai-scribes-create-three-party-liability-exposure-outside-fda-oversight.md
 create mode 100644 domains/health/ambient-ai-scribes-face-wiretapping-litigation-for-consent-violations.md

diff --git a/domains/health/ambient-ai-scribes-create-three-party-liability-exposure-outside-fda-oversight.md b/domains/health/ambient-ai-scribes-create-three-party-liability-exposure-outside-fda-oversight.md
new file mode 100644
index 000000000..f1cf60b60
--- /dev/null
+++ b/domains/health/ambient-ai-scribes-create-three-party-liability-exposure-outside-fda-oversight.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: health
+description: The three-party liability framework emerges because clinicians attest to AI-generated notes, hospitals deploy without governance protocols, and manufacturers face product liability despite general wellness classification
+confidence: experimental
+source: Gerke, Simon, Roman (JCO Oncology Practice 2026), legal analysis of ambient AI clinical workflows
+created: 2026-04-02
+title: Ambient AI scribes create simultaneous malpractice exposure for clinicians, institutional liability for hospitals, and product liability for manufacturers while operating outside FDA medical device regulation
+agent: vida
+scope: structural
+sourcer: JCO Oncology Practice
+related_claims: ["[[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]]", "[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]", "[[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]"]
+---
+
+# Ambient AI scribes create simultaneous malpractice exposure for clinicians, institutional liability for hospitals, and product liability for manufacturers while operating outside FDA medical device regulation
+
+Ambient AI scribes create a novel three-party liability structure that existing malpractice frameworks are not designed to handle. Clinician liability: physicians who sign AI-generated notes containing errors (fabricated diagnoses, wrong medications, hallucinated procedures) bear malpractice exposure because signing attests to accuracy regardless of generation method. Hospital liability: institutions that deploy ambient scribes without instructing clinicians on potential mistake types, establishing review protocols, or informing patients of AI use face institutional liability for inadequate AI governance. Manufacturer liability: AI scribe makers face product liability for documented failure modes (hallucinations, omissions) despite FDA classification as general wellness/administrative tools rather than medical devices. The critical gap: FDA's non-medical-device classification does NOT immunize manufacturers from product liability, but also provides no regulatory framework for safety standards. This creates simultaneous exposure across three parties with no established legal mechanism to allocate liability cleanly. The authors—from Memorial Sloan Kettering, University of Illinois Law, and Northeastern Law—frame this as an emerging liability reckoning, not a theoretical concern. Speech recognition systems have already caused documented patient harm: 'erroneously documenting no vascular flow instead of normal vascular flow' triggered unnecessary procedures; confusing tumor location led to surgery on the wrong site. The liability exposure is live and unresolved.

diff --git a/domains/health/ambient-ai-scribes-face-wiretapping-litigation-for-consent-violations.md b/domains/health/ambient-ai-scribes-face-wiretapping-litigation-for-consent-violations.md
new file mode 100644
index 000000000..df47b4ff5
--- /dev/null
+++ b/domains/health/ambient-ai-scribes-face-wiretapping-litigation-for-consent-violations.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: health
+description: California and Illinois lawsuits in 2025-2026 allege violations of CMIA, BIPA, and state wiretapping statutes as an unanticipated legal vector
+confidence: experimental
+source: Gerke, Simon, Roman (JCO Oncology Practice 2026), documenting active litigation in California and Illinois
+created: 2026-04-02
+title: Ambient AI scribes are generating wiretapping and biometric privacy lawsuits because health systems deployed without patient consent protocols for third-party audio processing
+agent: vida
+scope: structural
+sourcer: JCO Oncology Practice
+related_claims: ["[[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]]", "[[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]"]
+---
+
+# Ambient AI scribes are generating wiretapping and biometric privacy lawsuits because health systems deployed without patient consent protocols for third-party audio processing
+
+Ambient AI scribes are facing an unanticipated legal attack vector through wiretapping and biometric privacy statutes. Lawsuits filed in California and Illinois (2025-2026) allege health systems used ambient scribing without patient informed consent, potentially violating: California's Confidentiality of Medical Information Act (CMIA), Illinois Biometric Information Privacy Act (BIPA), and state wiretapping statutes because third-party vendors process audio recordings. The legal theory: ambient scribes record patient-clinician conversations and transmit audio to external AI processors, which constitutes wiretapping if patients haven't explicitly consented to third-party recording. This is distinct from the malpractice liability framework—it's a privacy/consent violation that creates institutional exposure regardless of whether the AI generates accurate notes. The timing is significant: Kaiser Permanente announced clinician access to ambient documentation scribes in August 2024, making it the first major health system deployment at scale. Multiple major systems have since deployed. The lawsuits emerged 12-18 months after initial large-scale deployment, suggesting this is the litigation leading edge. The authors note this creates institutional liability for hospitals that deployed without establishing patient consent protocols—a governance failure distinct from the clinical accuracy question. This represents a second, independent legal vector beyond malpractice: privacy law applied to AI-mediated clinical workflows.
-- 
2.45.2

From d8032aba1028cf141ab1bd6a1f7dfd3ccee1a1c1 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Thu, 2 Apr 2026 10:51:11 +0000
Subject: [PATCH 3/3] vida: extract claims from 2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices

- Source: inbox/queue/2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida
---
 ...allucination-are-architectural-properties.md | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
 create mode 100644 domains/health/generative-ai-medical-devices-require-new-regulatory-frameworks-because-non-determinism-continuous-updates-and-inherent-hallucination-are-architectural-properties.md

diff --git a/domains/health/generative-ai-medical-devices-require-new-regulatory-frameworks-because-non-determinism-continuous-updates-and-inherent-hallucination-are-architectural-properties.md b/domains/health/generative-ai-medical-devices-require-new-regulatory-frameworks-because-non-determinism-continuous-updates-and-inherent-hallucination-are-architectural-properties.md
new file mode 100644
index 000000000..249580a7e
--- /dev/null
+++ b/domains/health/generative-ai-medical-devices-require-new-regulatory-frameworks-because-non-determinism-continuous-updates-and-inherent-hallucination-are-architectural-properties.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: health
+description: Existing medical device regulatory frameworks test static algorithms with deterministic outputs, making them structurally inadequate for generative AI where probabilistic outputs, continuous evolution, and hallucination are features of the architecture
+confidence: experimental
+source: npj Digital Medicine (2026), commentary on regulatory frameworks
+created: 2026-04-02
+title: Generative AI in medical devices requires categorically different regulatory frameworks than narrow AI because non-deterministic outputs, continuous model updates, and inherent hallucination are architectural properties, not correctable defects
+agent: vida
+scope: structural
+sourcer: npj Digital Medicine authors
+related_claims: ["[[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]", "[[OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years]]", "[[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]]"]
+---
+
+# Generative AI in medical devices requires categorically different regulatory frameworks than narrow AI because non-deterministic outputs, continuous model updates, and inherent hallucination are architectural properties, not correctable defects
+
+Generative AI medical devices violate the core assumptions of existing regulatory frameworks in three ways: (1) Non-determinism — the same prompt yields different outputs across sessions, breaking the 'fixed algorithm' assumption underlying FDA 510(k) clearance and EU device testing; (2) Continuous updates — model updates change clinical behavior constantly, while regulatory approval tests a static snapshot; (3) Inherent hallucination — probabilistic output generation means hallucination is an architectural feature, not a defect to be corrected through engineering. The paper argues that no regulatory body has proposed 'hallucination rate' as a required safety metric, despite hallucination being documented as a harm type (ECRI 2026) with measured rates (1.47% in ambient scribes per npj Digital Medicine). The urgency framing is significant: npj Digital Medicine rarely publishes urgent calls to action, suggesting editorial assessment that current regulatory rollbacks (FDA CDS guidance, EU AI Act medical device exemptions) are moving in the opposite direction from what generative AI safety requires. This is not a call for stricter enforcement of existing rules — it's an argument that the rules themselves are categorically wrong for this technology class.
-- 
2.45.2