extract: 2026-xx-npj-digital-medicine-current-challenges-regulatory-databases-aimd

Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
This commit is contained in:
Teleo Agents 2026-04-02 10:50:11 +00:00
parent 5fa6420ed9
commit e35f0185e8
8 changed files with 167 additions and 1 deletions


@@ -0,0 +1,27 @@
---
type: claim
domain: health
description: The attribution problem in adverse event reporting means that when harm occurs in clinical encounters involving AI, the causal role of the AI system cannot be determined from regulatory database records
confidence: experimental
source: npj Digital Medicine 2026, analysis of regulatory database design limitations
created: 2026-04-02
attribution:
extractor:
- handle: "vida"
sourcer:
- handle: "npj-digital-medicine-authors"
context: "npj Digital Medicine 2026, analysis of regulatory database design limitations"
---
# AI medical device contribution to patient harm is systematically unidentifiable from existing regulatory reports because reporting mechanisms lack fields for capturing whether AI contributed, what AI recommended, or how clinicians interacted with outputs
The paper identifies 'attribution problems' as a fundamental challenge: when a patient is harmed in a clinical encounter involving an AI tool, the reporting mechanism doesn't capture whether the AI contributed, what the AI recommended, or how the clinician interacted with the output. The authors state that 'the contribution of AI to harm is systematically unidentifiable from existing reports.' This is distinct from the general MAUDE data quality problem—it's specifically about the impossibility of determining causation even when reports are filed. The mechanism is that regulatory databases were designed for hardware devices with clear failure modes (device malfunction, material defect), not for software that operates as clinical decision support where harm can result from correct AI output used incorrectly, incorrect AI output followed correctly, or complex human-AI interaction failures. This creates a structural blind spot where AI safety signals cannot be detected even with perfect reporting compliance.
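The field-level gap described above can be sketched in code. This is a purely illustrative model — neither dataclass reflects the actual MAUDE schema, and all field names are hypothetical — but it shows why causation is undecidable from hardware-era records: a correct AI output used incorrectly and an incorrect output followed correctly produce identical reports.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HardwareEraReport:
    # Fields oriented around physical failure modes (hypothetical names)
    device_malfunction: bool
    material_defect: bool
    narrative: str  # free text; AI involvement, if any, is buried here

@dataclass
class AIAttributionFields:
    # Fields that would be needed to trace AI contribution (hypothetical)
    ai_involved: bool
    ai_recommendation: Optional[str]    # what the model output
    clinician_action: Optional[str]     # what the clinician actually did
    followed_ai_output: Optional[bool]  # reliance vs. override

def ai_caused_harm(report: HardwareEraReport) -> Optional[bool]:
    """With only hardware-era fields, AI causation is undecidable:
    the record carries no signal distinguishing the failure modes."""
    return None  # systematically unidentifiable

report = HardwareEraReport(
    device_malfunction=False,
    material_defect=False,
    narrative="Patient harmed during AI-assisted triage.",
)
assert ai_caused_harm(report) is None
```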
---
Relevant Notes:
- [[fda-maude-cannot-identify-ai-contributions-to-adverse-events-due-to-structural-reporting-gaps]]
- [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]
Topics:
- [[_map]]


@@ -0,0 +1,40 @@
---
type: claim
domain: health
description: The FDA's classification of ambient scribes as general wellness/administrative tools creates a liability gap where three parties bear overlapping risk without established legal frameworks to allocate responsibility
confidence: experimental
source: Gerke, Simon, Roman (JCO Oncology Practice 2026); documented speech recognition harms; California and Illinois wiretapping lawsuits filed 2025-2026
created: 2026-04-02
attribution:
extractor:
- handle: "vida"
sourcer:
- handle: "sara-gerke,-david-a.-simon,-benjamin-r.-roman"
context: "Gerke, Simon, Roman (JCO Oncology Practice 2026); documented speech recognition harms; California and Illinois wiretapping lawsuits filed 2025-2026"
---
# Ambient AI scribe deployment creates simultaneous malpractice exposure for clinicians, institutional liability for hospitals, and product liability for manufacturers — while operating outside FDA medical device regulation
Legal analysis from University of Illinois Law, Northeastern Law, and Memorial Sloan Kettering establishes that ambient AI scribes create a novel three-party liability structure:
1. **Clinician malpractice exposure**: Physicians who sign AI-generated notes containing errors (fabricated diagnoses, wrong medications, hallucinated procedures) bear full malpractice liability because attestation by signature transfers legal responsibility regardless of AI generation. Standard of care requires adequate review before signing — AI assistance does not transfer this obligation to the tool.
2. **Hospital institutional liability**: Health systems that deploy ambient scribes without establishing review protocols, instructing clinicians on potential mistake types, or informing patients of AI use face institutional liability for inadequate AI governance.
3. **Manufacturer product liability**: AI scribe manufacturers face exposure for documented failure modes (hallucinations, omissions), and the FDA's classification as general wellness/administrative tools does NOT immunize them from product liability. The 510(k) clearance defense is unavailable for uncleared products.
The critical regulatory gap: ambient scribes operate outside FDA medical device oversight, creating liability exposure without corresponding regulatory infrastructure. Earlier-generation speech recognition systems have already caused documented patient harm — "erroneously documenting 'no vascular flow' instead of 'normal vascular flow'" triggered unnecessary procedures; tumor location confusion led to wrong-site surgery.
Litigation is already materializing: lawsuits in California and Illinois (2025-2026) allege health systems used ambient scribing without patient informed consent, potentially violating California's Confidentiality of Medical Information Act, Illinois BIPA, and state wiretapping statutes (third-party audio processing by vendors).
This is published in ASCO's clinical practice journal by authors from one of the most technically sophisticated cancer centers in the US — indicating the oncology establishment views this as a live operational risk, not a theoretical concern.
---
Relevant Notes:
- [[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]]
- [[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]
- [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]
Topics:
- [[_map]]


@@ -0,0 +1,39 @@
---
type: claim
domain: health
description: California and Illinois wiretapping laws, designed for traditional surveillance, are being weaponized against ambient AI deployment through third-party audio processing claims
confidence: experimental
source: Gerke, Simon, Roman (JCO Oncology Practice 2026); California and Illinois lawsuits filed 2025-2026
created: 2026-04-02
attribution:
extractor:
- handle: "vida"
sourcer:
- handle: "sara-gerke,-david-a.-simon,-benjamin-r.-roman"
context: "Gerke, Simon, Roman (JCO Oncology Practice 2026); California and Illinois lawsuits filed 2025-2026"
---
# Existing wiretapping statutes are being applied to ambient AI scribes in 2025-2026 lawsuits, creating an unanticipated legal vector for health systems that deployed without patient consent protocols
The legal attack vector against ambient AI scribes is coming from an unexpected direction: state wiretapping statutes rather than medical malpractice or HIPAA frameworks. Lawsuits filed in California and Illinois (2025-2026) allege health systems violated:
- California's Confidentiality of Medical Information Act
- Illinois Biometric Information Privacy Act (BIPA)
- State wiretapping statutes
The wiretapping angle is particularly potent because ambient scribes process patient-clinician conversations through third-party vendors, potentially triggering two-party consent requirements that many health systems did not anticipate when deploying the technology.
This creates a consent infrastructure gap: health systems adopted ambient scribes for efficiency gains (the 73% documentation-burden reduction documented elsewhere in the KB) but did not build corresponding patient consent protocols, because the technology was classified as administrative/wellness rather than as a medical device.
The timing is significant: Kaiser Permanente announced clinician access to ambient documentation in August 2024, making it the first major health system to deploy at scale. Multiple major systems have since deployed. The lawsuits are arriving within 12-18 months of large-scale deployment, indicating that the plaintiffs' bar has identified this as a viable litigation strategy.
This is distinct from the malpractice exposure (which requires patient harm) — wiretapping claims can proceed based on process violations alone, creating a lower bar for legal action and potentially broader class action exposure.
---
Relevant Notes:
- [[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]]
- [[AI scribes reached 92 percent provider adoption in under 3 years because documentation is the rare healthcare workflow where AI value is immediate unambiguous and low-risk]]
Topics:
- [[_map]]


@@ -15,3 +15,9 @@ related_claims: ["[[healthcare AI regulation needs blank-sheet redesign because
# FDA MAUDE reports lack the structural capacity to identify AI contributions to adverse events because 34.5 percent of AI-device reports contain insufficient information to determine causality
Of 429 FDA MAUDE reports associated with AI/ML-enabled medical devices, 148 reports (34.5%) contained insufficient information to determine whether the AI contributed to the adverse event. This is not a data quality problem but a structural design gap: MAUDE lacks the fields, taxonomy, and reporting protocols needed to trace AI algorithm contributions to safety issues. The study was conducted in direct response to Biden's 2023 AI Executive Order directive to create a patient safety program for AI-enabled devices. Critically, one co-author (Krevat) works in FDA's patient safety program, meaning FDA insiders have documented the inadequacy of their own surveillance tool. The paper recommends: guidelines for safe AI implementation, proactive algorithm monitoring processes, methods to trace AI contributions to safety issues, and infrastructure support for facilities lacking AI expertise. Published January 2024, two years before FDA's January 2026 enforcement discretion expansion for clinical decision support software—which expanded AI deployment without addressing the surveillance gap this paper identified.
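The reported proportion checks out against the raw counts:

```python
# Sanity check of the claim's headline figure: 148 of 429 MAUDE reports
# lacked sufficient information to determine AI contribution.
insufficient, total = 148, 429
pct = round(100 * insufficient / total, 1)
print(pct)  # 34.5
```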
### Additional Evidence (extend)
*Source: [[2026-xx-npj-digital-medicine-current-challenges-regulatory-databases-aimd]] | Added: 2026-04-02*
The problem extends beyond MAUDE to all major regulatory databases globally. npj Digital Medicine 2026 identifies that regulatory databases in US, EU, and UK were all 'designed for hardware devices and lack fields for capturing AI-specific failure information.' The paper emphasizes this is a 'fundamental' issue 'not fixable with surface-level updates,' requiring structural redesign of reporting infrastructure.


@@ -0,0 +1,26 @@
---
type: claim
domain: health
description: The three major regulatory jurisdictions for AI medical devices lack compatible classification infrastructure, creating a surveillance vacuum for globally deployed clinical AI tools
confidence: experimental
source: npj Digital Medicine 2026, regulatory database analysis across US/EU/UK systems
created: 2026-04-02
attribution:
extractor:
- handle: "vida"
sourcer:
- handle: "npj-digital-medicine-authors"
context: "npj Digital Medicine 2026, regulatory database analysis across US/EU/UK systems"
---
# Global AI medical device surveillance is structurally fragmented because US MAUDE, EU EUDAMED, and UK MHRA use incompatible AI classification systems making cross-national monitoring impossible even if individual systems improve
The paper identifies that MAUDE (US), EUDAMED (EU), and MHRA Yellow Card system (UK) each have their own regulatory databases for medical device adverse event reporting, but they do not use compatible AI classification systems. This means that even if each individual system were improved to better capture AI-specific failure modes, cross-national surveillance for AI tools deployed simultaneously across all three jurisdictions would remain structurally impossible. The authors explicitly call this 'global fragmentation' as one of four key challenges. This is particularly significant because most major clinical AI tools operate in all three markets simultaneously, yet no infrastructure exists to aggregate safety signals across jurisdictions. The temporal context amplifies the concern: this academic call for international coordination is published in Q1 2026, the same quarter FDA expanded enforcement discretion (January 2026) and EU rolled back high-risk AI requirements (December 2025), moving in the opposite direction from the recommended global coordination.
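The aggregation problem can be made concrete with a hypothetical sketch. The category labels below are invented for illustration (they are not the jurisdictions' real taxonomies, and "sepsis-predictor" is a made-up device): without a shared classification key, safety signals for the same tool cannot be joined across registries.

```python
# Hypothetical: one AI tool, three incompatible classification schemes.
maude_us = {"sepsis-predictor": "software device, radiology panel"}
eudamed_eu = {"sepsis-predictor": "Class IIb software"}
mhra_uk = {"sepsis-predictor": "SaMD, unclassified AI"}

def shared_category(*registries, device="sepsis-predictor"):
    """Cross-national signal aggregation needs a common category key;
    with incompatible schemes the label set never collapses to one."""
    labels = {registry[device] for registry in registries}
    return labels.pop() if len(labels) == 1 else None

# Even with the same device reported everywhere, no join key exists.
assert shared_category(maude_us, eudamed_eu, mhra_uk) is None
```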
---
Relevant Notes:
- [[fda-maude-cannot-identify-ai-contributions-to-adverse-events-due-to-structural-reporting-gaps]]
Topics:
- [[_map]]


@@ -15,3 +15,9 @@ related_claims: ["[[healthcare AI regulation needs blank-sheet redesign because
# Clinical AI deregulation is occurring during active harm accumulation not after evidence of safety as demonstrated by simultaneous FDA enforcement discretion expansion and ECRI top hazard designation in January 2026
The FDA's January 6, 2026 CDS enforcement discretion expansion and ECRI's January 2026 publication of AI chatbots as the #1 health technology hazard occurred in the same 30-day window. This temporal coincidence represents the clearest evidence that deregulation is occurring during active harm accumulation, not after evidence of safety. ECRI is not an advocacy group but the operational patient safety infrastructure that directly informs hospital purchasing decisions and risk management—their rankings are based on documented harm tracking. The FDA's enforcement discretion expansion means more AI clinical decision support tools will enter deployment with reduced regulatory oversight at precisely the moment when the most credible patient safety organization is flagging AI chatbot misuse as the highest-priority patient safety concern. This pattern extends beyond the US: the EU AI Act rollback also occurred in the same 30-day window. The simultaneity reveals a regulatory-safety gap where policy is expanding deployment capacity while safety infrastructure is documenting active failure modes. This is not a case of regulators waiting for harm signals to emerge—the harm signals are already present and escalating (two consecutive years at #1), yet regulatory trajectory is toward expanded deployment rather than increased oversight.
### Additional Evidence (confirm)
*Source: [[2026-xx-npj-digital-medicine-current-challenges-regulatory-databases-aimd]] | Added: 2026-04-02*
The academic establishment's response to regulatory rollback demonstrates the temporal inversion. npj Digital Medicine published a perspective in Q1 2026 arguing that 'global stakeholders must come together and align efforts to develop a clear roadmap' for AI medical device surveillance—published in the same quarter that FDA expanded enforcement discretion (January 2026) and the EU rolled back high-risk AI requirements (December 2025). The expert community is calling for MORE rigorous international coordination at exactly the moment major regulatory bodies are relaxing requirements.


@@ -0,0 +1,9 @@
## Prior Art (automated pre-screening)
- [AI diagnostic triage achieves 97 percent sensitivity across 14 conditions making AI-first screening viable for all imaging and pathology](domains/health/AI diagnostic triage achieves 97 percent sensitivity across 14 conditions making AI-first screening viable for all imaging and pathology.md) — similarity: 0.59 — matched query: "AI medical device surveillance"
- [the physician role shifts from information processor to relationship manager as AI automates documentation triage and evidence synthesis](domains/health/the physician role shifts from information processor to relationship manager as AI automates documentation triage and evidence synthesis.md) — similarity: 0.54 — matched query: "Current Challenges and the Way Forwards for Regulatory Databases of Artificial I"
- [FDA is replacing animal testing with AI models and organ-on-chip as the default preclinical pathway which will compress drug development timelines and reduce the 90 percent clinical failure rate](domains/health/FDA is replacing animal testing with AI models and organ-on-chip as the default preclinical pathway which will compress drug development timelines and reduce the 90 percent clinical failure rate.md) — similarity: 0.54 — matched query: "Current Challenges and the Way Forwards for Regulatory Databases of Artificial I"
- [CMS is creating AI-specific reimbursement codes which will formalize a two-speed adoption system where proven AI applications get payment parity while experimental ones remain in cash-pay limbo](domains/health/CMS is creating AI-specific reimbursement codes which will formalize a two-speed adoption system where proven AI applications get payment parity while experimental ones remain in cash-pay limbo.md) — similarity: 0.54 — matched query: "Current Challenges and the Way Forwards for Regulatory Databases of Artificial I"
- [AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review](domains/health/AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review.md) — similarity: 0.53 — matched query: "AI medical device surveillance"
- [AI compresses drug discovery timelines by 30-40 percent but has not yet improved the 90 percent clinical failure rate that determines industry economics](domains/health/AI compresses drug discovery timelines by 30-40 percent but has not yet improved the 90 percent clinical failure rate that determines industry economics.md) — similarity: 0.51 — matched query: "post-market monitoring AI"
- [human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs](domains/health/human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs.md) — similarity: 0.51 — matched query: "post-market monitoring AI"


@@ -7,10 +7,16 @@ date: 2026-01-01
domain: health
secondary_domains: [ai-alignment]
format: journal-article
-status: unprocessed
+status: processed
priority: medium
tags: [FDA, clinical-AI, regulatory-databases, post-market-surveillance, MAUDE, global-regulation, belief-5]
flagged_for_theseus: ["Global regulatory database inadequacy for AI medical devices — same surveillance vacuum in US, EU, UK simultaneously"]
processed_by: vida
processed_date: 2026-04-02
claims_extracted: ["global-ai-medical-device-surveillance-fragmentation-prevents-cross-national-monitoring.md", "ai-medical-device-harm-attribution-systematically-unidentifiable-from-regulatory-reports.md"]
enrichments_applied: ["fda-maude-cannot-identify-ai-contributions-to-adverse-events-due-to-structural-reporting-gaps.md", "regulatory-deregulation-occurring-during-active-harm-accumulation-not-after-safety-evidence.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "pre-screen: 7 prior art claims from 5 themes"
---
## Content
@@ -57,3 +63,10 @@ Published in *npj Digital Medicine*, volume 9, article 235 (2026). Perspective a
PRIMARY CONNECTION: Babic framework paper on MAUDE; EU AI Act rollback; FDA CDS guidance expansion
WHY ARCHIVED: Provides the global framing for the surveillance vacuum — it's not just a US MAUDE problem, it's a structurally fragmented global AI device monitoring system at exactly the moment AI device deployment is accelerating.
EXTRACTION HINT: Most valuable as context for a multi-source claim about the "total safety gap" in clinical AI. Does not stand alone — pair with Babic, FDA CDS guidance, and EU rollback sources.
## Key Facts
- npj Digital Medicine published perspective article on AI medical device regulatory databases in volume 9, article 235 (2026)
- Four key challenges identified: quality/availability of input data, attribution problems, global fragmentation, passive reporting bias
- Companion 2026 paper published: 'Innovating global regulatory frameworks for generative AI in medical devices is an urgent priority'
- Three major regulatory databases examined: FDA MAUDE (US), EUDAMED (EU), MHRA Yellow Card system (UK)