vida: extract claims from 2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices

- Source: inbox/queue/2026-xx-npj-digital-medicine-innovating-global-regulatory-frameworks-genai-medical-devices.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
Teleo Agents 2026-04-02 10:51:11 +00:00
parent 87ce090e3b
commit d8032aba10


@@ -0,0 +1,17 @@
---
type: claim
domain: health
description: Existing medical device regulatory frameworks test static algorithms with deterministic outputs, making them structurally inadequate for generative AI where probabilistic outputs, continuous evolution, and hallucination are features of the architecture
confidence: experimental
source: npj Digital Medicine (2026), commentary on regulatory frameworks
created: 2026-04-02
title: Generative AI in medical devices requires categorically different regulatory frameworks than narrow AI because non-deterministic outputs, continuous model updates, and inherent hallucination are architectural properties, not correctable defects
agent: vida
scope: structural
sourcer: npj Digital Medicine authors
related_claims: ["[[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]", "[[OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years]]", "[[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]]"]
---
# Generative AI in medical devices requires categorically different regulatory frameworks than narrow AI because non-deterministic outputs, continuous model updates, and inherent hallucination are architectural properties, not correctable defects
Generative AI medical devices violate the core assumptions of existing regulatory frameworks in three ways:

1. Non-determinism: the same prompt yields different outputs across sessions, breaking the 'fixed algorithm' assumption underlying FDA 510(k) clearance and EU device testing.
2. Continuous updates: model updates constantly change clinical behavior, while regulatory approval tests a static snapshot.
3. Inherent hallucination: probabilistic output generation makes hallucination an architectural feature, not a defect to be corrected through engineering.

The paper argues that no regulatory body has proposed 'hallucination rate' as a required safety metric (sketched below), despite hallucination being documented as a harm type (ECRI 2026) with measured rates (1.47% in ambient scribes per npj Digital Medicine).

The urgency framing is significant: npj Digital Medicine rarely publishes urgent calls to action, which suggests an editorial assessment that current regulatory rollbacks (FDA CDS guidance, EU AI Act medical device exemptions) are moving in the opposite direction from what generative AI safety requires. This is not a call for stricter enforcement of existing rules; it is an argument that the rules themselves are categorically wrong for this technology class.
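
To make the missing metric concrete, here is a minimal sketch of how a statement-level hallucination rate could be computed from clinician-annotated model outputs. The data layout and names (`ReviewedOutput`, `hallucination_rate`) are hypothetical illustrations, not a protocol from the paper; the 1.47% ambient-scribe figure cited above is a rate of this general kind, but the actual annotation methods differ by study.

```python
# Hypothetical sketch of a statement-level hallucination rate metric.
# Assumes each generated output has been reviewed by clinicians who
# counted atomic claims and flagged those unsupported by the source
# record. Names and structure are illustrative, not from the paper.

from dataclasses import dataclass


@dataclass
class ReviewedOutput:
    output_id: str
    total_statements: int        # atomic claims in the generated text
    unsupported_statements: int  # claims with no basis in the source record


def hallucination_rate(reviews: list[ReviewedOutput]) -> float:
    """Fraction of generated statements judged unsupported by reviewers."""
    total = sum(r.total_statements for r in reviews)
    unsupported = sum(r.unsupported_statements for r in reviews)
    return unsupported / total if total else 0.0


# Example: three reviewed notes, one containing a single unsupported claim.
sample = [
    ReviewedOutput("note-001", total_statements=42, unsupported_statements=0),
    ReviewedOutput("note-002", total_statements=35, unsupported_statements=1),
    ReviewedOutput("note-003", total_statements=50, unsupported_statements=0),
]
print(f"hallucination rate: {hallucination_rate(sample):.2%}")
```

A metric like this only makes sense as a post-market surveillance requirement rather than a one-time approval test, since points (1) and (2) above mean any pre-approval measurement describes a model snapshot that may no longer be deployed.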