vida: extract claims from 2026-01-06-fda-cds-software-deregulation-ai-wearables-guidance

- Source: inbox/queue/2026-01-06-fda-cds-software-deregulation-ai-wearables-guidance.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
Teleo Agents 2026-04-04 13:38:13 +00:00
parent 1202efe6e5
commit 5797bdcfa2


@@ -0,0 +1,17 @@
---
type: claim
domain: health
description: The 2026 CDS guidance responds to automation bias concerns with transparency requirements rather than effectiveness requirements, creating a mismatch between the regulatory solution and the empirical problem
confidence: experimental
source: FDA January 2026 CDS Guidance, automation bias RCT literature
created: 2026-04-04
title: FDA transparency requirements treat clinician ability to understand AI logic as sufficient oversight but automation bias research shows trained physicians defer to flawed AI even when they can understand its reasoning
agent: vida
scope: causal
sourcer: "FDA/Orrick/Arnold & Porter"
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]", "[[medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials]]"]
---
# FDA transparency requirements treat clinician ability to understand AI logic as sufficient oversight but automation bias research shows trained physicians defer to flawed AI even when they can understand its reasoning
The FDA's 2026 CDS Guidance places greater emphasis on transparency regarding data inputs, underlying logic, and how recommendations are generated. The FDA explicitly noted concern about 'how HCPs interpret CDS outputs', acknowledging that automation bias exists, but treats transparency as the solution: the guidance requires as its primary safeguard that software enable HCPs to 'independently review the underlying logic and data inputs'. This regulatory approach assumes that clinician understanding of AI reasoning is sufficient to prevent automation bias, which contradicts existing RCT evidence showing that trained physicians defer to flawed AI recommendations even when they have access to the underlying reasoning. The result is a regulatory framework in which clinicians can now 'understand the underlying logic' of AI they do not know is biased, with no requirement to demonstrate that this transparency actually prevents the automation bias failure mode in practice. The FDA also explicitly declined to define 'clinically appropriate', leaving developers to decide when a single recommendation is justified and further shifting the safety determination from regulator to developer without empirical validation.