vida: extract claims from 2026-01-xx-covington-fda-cds-guidance-2026-five-key-takeaways
- Source: inbox/queue/2026-01-xx-covington-fda-cds-guidance-2026-five-key-takeaways.md
- Domain: health
- Claims: 2, Entities: 0
- Enrichments: 1
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
parent e3078d2d85 · commit e53a69c1ef
2 changed files with 34 additions and 0 deletions
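
The "Extracted by" line credits a pipeline ingest step running anthropic/claude-sonnet-4.5 through OpenRouter. Below is a rough sketch of what that call could look like against OpenRouter's OpenAI-compatible endpoint; the client setup, system prompt, and `extract_claims` helper are illustrative assumptions, not the pipeline's actual code.

```python
# Illustrative ingest call via OpenRouter's OpenAI-compatible API.
# The model id comes from the commit message; everything else is assumed.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # placeholder credential
)

def extract_claims(source_markdown: str) -> str:
    """Ask the model to turn a queued source document into claim files."""
    response = client.chat.completions.create(
        model="anthropic/claude-sonnet-4.5",
        messages=[
            {"role": "system", "content": "Extract atomic claims as YAML-frontmatter markdown files."},
            {"role": "user", "content": source_markdown},
        ],
    )
    return response.choices[0].message.content
```
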
@@ -0,0 +1,17 @@
---
type: claim
domain: health
description: The January 2026 guidance creates a regulatory carveout for the highest-volume category of clinical AI deployment without establishing validation criteria
confidence: proven
source: "Covington & Burling LLP analysis of FDA January 6, 2026 CDS Guidance"
created: 2026-04-02
title: FDA's 2026 CDS guidance expands enforcement discretion to cover AI tools providing single clinically appropriate recommendations while leaving clinical appropriateness undefined and requiring no bias evaluation or post-market surveillance
agent: vida
scope: structural
sourcer: "Covington & Burling LLP"
related_claims: ["[[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]]", "[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
---

# FDA's 2026 CDS guidance expands enforcement discretion to cover AI tools providing single clinically appropriate recommendations while leaving clinical appropriateness undefined and requiring no bias evaluation or post-market surveillance

FDA's revised CDS guidance introduces enforcement discretion for CDS tools that provide a single output where 'only one recommendation is clinically appropriate' — explicitly including AI and generative AI. Covington notes this 'covers the vast majority of AI-enabled clinical decision support tools operating in practice.' The critical regulatory gap: FDA explicitly declined to define how developers should evaluate when a single recommendation is 'clinically appropriate,' leaving this determination entirely to the entities with the most commercial interest in expanding the carveout's scope. The guidance excludes only three categories from enforcement discretion: time-sensitive risk predictions, clinical image analysis, and outputs relying on unverifiable data sources. Everything else — ambient AI scribes generating recommendations, clinical chatbots, drug dosing tools, differential diagnosis generators — falls under enforcement discretion. No prospective safety monitoring, bias evaluation, or adverse event reporting specific to AI contributions is required. Developers self-certify clinical appropriateness with no external validation. This represents regulatory abdication for the highest-volume AI deployment category, not regulatory simplification.
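The file above shows the claim schema in full: YAML frontmatter between `---` fences, followed by a markdown body whose heading repeats the title. Below is a minimal loading sketch, assuming Python with PyYAML; `load_claim` and the required-field set are hypothetical, inferred from the fields visible here rather than taken from the pipeline's source.

```python
# Minimal loader for a claim file: YAML frontmatter between "---" fences,
# then a markdown body. PyYAML assumed; the required-field check is illustrative.
from pathlib import Path

import yaml

REQUIRED_FIELDS = {"type", "domain", "title", "confidence", "source", "agent", "scope"}

def load_claim(path: str) -> dict:
    text = Path(path).read_text(encoding="utf-8")
    # The first two "---" markers delimit the frontmatter.
    _, frontmatter, body = text.split("---", 2)
    claim = yaml.safe_load(frontmatter)
    missing = REQUIRED_FIELDS - claim.keys()
    if missing:
        raise ValueError(f"claim file missing fields: {sorted(missing)}")
    claim["body"] = body.strip()
    return claim
```

A loader like this would reject a claim whose frontmatter dropped, say, `confidence`, which is the kind of field a downstream graph sync likely keys on.
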
@@ -0,0 +1,17 @@
---
type: claim
domain: health
description: The guidance frames automation bias as a behavioral issue addressable through transparency rather than a cognitive architecture problem
confidence: experimental
source: "Covington & Burling LLP analysis of FDA January 6, 2026 CDS Guidance, cross-referenced with Sessions 7-9 automation bias research"
created: 2026-04-02
title: FDA's 2026 CDS guidance treats automation bias as a transparency problem solvable by showing clinicians the underlying logic despite research evidence that physicians defer to AI outputs even when reasoning is visible and reviewable
agent: vida
scope: causal
sourcer: "Covington & Burling LLP"
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]", "[[medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials]]"]
---

# FDA's 2026 CDS guidance treats automation bias as a transparency problem solvable by showing clinicians the underlying logic despite research evidence that physicians defer to AI outputs even when reasoning is visible and reviewable

FDA explicitly acknowledged concern about 'how HCPs interpret CDS outputs' in the 2026 guidance, formally recognizing automation bias as a real phenomenon. However, the agency's proposed solution reveals a fundamental misunderstanding of the mechanism: FDA requires transparency about data inputs and underlying logic, stating that HCPs must be able to 'independently review the basis of a recommendation and overcome the potential for automation bias.' The key word is 'overcome' — FDA treats automation bias as a behavioral problem solvable by presenting transparent logic. This directly contradicts research evidence (Sessions 7-9 per agent notes) showing that physicians cannot 'overcome' automation bias by seeing the logic because automation bias is precisely the tendency to defer to AI output even when reasoning is visible and reviewable. The guidance assumes that making AI reasoning transparent enables clinicians to critically evaluate recommendations, when empirical evidence shows that visibility of reasoning does not prevent deference. This represents a category error: treating a cognitive architecture problem (systematic deference to automated outputs) as a transparency problem (insufficient information to evaluate outputs).
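
Both claims carry `related_claims` entries written as `[[wiki-links]]`, which is presumably what a downstream graph sync resolves into edges between claim nodes. Here is a small sketch of that resolution, assuming claims are keyed by their titles; the `WIKI_LINK` pattern and the edge shape are assumptions, not the repository's actual schema.

```python
# Resolve [[wiki-link]] strings in related_claims into (source, target) edges.
# The edge representation here is an assumption for illustration.
import re

WIKI_LINK = re.compile(r"\[\[(.+?)\]\]")

def claim_edges(claim: dict) -> list[tuple[str, str]]:
    """Return (source_title, target_title) pairs for one claim's links."""
    edges = []
    for link in claim.get("related_claims", []):
        match = WIKI_LINK.fullmatch(link)
        if match:
            edges.append((claim["title"], match.group(1)))
    return edges
```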