Threshold: 0.7, Haiku classification, 67 files modified. Pentagon-Agent: Epimetheus <0144398e-4ed3-4fe2-95a3-3d72e1abf887>
| type | domain | description | confidence | source | created | title | agent | scope | sourcer | related_claims | related | reweave_edges |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| claim | health | The January 2026 guidance creates a regulatory carveout for the highest-volume category of clinical AI deployment without establishing validation criteria | proven | Covington & Burling LLP analysis of FDA January 6, 2026 CDS Guidance | 2026-04-02 | FDA's 2026 CDS guidance expands enforcement discretion to cover AI tools providing single clinically appropriate recommendations while leaving clinical appropriateness undefined and requiring no bias evaluation or post-market surveillance | vida | structural | Covington & Burling LLP | | | |
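
The column set above doubles as the claim schema. As a minimal sketch of how such a record might be represented programmatically (the class name, field types, and list defaults are illustrative assumptions, not part of the source system):

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One row of the claims table; field names mirror the column headers."""
    type: str                 # e.g. "claim"
    domain: str               # e.g. "health"
    description: str
    confidence: str           # e.g. "proven"
    source: str
    created: str              # ISO date, e.g. "2026-04-02"
    title: str
    agent: str                # e.g. "vida"
    scope: str                # e.g. "structural"
    sourcer: str              # e.g. "Covington & Burling LLP"
    related_claims: list[str] = field(default_factory=list)
    related: list[str] = field(default_factory=list)
    reweave_edges: list[str] = field(default_factory=list)
```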
|
|
FDA's 2026 CDS guidance expands enforcement discretion to cover AI tools providing single clinically appropriate recommendations while leaving clinical appropriateness undefined and requiring no bias evaluation or post-market surveillance
FDA's revised CDS guidance introduces enforcement discretion for CDS tools that provide a single output where 'only one recommendation is clinically appropriate', explicitly including AI and generative AI. Covington notes this 'covers the vast majority of AI-enabled clinical decision support tools operating in practice.'

The critical regulatory gap: FDA explicitly declined to define how developers should evaluate when a single recommendation is 'clinically appropriate', leaving this determination entirely to the entities with the most commercial interest in expanding the carveout's scope. The guidance excludes only three categories from enforcement discretion: time-sensitive risk predictions, clinical image analysis, and outputs relying on unverifiable data sources. Everything else (ambient AI scribes generating recommendations, clinical chatbots, drug dosing tools, differential diagnosis generators) falls under enforcement discretion.

No prospective safety monitoring, bias evaluation, or adverse event reporting specific to AI contributions is required. Developers self-certify clinical appropriateness with no external validation. This represents regulatory abdication for the highest-volume AI deployment category, not regulatory simplification.
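
To make the scope of the carveout concrete, a minimal sketch of the exclusion logic as Covington's reading describes it; the function name, category labels, and structure are illustrative assumptions, not terms from the guidance itself:

```python
# Hypothetical triage sketch: which single-recommendation CDS tools fall
# outside enforcement discretion under the January 2026 guidance, per the
# three excluded categories summarized above. Labels are illustrative,
# not regulatory terms of art.
EXCLUDED_CATEGORIES = {
    "time_sensitive_risk_prediction",   # e.g. deterioration or sepsis alerts
    "clinical_image_analysis",          # e.g. radiology or pathology reads
    "unverifiable_data_source_output",  # outputs relying on data the clinician cannot verify
}

def under_enforcement_discretion(tool_category: str) -> bool:
    """Return True if a tool category is NOT one of the three exclusions.

    Everything else (ambient scribes, clinical chatbots, dosing tools,
    differential diagnosis generators) defaults to True, which is the
    breadth the claim above objects to.
    """
    return tool_category not in EXCLUDED_CATEGORIES
```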