| claim | health | At no point in the deployment lifecycle is AI safety systematically evaluated for most clinical decision support tools | experimental | Babic et al. 2025 (MAUDE analysis) + FDA CDS Guidance, January 2026 (enforcement discretion expansion) | 2026-04-02 | The clinical AI safety gap is doubly structural: FDA enforcement discretion removes pre-deployment safety requirements, while MAUDE's lack of AI-specific fields means post-market surveillance cannot detect AI-attributable harm | vida | structural | Babic et al. |
|
| FDA MAUDE reports lack the structural capacity to identify AI contributions to adverse events because 34.5 percent of AI-device reports contain insufficient information to determine causality |
| FDA's MAUDE database systematically under-detects AI-attributable harm because it has no mechanism for identifying AI algorithm contributions to adverse events |
| Regulatory vacuum emerges when deregulation outpaces safety evidence accumulation, creating institutional epistemic divergence between regulators and health authorities |
|
| FDA MAUDE reports lack the structural capacity to identify AI contributions to adverse events because 34.5 percent of AI-device reports contain insufficient information to determine causality | supports | 2026-04-07 |
| FDA's MAUDE database systematically under-detects AI-attributable harm because it has no mechanism for identifying AI algorithm contributions to adverse events | supports | 2026-04-07 |
| Regulatory vacuum emerges when deregulation outpaces safety evidence accumulation, creating institutional epistemic divergence between regulators and health authorities | supports | 2026-04-07 |
|