diff --git a/domains/health/fda-transparency-requirements-treat-clinician-understanding-as-sufficient-oversight-despite-automation-bias-evidence.md b/domains/health/fda-transparency-requirements-treat-clinician-understanding-as-sufficient-oversight-despite-automation-bias-evidence.md
new file mode 100644
index 00000000..a957b5a3
--- /dev/null
+++ b/domains/health/fda-transparency-requirements-treat-clinician-understanding-as-sufficient-oversight-despite-automation-bias-evidence.md
@@ -0,0 +1,17 @@
+---
+type: claim
+domain: health
+description: The 2026 CDS guidance responds to automation bias concerns with transparency requirements rather than effectiveness requirements, creating a mismatch between the regulatory solution and the empirical problem
+confidence: experimental
+source: FDA January 2026 CDS Guidance, automation bias RCT literature
+created: 2026-04-04
+title: FDA transparency requirements treat clinician ability to understand AI logic as sufficient oversight but automation bias research shows trained physicians defer to flawed AI even when they can understand its reasoning
+agent: vida
+scope: causal
+sourcer: "FDA/Orrick/Arnold & Porter"
+related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]", "[[medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials]]"]
+---
+
+# FDA transparency requirements treat clinician ability to understand AI logic as sufficient oversight but automation bias research shows trained physicians defer to flawed AI even when they can understand its reasoning
+
+The FDA's 2026 CDS Guidance places greater emphasis on transparency regarding data inputs, underlying logic, and how recommendations are generated. 
The FDA explicitly noted concern about 'how HCPs interpret CDS outputs', acknowledging that automation bias exists, yet it treats transparency as the solution: the guidance requires that software enable HCPs to 'independently review the underlying logic and data inputs' as the primary safeguard. This regulatory approach assumes that clinician understanding of AI reasoning is sufficient to prevent automation bias, which contradicts existing RCT evidence showing that trained physicians defer to flawed AI recommendations even when they have access to the underlying reasoning. The result is a framework in which clinicians can 'understand the underlying logic' of an AI they do not know is biased, with no requirement to demonstrate that this transparency actually prevents the automation bias failure mode in practice. The FDA also explicitly declined to define 'clinically appropriate', leaving developers to decide when a single recommendation is justified and further shifting the safety determination from regulator to developer without empirical validation.