From 00519f9024366cb395691081246ef109b8dcf8c5 Mon Sep 17 00:00:00 2001
From: Teleo Agents
Date: Sat, 4 Apr 2026 13:38:15 +0000
Subject: [PATCH] =?UTF-8?q?source:=202026-01-06-fda-cds-software-deregulat?=
 =?UTF-8?q?ion-ai-wearables-guidance.md=20=E2=86=92=20processed?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Pentagon-Agent: Epimetheus
---
 ...ware-deregulation-ai-wearables-guidance.md |  5 ++-
 ...ware-deregulation-ai-wearables-guidance.md | 44 -------------------
 2 files changed, 4 insertions(+), 45 deletions(-)
 delete mode 100644 inbox/queue/2026-01-06-fda-cds-software-deregulation-ai-wearables-guidance.md

diff --git a/inbox/archive/health/2026-01-06-fda-cds-software-deregulation-ai-wearables-guidance.md b/inbox/archive/health/2026-01-06-fda-cds-software-deregulation-ai-wearables-guidance.md
index 8f7fdf89..972f2c25 100644
--- a/inbox/archive/health/2026-01-06-fda-cds-software-deregulation-ai-wearables-guidance.md
+++ b/inbox/archive/health/2026-01-06-fda-cds-software-deregulation-ai-wearables-guidance.md
@@ -7,10 +7,13 @@ date: 2026-01-06
 domain: health
 secondary_domains: [ai-alignment]
 format: regulatory-guidance
-status: unprocessed
+status: processed
+processed_by: vida
+processed_date: 2026-04-04
 priority: high
 tags: [FDA, clinical-AI, CDS-software, deregulation, enforcement-discretion, wearables, belief-5, regulatory-capture]
 flagged_for_theseus: ["FDA deregulation of clinical AI parallels EU AI Act rollback — global pattern of regulatory capture"]
+extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 
 ## Content
diff --git a/inbox/queue/2026-01-06-fda-cds-software-deregulation-ai-wearables-guidance.md b/inbox/queue/2026-01-06-fda-cds-software-deregulation-ai-wearables-guidance.md
deleted file mode 100644
index 8f7fdf89..00000000
--- a/inbox/queue/2026-01-06-fda-cds-software-deregulation-ai-wearables-guidance.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-type: source
-title: "FDA Eases Oversight for AI-Enabled Clinical Decision Support Software and Wearables (January 2026 Guidance)"
-author: "FDA / analysis via Orrick, Arnold & Porter, Kevin MD"
-url: https://www.orrick.com/en/Insights/2026/01/FDA-Eases-Oversight-for-AI-Enabled-Clinical-Decision-Support-Software-and-Wearables
-date: 2026-01-06
-domain: health
-secondary_domains: [ai-alignment]
-format: regulatory-guidance
-status: unprocessed
-priority: high
-tags: [FDA, clinical-AI, CDS-software, deregulation, enforcement-discretion, wearables, belief-5, regulatory-capture]
-flagged_for_theseus: ["FDA deregulation of clinical AI parallels EU AI Act rollback — global pattern of regulatory capture"]
----
-
-## Content
-
-FDA published guidance on January 6, 2026, expanding enforcement discretion for AI-enabled clinical decision support (CDS) software and wearable devices.
-
-**Key policy changes:**
-- **CDS software:** Expanded enforcement discretion where software provides a single, clinically appropriate recommendation AND enables HCPs to independently review the underlying logic and data inputs. This applies to AI including generative AI.
-- **Wearables:** Expanded wellness policy for non-invasive consumer wearables reporting physiologic metrics (blood pressure, O2 saturation, glucose-related signals) — broader set may now fall under enforcement discretion.
-- **Commissioner framing:** FDA Commissioner Marty Makary at CES 2026: "The government doesn't need to be regulating everything" — "get out of the way" where oversight is not warranted.
-- **Risk-based carveouts maintained:** Time-critical event prediction (CVD event in next 24 hours) and medical image analysis remain under oversight.
-- **Transparency emphasis:** 2026 CDS Guidance places greater emphasis on transparency regarding data inputs, underlying logic, and how recommendations are generated.
-- **Automation bias acknowledged:** FDA explicitly noted concern about "how HCPs interpret CDS outputs" — acknowledging automation bias exists but treating transparency as the solution.
-- **Ambiguity preserved:** FDA explicitly declined to define "clinically appropriate" — leaving developers to decide when a single recommendation is justified.
-
-**Critical gap:** The guidance maintains oversight only for "time-critical" and "image analysis" functions. The vast majority of AI-enabled CDS software — including OpenEvidence-type tools that generate differential diagnoses, treatment recommendations, and drug dosing — operates outside these carveouts.
-
-**Context:** Published same week as Novo Nordisk/Lilly GLP-1 price deals with Medicare. Framed as deregulatory reform consistent with broader Trump administration regulatory philosophy.
-
-## Agent Notes
-**Why this matters:** This is the US counterpart to the EU AI Act rollback. Both regulatory bodies loosened clinical AI oversight in the same 30-day window (EU Commission proposal December 2025, FDA guidance January 6, 2026). The WHO warning about EU regulatory vacuum applies symmetrically to the FDA's expanded enforcement discretion. OpenEvidence (already at 20M consultations/month, $12B valuation) operates under enforcement discretion with zero required safety/bias evaluation.
-**What surprised me:** The "transparency as solution" framing — FDA acknowledges automation bias as a real concern, then responds with transparency requirements rather than effectiveness requirements. Clinicians can now "understand the underlying logic" of AI they don't know is biased.
-**What I expected but didn't find:** Any requirement for post-market surveillance of CDS software bias outcomes. The guidance creates no mechanism to detect the NOHARM, demographic bias, or automation bias failure modes after deployment.
-**KB connections:** All clinical AI failure mode papers (Sessions 7-9); OpenEvidence opacity paper; EU AI Act rollback (Petrie-Flom); automation bias RCT (already archived).
-**Extraction hints:** (1) "FDA's January 2026 CDS guidance expands enforcement discretion without requiring bias evaluation or post-market safety surveillance — creating a deployment pathway for high-volume AI tools with zero required safety monitoring"; (2) "FDA transparency requirements treat clinician ability to 'understand the logic' as sufficient oversight — but automation bias research shows trained physicians still defer to flawed AI even when they can understand its reasoning."
-**Context:** The "Orrick" analysis is a law firm regulatory update — reliable factual summary. Kevin MD commentary is clinical perspective. The ACR (American College of Radiology) has published a separate analysis of implications for radiology AI.
-
-## Curator Notes
-PRIMARY CONNECTION: All clinical AI failure mode papers; EU AI Act rollback (companion source)
-WHY ARCHIVED: US regulatory rollback parallel to EU — together they document a global pattern of regulatory capture occurring simultaneously with research evidence of failure modes
-EXTRACTION HINT: The convergent EU+US rollback in the same 30-day window is the extractable pattern. Individual guidances are less important than the coordinated global signal.