| description | type | domain | created | source | confidence |
|---|---|---|---|---|---|
| Wachter argues AI should be regulated more like physician licensing with competency exams and ongoing certification rather than the FDA approval model designed for drugs and devices that remain static forever | claim | health | 2026-02-18 | DJ Patil interviewing Bob Wachter, Commonwealth Club, February 9, 2026; Wachter 'A Giant Leap' (2026) | likely |
healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software
Bob Wachter argues that the current regulatory framework for healthcare AI is a "square peg and round hole problem." The FDA model was built for drugs that remain chemically identical forever and devices with fixed specifications. AI systems that learn, update, and adapt continuously break every assumption in this model.
The alternative Wachter proposes: regulate AI more like physicians. Physicians pass licensing exams to practice, maintain board certification through ongoing competency testing, and face consequences when they harm patients. An analogous AI regulatory framework might require passing standardized clinical competency tests before deployment, periodic re-certification as models update, and clear accountability when AI-enabled care causes harm.
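Wachter's analogy maps onto a certification lifecycle. To make that structure concrete, here is a minimal Python sketch of such a lifecycle; the class, the one-year validity window, and the 0.9 passing score are illustrative assumptions, not anything Wachter specifies:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum, auto


class Status(Enum):
    UNCERTIFIED = auto()   # may not be deployed clinically
    CERTIFIED = auto()     # passed competency exam, within validity window
    REVOKED = auto()       # failed re-certification or caused documented harm


@dataclass
class ClinicalAICertification:
    """Physician-style certification lifecycle for a clinical AI model (illustrative)."""
    model_id: str
    model_version: str
    status: Status = Status.UNCERTIFIED
    certified_on: date | None = None
    validity: timedelta = timedelta(days=365)  # assumed re-certification cycle

    def pass_competency_exam(self, exam_score: float, passing_score: float = 0.9) -> None:
        # Analogue of the licensing exam: deployment is gated on a standardized test.
        if exam_score >= passing_score:
            self.status = Status.CERTIFIED
            self.certified_on = date.today()

    def on_model_update(self, new_version: str) -> None:
        # A continuously learning system invalidates its prior certification,
        # just as board certification requires periodic re-testing.
        self.model_version = new_version
        self.status = Status.UNCERTIFIED
        self.certified_on = None

    def may_deploy(self, today: date | None = None) -> bool:
        today = today or date.today()
        return (
            self.status is Status.CERTIFIED
            and self.certified_on is not None
            and today - self.certified_on <= self.validity
        )
```

The structural point the sketch captures is that a model update drops certification entirely, which is exactly the assumption the static drug-and-device model has no way to express.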
This matters because the regulatory gap is widening. AI tools are being deployed in clinical settings faster than regulators can evaluate them. In Wachter's assessment, the risk of overregulation -- stifling beneficial AI adoption while the healthcare system desperately needs help -- outweighs the risk of underregulation. But "free rein" is not sustainable either. What Wachter recommends is a high-level task force that starts from a blank piece of paper, explicitly unconstrained by existing FDA categories.
The AI payment problem compounds the regulatory gap. No payer currently reimburses AI-enabled mammograms despite evidence that AI mammography detects early cancers more reliably than human radiologists alone. Patients pay $50-75 out of pocket for the AI overlay. This misalignment may force the transition to value-based care, where health systems are paid a fixed amount with the expectation they will buy and use AI tools that help deliver better care at lower cost. The payment question and the regulatory question are intertwined: without a regulatory framework, payers have no basis for coverage decisions.
Relevant Notes:
- the FDA now separates wellness devices from medical devices based on claims not sensor technology enabling health insights without full medical device classification -- the FDA has already created flexibility for wellness devices; clinical AI needs a parallel regulatory innovation
- value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk -- AI payment gaps may accelerate VBC adoption by making fee-for-service untenable for AI-enabled care
- adaptive governance outperforms rigid alignment blueprints because superintelligence development has too many unknowns for fixed plans -- the same principle applies to clinical AI: governance frameworks must adapt with the technology
- technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap -- healthcare AI regulation is a specific instance of this general coordination gap
Topics:
- health and wellness