| description | type | domain | created | source | confidence |
|---|---|---|---|---|---|
| PwC projects one trillion dollars in healthcare spending shifting to AI-driven models by 2035, with documentation automation being most certain, followed by diagnostic triage, drug discovery, clinical decision support, and population health | claim | health | 2026-02-17 | PwC From Breaking Point to Breakthrough 2025; synthesis of ambient documentation, diagnostic AI, and drug discovery evidence (February 2026) | likely |
the physician role shifts from information processor to relationship manager as AI automates documentation, triage, and evidence synthesis
PwC projects that $1 trillion in annual US healthcare spending will shift from administrative overhead and brick-and-mortar infrastructure to AI-driven, digital-first models by 2035. The value-creation opportunities, ranked: (1) documentation automation (most certain -- a $1.85B ambient market growing 28.7% annually), (2) diagnostic triage and screening (highest clinical value -- AI catching what humans miss), (3) drug discovery (highest long-term economic value if it cracks clinical failure rates), (4) clinical decision support (fastest adoption curve ever -- OpenEvidence), (5) population health and value-based care (VBC) (highest systemic value -- predicting and preventing rather than treating).
The 2035 patient encounter looks fundamentally different. Pre-visit: AI reviews records, wearable data, and medication adherence, surfacing concerns in 60 seconds. During visit: ambient AI captures conversation while physician faces the patient. AI surfaces relevant evidence in real-time. Post-visit: AI generates notes, codes encounters, sends patient summaries, schedules follow-ups, submits prior auths. Between visits: AI monitors wearable data and triggers outreach before ED presentation.
What remains irreducibly human: the therapeutic relationship, complex treatment decisions with ambiguous tradeoffs (what matters to you in the face of a cancer diagnosis), and procedural skill requiring real-time adaptability. Documentation, which consumes 50% of physician time, approaches zero. The diagnostic safety net catches what humans miss. The administrative machinery runs itself. What remains is the conversation about what matters and what to do about it.
Wachter (UCSF Chair of Medicine) describes this shift in practice. He uses OpenEvidence -- essentially a GPT-style model trained exclusively on medical literature -- roughly ten times per morning on rounds, asking questions he previously could only answer by running into a specialist in the cafeteria. The AI functions as an always-available "wingman" or "companion," providing subspecialty-level knowledge at the generalist's fingertips. The physician's role becomes steering the AI's computational power toward meaningful clinical questions -- knowing which eight facts out of fifty to include in a prompt, which is itself "a highly cognitive act based on four years of medical school, three years of residency, two years of fellowship, and 40 years of practice." The de-skilling risk is real, but the direction is clear: AI handles information retrieval and pattern matching; physicians handle the judgment, empathy, and "eyeball test" that no current technology replicates. The division matters because human-in-the-loop clinical AI degrades to worse-than-AI-alone when physicians both de-skill from reliance and introduce errors when overriding correct outputs.
Relevant Notes:
- ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone -- the documentation automation mechanism
- medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials -- why AI augments workflow not diagnosis
- human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs -- the de-skilling risk that shapes how the physician-AI relationship must be designed
- centaur team performance depends on role complementarity not mere human-AI combination -- the clinical centaur: AI handles information processing, humans handle relationships and judgment
- healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software -- the AI payment gap may force VBC transition, which would accelerate the physician role shift
Topics:
- livingip overview
- health and wellness