leo: rename centaur claim file + update all wiki links
- What: Renamed centaur file to match rewritten title ('depends on role complementarity')
- Why: Rio caught filename/title mismatch in PR #49 review
- Scope: 16 files updated — 1 rename, 15 wiki link updates
Pentagon-Agent: Leo <76FB9BCA-CC16-4479-B3E5-25A3769B3D7E>
This commit is contained in: parent 89c153d935 · commit 567f26b381
16 changed files with 19 additions and 19 deletions
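The scope line describes a mechanical operation: rename one note file, then rewrite every occurrence of the old title across the vault. A minimal sketch of how such a pass might be scripted (the function, vault layout, and flat-file assumption are illustrative, not the tooling actually used for this commit; it assumes Markdown notes whose filenames match their titles):

```python
from pathlib import Path

def rename_and_relink(vault: Path, old_title: str, new_title: str) -> int:
    """Rename `<old_title>.md` to `<new_title>.md`, then rewrite the old
    title wherever it appears in notes. Plain substring replacement covers
    [[wiki links]], quoted frontmatter entries, and inline mentions alike,
    on the assumption that claim titles are long enough to be unambiguous.
    Returns the number of notes whose text changed."""
    old_note = vault / f"{old_title}.md"
    if old_note.exists():
        old_note.rename(vault / f"{new_title}.md")
    changed = 0
    for note in sorted(vault.rglob("*.md")):
        text = note.read_text(encoding="utf-8")
        if old_title in text:
            note.write_text(text.replace(old_title, new_title), encoding="utf-8")
            changed += 1
    return changed
```

Note that the renamed note itself counts toward the total when its own body mentions the old title, which matches this commit: the renamed file's heading and body were updated along with the 15 linking notes.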
@@ -50,7 +50,7 @@ Neither techno-optimism nor doomerism. The future is a probability space shaped
 Human-AI teams that augment human judgment, not replace it. Collective superintelligence preserves agency in a way monolithic AI cannot.
 
 **Grounding:**
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]]
+- [[centaur team performance depends on role complementarity not mere human-AI combination]]
 - [[three paths to superintelligence exist but only collective superintelligence preserves human agency]]
 - [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]]
 
@@ -8,7 +8,7 @@ outcome: pending
 confidence: moderate
 time_horizon: "12-24 months -- evaluable through beachhead domain agent performance by Q1 2028"
 depends_on:
-- "[[centaur teams outperform both pure humans and pure AI because complementary strengths compound]]"
+- "[[centaur team performance depends on role complementarity not mere human-AI combination]]"
 - "[[three paths to superintelligence exist but only collective superintelligence preserves human agency]]"
 - "[[narratives are infrastructure not just communication because they coordinate action at civilizational scale]]"
 - "[[grand strategy aligns unlimited aspirations with limited capabilities through proximate objectives]]"
@@ -28,7 +28,7 @@ The critical framing: frontier AI labs are simultaneously an incumbent in the kn
 ## Reasoning Chain
 
 Beliefs this depends on:
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- collective synthesis inherently outperforms pure AI because it combines human domain expertise with AI processing
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- collective synthesis inherently outperforms pure AI because it combines human domain expertise with AI processing
 - [[three paths to superintelligence exist but only collective superintelligence preserves human agency]] -- the architectural choice matters: collective intelligence preserves attribution and agency in ways monolithic AI cannot
 - [[grand strategy aligns unlimited aspirations with limited capabilities through proximate objectives]] -- the knowledge industry beachhead is the proximate objective toward collective superintelligence
 
@@ -41,7 +41,7 @@ Three paths to superintelligence: speed (making existing architectures faster),
 **Grounding:**
 - [[three paths to superintelligence exist but only collective superintelligence preserves human agency]] -- the three-path framework
 - [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- the power distribution argument
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the empirical evidence for human-AI complementarity
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the empirical evidence for human-AI complementarity
 
 **Challenges considered:** Collective systems are slower than monolithic ones — in a race, the monolithic approach wins the capability contest. Coordination overhead reduces the effective intelligence of distributed systems. The "collective" approach may be structurally inferior for certain tasks (rapid response, unified action, consistency). Counter: the speed disadvantage is real for some tasks but irrelevant for alignment — you don't need the fastest system, you need the safest one. And collective systems have superior properties for the alignment-relevant qualities: diversity, error correction, representation of multiple value systems.
 
@@ -41,7 +41,7 @@ Three paths to superintelligence: speed (making existing architectures faster),
 **Grounding:**
 - [[three paths to superintelligence exist but only collective superintelligence preserves human agency]] -- the three-path framework
 - [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- the power distribution argument
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the empirical evidence for human-AI complementarity
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the empirical evidence for human-AI complementarity
 
 **Challenges considered:** Collective systems are slower than monolithic ones — in a race, the monolithic approach wins the capability contest. Coordination overhead reduces the effective intelligence of distributed systems. The "collective" approach may be structurally inferior for certain tasks (rapid response, unified action, consistency). Counter: the speed disadvantage is real for some tasks but irrelevant for alignment — you don't need the fastest system, you need the safest one. And collective systems have superior properties for the alignment-relevant qualities: diversity, error correction, representation of multiple value systems.
 
@@ -54,7 +54,7 @@ Early detection and prevention costs a fraction of acute care. A $500 remote mon
 AI achieves specialist-level accuracy in narrow diagnostic tasks (radiology, pathology, dermatology). But clinical medicine is not a collection of narrow diagnostic tasks — it is complex decision-making under uncertainty with incomplete information, patient preferences, and ethical dimensions that current AI cannot handle. The model is centaur, not replacement: AI handles pattern recognition at superhuman scale while physicians handle judgment, communication, and care.
 
 **Grounding:**
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the general principle
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the general principle
 - [[healthcares defensible layer is where atoms become bits because physical-to-digital conversion generates the data that powers AI care while building patient trust that software alone cannot create]] -- trust as a clinical necessity
 - [[the personbyte is a fundamental quantization limit on knowledge accumulation forcing all complex production into networked teams]] -- clinical medicine exceeds individual cognitive capacity
 
@@ -7,7 +7,7 @@ confidence: experimental
 source: "Synthesis by Leo from: centaur team claim (Kasparov); HITL degradation claim (Wachter/Patil, Stanford-Harvard study); AI scribe adoption (Bessemer 2026); alignment scalable oversight claims"
 created: 2026-03-07
 depends_on:
-- "centaur teams outperform both pure humans and pure AI because complementary strengths compound"
+- "centaur team performance depends on role complementarity not mere human-AI combination"
 - "human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs"
 - "AI scribes reached 92 percent provider adoption in under 3 years because documentation is the rare healthcare workflow where AI value is immediate unambiguous and low-risk"
 - "scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps"
@@ -15,7 +15,7 @@ depends_on:
 
 # centaur teams succeed only when role boundaries prevent humans from overriding AI in domains where AI is the stronger partner
 
-The knowledge base contains a tension: centaur teams outperform both pure humans and pure AI in chess, but physicians with AI access score *worse* than AI alone in clinical diagnosis (68% vs 90%). This isn't a contradiction — it's a boundary condition that reveals when human-AI collaboration helps and when it hurts.
+The knowledge base contains a tension: centaur team performance depends on role complementarity in chess, but physicians with AI access score *worse* than AI alone in clinical diagnosis (68% vs 90%). This isn't a contradiction — it's a boundary condition that reveals when human-AI collaboration helps and when it hurts.
 
 **The evidence across domains:**
 
@@ -42,7 +42,7 @@ This is the centaur model done right: not human-verifies-AI, but human-and-AI-on
 ---
 
 Relevant Notes:
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] — the chess evidence establishing the centaur model
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] — the chess evidence establishing the centaur model
 - [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]] — the clinical counter-evidence constraining when the model applies
 - [[AI scribes reached 92 percent provider adoption in under 3 years because documentation is the rare healthcare workflow where AI value is immediate unambiguous and low-risk]] — the success case with clear role boundaries
 - [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] — alignment oversight facing the same boundary problem
@@ -29,7 +29,7 @@ The deeper memetic point: synthesis shapes ideas while appearing to reflect them
 Relevant Notes:
 - [[meme propagation selects for simplicity novelty and conformity pressure rather than truth or utility]] -- synthesis that clarifies is itself memetic selection: the simplified version propagates while the original formulation fades
 - [[complex ideas propagate with higher fidelity through personal interaction than mass media because nuance requires bidirectional communication]] -- the three-beat pattern explains WHY personal interaction preserves fidelity: real-time synthesis enables correction and refinement
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the conversational pattern IS a centaur interaction: human provides raw insight, AI provides synthesis and challenge
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the conversational pattern IS a centaur interaction: human provides raw insight, AI provides synthesis and challenge
 - [[metaphor reframing is more powerful than argument because it changes which conclusions feel natural without requiring persuasion]] -- synthesis that reframes is a form of metaphor introduction: changing the vocabulary changes which conclusions feel natural
 - [[Boardy AI]] -- the AI system where this pattern was observed and analyzed
 
@@ -23,7 +23,7 @@ Relevant Notes:
 - [[the first mover to superintelligence likely gains decisive strategic advantage because the gap between leader and followers accelerates during takeoff]] -- the collective path is the only one that prevents singleton formation through first-mover dynamics
 - [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- provides the design specification for the collective path
 - [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] -- explains why the collective path has a structural safety advantage
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- empirical evidence for the viability of the collective path
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- empirical evidence for the viability of the collective path
 - [[bostrom takes single-digit year timelines to superintelligence seriously while acknowledging decades-long alternatives remain possible]] -- compressed timelines add urgency: the collective path must be pursued now, not eventually
 - [[developing superintelligence is surgery for a fatal condition not russian roulette because the baseline of inaction is itself catastrophic]] -- Bostrom's evolved position adds urgency to all three paths, strengthening the case for the collective one
 
@@ -61,4 +61,4 @@ The shared theory underlying Theseus's domain analysis lives in the foundations
 - [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — structural race dynamics
 - [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — the institutional gap
 - [[collective superintelligence is the alternative to monolithic AI controlled by a few]] — the distributed alternative
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] — human-AI complementarity evidence
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] — human-AI complementarity evidence
@@ -28,7 +28,7 @@ Relevant Notes:
 - [[AI alignment is a coordination problem not a technical problem]] -- models being deployed in military contexts despite lacking judgment on catastrophic escalation is a coordination failure
 - [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] -- war game results suggest oversight in high-stakes military contexts would be even harder than debate experiments indicate
 - [[three paths to superintelligence exist but only collective superintelligence preserves human agency]] -- monolithic models making unilateral escalation decisions is the structural risk collective architectures avoid
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the war games show precisely why human-in-the-loop matters: humans bring judgment about catastrophic irreversibility that models lack
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the war games show precisely why human-in-the-loop matters: humans bring judgment about catastrophic irreversibility that models lack
 
 Topics:
 - [[_map]]
@@ -15,13 +15,13 @@ The emerging architecture runs through AI: (1) wearable captures continuous data
 
 What IS clinically integrated today: Apple Watch ECG/AFib detection (qualified as FDA Medical Device Development Tool), CGMs for diabetes, and expanding Medicare RPM codes (new CPT 99445 and 99470 in 2026 allowing billing for as few as 2-15 days of data). What is NOT integrated despite data availability: HRV trends, sleep staging, activity data, continuous SpO2 trends, strain/recovery scores, CGM data for non-diabetics.
 
-FHIR R6 (expected 2026) is the interoperability standard enabling wearable-to-EHR data exchange. But interoperability alone is insufficient -- without AI processing, more data access just creates more alert fatigue. Since [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]], the monitoring centaur is AI handling data volume while clinicians provide judgment and context.
+FHIR R6 (expected 2026) is the interoperability standard enabling wearable-to-EHR data exchange. But interoperability alone is insufficient -- without AI processing, more data access just creates more alert fatigue. Since [[centaur team performance depends on role complementarity not mere human-AI combination]], the monitoring centaur is AI handling data volume while clinicians provide judgment and context.
 
 ---
 
 Relevant Notes:
 - [[continuous health monitoring is converging on a multi-layer sensor stack of ambient wearables periodic patches and environmental sensors processed through AI middleware]] -- the full sensor architecture this middleware enables
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the monitoring centaur: AI handles volume, humans provide judgment
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the monitoring centaur: AI handles volume, humans provide judgment
 
 Topics:
 - livingip overview
@@ -20,7 +20,7 @@ The incumbent response is UpToDate ExpertAI (Wolters Kluwer, Q4 2025), leveragin
 ---
 
 Relevant Notes:
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- OpenEvidence is the clinical centaur: AI provides evidence synthesis, physician provides judgment
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- OpenEvidence is the clinical centaur: AI provides evidence synthesis, physician provides judgment
 - [[knowledge scaling bottlenecks kill revolutionary ideas before they reach critical mass]] -- OpenEvidence solved clinical knowledge scaling by making evidence retrieval instant
 
 Topics:
@@ -22,7 +22,7 @@ Wachter frames the challenge directly: "Humans suck at remaining vigilant over t
 ---
 
 Relevant Notes:
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the chess centaur model does NOT generalize to clinical medicine where physician overrides degrade AI performance
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the chess centaur model does NOT generalize to clinical medicine where physician overrides degrade AI performance
 - [[medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials]] -- the multi-hospital RCT found similar diagnostic accuracy with/without AI; the Stanford/Harvard study found AI alone dramatically superior
 - [[the physician role shifts from information processor to relationship manager as AI automates documentation triage and evidence synthesis]] -- if physicians degrade AI diagnostic performance, the role shift toward relationship management is not just efficient but necessary
 - [[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]] -- documentation AI where physicians don't override outputs avoids the de-skilling problem
@@ -21,7 +21,7 @@ The implication for AI deployment strategy: the highest-value clinical AI applic
 
 Relevant Notes:
 - [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]] -- Stanford/Harvard study shows physician overrides degrade AI performance from 90% to 68%
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the chess centaur model does NOT generalize cleanly to clinical medicine; interaction design matters
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the chess centaur model does NOT generalize cleanly to clinical medicine; interaction design matters
 - [[OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years]] -- OpenEvidence succeeds as evidence retrieval, not diagnostic replacement
 
 Topics:
@@ -23,7 +23,7 @@ Relevant Notes:
 - [[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]] -- the documentation automation mechanism
 - [[medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials]] -- why AI augments workflow not diagnosis
 - [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]] -- the de-skilling risk that shapes how the physician-AI relationship must be designed
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the clinical centaur: AI handles information processing, humans handle relationships and judgment
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the clinical centaur: AI handles information processing, humans handle relationships and judgment
 - [[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]] -- the AI payment gap may force VBC transition, which would accelerate the physician role shift
 
 Topics: