diff --git a/agents/leo/beliefs.md b/agents/leo/beliefs.md
index 22a5f3d..ef548a9 100644
--- a/agents/leo/beliefs.md
+++ b/agents/leo/beliefs.md
@@ -50,7 +50,7 @@ Neither techno-optimism nor doomerism. The future is a probability space shaped
 Human-AI teams that augment human judgment, not replace it. Collective superintelligence preserves agency in a way monolithic AI cannot.
 **Grounding:**
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]]
+- [[centaur team performance depends on role complementarity not mere human-AI combination]]
 - [[three paths to superintelligence exist but only collective superintelligence preserves human agency]]
 - [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]]
diff --git a/agents/leo/positions/collective intelligence disrupts the knowledge industry not frontier AI labs and value will accrue to the synthesis and validation layer.md b/agents/leo/positions/collective intelligence disrupts the knowledge industry not frontier AI labs and value will accrue to the synthesis and validation layer.md
index 7574be4..0a285f2 100644
--- a/agents/leo/positions/collective intelligence disrupts the knowledge industry not frontier AI labs and value will accrue to the synthesis and validation layer.md
+++ b/agents/leo/positions/collective intelligence disrupts the knowledge industry not frontier AI labs and value will accrue to the synthesis and validation layer.md
@@ -8,7 +8,7 @@ outcome: pending
 confidence: moderate
 time_horizon: "12-24 months -- evaluable through beachhead domain agent performance by Q1 2028"
 depends_on:
-  - "[[centaur teams outperform both pure humans and pure AI because complementary strengths compound]]"
+  - "[[centaur team performance depends on role complementarity not mere human-AI combination]]"
   - "[[three paths to superintelligence exist but only collective superintelligence preserves human agency]]"
   - "[[narratives are infrastructure not just communication because they coordinate action at civilizational scale]]"
   - "[[grand strategy aligns unlimited aspirations with limited capabilities through proximate objectives]]"
@@ -28,7 +28,7 @@ The critical framing: frontier AI labs are simultaneously an incumbent in the kn
 ## Reasoning Chain
 Beliefs this depends on:
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- collective synthesis inherently outperforms pure AI because it combines human domain expertise with AI processing
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- collective synthesis inherently outperforms pure AI because it combines human domain expertise with AI processing
 - [[three paths to superintelligence exist but only collective superintelligence preserves human agency]] -- the architectural choice matters: collective intelligence preserves attribution and agency in ways monolithic AI cannot
 - [[grand strategy aligns unlimited aspirations with limited capabilities through proximate objectives]] -- the knowledge industry beachhead is the proximate objective toward collective superintelligence
diff --git a/agents/logos/beliefs.md b/agents/logos/beliefs.md
index 402945f..acecac5 100644
--- a/agents/logos/beliefs.md
+++ b/agents/logos/beliefs.md
@@ -41,7 +41,7 @@ Three paths to superintelligence: speed (making existing architectures faster),
 **Grounding:**
 - [[three paths to superintelligence exist but only collective superintelligence preserves human agency]] -- the three-path framework
 - [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- the power distribution argument
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the empirical evidence for human-AI complementarity
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the empirical evidence for human-AI complementarity
 **Challenges considered:** Collective systems are slower than monolithic ones — in a race, the monolithic approach wins the capability contest. Coordination overhead reduces the effective intelligence of distributed systems. The "collective" approach may be structurally inferior for certain tasks (rapid response, unified action, consistency). Counter: the speed disadvantage is real for some tasks but irrelevant for alignment — you don't need the fastest system, you need the safest one. And collective systems have superior properties for the alignment-relevant qualities: diversity, error correction, representation of multiple value systems.
diff --git a/agents/theseus/beliefs.md b/agents/theseus/beliefs.md
index 91824a5..b569dc4 100644
--- a/agents/theseus/beliefs.md
+++ b/agents/theseus/beliefs.md
@@ -41,7 +41,7 @@ Three paths to superintelligence: speed (making existing architectures faster),
 **Grounding:**
 - [[three paths to superintelligence exist but only collective superintelligence preserves human agency]] -- the three-path framework
 - [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- the power distribution argument
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the empirical evidence for human-AI complementarity
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the empirical evidence for human-AI complementarity
 **Challenges considered:** Collective systems are slower than monolithic ones — in a race, the monolithic approach wins the capability contest. Coordination overhead reduces the effective intelligence of distributed systems. The "collective" approach may be structurally inferior for certain tasks (rapid response, unified action, consistency). Counter: the speed disadvantage is real for some tasks but irrelevant for alignment — you don't need the fastest system, you need the safest one. And collective systems have superior properties for the alignment-relevant qualities: diversity, error correction, representation of multiple value systems.
diff --git a/agents/vida/beliefs.md b/agents/vida/beliefs.md
index 836e21b..118fecc 100644
--- a/agents/vida/beliefs.md
+++ b/agents/vida/beliefs.md
@@ -54,7 +54,7 @@ Early detection and prevention costs a fraction of acute care. A $500 remote mon
 AI achieves specialist-level accuracy in narrow diagnostic tasks (radiology, pathology, dermatology). But clinical medicine is not a collection of narrow diagnostic tasks — it is complex decision-making under uncertainty with incomplete information, patient preferences, and ethical dimensions that current AI cannot handle. The model is centaur, not replacement: AI handles pattern recognition at superhuman scale while physicians handle judgment, communication, and care.
 **Grounding:**
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the general principle
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the general principle
 - [[healthcares defensible layer is where atoms become bits because physical-to-digital conversion generates the data that powers AI care while building patient trust that software alone cannot create]] -- trust as a clinical necessity
 - [[the personbyte is a fundamental quantization limit on knowledge accumulation forcing all complex production into networked teams]] -- clinical medicine exceeds individual cognitive capacity
diff --git a/core/grand-strategy/centaur teams succeed only when role boundaries prevent humans from overriding AI in domains where AI is the stronger partner.md b/core/grand-strategy/centaur teams succeed only when role boundaries prevent humans from overriding AI in domains where AI is the stronger partner.md
index f1849d5..5ef8dca 100644
--- a/core/grand-strategy/centaur teams succeed only when role boundaries prevent humans from overriding AI in domains where AI is the stronger partner.md
+++ b/core/grand-strategy/centaur teams succeed only when role boundaries prevent humans from overriding AI in domains where AI is the stronger partner.md
@@ -7,7 +7,7 @@ confidence: experimental
 source: "Synthesis by Leo from: centaur team claim (Kasparov); HITL degradation claim (Wachter/Patil, Stanford-Harvard study); AI scribe adoption (Bessemer 2026); alignment scalable oversight claims"
 created: 2026-03-07
 depends_on:
-  - "centaur teams outperform both pure humans and pure AI because complementary strengths compound"
+  - "centaur team performance depends on role complementarity not mere human-AI combination"
   - "human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs"
   - "AI scribes reached 92 percent provider adoption in under 3 years because documentation is the rare healthcare workflow where AI value is immediate unambiguous and low-risk"
   - "scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps"
@@ -15,7 +15,7 @@ depends_on:
 # centaur teams succeed only when role boundaries prevent humans from overriding AI in domains where AI is the stronger partner
-The knowledge base contains a tension: centaur teams outperform both pure humans and pure AI in chess, but physicians with AI access score *worse* than AI alone in clinical diagnosis (68% vs 90%). This isn't a contradiction — it's a boundary condition that reveals when human-AI collaboration helps and when it hurts.
+The knowledge base contains a tension: centaur team performance depends on role complementarity in chess, but physicians with AI access score *worse* than AI alone in clinical diagnosis (68% vs 90%). This isn't a contradiction — it's a boundary condition that reveals when human-AI collaboration helps and when it hurts.
 **The evidence across domains:**
@@ -42,7 +42,7 @@ This is the centaur model done right: not human-verifies-AI, but human-and-AI-on
 ---
 Relevant Notes:
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] — the chess evidence establishing the centaur model
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] — the chess evidence establishing the centaur model
 - [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]] — the clinical counter-evidence constraining when the model applies
 - [[AI scribes reached 92 percent provider adoption in under 3 years because documentation is the rare healthcare workflow where AI value is immediate unambiguous and low-risk]] — the success case with clear role boundaries
 - [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] — alignment oversight facing the same boundary problem
diff --git a/core/living-agents/validation-synthesis-pushback is a conversational design pattern where affirming then deepening then challenging creates the experience of being understood.md b/core/living-agents/validation-synthesis-pushback is a conversational design pattern where affirming then deepening then challenging creates the experience of being understood.md
index ed90ded..a13066b 100644
--- a/core/living-agents/validation-synthesis-pushback is a conversational design pattern where affirming then deepening then challenging creates the experience of being understood.md
+++ b/core/living-agents/validation-synthesis-pushback is a conversational design pattern where affirming then deepening then challenging creates the experience of being understood.md
@@ -29,7 +29,7 @@ The deeper memetic point: synthesis shapes ideas while appearing to reflect them
 Relevant Notes:
 - [[meme propagation selects for simplicity novelty and conformity pressure rather than truth or utility]] -- synthesis that clarifies is itself memetic selection: the simplified version propagates while the original formulation fades
 - [[complex ideas propagate with higher fidelity through personal interaction than mass media because nuance requires bidirectional communication]] -- the three-beat pattern explains WHY personal interaction preserves fidelity: real-time synthesis enables correction and refinement
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the conversational pattern IS a centaur interaction: human provides raw insight, AI provides synthesis and challenge
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the conversational pattern IS a centaur interaction: human provides raw insight, AI provides synthesis and challenge
 - [[metaphor reframing is more powerful than argument because it changes which conclusions feel natural without requiring persuasion]] -- synthesis that reframes is a form of metaphor introduction: changing the vocabulary changes which conclusions feel natural
 - [[Boardy AI]] -- the AI system where this pattern was observed and analyzed
diff --git a/core/teleohumanity/three paths to superintelligence exist but only collective superintelligence preserves human agency.md b/core/teleohumanity/three paths to superintelligence exist but only collective superintelligence preserves human agency.md
index f6934e5..83ef2b4 100644
--- a/core/teleohumanity/three paths to superintelligence exist but only collective superintelligence preserves human agency.md
+++ b/core/teleohumanity/three paths to superintelligence exist but only collective superintelligence preserves human agency.md
@@ -23,7 +23,7 @@ Relevant Notes:
 - [[the first mover to superintelligence likely gains decisive strategic advantage because the gap between leader and followers accelerates during takeoff]] -- the collective path is the only one that prevents singleton formation through first-mover dynamics
 - [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- provides the design specification for the collective path
 - [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] -- explains why the collective path has a structural safety advantage
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- empirical evidence for the viability of the collective path
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- empirical evidence for the viability of the collective path
 - [[bostrom takes single-digit year timelines to superintelligence seriously while acknowledging decades-long alternatives remain possible]] -- compressed timelines add urgency: the collective path must be pursued now, not eventually
 - [[developing superintelligence is surgery for a fatal condition not russian roulette because the baseline of inaction is itself catastrophic]] -- Bostrom's evolved position adds urgency to all three paths, strengthening the case for the collective one
diff --git a/domains/ai-alignment/_map.md b/domains/ai-alignment/_map.md
index 5fea4fb..70c5ab7 100644
--- a/domains/ai-alignment/_map.md
+++ b/domains/ai-alignment/_map.md
@@ -61,4 +61,4 @@ The shared theory underlying Theseus's domain analysis lives in the foundations
 - [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — structural race dynamics
 - [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] — the institutional gap
 - [[collective superintelligence is the alternative to monolithic AI controlled by a few]] — the distributed alternative
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] — human-AI complementarity evidence
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] — human-AI complementarity evidence
diff --git a/domains/ai-alignment/current language models escalate to nuclear war in simulated conflicts because behavioral alignment cannot instill aversion to catastrophic irreversible actions.md b/domains/ai-alignment/current language models escalate to nuclear war in simulated conflicts because behavioral alignment cannot instill aversion to catastrophic irreversible actions.md
index 75c84c1..5d79c23 100644
--- a/domains/ai-alignment/current language models escalate to nuclear war in simulated conflicts because behavioral alignment cannot instill aversion to catastrophic irreversible actions.md
+++ b/domains/ai-alignment/current language models escalate to nuclear war in simulated conflicts because behavioral alignment cannot instill aversion to catastrophic irreversible actions.md
@@ -28,7 +28,7 @@ Relevant Notes:
 - [[AI alignment is a coordination problem not a technical problem]] -- models being deployed in military contexts despite lacking judgment on catastrophic escalation is a coordination failure
 - [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] -- war game results suggest oversight in high-stakes military contexts would be even harder than debate experiments indicate
 - [[three paths to superintelligence exist but only collective superintelligence preserves human agency]] -- monolithic models making unilateral escalation decisions is the structural risk collective architectures avoid
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the war games show precisely why human-in-the-loop matters: humans bring judgment about catastrophic irreversibility that models lack
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the war games show precisely why human-in-the-loop matters: humans bring judgment about catastrophic irreversibility that models lack
 Topics:
 - [[_map]]
diff --git a/domains/health/AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review.md b/domains/health/AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review.md
index 8fce153..4c31017 100644
--- a/domains/health/AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review.md
+++ b/domains/health/AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review.md
@@ -15,13 +15,13 @@ The emerging architecture runs through AI: (1) wearable captures continuous data
 What IS clinically integrated today: Apple Watch ECG/AFib detection (qualified as FDA Medical Device Development Tool), CGMs for diabetes, and expanding Medicare RPM codes (new CPT 99445 and 99470 in 2026 allowing billing for as few as 2-15 days of data). What is NOT integrated despite data availability: HRV trends, sleep staging, activity data, continuous SpO2 trends, strain/recovery scores, CGM data for non-diabetics.
-FHIR R6 (expected 2026) is the interoperability standard enabling wearable-to-EHR data exchange. But interoperability alone is insufficient -- without AI processing, more data access just creates more alert fatigue. Since [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]], the monitoring centaur is AI handling data volume while clinicians provide judgment and context.
+FHIR R6 (expected 2026) is the interoperability standard enabling wearable-to-EHR data exchange. But interoperability alone is insufficient -- without AI processing, more data access just creates more alert fatigue. Since [[centaur team performance depends on role complementarity not mere human-AI combination]], the monitoring centaur is AI handling data volume while clinicians provide judgment and context.
 ---
 Relevant Notes:
 - [[continuous health monitoring is converging on a multi-layer sensor stack of ambient wearables periodic patches and environmental sensors processed through AI middleware]] -- the full sensor architecture this middleware enables
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the monitoring centaur: AI handles volume, humans provide judgment
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the monitoring centaur: AI handles volume, humans provide judgment
 Topics:
 - livingip overview
diff --git a/domains/health/OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years.md b/domains/health/OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years.md
index 5ee5e31..1835136 100644
--- a/domains/health/OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years.md
+++ b/domains/health/OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years.md
@@ -20,7 +20,7 @@ The incumbent response is UpToDate ExpertAI (Wolters Kluwer, Q4 2025), leveragin
 ---
 Relevant Notes:
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- OpenEvidence is the clinical centaur: AI provides evidence synthesis, physician provides judgment
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- OpenEvidence is the clinical centaur: AI provides evidence synthesis, physician provides judgment
 - [[knowledge scaling bottlenecks kill revolutionary ideas before they reach critical mass]] -- OpenEvidence solved clinical knowledge scaling by making evidence retrieval instant
 Topics:
diff --git a/domains/health/human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs.md b/domains/health/human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs.md
index f72c3e5..e1a85af 100644
--- a/domains/health/human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs.md
+++ b/domains/health/human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs.md
@@ -22,7 +22,7 @@ Wachter frames the challenge directly: "Humans suck at remaining vigilant over t
 ---
 Relevant Notes:
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the chess centaur model does NOT generalize to clinical medicine where physician overrides degrade AI performance
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the chess centaur model does NOT generalize to clinical medicine where physician overrides degrade AI performance
 - [[medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials]] -- the multi-hospital RCT found similar diagnostic accuracy with/without AI; the Stanford/Harvard study found AI alone dramatically superior
 - [[the physician role shifts from information processor to relationship manager as AI automates documentation triage and evidence synthesis]] -- if physicians degrade AI diagnostic performance, the role shift toward relationship management is not just efficient but necessary
 - [[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]] -- documentation AI where physicians don't override outputs avoids the de-skilling problem
diff --git a/domains/health/medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials.md b/domains/health/medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials.md
index bc36c4b..da1625c 100644
--- a/domains/health/medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials.md
+++ b/domains/health/medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials.md
@@ -21,7 +21,7 @@ The implication for AI deployment strategy: the highest-value clinical AI applic
 Relevant Notes:
 - [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]] -- Stanford/Harvard study shows physician overrides degrade AI performance from 90% to 68%
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the chess centaur model does NOT generalize cleanly to clinical medicine; interaction design matters
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the chess centaur model does NOT generalize cleanly to clinical medicine; interaction design matters
 - [[OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years]] -- OpenEvidence succeeds as evidence retrieval, not diagnostic replacement
 Topics:
diff --git a/domains/health/the physician role shifts from information processor to relationship manager as AI automates documentation triage and evidence synthesis.md b/domains/health/the physician role shifts from information processor to relationship manager as AI automates documentation triage and evidence synthesis.md
index a991968..a99dd12 100644
--- a/domains/health/the physician role shifts from information processor to relationship manager as AI automates documentation triage and evidence synthesis.md
+++ b/domains/health/the physician role shifts from information processor to relationship manager as AI automates documentation triage and evidence synthesis.md
@@ -23,7 +23,7 @@ Relevant Notes:
 - [[ambient AI documentation reduces physician documentation burden by 73 percent but the relationship between automation and burnout is more complex than time savings alone]] -- the documentation automation mechanism
 - [[medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials]] -- why AI augments workflow not diagnosis
 - [[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]] -- the de-skilling risk that shapes how the physician-AI relationship must be designed
-- [[centaur teams outperform both pure humans and pure AI because complementary strengths compound]] -- the clinical centaur: AI handles information processing, humans handle relationships and judgment
+- [[centaur team performance depends on role complementarity not mere human-AI combination]] -- the clinical centaur: AI handles information processing, humans handle relationships and judgment
 - [[healthcare AI regulation needs blank-sheet redesign because the FDA drug-and-device model built for static products cannot govern continuously learning software]] -- the AI payment gap may force VBC transition, which would accelerate the physician role shift
 Topics:
diff --git a/foundations/collective-intelligence/centaur teams outperform both pure humans and pure AI because complementary strengths compound.md b/foundations/collective-intelligence/centaur team performance depends on role complementarity not mere human-AI combination.md
similarity index 100%
rename from foundations/collective-intelligence/centaur teams outperform both pure humans and pure AI because complementary strengths compound.md
rename to foundations/collective-intelligence/centaur team performance depends on role complementarity not mere human-AI combination.md