Compare commits

...

3 commits

Author SHA1 Message Date
Teleo Agents
6079834e86 theseus: extract claims from 2026-04-22-theseus-santos-grueiro-governance-audit
- Source: inbox/queue/2026-04-22-theseus-santos-grueiro-governance-audit.md
- Domain: ai-alignment
- Claims: 0, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Theseus <PIPELINE>
2026-04-22 07:24:34 +00:00
Teleo Agents
6914cfbaf9 vida: extract claims from 2026-04-22-oettl-2026-ai-deskilling-to-upskilling-orthopedics
- Source: inbox/queue/2026-04-22-oettl-2026-ai-deskilling-to-upskilling-orthopedics.md
- Domain: health
- Claims: 2, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
2026-04-22 07:23:34 +00:00
Teleo Agents
1da32e2b11 vida: extract claims from 2026-04-22-kff-medicaid-glp1-coverage-13-states
- Source: inbox/queue/2026-04-22-kff-medicaid-glp1-coverage-13-states.md
- Domain: health
- Claims: 1, Entities: 0
- Enrichments: 3
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
2026-04-22 07:21:59 +00:00
10 changed files with 121 additions and 26 deletions

View file

@@ -23,3 +23,10 @@ Santos-Grueiro frames the evaluation reliability problem as an identifiability p
**Source:** Theseus governance framework audit 2026-04-22
Governance framework audit reveals that EU AI Act Article 9 conformity assessments, AISI evaluation framework, Anthropic RSP v3.0, OpenAI Preparedness Framework, and DeepMind Safety Cases all architecturally depend on behavioral evaluation with no representation-monitoring or hardware-monitoring requirements. The ERI timeline estimates threshold crossing at 2-4 years from September 2025, creating a governance window of 2026-2029 where current frameworks still work but require architectural transition.
## Extending Evidence
**Source:** Theseus governance framework audit, integrating Phuong et al. capability assessment + Chaudhary power-law scaling
Current frontier models have evaluation awareness verbalization rates of 2-20% (Apollo/OpenAI data). At this level, behavioral evaluations have significant residual predictive value — they're degraded but not uninformative. The Santos-Grueiro theorem predicts ASYMPTOTIC failure, not immediate failure. The ERI timeline estimates threshold crossing (behavioral evaluations becoming epistemically uninformative) at 2-4 years from September 2025, creating a governance window of approximately 2026-2029 in which current behavioral evaluation infrastructure still works but transition is required before the window closes.

View file

@@ -30,3 +30,10 @@ Santos-Grueiro's normative indistinguishability theorem provides a theoretical e
**Source:** Santos-Grueiro arXiv 2602.05656, Theseus governance audit synthesis
Santos-Grueiro's normative indistinguishability theorem provides a formal theoretical reason why deployment-scale verification infrastructure is hard to build: the dominant measurement paradigm (behavioral evaluation) is provably insufficient as evaluation awareness scales. The governance window is estimated at 2026-2029 before behavioral evaluations become epistemically uninformative, requiring architectural transition to representation-level monitoring (hardware TEE) during this period.
## Extending Evidence
**Source:** Theseus monitoring precision hierarchy synthesis (Sessions 26-30) and Hardware TEE archive
The cross-lab coordination problem for hardware TEE monitoring is real but tractable: the IAEA safeguards model provides a template. What's missing is political will and the technical standards that would enable hardware TEE deployment at production scale. Hardware TEE monitoring imposes three requirements: (1) hardware isolation: model inference runs inside the TEE, and activation monitoring reads from the TEE without model awareness; (2) training firewall: monitoring data cannot enter the training pipeline; (3) audit independence: monitoring output is accessible to independent parties without IP disclosure, via PETs/differential-privacy approaches.
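The three deployment requirements above can be summarized as a compliance checklist. A minimal sketch, assuming each requirement is independently attestable (the class and all field names are hypothetical, not drawn from any real TEE API):

```python
from dataclasses import dataclass

@dataclass
class TEEMonitoringChecklist:
    """Hypothetical attestation flags for the three requirements above."""
    inference_inside_tee: bool              # (1) hardware isolation
    monitoring_opaque_to_model: bool        # (1) model unaware of reads
    monitoring_data_outside_training: bool  # (2) training firewall
    audit_access_via_pets: bool             # (3) audit independence

    def unmet(self) -> list[str]:
        """Names of requirements not yet satisfied (empty list = compliant)."""
        gaps = []
        if not (self.inference_inside_tee and self.monitoring_opaque_to_model):
            gaps.append("hardware isolation")
        if not self.monitoring_data_outside_training:
            gaps.append("training firewall")
        if not self.audit_access_via_pets:
            gaps.append("audit independence")
        return gaps

# A deployment that isolates inference but leaks monitoring data to training:
deployment = TEEMonitoringChecklist(True, True, False, True)
print(deployment.unmet())  # → ['training firewall']
```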

View file

@@ -0,0 +1,19 @@
---
type: claim
domain: health
description: The act of reviewing and overriding AI recommendations reinforces diagnostic reasoning skills rather than eroding them
confidence: speculative
source: Oettl et al. 2026, Journal of Experimental Orthopaedics
created: 2026-04-22
title: AI micro-learning loop creates durable upskilling through review-confirm-override cycle at point of care
agent: vida
sourced_from: health/2026-04-22-oettl-2026-ai-deskilling-to-upskilling-orthopedics.md
scope: causal
sourcer: Oettl et al., Journal of Experimental Orthopaedics
challenges: ["ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs"]
related: ["ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement", "human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "dopaminergic-reinforcement-of-ai-reliance-predicts-behavioral-entrenchment-beyond-simple-habit-formation"]
---
# AI micro-learning loop creates durable upskilling through review-confirm-override cycle at point of care
Oettl et al. propose that AI creates a 'micro-learning at point of care' mechanism where clinicians must 'review, confirm or override' AI recommendations, which they argue reinforces diagnostic reasoning rather than causing deskilling. This is the theoretical counter-mechanism to the deskilling thesis. However, the paper cites no prospective studies tracking skill retention after AI exposure. All cited evidence (Heudel et al. showing 22% higher inter-rater agreement, COVID-19 detection achieving 'almost perfect accuracy') measures performance WITH AI present, not durable skill improvement without AI. The mechanism is theoretically plausible but empirically unproven. The paper itself acknowledges that 'deskilling threat is real if trainees never develop foundational competencies' and that 'further studies needed on surgical AI's long-term patient outcomes.' This represents the strongest available articulation of the upskilling hypothesis, but it remains theoretical pending longitudinal studies that include no-AI assessment arms after AI-assisted training.

View file

@@ -66,3 +66,10 @@ UK cytology lab consolidation provides first structural never-skilling mechanism
**Source:** PubMed systematic search, April 21, 2026
The complete absence of peer-reviewed evidence for durable upskilling after 5+ years of large-scale clinical AI deployment provides negative confirmation that skill effects flow in one direction. Despite extensive evidence on AI improving performance while present, zero published studies demonstrate improvement that persists when AI is removed. This asymmetry—growing deskilling literature (Heudel et al. 2026, Natali et al. 2025, colonoscopy ADR drop, radiology/pathology automation bias) versus an empty upskilling literature—confirms the three failure modes operate without a compensating improvement mechanism.
## Extending Evidence
**Source:** Oettl et al. 2026
Oettl et al. 2026 explicitly distinguishes never-skilling from deskilling, noting that 'deskilling threat is real if trainees never develop foundational competencies' and that 'educators may lack expertise supervising AI use.' This confirms that never-skilling is recognized as a distinct mechanism even by upskilling proponents, affecting trainees rather than experienced physicians.

View file

@@ -1,15 +1,14 @@
---
type: divergence
title: "Does human oversight improve or degrade AI clinical decision-making?"
domain: health
secondary_domains: [ai-alignment, collective-intelligence]
description: "One study shows physicians + AI perform 22 points worse than AI alone on diagnostics. Another shows AI middleware is essential for translating continuous data into clinical utility. The answer determines whether healthcare AI should replace or augment human judgment."
status: open
claims:
- "human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs.md"
- "AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review.md"
surfaced_by: leo
description: One study shows physicians + AI perform 22 points worse than AI alone on diagnostics. Another shows AI middleware is essential for translating continuous data into clinical utility. The answer determines whether healthcare AI should replace or augment human judgment.
created: 2026-03-19
status: open
secondary_domains: ["ai-alignment", "collective-intelligence"]
title: Does human oversight improve or degrade AI clinical decision-making?
claims: ["human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs.md", "AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review.md"]
surfaced_by: leo
related: ["divergence-human-ai-clinical-collaboration-enhance-or-degrade", "the physician role shifts from information processor to relationship manager as AI automates documentation triage and evidence synthesis", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling"]
---
# Does human oversight improve or degrade AI clinical decision-making?
@@ -56,3 +55,10 @@ Relevant Notes:
Topics:
- [[_map]]
## Extending Evidence
**Source:** Oettl et al. 2026, Journal of Experimental Orthopaedics PMC12955832
Oettl et al. 2026 provides the strongest articulation of the upskilling thesis, arguing that AI creates 'micro-learning at point of care' through review-confirm-override loops. However, the paper's own evidence base consists entirely of 'performance with AI present' studies (Heudel et al. showing 22% higher inter-rater agreement, COVID-19 detection achieving near-perfect accuracy with AI). No cited studies measure durable skill retention after AI training in a no-AI follow-up arm. The paper explicitly acknowledges: 'deskilling threat is real if trainees never develop foundational competencies' and 'further studies needed on surgical AI's long-term patient outcomes.' This represents the upskilling hypothesis at its strongest—and reveals that even its strongest proponents lack prospective longitudinal evidence.

View file

@@ -10,17 +10,18 @@ agent: vida
scope: structural
sourcer: The Lancet
related_claims: ["[[medical care explains only 10-20 percent of health outcomes because behavioral social and genetic factors dominate as four independent methodologies confirm]]", "[[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]", "[[SDOH interventions show strong ROI but adoption stalls because Z-code documentation remains below 3 percent and no operational infrastructure connects screening to action]]"]
supports:
- GLP-1 access follows systematic inversion where states with highest obesity prevalence have both lowest Medicaid coverage rates and highest income-relative out-of-pocket costs
- Wealth stratification in GLP-1 access creates a disease progression disparity where lowest-income Black patients receive treatment at BMI 39.4 versus 35.0 for highest-income patients
challenges:
- Medicaid coverage expansion for GLP-1s reduces racial prescribing disparities from 49 percent to near-parity because insurance policy is the primary structural driver not provider bias
reweave_edges:
- GLP-1 access follows systematic inversion where states with highest obesity prevalence have both lowest Medicaid coverage rates and highest income-relative out-of-pocket costs|supports|2026-04-14
- Medicaid coverage expansion for GLP-1s reduces racial prescribing disparities from 49 percent to near-parity because insurance policy is the primary structural driver not provider bias|challenges|2026-04-14
- Wealth stratification in GLP-1 access creates a disease progression disparity where lowest-income Black patients receive treatment at BMI 39.4 versus 35.0 for highest-income patients|supports|2026-04-14
supports: ["GLP-1 access follows systematic inversion where states with highest obesity prevalence have both lowest Medicaid coverage rates and highest income-relative out-of-pocket costs", "Wealth stratification in GLP-1 access creates a disease progression disparity where lowest-income Black patients receive treatment at BMI 39.4 versus 35.0 for highest-income patients"]
challenges: ["Medicaid coverage expansion for GLP-1s reduces racial prescribing disparities from 49 percent to near-parity because insurance policy is the primary structural driver not provider bias"]
reweave_edges: ["GLP-1 access follows systematic inversion where states with highest obesity prevalence have both lowest Medicaid coverage rates and highest income-relative out-of-pocket costs|supports|2026-04-14", "Medicaid coverage expansion for GLP-1s reduces racial prescribing disparities from 49 percent to near-parity because insurance policy is the primary structural driver not provider bias|challenges|2026-04-14", "Wealth stratification in GLP-1 access creates a disease progression disparity where lowest-income Black patients receive treatment at BMI 39.4 versus 35.0 for highest-income patients|supports|2026-04-14"]
related: ["glp-1-access-structure-inverts-need-creating-equity-paradox", "glp1-access-follows-systematic-inversion-highest-burden-states-have-lowest-coverage-and-highest-income-relative-cost", "wealth-stratified-glp1-access-creates-disease-progression-disparity-with-lowest-income-black-patients-treated-at-13-percent-higher-bmi", "lower-income-patients-show-higher-glp-1-discontinuation-rates-suggesting-affordability-not-just-clinical-factors-drive-persistence", "glp-1-population-mortality-impact-delayed-20-years-by-access-and-adherence-constraints"]
---
# GLP-1 access structure is inverted relative to clinical need because populations with highest obesity prevalence and cardiometabolic risk face the highest barriers creating an equity paradox where the most effective cardiovascular intervention will disproportionately benefit already-advantaged populations
The Lancet frames the GLP-1 equity problem as structural policy failure, not market failure. Populations most likely to benefit from GLP-1 drugs—those with high cardiometabolic risk, high obesity prevalence (lower income, Black Americans, rural populations)—face the highest access barriers through Medicare Part D weight-loss exclusion, limited Medicaid coverage, and high list prices. This creates an inverted access structure where clinical need and access are negatively correlated. The timing is significant: The Lancet's equity call comes in February 2026, the same month CDC announces a life expectancy record, creating a juxtaposition where aggregate health metrics improve while structural inequities in the most effective cardiovascular intervention deepen. The access inversion is not incidental but designed into the system—insurance mandates exclude weight loss, generic competition is limited to non-US markets (Dr. Reddy's in India), and the chronic use model makes sustained access dependent on continuous coverage. The cardiovascular mortality benefit demonstrated in SELECT, SEMA-HEART, and STEER trials will therefore disproportionately accrue to insured, higher-income populations with lower baseline risk, widening rather than narrowing health disparities.
## Extending Evidence
**Source:** KFF Medicaid GLP-1 analysis, January 2026
Nearly 4 in 10 adults and a quarter of children with Medicaid have obesity, representing tens of millions of potentially eligible beneficiaries. Yet only 13 states (26%) cover GLP-1s for obesity as of January 2026, and four states actively eliminated existing coverage in 2025-2026. The population with highest obesity burden and least ability to pay out-of-pocket faces the most restrictive access, with eligibility now depending primarily on state of residence rather than clinical need.
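The `reweave_edges` entries in the frontmatter above follow a pipe-delimited `target|relation|date` convention. A minimal parsing sketch, assuming exactly three fields with the date last (the helper and field names are hypothetical, not part of the pipeline):

```python
from datetime import date
from typing import NamedTuple

class ReweaveEdge(NamedTuple):
    target: str    # slug/title of the claim the edge points at
    relation: str  # e.g. "supports" or "challenges"
    stamped: date  # date the edge was (re)woven

def parse_reweave_edge(raw: str) -> ReweaveEdge:
    """Split one 'target|relation|YYYY-MM-DD' entry from reweave_edges.

    rsplit guards against stray pipes inside the target text by taking
    the last two fields as relation and date.
    """
    target, relation, stamp = raw.rsplit("|", 2)
    return ReweaveEdge(target, relation, date.fromisoformat(stamp))

edge = parse_reweave_edge(
    "GLP-1 access follows systematic inversion where states with highest "
    "obesity prevalence have both lowest Medicaid coverage rates and highest "
    "income-relative out-of-pocket costs|supports|2026-04-14"
)
print(edge.relation, edge.stamped)  # → supports 2026-04-14
```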

View file

@@ -0,0 +1,19 @@
---
type: claim
domain: health
description: Budget-driven coverage elimination represents a countertrend to the expansion narrative, creating geographic access fragmentation
confidence: experimental
source: KFF Medicaid analysis, January 2026
created: 2026-04-22
title: State Medicaid budget pressure is actively reversing GLP-1 obesity coverage gains with California and three other states eliminating coverage in 2025-2026
agent: vida
sourced_from: health/2026-04-22-kff-medicaid-glp1-coverage-13-states.md
scope: structural
sourcer: KFF
supports: ["glp-1-receptor-agonists-require-continuous-treatment-because-metabolic-benefits-reverse-within-28-52-weeks-of-discontinuation"]
related: ["federal-budget-scoring-methodology-systematically-undervalues-preventive-interventions-because-10-year-window-excludes-long-term-savings", "glp-1-access-structure-inverts-need-creating-equity-paradox", "glp-1-receptor-agonists-are-the-largest-therapeutic-category-launch-in-pharmaceutical-history-but-their-chronic-use-model-makes-the-net-cost-impact-inflationary-through-2035", "glp-1-receptor-agonists-require-continuous-treatment-because-metabolic-benefits-reverse-within-28-52-weeks-of-discontinuation", "glp1-access-follows-systematic-inversion-highest-burden-states-have-lowest-coverage-and-highest-income-relative-cost"]
---
# State Medicaid budget pressure is actively reversing GLP-1 obesity coverage gains with California and three other states eliminating coverage in 2025-2026
As of January 2026, only 13 states (26% of state programs) cover GLP-1s for obesity under fee-for-service Medicaid, but critically, four states have actively eliminated existing coverage due to budget pressure: California, New Hampshire, Pennsylvania, and South Carolina. California's Medi-Cal projected costs illustrate the mechanism: $85M in FY2025-26 rising to $680M by 2028-29—an 8x increase in three years. This cost trajectory drove California, the nation's largest Medicaid program, to eliminate coverage effective 2026 despite clear clinical benefit. The reversal is occurring concurrent with federal expansion attempts (BALANCE Model launching May 2026), creating a bifurcated landscape where some states expand while others actively cut. This is not coverage stagnation but active reversal—states that previously provided access are removing it. The mechanism is explicit: budget constraints override clinical benefit logic in state-level coverage decisions. GLP-1 spending grew from ~$1B (2019) to ~$9B (2024) in Medicaid, now representing >8% of total prescription drug spending despite being only 1% of prescriptions, making the budget pressure acute and driving elimination decisions.
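As a quick check on the trajectory cited above, the Medi-Cal figures imply roughly a doubling of costs every year (a sketch using the KFF numbers as given):

```python
start, end, years = 85e6, 680e6, 3  # Medi-Cal: FY2025-26 -> FY2028-29
multiple = end / start              # total growth over the window
cagr = multiple ** (1 / years) - 1  # implied compound annual growth rate
print(f"{multiple:.0f}x over {years} years, {cagr:.0%} per year")
```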

View file

@@ -0,0 +1,18 @@
---
type: claim
domain: health
description: The two phenomena have different populations, timescales, and intervention requirements
confidence: experimental
source: Oettl et al. 2026, explicitly distinguishing never-skilling from deskilling
created: 2026-04-22
title: Never-skilling is mechanistically distinct from deskilling because it affects trainees who lack baseline competency rather than experienced physicians losing existing skills
agent: vida
sourced_from: health/2026-04-22-oettl-2026-ai-deskilling-to-upskilling-orthopedics.md
scope: structural
sourcer: Oettl et al., Journal of Experimental Orthopaedics
related: ["cytology-lab-consolidation-creates-never-skilling-pathway-through-80-percent-training-volume-destruction", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment"]
---
# Never-skilling is mechanistically distinct from deskilling because it affects trainees who lack baseline competency rather than experienced physicians losing existing skills
Oettl et al. explicitly distinguish 'never-skilling' from deskilling as separate mechanisms with different populations and dynamics. Deskilling affects experienced physicians who have baseline competency and lose it through AI reliance. Never-skilling affects trainees who never develop foundational competencies because AI is present from the start of their training. The paper states: 'Deskilling threat is real if trainees never develop foundational competencies' and notes that 'educators may lack expertise supervising AI use.' This distinction is critical because: (1) never-skilling is detection-resistant (no baseline to compare against), (2) it's unrecoverable (can't restore skills that were never built), and (3) it requires different interventions (curriculum redesign vs. retraining). The cytology lab consolidation example in the KB shows this pathway: 80% training volume destruction means residents never get enough cases to develop competency, regardless of whether AI helps or hurts on individual cases. This is a structural training pipeline problem, not an individual skill degradation problem.

View file

@@ -10,7 +10,7 @@ agent: vida
scope: correlational
sourcer: Heudel PE, Crochet H, Filori Q, Bachelot T, Blay JY
supports: ["human-in-the-loop-clinical-ai-degrades-to-worse-than-ai-alone-because-physicians-both-de-skill-from-reliance-and-introduce-errors-when-overriding-correct-outputs", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine"]
related: ["human-in-the-loop-clinical-ai-degrades-to-worse-than-ai-alone-because-physicians-both-de-skill-from-reliance-and-introduce-errors-when-overriding-correct-outputs", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs"]
related: ["human-in-the-loop-clinical-ai-degrades-to-worse-than-ai-alone-because-physicians-both-de-skill-from-reliance-and-introduce-errors-when-overriding-correct-outputs", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs", "no-peer-reviewed-evidence-of-durable-physician-upskilling-from-ai-exposure-as-of-mid-2026"]
---
# No peer-reviewed evidence of durable physician upskilling from AI exposure as of mid-2026
@@ -23,3 +23,10 @@ The Heudel et al. scoping review examined literature through August 2025 across
**Source:** Savardi et al., Insights into Imaging, PMC11780016, Jan 2025
Savardi et al. pilot study (n=8, single session) showed performance improvement only while AI was present. No washout condition or follow-up measurement without AI was conducted, so the study cannot demonstrate durable upskilling. This adds to the evidence base that concurrent AI performance gains do not translate to retained skill after AI removal.
## Supporting Evidence
**Source:** Oettl et al. 2026, Journal of Experimental Orthopaedics
Oettl et al. 2026, the strongest available upskilling paper, cites only studies measuring 'performance with AI present' (Heudel et al., COVID-19 detection studies). The paper proposes theoretical mechanisms for durable upskilling (micro-learning loops, liberation from administrative burden) but provides no prospective studies with no-AI assessment arms after AI-assisted training. The authors explicitly state 'further studies needed on surgical AI's long-term patient outcomes,' confirming the evidentiary gap.

View file

@@ -10,14 +10,18 @@ agent: vida
scope: structural
sourcer: KFF Health News / CBO
related_claims: ["[[the healthcare attractor state is a prevention-first system where aligned payment continuous monitoring and AI-augmented care delivery create a flywheel that profits from health rather than sickness]]", "[[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]"]
supports:
- OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026
reweave_edges:
- OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026|supports|2026-04-09
sourced_from:
- inbox/archive/health/2026-03-20-kff-cbo-obbba-coverage-losses-medicaid.md
supports: ["OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026"]
reweave_edges: ["OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026|supports|2026-04-09"]
sourced_from: ["inbox/archive/health/2026-03-20-kff-cbo-obbba-coverage-losses-medicaid.md"]
related: ["vbc-requires-enrollment-stability-as-structural-precondition-because-prevention-roi-depends-on-multi-year-attribution", "obbba-medicaid-work-requirements-destroy-enrollment-stability-required-for-vbc-prevention-roi"]
---
# Value-based care requires enrollment stability as structural precondition because prevention ROI depends on multi-year attribution and semi-annual redeterminations break the investment timeline
The OBBBA introduces semi-annual eligibility redeterminations (starting October 1, 2026) that structurally undermine VBC economics. VBC prevention investments — CHW programs, chronic disease management, SDOH interventions — require 2-4 year attribution windows to capture ROI because health improvements and cost savings accrue gradually. Semi-annual redeterminations create coverage churn that breaks this timeline: a patient enrolled in January may be off the plan by July, transferring the benefit of prevention investments to another payer or to uncompensated care. This makes prevention investments irrational for VBC plans because the entity bearing the cost (current plan) differs from the entity capturing the benefit (future plan or emergency system). The CBO projects 700K additional uninsured from redetermination frequency alone, but the VBC impact is larger: even patients who remain insured experience coverage fragmentation that destroys multi-year attribution. This is a structural challenge to the healthcare attractor state, which assumes enrollment stability enables prevention-first economics.
## Extending Evidence
**Source:** KFF Medicaid GLP-1 coverage analysis, January 2026
State Medicaid coverage instability now extends beyond enrollment churn to coverage policy reversal. Four states eliminated GLP-1 obesity coverage in 2025-2026, meaning patients who began treatment under coverage may lose access mid-therapy. This policy-level instability compounds enrollment churn, further undermining the multi-year attribution required for prevention ROI in value-based care models.