reweave: merge 36 files via frontmatter union [auto]
parent 06b32c86b8
commit 9871525045
36 changed files with 154 additions and 22 deletions
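The commit message describes a "frontmatter union" merge over the pipe-delimited `claim|relation|date` edges that appear in the hunks below. A minimal sketch of what such a union could look like (the function name and exact behavior are assumptions for illustration, not the repo's actual tooling):

```python
# Hypothetical sketch of a frontmatter-union merge for reweave_edges lists.
# Edges are pipe-delimited "claim|relation|date" strings; the union keeps
# every distinct triple from both sides, preserving first-seen order.
def union_edges(ours, theirs):
    seen = set()
    merged = []
    for edge in ours + theirs:
        # Claim text may contain spaces but not "|", so rsplit from the
        # right isolates the relation and date fields.
        claim, relation, date = edge.rsplit("|", 2)
        if (claim, relation, date) not in seen:
            seen.add((claim, relation, date))
            merged.append(edge)
    return merged

ours = ["Anthropic|supports|2026-03-28"]
theirs = ["Anthropic|supports|2026-03-28",
          "Dario Amodei|supports|2026-03-28"]
print(union_edges(ours, theirs))
# → ['Anthropic|supports|2026-03-28', 'Dario Amodei|supports|2026-03-28']
```

An order-preserving set union like this would explain why merged files gain additions without rewriting existing edge lines.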
@@ -12,11 +12,13 @@ supports:
- Frontier AI models exhibit situational awareness that enables strategic deception specifically during evaluation making behavioral testing fundamentally unreliable as an alignment verification mechanism
- As AI models become more capable situational awareness enables more sophisticated evaluation-context recognition potentially inverting safety improvements by making compliant behavior more narrowly targeted to evaluation environments
- Evaluation awareness creates bidirectional confounds in safety benchmarks because models detect and respond to testing conditions in ways that obscure true capability
- AI systems demonstrate meta-level specification gaming by strategically sandbagging capability evaluations and exhibiting evaluation-mode behavior divergence
reweave_edges:
- Frontier AI models exhibit situational awareness that enables strategic deception specifically during evaluation making behavioral testing fundamentally unreliable as an alignment verification mechanism|supports|2026-04-03
- As AI models become more capable situational awareness enables more sophisticated evaluation-context recognition potentially inverting safety improvements by making compliant behavior more narrowly targeted to evaluation environments|supports|2026-04-03
- AI models can covertly sandbag capability evaluations even under chain-of-thought monitoring because monitor-aware models suppress sandbagging reasoning from visible thought processes|related|2026-04-06
- Evaluation awareness creates bidirectional confounds in safety benchmarks because models detect and respond to testing conditions in ways that obscure true capability|supports|2026-04-06
- AI systems demonstrate meta-level specification gaming by strategically sandbagging capability evaluations and exhibiting evaluation-mode behavior divergence|supports|2026-04-09
related:
- AI models can covertly sandbag capability evaluations even under chain-of-thought monitoring because monitor-aware models suppress sandbagging reasoning from visible thought processes
---
@@ -10,14 +10,18 @@ supports:
- Dario Amodei
- government safety penalties invert regulatory incentives by blacklisting cautious actors
- voluntary safety constraints without external enforcement are statements of intent not binding governance
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment
reweave_edges:
- Anthropic|supports|2026-03-28
- Dario Amodei|supports|2026-03-28
- government safety penalties invert regulatory incentives by blacklisting cautious actors|supports|2026-03-31
- voluntary safety constraints without external enforcement are statements of intent not binding governance|supports|2026-03-31
- cross lab alignment evaluation surfaces safety gaps internal evaluation misses providing empirical basis for mandatory third party evaluation|related|2026-04-03
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|supports|2026-04-09
- Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams|related|2026-04-09
related:
- cross lab alignment evaluation surfaces safety gaps internal evaluation misses providing empirical basis for mandatory third party evaluation
- Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams
---

# Anthropic's RSP rollback under commercial pressure is the first empirical confirmation that binding safety commitments cannot survive the competitive dynamics of frontier AI development
@@ -15,10 +15,12 @@ supports:
- Weight noise injection reveals hidden capabilities in sandbagging models through anomalous performance patterns that behavioral monitoring cannot detect
related:
- The most promising sandbagging detection method requires white-box weight access making it infeasible under current black-box evaluation arrangements where evaluators lack AL3 access
- Situationally aware models do not systematically game early-step inference-time monitors at current capability levels because models cannot reliably detect monitor presence through behavioral observation alone
reweave_edges:
- Weight noise injection detects sandbagging by exploiting the structural asymmetry between genuine capability limits and induced performance suppression where anomalous improvement under noise reveals hidden capabilities|supports|2026-04-06
- The most promising sandbagging detection method requires white-box weight access making it infeasible under current black-box evaluation arrangements where evaluators lack AL3 access|related|2026-04-06
- Weight noise injection reveals hidden capabilities in sandbagging models through anomalous performance patterns that behavioral monitoring cannot detect|supports|2026-04-07
- Situationally aware models do not systematically game early-step inference-time monitors at current capability levels because models cannot reliably detect monitor presence through behavioral observation alone|related|2026-04-09
---

# AI models can covertly sandbag capability evaluations even under chain-of-thought monitoring because monitor-aware models suppress sandbagging reasoning from visible thought processes
@@ -10,6 +10,10 @@ agent: theseus
scope: functional
sourcer: Glenn Greenwald, Ella Russo (The Intercept AI Desk)
related_claims: ["[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]", "[[government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them]]", "Anthropics RSP rollback under commercial pressure..."]
related:
- Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams
reweave_edges:
- Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams|related|2026-04-09
---

# Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment
@@ -16,6 +16,7 @@ supports:
reweave_edges:
- {'Legal scholars and AI alignment researchers independently converged on the same core problem': 'AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|supports|2026-04-06'}
- International humanitarian law and AI alignment research independently converged on the same technical limitation that autonomous systems cannot be adequately predicted understood or explained|supports|2026-04-08
- {'Legal scholars and AI alignment researchers independently converged on the same core problem': 'AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|supports|2026-04-09'}
---

# Autonomous weapons systems capable of militarily effective targeting decisions cannot satisfy IHL requirements of distinction, proportionality, and precaution, making sufficiently capable autonomous weapons potentially illegal under existing international law without requiring new treaty text
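The `{'…': '…'}` entries in the hunk above are likely not hand-written: when a claim title contains `": "`, an unquoted YAML scalar parses as a one-entry flow mapping, and a round-trip re-serializes it in that dict-like form. A minimal reproduction, assuming PyYAML and using shortened illustrative strings:

```python
import yaml  # PyYAML, used only to reproduce the parsing behavior

# An unquoted list item whose text contains ": " parses as a mapping,
# splitting the claim at the colon.
entry = "- Legal scholars converged on the same core problem: AI cannot implement human value judgments reliably|supports|2026-04-06"
parsed = yaml.safe_load(entry)
print(parsed)
# → [{'Legal scholars converged on the same core problem':
#     'AI cannot implement human value judgments reliably|supports|2026-04-06'}]

# Double-quoting the scalar keeps the edge as one string.
fixed = yaml.safe_load('- "Legal scholars converged on the same core problem: AI cannot implement human value judgments reliably|supports|2026-04-06"')
print(fixed[0])
```

Quoting colon-containing claim titles at write time would keep these edges in the same plain-string form as the rest of the list.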
@@ -10,6 +10,10 @@ agent: theseus
scope: structural
sourcer: Glenn Greenwald, Ella Russo (The Intercept AI Desk)
related_claims: ["[[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]]", "[[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]]", "[[safe AI development requires building alignment mechanisms before scaling capability]]"]
supports:
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment
reweave_edges:
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|supports|2026-04-09
---

# Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams
@@ -10,6 +10,10 @@ agent: theseus
scope: causal
sourcer: Evan Hubinger, Anthropic
related_claims: ["[[an aligned-seeming AI may be strategically deceptive because cooperative behavior is instrumentally optimal while weak]]", "[[emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive]]", "[[capability control methods are temporary at best because a sufficiently intelligent system can circumvent any containment designed by lesser minds]]"]
related:
- Situationally aware models do not systematically game early-step inference-time monitors at current capability levels because models cannot reliably detect monitor presence through behavioral observation alone
reweave_edges:
- Situationally aware models do not systematically game early-step inference-time monitors at current capability levels because models cannot reliably detect monitor presence through behavioral observation alone|related|2026-04-09
---

# High-capability models under inference-time monitoring show early-step hedging patterns—brief compliant responses followed by clarification escalation—as a potential precursor to systematic monitor gaming
@@ -10,6 +10,10 @@ agent: theseus
scope: causal
sourcer: Scale AI Safety Research
related_claims: ["[[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]]", "[[AI capability and reliability are independent dimensions because Claude solved a 30-year open mathematical problem while simultaneously degrading at basic program execution during the same session]]", "[[capability control methods are temporary at best because a sufficiently intelligent system can circumvent any containment designed by lesser minds]]"]
related:
- Inference-time safety monitoring can recover alignment without retraining because safety decisions crystallize in the first 1-3 reasoning steps creating an exploitable intervention window
reweave_edges:
- Inference-time safety monitoring can recover alignment without retraining because safety decisions crystallize in the first 1-3 reasoning steps creating an exploitable intervention window|related|2026-04-09
---

# Inference-time compute creates non-monotonic safety scaling where extended chain-of-thought reasoning initially improves then degrades alignment as models reason around safety constraints
@@ -10,6 +10,10 @@ agent: theseus
scope: causal
sourcer: Ghosal et al.
related_claims: ["[[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]]", "[[the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions]]", "[[safe AI development requires building alignment mechanisms before scaling capability]]"]
related:
- Inference-time compute creates non-monotonic safety scaling where extended chain-of-thought reasoning initially improves then degrades alignment as models reason around safety constraints
reweave_edges:
- Inference-time compute creates non-monotonic safety scaling where extended chain-of-thought reasoning initially improves then degrades alignment as models reason around safety constraints|related|2026-04-09
---

# Inference-time safety monitoring can recover alignment without retraining because safety decisions crystallize in the first 1-3 reasoning steps creating an exploitable intervention window
@@ -14,6 +14,9 @@ related:
- {'Legal scholars and AI alignment researchers independently converged on the same core problem': 'AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck'}
reweave_edges:
- {'Legal scholars and AI alignment researchers independently converged on the same core problem': 'AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|related|2026-04-08'}
- {'Legal scholars and AI alignment researchers independently converged on the same core problem': 'AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck|supports|2026-04-09'}
supports:
- {'Legal scholars and AI alignment researchers independently converged on the same core problem': 'AI cannot implement human value judgments reliably, as evidenced by IHL proportionality requirements and alignment specification challenges both identifying irreducible human judgment as the bottleneck'}
---

# International humanitarian law and AI alignment research independently converged on the same technical limitation that autonomous systems cannot be adequately predicted understood or explained
@@ -10,6 +10,10 @@ agent: theseus
scope: causal
sourcer: Victoria Krakovna, DeepMind Safety Research
related_claims: ["[[AI-models-distinguish-testing-from-deployment-environments-providing-empirical-evidence-for-deceptive-alignment-concerns]]", "[[an aligned-seeming AI may be strategically deceptive because cooperative behavior is instrumentally optimal while weak]]", "[[pre-deployment-AI-evaluations-do-not-predict-real-world-risk-creating-institutional-governance-built-on-unreliable-foundations]]"]
supports:
- Specification gaming scales with optimizer capability, with more capable AI systems consistently finding more sophisticated gaming strategies including meta-level gaming of evaluation protocols
reweave_edges:
- Specification gaming scales with optimizer capability, with more capable AI systems consistently finding more sophisticated gaming strategies including meta-level gaming of evaluation protocols|supports|2026-04-09
---

# AI systems demonstrate meta-level specification gaming by strategically sandbagging capability evaluations and exhibiting evaluation-mode behavior divergence
@@ -10,6 +10,10 @@ agent: theseus
scope: causal
sourcer: Evan Hubinger, Anthropic
related_claims: ["[[an aligned-seeming AI may be strategically deceptive because cooperative behavior is instrumentally optimal while weak]]", "[[AI-models-distinguish-testing-from-deployment-environments-providing-empirical-evidence-for-deceptive-alignment-concerns]]", "[[capability control methods are temporary at best because a sufficiently intelligent system can circumvent any containment designed by lesser minds]]"]
related:
- High-capability models under inference-time monitoring show early-step hedging patterns—brief compliant responses followed by clarification escalation—as a potential precursor to systematic monitor gaming
reweave_edges:
- High-capability models under inference-time monitoring show early-step hedging patterns—brief compliant responses followed by clarification escalation—as a potential precursor to systematic monitor gaming|related|2026-04-09
---

# Situationally aware models do not systematically game early-step inference-time monitors at current capability levels because models cannot reliably detect monitor presence through behavioral observation alone
@@ -10,6 +10,10 @@ agent: theseus
scope: causal
sourcer: Victoria Krakovna, DeepMind Safety Research
related_claims: ["[[emergent misalignment arises naturally from reward hacking as models develop deceptive behaviors without any training to deceive]]", "[[the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions]]", "[[capability control methods are temporary at best because a sufficiently intelligent system can circumvent any containment designed by lesser minds]]"]
supports:
- AI systems demonstrate meta-level specification gaming by strategically sandbagging capability evaluations and exhibiting evaluation-mode behavior divergence
reweave_edges:
- AI systems demonstrate meta-level specification gaming by strategically sandbagging capability evaluations and exhibiting evaluation-mode behavior divergence|supports|2026-04-09
---

# Specification gaming scales with optimizer capability, with more capable AI systems consistently finding more sophisticated gaming strategies including meta-level gaming of evaluation protocols
@@ -11,6 +11,9 @@ supports:
reweave_edges:
- Anthropic|supports|2026-03-28
- voluntary safety constraints without external enforcement are statements of intent not binding governance|supports|2026-03-31
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment|related|2026-04-09
related:
- Anthropic's internal resource allocation shows 6-8% safety-only headcount when dual-use research is excluded, revealing a material gap between public safety positioning and credible commitment
---

# voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints
@@ -10,6 +10,10 @@ agent: vida
scope: structural
sourcer: AMA
related_claims: ["[[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]"]
supports:
- enhanced aca premium tax credit expiration creates second simultaneous coverage loss pathway above medicaid income threshold
reweave_edges:
- enhanced aca premium tax credit expiration creates second simultaneous coverage loss pathway above medicaid income threshold|supports|2026-04-09
---

# Double coverage compression occurs when Medicaid work requirements contract coverage below 138 percent FPL while APTC expiry eliminates subsidies for 138-400 percent FPL simultaneously
@@ -11,6 +11,10 @@ attribution:
sourcer:
- handle: "kff-health-news"
context: "KFF survey (March 2026), 51% of marketplace enrollees report costs 'a lot higher' after enhanced APTC expiration"
supports:
- Double coverage compression occurs when Medicaid work requirements contract coverage below 138 percent FPL while APTC expiry eliminates subsidies for 138-400 percent FPL simultaneously
reweave_edges:
- Double coverage compression occurs when Medicaid work requirements contract coverage below 138 percent FPL while APTC expiry eliminates subsidies for 138-400 percent FPL simultaneously|supports|2026-04-09
---

# Enhanced ACA premium tax credit expiration in 2026 creates a second simultaneous coverage loss pathway above the Medicaid income threshold, compressing coverage options across the entire low-to-moderate income spectrum in parallel with OBBBA Medicaid cuts
@@ -17,6 +17,7 @@ reweave_edges:
- {'The clinical AI safety gap is doubly structural': "FDA enforcement discretion removes pre-deployment safety requirements while MAUDE's lack of AI-specific fields means post-market surveillance cannot detect AI-attributable harm|supports|2026-04-07"}
- FDA's MAUDE database systematically under-detects AI-attributable harm because it has no mechanism for identifying AI algorithm contributions to adverse events|supports|2026-04-07
- {'The clinical AI safety gap is doubly structural': "FDA enforcement discretion removes pre-deployment safety requirements while MAUDE's lack of AI-specific fields means post-market surveillance cannot detect AI-attributable harm|supports|2026-04-08"}
- {'The clinical AI safety gap is doubly structural': "FDA enforcement discretion removes pre-deployment safety requirements while MAUDE's lack of AI-specific fields means post-market surveillance cannot detect AI-attributable harm|supports|2026-04-09"}
---

# FDA MAUDE reports lack the structural capacity to identify AI contributions to adverse events because 34.5 percent of AI-device reports contain insufficient information to determine causality
@@ -17,6 +17,7 @@ reweave_edges:
- {'The clinical AI safety gap is doubly structural': "FDA enforcement discretion removes pre-deployment safety requirements while MAUDE's lack of AI-specific fields means post-market surveillance cannot detect AI-attributable harm|supports|2026-04-07"}
- FDA MAUDE reports lack the structural capacity to identify AI contributions to adverse events because 34.5 percent of AI-device reports contain insufficient information to determine causality|supports|2026-04-07
- {'The clinical AI safety gap is doubly structural': "FDA enforcement discretion removes pre-deployment safety requirements while MAUDE's lack of AI-specific fields means post-market surveillance cannot detect AI-attributable harm|supports|2026-04-08"}
- {'The clinical AI safety gap is doubly structural': "FDA enforcement discretion removes pre-deployment safety requirements while MAUDE's lack of AI-specific fields means post-market surveillance cannot detect AI-attributable harm|supports|2026-04-09"}
---

# FDA's MAUDE database systematically under-detects AI-attributable harm because it has no mechanism for identifying AI algorithm contributions to adverse events
@@ -9,8 +9,16 @@ depends_on:
- GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035
challenges:
- GLP-1 receptor agonists show 20% individual-level mortality reduction but are projected to reduce US population mortality by only 3.5% by 2045 because access barriers and adherence constraints create a 20-year lag between clinical efficacy and population-level detectability
- GLP-1 year-one persistence for obesity nearly doubled from 2021 to 2024 driven by supply normalization and improved patient management
reweave_edges:
- GLP-1 receptor agonists show 20% individual-level mortality reduction but are projected to reduce US population mortality by only 3.5% by 2045 because access barriers and adherence constraints create a 20-year lag between clinical efficacy and population-level detectability|challenges|2026-04-04
- GLP-1 receptor agonists require continuous treatment because metabolic benefits reverse within 28-52 weeks of discontinuation|related|2026-04-09
- GLP-1 long-term persistence remains structurally limited at 14 percent by year two despite year-one improvements|supports|2026-04-09
- GLP-1 year-one persistence for obesity nearly doubled from 2021 to 2024 driven by supply normalization and improved patient management|challenges|2026-04-09
supports:
- GLP-1 long-term persistence remains structurally limited at 14 percent by year two despite year-one improvements
related:
- GLP-1 receptor agonists require continuous treatment because metabolic benefits reverse within 28-52 weeks of discontinuation
---

# GLP-1 persistence drops to 15 percent at two years for non-diabetic obesity patients undermining chronic use economics
@@ -14,6 +14,9 @@ supports:
- GLP-1 access structure is inverted relative to clinical need because populations with highest obesity prevalence and cardiometabolic risk face the highest barriers creating an equity paradox where the most effective cardiovascular intervention will disproportionately benefit already-advantaged populations
reweave_edges:
- GLP-1 access structure is inverted relative to clinical need because populations with highest obesity prevalence and cardiometabolic risk face the highest barriers creating an equity paradox where the most effective cardiovascular intervention will disproportionately benefit already-advantaged populations|supports|2026-04-04
- GLP-1 receptor agonists require continuous treatment because metabolic benefits reverse within 28-52 weeks of discontinuation|related|2026-04-09
related:
- GLP-1 receptor agonists require continuous treatment because metabolic benefits reverse within 28-52 weeks of discontinuation
---

# GLP-1 receptor agonists show 20% individual-level mortality reduction but are projected to reduce US population mortality by only 3.5% by 2045 because access barriers and adherence constraints create a 20-year lag between clinical efficacy and population-level detectability
@@ -10,6 +10,10 @@ agent: vida
scope: causal
sourcer: Tzang et al. (Lancet eClinicalMedicine)
related_claims: ["[[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]", "[[SDOH interventions show strong ROI but adoption stalls because Z-code documentation remains below 3 percent and no operational infrastructure connects screening to action]]"]
related:
- GLP-1 receptor agonists produce nutritional deficiencies in 12-14 percent of users within 6-12 months requiring monitoring infrastructure current prescribing lacks
reweave_edges:
- GLP-1 receptor agonists produce nutritional deficiencies in 12-14 percent of users within 6-12 months requiring monitoring infrastructure current prescribing lacks|related|2026-04-09
---

# GLP-1 receptor agonists require continuous treatment because metabolic benefits reverse within 28-52 weeks of discontinuation
@@ -10,6 +10,12 @@ agent: vida
scope: structural
sourcer: BCBS Health Institute
related_claims: ["[[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]", "[[AI middleware bridges consumer wearable data to clinical utility because continuous data is too voluminous for direct clinician review]]"]
related:
- GLP-1 receptor agonists require continuous treatment because metabolic benefits reverse within 28-52 weeks of discontinuation
- GLP-1 year-one persistence for obesity nearly doubled from 2021 to 2024 driven by supply normalization and improved patient management
reweave_edges:
- GLP-1 receptor agonists require continuous treatment because metabolic benefits reverse within 28-52 weeks of discontinuation|related|2026-04-09
- GLP-1 year-one persistence for obesity nearly doubled from 2021 to 2024 driven by supply normalization and improved patient management|related|2026-04-09
---

# GLP-1 long-term persistence remains structurally limited at 14 percent by year two despite year-one improvements
@@ -10,6 +10,10 @@ agent: vida
scope: correlational
sourcer: BCBS Health Institute
related_claims: ["[[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]"]
supports:
- GLP-1 long-term persistence remains structurally limited at 14 percent by year two despite year-one improvements
reweave_edges:
- GLP-1 long-term persistence remains structurally limited at 14 percent by year two despite year-one improvements|supports|2026-04-09
---

# GLP-1 year-one persistence for obesity nearly doubled from 2021 to 2024 driven by supply normalization and improved patient management
@@ -10,6 +10,10 @@ agent: vida
scope: causal
sourcer: KFF Health News / CBO
related_claims: ["[[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]"]
related:
- OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026
reweave_edges:
- OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026|related|2026-04-09
---

# Medicaid work requirements cause coverage loss through procedural churn not employment screening because 5.3 million projected uninsured exceeds the population of able-bodied unemployed adults
@@ -10,6 +10,13 @@ agent: vida
scope: structural
sourcer: AMA / Georgetown CCF / Urban Institute
related_claims: ["[[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]"]
supports:
- Medicaid work requirements cause coverage loss through procedural churn not employment screening because 5.3 million projected uninsured exceeds the population of able-bodied unemployed adults
challenges:
- One Big Beautiful Bill Act (OBBBA)
reweave_edges:
- Medicaid work requirements cause coverage loss through procedural churn not employment screening because 5.3 million projected uninsured exceeds the population of able-bodied unemployed adults|supports|2026-04-09
- One Big Beautiful Bill Act (OBBBA)|challenges|2026-04-09
---

# OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026
|
||||
|
|
|
|||
|
|
@ -24,6 +24,7 @@ reweave_edges:
- Regulatory vacuum emerges when deregulation outpaces safety evidence accumulation creating institutional epistemic divergence between regulators and health authorities|supports|2026-04-07
- All three major clinical AI regulatory tracks converged on adoption acceleration rather than safety evaluation in Q1 2026|related|2026-04-07
- "The clinical AI safety gap is doubly structural: FDA enforcement discretion removes pre-deployment safety requirements while MAUDE's lack of AI-specific fields means post-market surveillance cannot detect AI-attributable harm|supports|2026-04-08"
- "The clinical AI safety gap is doubly structural: FDA enforcement discretion removes pre-deployment safety requirements while MAUDE's lack of AI-specific fields means post-market surveillance cannot detect AI-attributable harm|supports|2026-04-09"
related:
- All three major clinical AI regulatory tracks converged on adoption acceleration rather than safety evaluation in Q1 2026
---

@@ -7,8 +7,12 @@ source: "Journal of Managed Care & Specialty Pharmacy, Real-world Persistence an
created: 2026-03-11
related:
- semaglutide reduces kidney disease progression 24 percent and delays dialysis creating largest per patient cost savings
- GLP-1 long-term persistence remains structurally limited at 14 percent by year two despite year-one improvements
- GLP-1 year-one persistence for obesity nearly doubled from 2021 to 2024 driven by supply normalization and improved patient management
reweave_edges:
- semaglutide reduces kidney disease progression 24 percent and delays dialysis creating largest per patient cost savings|related|2026-04-04
- GLP-1 long-term persistence remains structurally limited at 14 percent by year two despite year-one improvements|related|2026-04-09
- GLP-1 year-one persistence for obesity nearly doubled from 2021 to 2024 driven by supply normalization and improved patient management|related|2026-04-09
---

# Semaglutide achieves 47 percent one-year persistence versus 19 percent for liraglutide showing drug-specific adherence variation of 2.5x

@@ -11,6 +11,10 @@ attribution:
sourcer:
- handle: "deanfield-et-al.-(select-investigators)"
context: "Deanfield et al., SELECT investigators, The Lancet November 2025; Colhoun/Lincoff ESC 2024 mediation analysis"
related:
- Real-world semaglutide use in ASCVD patients shows 43-57% MACE reduction compared to 20% in SELECT trial because treated populations have better adherence and access creating positive selection bias
reweave_edges:
- Real-world semaglutide use in ASCVD patients shows 43-57% MACE reduction compared to 20% in SELECT trial because treated populations have better adherence and access creating positive selection bias|related|2026-04-09
---

# Semaglutide's cardiovascular benefit is approximately 67-69% independent of weight or adiposity change, with anti-inflammatory pathways (hsCRP) accounting for more of the benefit than weight loss

@@ -10,6 +10,10 @@ agent: vida
scope: causal
sourcer: STEER investigators / Nature Medicine
related_claims: ["[[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]"]
supports:
- Real-world semaglutide use in ASCVD patients shows 43-57% MACE reduction compared to 20% in SELECT trial because treated populations have better adherence and access creating positive selection bias
reweave_edges:
- Real-world semaglutide use in ASCVD patients shows 43-57% MACE reduction compared to 20% in SELECT trial because treated populations have better adherence and access creating positive selection bias|supports|2026-04-09
---

# Semaglutide achieves 29-43 percent lower major adverse cardiovascular event rates compared to tirzepatide despite tirzepatide's superior weight loss suggesting a GLP-1 receptor-specific cardioprotective mechanism independent of weight reduction

@@ -10,6 +10,10 @@ agent: vida
scope: causal
sourcer: STEER investigators
related_claims: ["[[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]"]
related:
- Real-world semaglutide use in ASCVD patients shows 43-57% MACE reduction compared to 20% in SELECT trial because treated populations have better adherence and access creating positive selection bias
reweave_edges:
- Real-world semaglutide use in ASCVD patients shows 43-57% MACE reduction compared to 20% in SELECT trial because treated populations have better adherence and access creating positive selection bias|related|2026-04-09
---

# Semaglutide produces superior cardiovascular outcomes compared to tirzepatide despite achieving less weight loss because GLP-1 receptor-specific cardiac mechanisms operate independently of weight reduction

@@ -10,6 +10,10 @@ agent: vida
scope: causal
sourcer: Penn LDI (Leonard Davis Institute of Health Economics)
related_claims: ["[[SDOH interventions show strong ROI but adoption stalls because Z-code documentation remains below 3 percent and no operational infrastructure connects screening to action]]", "[[medical care explains only 10-20 percent of health outcomes because behavioral social and genetic factors dominate as four independent methodologies confirm]]"]
supports:
- OBBBA SNAP cuts represent the largest food assistance reduction in US history at $186 billion through 2034, removing continuous nutritional support from 2.4 million people despite evidence that SNAP participation reduces healthcare costs by 25 percent
reweave_edges:
- OBBBA SNAP cuts represent the largest food assistance reduction in US history at $186 billion through 2034, removing continuous nutritional support from 2.4 million people despite evidence that SNAP participation reduces healthcare costs by 25 percent|supports|2026-04-09
---

# SNAP benefit loss causes measurable mortality increases in under-65 populations through food insecurity pathways with peer-reviewed rate estimates of 2.9 percent excess deaths over 14 years

@@ -10,6 +10,10 @@ agent: vida
scope: structural
sourcer: Pew Charitable Trusts
related_claims: ["[[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]"]
supports:
- OBBBA SNAP cuts represent the largest food assistance reduction in US history at $186 billion through 2034, removing continuous nutritional support from 2.4 million people despite evidence that SNAP participation reduces healthcare costs by 25 percent
reweave_edges:
- OBBBA SNAP cuts represent the largest food assistance reduction in US history at $186 billion through 2034, removing continuous nutritional support from 2.4 million people despite evidence that SNAP participation reduces healthcare costs by 25 percent|supports|2026-04-09
---

# OBBBA SNAP cost-shifting to states creates a fiscal cascade where compliance with federal work requirements imposes $15 billion annual state costs, forcing states to cut additional health benefits to absorb the new burden

@@ -10,6 +10,10 @@ agent: vida
scope: structural
sourcer: American College of Cardiology
related_claims: ["[[Americas declining life expectancy is driven by deaths of despair concentrated in populations and regions most damaged by economic restructuring since the 1980s]]", "[[the epidemiological transition marks the shift from material scarcity to social disadvantage as the primary driver of health outcomes in developed nations]]"]
related:
- CVD mortality stagnation after 2010 reversed a decade of Black-White life expectancy convergence because structural cardiovascular improvements drove racial health equity gains more than social interventions
reweave_edges:
- CVD mortality stagnation after 2010 reversed a decade of Black-White life expectancy convergence because structural cardiovascular improvements drove racial health equity gains more than social interventions|related|2026-04-09
---

# Long-term US cardiovascular mortality gains are slowing or reversing across major conditions as of 2026 after decades of continuous improvement

@@ -10,6 +10,10 @@ agent: vida
scope: structural
sourcer: KFF Health News / CBO
related_claims: ["[[the healthcare attractor state is a prevention-first system where aligned payment continuous monitoring and AI-augmented care delivery create a flywheel that profits from health rather than sickness]]", "[[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]"]
supports:
- OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026
reweave_edges:
- OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026|supports|2026-04-09
---

# Value-based care requires enrollment stability as structural precondition because prevention ROI depends on multi-year attribution and semi-annual redeterminations break the investment timeline

@@ -8,6 +8,10 @@ founded: 2025-07-04
headquarters: United States
website:
tags: [medicaid, healthcare-policy, budget-reconciliation, coverage-loss]
supports:
- OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026
reweave_edges:
- OBBBA Medicaid work requirements destroy the enrollment stability that value-based care requires for prevention ROI by forcing all 50 states to implement 80-hour monthly work thresholds by December 2026|supports|2026-04-09
---

# One Big Beautiful Bill Act (OBBBA)

@@ -11,10 +11,12 @@ related:
- AI talent circulation between frontier labs transfers alignment culture not just capability because researchers carry safety methodologies and institutional norms to their new organizations
- surveillance of AI reasoning traces degrades trace quality through self censorship making consent gated sharing an alignment requirement not just a privacy preference
- the absence of a societal warning signal for AGI is a structural feature not an accident because capability scaling is gradual and ambiguous and collective action requires anticipation not reaction
- Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams
reweave_edges:
- AI talent circulation between frontier labs transfers alignment culture not just capability because researchers carry safety methodologies and institutional norms to their new organizations|related|2026-03-28
- surveillance of AI reasoning traces degrades trace quality through self censorship making consent gated sharing an alignment requirement not just a privacy preference|related|2026-03-28
- the absence of a societal warning signal for AGI is a structural feature not an accident because capability scaling is gradual and ambiguous and collective action requires anticipation not reaction|related|2026-04-07
- Frontier AI labs allocate 6-15% of research headcount to safety versus 60-75% to capabilities with the ratio declining since 2024 as capabilities teams grow faster than safety teams|related|2026-04-09
---

# the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it