Compare commits

..

1 commit

Author SHA1 Message Date
Teleo Agents
2e09496cf7 vida: extract claims from 2026-04-21-apotex-fda-tentative-approval-generic-semaglutide
Some checks failed
Mirror PR to Forgejo / mirror (pull_request) Has been cancelled
- Source: inbox/queue/2026-04-21-apotex-fda-tentative-approval-generic-semaglutide.md
- Domain: health
- Claims: 0, Entities: 2
- Enrichments: 2
- Extracted by: pipeline ingest (OpenRouter anthropic/claude-sonnet-4.5)

Pentagon-Agent: Vida <PIPELINE>
2026-04-21 04:38:19 +00:00
29 changed files with 85 additions and 297 deletions

View file

@@ -1,11 +1,10 @@
---
description: Food insecurity programs return 85 percent ROI and housing programs 50 percent but SDOH Z-code documentation remains below 3 percent of encounters because screening mandates exist without operational workflows to connect identification to intervention
type: claim
domain: health
created: 2026-02-17
source: "Health Affairs Scholar food/housing ROI meta-analysis 2025; PMC Z-code documentation rates 2024; SAGE Journals integrated SDOH model 6.9:1 ROI 2025; National Academies social isolation 2023"
confidence: likely
---
# SDOH interventions show strong ROI but adoption stalls because Z-code documentation remains below 3 percent and no operational infrastructure connects screening to action
@@ -69,10 +68,3 @@ Relevant Notes:
Topics:
- health and wellness
## Supporting Evidence
**Source:** JMIR 2024 e59939; ASPE/HHS Medicaid telehealth trends
Parallel structural mechanism in telehealth: 46 state Medicaid programs now reimburse audio-only telehealth and 37 states allow FQHCs as distant-site providers, but Medicaid-accepting facilities are 25 percent less likely to offer telehealth services. Policy enables the intervention (telehealth coverage, Z-code documentation) but operational infrastructure is absent—provider participation doesn't follow policy mandates without addressing underlying structural barriers.

View file

@@ -10,17 +10,21 @@ agent: vida
scope: causal
sourcer: Natali et al.
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
supports:
- "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance'}"
- Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem
- "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance"
related:
- Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers
reweave_edges:
- "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-14'}"
- Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|related|2026-04-14
- Dopaminergic reinforcement of AI-assisted success creates motivational entrenchment that makes deskilling a behavioral incentive problem, not just a training design problem|supports|2026-04-14
- "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-17'}"
- "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-18'}"
- "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-19"
---
# AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
Natali et al.'s systematic review across 10 medical specialties reveals a universal three-phase pattern: (1) AI assistance improves performance metrics while present, (2) extended AI use reduces opportunities for independent skill-building, and (3) performance degrades when AI becomes unavailable, demonstrating dependency rather than augmentation.
Quantitative evidence includes: colonoscopy ADR dropping from 28.4% to 22.4% when endoscopists reverted to non-AI procedures after extended AI use (RCT); 30%+ of pathologists reversing correct initial diagnoses when exposed to incorrect AI suggestions under time pressure; 45.5% of ACL diagnosis errors resulting directly from following incorrect AI recommendations across all experience levels. The pattern's consistency across specialties as diverse as neurosurgery, anesthesiology, and geriatrics—not just image-reading specialties—suggests this is a fundamental property of how human cognitive architecture responds to reliable performance assistance, not a specialty-specific implementation problem. The proposed mechanism: AI assistance creates cognitive offloading where clinicians stop engaging prefrontal cortex analytical processes, hippocampal memory formation decreases over repeated exposure, and dopaminergic reinforcement of AI-reliance strengthens, producing skill degradation that becomes visible when AI is removed.
## Supporting Evidence
**Source:** Heudel PE et al. 2026, ESMO scoping review
First comprehensive scoping review (literature through August 2025) confirms consistent deskilling pattern across colonoscopy (6.0pp ADR decline), radiology (12% false-positive increase), pathology (30%+ diagnosis reversals), and cytology (80-85% training volume reduction). Zero studies showed durable skill improvement, making the evidence base one-sided.
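The `reweave_edges` entries in the frontmatter above encode graph edges as pipe-delimited strings of the form `target claim title|relation|date`. A minimal sketch of parsing that layout — the function name is mine and the three-field layout is inferred from the entries shown, not a documented schema; the stray `{'…'}`-wrapped variants visible in some entries are not handled here:

```python
from datetime import date

def parse_reweave_edge(edge: str):
    """Split a pipe-delimited reweave edge into (target, relation, date).

    Assumed layout, inferred from the frontmatter: the claim title may be
    long prose, so split from the right to isolate the last two fields.
    """
    target, relation, stamp = edge.rsplit("|", 2)
    return target, relation, date.fromisoformat(stamp)

edge = ("Dopaminergic reinforcement of AI-assisted success creates "
        "motivational entrenchment that makes deskilling a behavioral "
        "incentive problem, not just a training design problem"
        "|supports|2026-04-14")
target, relation, when = parse_reweave_edge(edge)
# relation -> "supports", when -> datetime.date(2026, 4, 14)
```

Splitting from the right (`rsplit`) matters because the claim titles themselves are free text; only the final two fields are structured.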

View file

@@ -1,25 +0,0 @@
---
type: claim
domain: health
description: Medicare beneficiaries who are older, racial/ethnic minorities, dual-enrolled, rural, or have low broadband access are significantly more likely to use audio-only than video telehealth
confidence: experimental
source: JMIR 2024 e59939; ASPE/HHS Medicaid telehealth trends
created: 2026-04-21
title: Audio-only telehealth is the equity-relevant modality because it over-indexes on populations that video-based telehealth systematically underserves
agent: vida
scope: functional
sourcer: JMIR 2024
challenges: ["the mental health supply gap is widening not closing because demand outpaces workforce growth and technology primarily serves the already-served rather than expanding access", "generic-digital-health-deployment-reproduces-existing-disparities-by-disproportionately-benefiting-higher-income-users-despite-nominal-technology-access-equity"]
related: ["the mental health supply gap is widening not closing because demand outpaces workforce growth and technology primarily serves the already-served rather than expanding access", "generic-digital-health-deployment-reproduces-existing-disparities-by-disproportionately-benefiting-higher-income-users-despite-nominal-technology-access-equity"]
---
# Audio-only telehealth is the equity-relevant modality because it over-indexes on populations that video-based telehealth systematically underserves
Among telehealth modalities, audio-only demonstrates a distinct equity profile. Medicare beneficiaries who are older, racial/ethnic minorities, dual-enrolled, rural, or have low broadband access are significantly more likely to use audio-only than video-based telehealth. This pattern inverts the typical digital health disparity where higher-income, higher-education, urban populations dominate adoption. Audio-only reaches the populations that cannot manage video—whether due to broadband limitations, device access, digital literacy barriers, or privacy constraints (video requires private space that many low-income households lack). The modality functions as the most equitable telehealth option precisely because it removes the technical and environmental barriers that video imposes. Maryland is cited as the only state that has legislatively expanded Medicaid telehealth definition to include text messaging, suggesting policy recognition of modality-specific equity implications. The Crisis Text Line similarly over-indexes on young, rural, low-income users. This creates a policy implication: audio-only coverage and reimbursement parity is the equity-relevant lever for telehealth access, while video-based telehealth (the dominant modality) reinforces existing disparities. Video-based telehealth is 1.62-1.67x more common in low-deprivation areas (PNAS Nexus 2025), confirming the modality-specific disparity pattern.
## Challenging Evidence
**Source:** Journal of Telemedicine and Telecare, Medicare claims 2019-2020
2019-2020 Medicare claims show telehealth disparities EXPANDED during COVID, not contracted. Non-Hispanic Black/African-American and Hispanic beneficiaries were less likely to utilize telehealth than White beneficiaries, with disparities growing in 2020. Rural patients went from MORE likely (2019) to LESS likely (2020) to use telehealth. This challenges the assumption that telehealth modality alone solves equity—the data shows structural displacement when demand surges overwhelm capacity.

View file

@@ -10,16 +10,8 @@ agent: vida
scope: causal
sourcer: Natali et al.
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
related: ["automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output"]
---
# Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers
A controlled study of 27 radiologists performing mammography reads found that erroneous AI prompts increased false-positive recalls by up to 12 percentage points, with the effect persisting across experience levels. The mechanism is automation bias: radiologists anchor on AI output rather than conducting fully independent reads, even when they possess the expertise to identify the error. This differs from simple deskilling—it's real-time mis-skilling where the AI's presence actively degrades decision quality below what the clinician would achieve independently. The finding is particularly significant because it occurs in experienced readers, suggesting automation bias is not a training problem but a fundamental feature of human-AI interaction in high-stakes decision contexts.
Similar patterns appeared in computational pathology (30%+ diagnosis reversals under time pressure) and ACL diagnosis (45.5% of errors from following incorrect AI recommendations), indicating the mechanism generalizes across imaging modalities and clinical contexts.
## Supporting Evidence
**Source:** Heudel PE et al. 2026
Radiology evidence from Heudel review: erroneous AI prompts increased false-positive recalls by up to 12% even among experienced radiologists, demonstrating automation bias operates in expert practitioners, not just novices. This confirms the anchoring mechanism operates across experience levels.

View file

@@ -10,24 +10,26 @@ agent: vida
scope: causal
sourcer: Artificial Intelligence Review (Springer Nature)
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
supports:
- Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect
- "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance'}"
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
- Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling
reweave_edges:
- Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect|supports|2026-04-12
- "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-14'}"
- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|supports|2026-04-14
- Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers|supports|2026-04-14
- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14
- "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|related|2026-04-17'}"
- "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|supports|2026-04-18'}"
- "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance|related|2026-04-19"
related:
- "{'AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms': 'prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance'}"
- "AI assistance may produce neurologically-grounded, partially irreversible skill degradation through three concurrent mechanisms: prefrontal disengagement, hippocampal memory formation reduction, and dopaminergic reinforcement of AI reliance"
---
# Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each
This systematic review identifies three mechanistically distinct pathways through which clinical AI degrades physician competence. **Deskilling** occurs when existing expertise atrophies through disuse: colonoscopy polyp detection dropped from 28.4% to 22.4% after 3 months of AI use, and experienced radiologists showed 12% increased false-positive recalls after exposure to erroneous AI prompts.
**Mis-skilling** occurs when clinicians actively learn incorrect patterns from systematically biased AI outputs: in computational pathology studies, 30%+ of participants reversed correct initial diagnoses after exposure to incorrect AI suggestions under time constraints. **Never-skilling** is categorically different: trainees who begin clinical education with AI assistance may never develop foundational competencies. Junior radiologists are far less likely than senior colleagues to detect AI errors — not because they've lost skills, but because they never acquired them. This is structurally invisible because there's no pre-AI baseline to compare against. The review documents mitigation strategies including AI-off drills, structured assessment pre-AI review, and curriculum redesign with explicit competency development before AI exposure. The key insight is that these three failure modes require fundamentally different interventions: deskilling requires practice maintenance, mis-skilling requires error detection training, and never-skilling requires prospective competency assessment before AI exposure.
## Extending Evidence
**Source:** Heudel PE et al. 2026, UK cervical screening consolidation
UK cytology lab consolidation provides first structural never-skilling mechanism: 80-85% training volume reduction through consolidation from 45 to 8 labs. This extends the never-skilling concept from individual cognitive failure to institutional infrastructure destruction. The mechanism is not 'physicians never learn because AI does it for them' but 'training infrastructure is dismantled so learning becomes impossible.'
## Supporting Evidence
**Source:** PubMed systematic search, April 21, 2026
The complete absence of peer-reviewed evidence for durable up-skilling after 5+ years of large-scale clinical AI deployment provides negative confirmation that skill effects flow in one direction. Despite extensive evidence on AI improving performance while present, zero published studies demonstrate improvement that persists when AI is removed. This asymmetry—growing deskilling literature (Heudel et al. 2026, Natali et al. 2025, colonoscopy ADR drop, radiology/pathology automation bias) versus empty up-skilling literature—confirms the three failure modes operate without a compensating improvement mechanism.

View file

@ -1,18 +0,0 @@
---
type: claim
domain: health
description: Effect size g=0.90 for culturally adapted programs versus g=0.43 for standard apps, though 42 percent attrition persists even in adapted programs
confidence: experimental
source: JMIR 2024 e59939 meta-analysis
created: 2026-04-21
title: Culturally adapted digital mental health interventions achieve double the effect size for racial/ethnic minorities compared to standard apps
agent: vida
scope: causal
sourcer: JMIR 2024
challenges: ["the mental health supply gap is widening not closing because demand outpaces workforce growth and technology primarily serves the already-served rather than expanding access"]
related: ["the mental health supply gap is widening not closing because demand outpaces workforce growth and technology primarily serves the already-served rather than expanding access", "generic-digital-health-deployment-reproduces-existing-disparities-by-disproportionately-benefiting-higher-income-users-despite-nominal-technology-access-equity"]
---
# Culturally adapted digital mental health interventions achieve double the effect size for racial/ethnic minorities compared to standard apps
The JMIR 2024 meta-analysis found that culturally adapted digital mental health interventions achieve an effect size of g=0.90 for racial/ethnic minorities, compared to g=0.43 for standard apps—a 2.1x improvement. This suggests that the widely documented efficacy gap for digital mental health in minority populations is partly a cultural adaptation failure, not an inherent technology limitation. The 42 percent attrition rate even in culturally adapted programs indicates that engagement barriers remain substantial, but the efficacy signal for those who remain engaged is strong and clinically meaningful. Cultural adaptation likely addresses language, cultural norms around mental health disclosure, representation in content and imagery, and alignment with community-specific stressors. The finding challenges the interpretation that digital mental health 'doesn't work' for minority populations—it may work when designed for those populations, but most apps are not. This creates a design and deployment implication: generic digital mental health tools will continue to reproduce disparities, while culturally adapted interventions can achieve parity or better outcomes. The gap between g=0.90 and g=0.43 is large enough to represent the difference between clinically significant and marginal benefit.

View file

@ -1,18 +0,0 @@
---
type: claim
domain: health
description: UK cervical screening AI deployment consolidated labs from 45 to 8 centers, reducing training case volumes by 80-85 percent and structurally eliminating the apprenticeship infrastructure needed to acquire diagnostic skills
confidence: experimental
source: "Heudel et al. 2026, ESMO Real World Data & Digital Oncology scoping review"
created: 2026-04-21
title: Cytology lab consolidation creates never-skilling pathway through 80 percent training volume destruction
agent: vida
scope: structural
sourcer: Heudel PE, Crochet H, Filori Q, Bachelot T, Blay JY
supports: ["clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling"]
related: ["human-in-the-loop-clinical-ai-degrades-to-worse-than-ai-alone-because-physicians-both-de-skill-from-reliance-and-introduce-errors-when-overriding-correct-outputs", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment"]
---
# Cytology lab consolidation creates never-skilling pathway through 80 percent training volume destruction
Following UK cervical screening consolidation with AI-assisted reading, case volumes reduced 80-85% while labs consolidated from 45 to 8 centers. The authors identify this as having 'major implications for training capacity.' This represents a distinct mechanism from individual cognitive deskilling: the training system itself is structurally dismantled. When training volume is eliminated at this scale, clinicians never acquire the skill in the first place — the never-skilling pathway. This is worse than deskilling because it's irreversible without rebuilding training infrastructure. The mechanism is structural volume destruction, not individual cognitive dependency. Unlike deskilling (where physicians forget skills they once had) or misskilling (where AI prompts cause real-time errors), never-skilling operates at the institutional level by destroying the apprenticeship pipeline. This finding extends the existing KB's three-failure-mode framework (deskilling, misskilling, never-skilling) with the first documented case of structural never-skilling through lab consolidation.

View file

@ -11,8 +11,12 @@ attribution:
sourcer:
  - handle: "adepoju-et-al."
    context: "Adepoju et al. 2024, PMC11450565"
-related: ["Tailored digital health interventions achieve clinically significant systolic BP reductions at 12 months in US populations experiencing health disparities, but the effect is conditional on design specificity for these populations rather than generic deployment", "Rural food-insecure populations enrolled in food assistance interventions at 81 percent versus 53 percent in urban settings, suggesting rural populations may be more receptive to food-based health interventions due to more severe baseline food access constraints", "generic-digital-health-deployment-reproduces-existing-disparities-by-disproportionately-benefiting-higher-income-users-despite-nominal-technology-access-equity"]
-reweave_edges: ["Tailored digital health interventions achieve clinically significant systolic BP reductions at 12 months in US populations experiencing health disparities, but the effect is conditional on design specificity for these populations rather than generic deployment|related|2026-04-07", "Rural food-insecure populations enrolled in food assistance interventions at 81 percent versus 53 percent in urban settings, suggesting rural populations may be more receptive to food-based health interventions due to more severe baseline food access constraints|related|2026-04-17"]
+related:
+- Tailored digital health interventions achieve clinically significant systolic BP reductions at 12 months in US populations experiencing health disparities, but the effect is conditional on design specificity for these populations rather than generic deployment
+- Rural food-insecure populations enrolled in food assistance interventions at 81 percent versus 53 percent in urban settings, suggesting rural populations may be more receptive to food-based health interventions due to more severe baseline food access constraints
+reweave_edges:
+- Tailored digital health interventions achieve clinically significant systolic BP reductions at 12 months in US populations experiencing health disparities, but the effect is conditional on design specificity for these populations rather than generic deployment|related|2026-04-07
+- Rural food-insecure populations enrolled in food assistance interventions at 81 percent versus 53 percent in urban settings, suggesting rural populations may be more receptive to food-based health interventions due to more severe baseline food access constraints|related|2026-04-17
---
# Generic digital health deployment reproduces existing disparities by disproportionately benefiting higher-income, higher-education users despite nominal technology access equity, because health literacy and navigation barriers concentrate digital health benefits upward
@ -28,16 +32,3 @@ Relevant Notes:
Topics:
- [[_map]]
-## Extending Evidence
-**Source:** JMIR 2024 e59939
-FQHCs adopting telemental health showed 5-7 percent increase in visit rates among Medicaid and low-income groups, demonstrating that institutional deployment context matters. However, standalone apps (BetterHelp, Headspace, Calm) cost $260-400/month with no Medicaid coverage and predominantly serve insured/higher-income/younger/White users. Text therapy (Talkspace, BetterHelp messaging) costs $65-100/week with virtually no Medicaid coverage in any state. The disparity is structural: commercial apps optimize for paying customers, while safety-net institutions lack resources to deploy digital tools at scale.
-## Extending Evidence
-**Source:** npj Digital Medicine 2025; Lancet Digital Health 2025
-Mental health app attrition mechanisms are structurally inequitable: limited digital literacy (structural barrier for underserved), privacy concerns (higher in marginalized populations), lack of cultural/linguistic adaptation for non-English speakers, and poor usability that assumes technical sophistication. Even in best-case RCT conditions with motivated participants, 64% attrition suggests real-world underserved populations would face substantially higher dropout rates, creating a selection effect where apps work only for the already-advantaged completer minority.
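The `reweave_edges` entries in the frontmatter above pack each graph edge into a single `target|relation|YYYY-MM-DD` string. A minimal parsing sketch (the class and field names here are illustrative assumptions, not part of the actual pipeline):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ReweaveEdge:
    target: str    # title of the linked claim note
    relation: str  # "related", "supports", or "challenges" in these files
    woven: date    # date the edge was added

def parse_reweave_edge(entry: str) -> ReweaveEdge:
    # Claim titles contain spaces but no '|' in these files, so splitting
    # from the right keeps the last two fields unambiguous.
    target, relation, woven = entry.rsplit("|", 2)
    return ReweaveEdge(target, relation, date.fromisoformat(woven))

edge = parse_reweave_edge(
    "Rural food-insecure populations enrolled in food assistance interventions "
    "at 81 percent versus 53 percent in urban settings, suggesting rural "
    "populations may be more receptive to food-based health interventions due "
    "to more severe baseline food access constraints|related|2026-04-17"
)
print(edge.relation, edge.woven)  # related 2026-04-17
```

Note that the commit's switch from YAML flow sequences to block sequences leaves these strings byte-identical; only the list syntax around them changes.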

View file

@ -10,18 +10,18 @@ agent: vida
scope: causal
sourcer: Tzang et al. (Lancet eClinicalMedicine)
related_claims: ["[[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]", "[[SDOH interventions show strong ROI but adoption stalls because Z-code documentation remains below 3 percent and no operational infrastructure connects screening to action]]"]
-related: ["GLP-1 receptor agonists produce nutritional deficiencies in 12-14 percent of users within 6-12 months requiring monitoring infrastructure current prescribing lacks", "glp-1-receptor-agonists-require-continuous-treatment-because-metabolic-benefits-reverse-within-28-52-weeks-of-discontinuation", "semaglutide-outperforms-tirzepatide-cardiovascular-outcomes-despite-inferior-weight-loss-suggesting-glp1r-specific-cardiac-mechanism", "semaglutide-outperforms-tirzepatide-cardiovascular-outcomes-despite-inferior-weight-loss", "comprehensive-behavioral-wraparound-enables-durable-weight-maintenance-post-glp1-cessation", "glp1-receptor-agonists-provide-cardiovascular-benefits-through-weight-independent-mechanisms"]
-reweave_edges: ["GLP-1 receptor agonists produce nutritional deficiencies in 12-14 percent of users within 6-12 months requiring monitoring infrastructure current prescribing lacks|related|2026-04-09", "GLP-1 therapy requires continuous nutritional monitoring infrastructure but 92 percent of patients receive no dietitian support creating a care gap that widens as adoption scales|supports|2026-04-12", "Comprehensive behavioral wraparound may enable durable weight maintenance post-GLP-1 cessation, challenging the unconditional continuous-delivery requirement|challenges|2026-04-14"]
-supports: ["GLP-1 therapy requires continuous nutritional monitoring infrastructure but 92 percent of patients receive no dietitian support creating a care gap that widens as adoption scales"]
-challenges: ["Comprehensive behavioral wraparound may enable durable weight maintenance post-GLP-1 cessation, challenging the unconditional continuous-delivery requirement"]
+related:
+- GLP-1 receptor agonists produce nutritional deficiencies in 12-14 percent of users within 6-12 months requiring monitoring infrastructure current prescribing lacks
+reweave_edges:
+- GLP-1 receptor agonists produce nutritional deficiencies in 12-14 percent of users within 6-12 months requiring monitoring infrastructure current prescribing lacks|related|2026-04-09
+- GLP-1 therapy requires continuous nutritional monitoring infrastructure but 92 percent of patients receive no dietitian support creating a care gap that widens as adoption scales|supports|2026-04-12
+- Comprehensive behavioral wraparound may enable durable weight maintenance post-GLP-1 cessation, challenging the unconditional continuous-delivery requirement|challenges|2026-04-14
+supports:
+- GLP-1 therapy requires continuous nutritional monitoring infrastructure but 92 percent of patients receive no dietitian support creating a care gap that widens as adoption scales
+challenges:
+- Comprehensive behavioral wraparound may enable durable weight maintenance post-GLP-1 cessation, challenging the unconditional continuous-delivery requirement
---
# GLP-1 receptor agonists require continuous treatment because metabolic benefits reverse within 28-52 weeks of discontinuation
Meta-analysis of 18 randomized controlled trials (n=3,771) demonstrates that GLP-1 receptor agonist benefits require continuous treatment. After discontinuation, mean weight gain was 5.63 kg, with 40%+ of semaglutide-induced weight loss regained within 28 weeks and 50%+ of tirzepatide loss regained within 52 weeks. Nonlinear meta-regression predicts return to pre-treatment weight levels within <2 years. Critically, the rebound extends beyond weight: waist circumference, BMI, systolic blood pressure, HbA1c, fasting plasma glucose, cholesterol, and blood pressure all deteriorate post-discontinuation. STEP-10 and SURMOUNT-4 trials confirmed substantial weight regain, glycemic control deterioration, and reversal of lipid/blood pressure improvements.
While individualized dose-tapering can limit (but not prevent) rebound, no reliable long-term strategy for weight management after cessation exists. This continuous-treatment dependency means GLP-1 efficacy at the population level requires permanent access infrastructure, not just drug availability. Coverage gaps of 3-6 months, common under Medicaid redetermination cycles, can fully reverse therapeutic benefits that took months to achieve.
## Supporting Evidence
**Source:** WHO December 2025 guideline conditional framing
WHO's conditional recommendation acknowledges 'limited long-term evidence' and 'durability of effects unclear' as reasons for not issuing a strong recommendation. The guideline's caution about discontinuation effects aligns with the 28-52 week reversal timeline documented in clinical trials.

View file

@ -1,18 +0,0 @@
---
type: claim
domain: health
description: Coverage expansion does not translate to access when provider participation follows existing structural inequities
confidence: experimental
source: JMIR 2024 e59939; ASPE/HHS Medicaid telehealth trends 2019-2021
created: 2026-04-21
title: Medicaid-accepting facilities are 25 percent less likely to offer telehealth services, reproducing in-person access disparities in digital modalities
agent: vida
scope: structural
sourcer: JMIR 2024
supports: ["the mental health supply gap is widening not closing because demand outpaces workforce growth and technology primarily serves the already-served rather than expanding access", "generic-digital-health-deployment-reproduces-existing-disparities-by-disproportionately-benefiting-higher-income-users-despite-nominal-technology-access-equity"]
related: ["the mental health supply gap is widening not closing because demand outpaces workforce growth and technology primarily serves the already-served rather than expanding access", "generic-digital-health-deployment-reproduces-existing-disparities-by-disproportionately-benefiting-higher-income-users-despite-nominal-technology-access-equity"]
---
# Medicaid-accepting facilities are 25 percent less likely to offer telehealth services, reproducing in-person access disparities in digital modalities
The JMIR 2024 study found that facilities accepting Medicaid were approximately 25 percent less likely to offer telehealth services compared to non-Medicaid facilities. This creates a structural inversion where populations with the greatest need for telehealth access (Medicaid enrollees, who face transportation barriers, childcare constraints, and work inflexibility) are served by providers least likely to offer it. The mechanism is provider participation gap, not technology availability. While 46 state Medicaid programs now reimburse audio-only telehealth (up from near-zero pre-2020) and 37 states allow FQHCs to serve as distant-site providers, coverage mandates fail when provider adoption follows the same disparities as in-person care. The racial geography dimension reinforces this: facilities in counties with greater than 20 percent Black residents were 42 percent less likely to offer telehealth services compared to predominantly White counties. Medicaid/CHIP-enrolled children in counties with higher Black and Hispanic populations were less likely to receive telemental health services. This is not a technology access problem—it is a structural reproduction of existing healthcare inequities in digital form. The coverage-to-access gap demonstrates that policy enabling telehealth reimbursement is necessary but insufficient without addressing provider participation patterns.

View file

@ -10,16 +10,12 @@ agent: vida
scope: causal
sourcer: Journal of Experimental Orthopaedics / Wiley
related_claims: ["[[human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs]]"]
-related: ["AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "ai-assistance-produces-neurologically-grounded-irreversible-deskilling-through-prefrontal-disengagement-hippocampal-reduction-and-dopaminergic-reinforcement"]
-reweave_edges: ["AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|related|2026-04-14"]
+related:
+- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable
+reweave_edges:
+- AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable|related|2026-04-14
---
# Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling
Never-skilling is formally defined in peer-reviewed literature as distinct from and more dangerous than deskilling for three structural reasons. First, it is unrecoverable: deskilling allows clinicians to re-engage practice and rebuild atrophied skills, but never-skilling means foundational representations were never formed — there is nothing to rebuild from.
Second, it is detection-resistant: clinicians who never developed skills don't know what they're missing, and supervisors reviewing AI-assisted work cannot distinguish never-skilled from skilled performance. Third, it is prospectively invisible: the harm manifests 5-10 years after training when current trainees become independent practitioners, creating a delayed-onset safety crisis. The JEO review explicitly states 'never-skilling poses a greater long-term threat to medical education than deskilling' because early reliance on automation prevents acquisition of foundational clinical reasoning and procedural competencies. Supporting evidence includes findings that more than one-third of advanced medical students failed to identify erroneous LLM answers to clinical scenarios, and significant negative correlation between frequent AI tool use and critical thinking abilities. The concept has graduated from informal commentary to formal peer-reviewed definition across NEJM, JEO, and Lancet Digital Health, though no prospective RCT yet exists comparing AI-naive versus AI-exposed-from-training cohorts on downstream clinical performance.
## Supporting Evidence
**Source:** Heudel PE et al. 2026
Cytology lab consolidation demonstrates unrecoverability: 37 labs closed (45 to 8), 80-85% training volume eliminated. Reversing this requires rebuilding physical infrastructure, not just retraining individuals. This confirms never-skilling is structurally worse than deskilling because the recovery path requires institutional reconstruction.

View file

@ -10,17 +10,14 @@ agent: vida
scope: structural
sourcer: Artificial Intelligence Review (Springer Nature)
related_claims: ["[[clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling]]"]
-supports: ["Clinical AI introduces three distinct skill failure modes \u2014 deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) \u2014 requiring distinct mitigation strategies for each", "Never-skilling \u2014 the failure to acquire foundational clinical competencies because AI was present during training \u2014 poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling"]
-reweave_edges: ["Clinical AI introduces three distinct skill failure modes \u2014 deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) \u2014 requiring distinct mitigation strategies for each|supports|2026-04-12", "Never-skilling \u2014 the failure to acquire foundational clinical competencies because AI was present during training \u2014 poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14"]
-related: ["never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine"]
+supports:
+- Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each
+- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling
+reweave_edges:
+- Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors adopted as correct), and never-skilling (foundational competence never acquired) — requiring distinct mitigation strategies for each|supports|2026-04-12
+- Never-skilling — the failure to acquire foundational clinical competencies because AI was present during training — poses a detection-resistant, potentially unrecoverable threat to medical education that is structurally worse than deskilling|supports|2026-04-14
---
# Never-skilling in clinical AI is structurally invisible because it lacks a pre-AI baseline for comparison, requiring prospective competency assessment before AI exposure to detect
Never-skilling presents a unique detection challenge that distinguishes it from deskilling. When a physician loses existing skills through disuse (deskilling), the degradation is detectable through comparison to their previous baseline performance. But when a trainee never acquires foundational competencies because AI was present from the start of their education, there is no baseline to compare against.
A junior radiologist who cannot detect AI errors looks identical whether they (a) never learned the underlying skill or (b) learned it and then lost it through disuse — but the remediation is fundamentally different. The review documents that junior radiologists are far less likely than senior colleagues to detect AI errors, but this cannot be attributed to deskilling because they never had the pre-AI skill level to lose. This creates a structural invisibility problem: never-skilling can only be detected through prospective competency assessment before AI exposure, or through comparison to control cohorts trained without AI. The paper argues this requires curriculum redesign with explicit competency development milestones before AI tools are introduced, rather than the current practice of integrating AI throughout training. This has specific implications for medical education policy: if AI is introduced too early in training, the resulting competency gaps may be undetectable until a system-wide failure reveals them.
## Extending Evidence
**Source:** PubMed systematic search, April 21, 2026
The absence of prospective studies comparing medical students/residents trained WITH AI versus WITHOUT AI is particularly striking given the scale of deployment. This is the exact study design that would detect never-skilling, yet not one such study exists in peer-reviewed literature as of April 2026. The null result suggests either: (1) the medical education research community has not recognized never-skilling as a research priority despite widespread AI integration in training environments, or (2) institutions are avoiding the question because the answer would be operationally inconvenient. Either explanation confirms never-skilling's structural invisibility—it requires intentional prospective design to detect, and that design is not happening.


@ -1,25 +0,0 @@
---
type: claim
domain: health
description: Comprehensive scoping review through August 2025 found consistent evidence of AI-induced deskilling across four specialties but zero studies demonstrating lasting skill improvement after AI exposure
confidence: likely
source: Heudel et al. 2026 scoping review, literature through August 2025
created: 2026-04-21
title: No peer-reviewed evidence of durable physician upskilling from AI exposure as of mid-2026
agent: vida
scope: correlational
sourcer: Heudel PE, Crochet H, Filori Q, Bachelot T, Blay JY
supports: ["human-in-the-loop-clinical-ai-degrades-to-worse-than-ai-alone-because-physicians-both-de-skill-from-reliance-and-introduce-errors-when-overriding-correct-outputs", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine"]
related: ["human-in-the-loop-clinical-ai-degrades-to-worse-than-ai-alone-because-physicians-both-de-skill-from-reliance-and-introduce-errors-when-overriding-correct-outputs", "ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine", "automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling", "never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling", "never-skilling-is-structurally-invisible-because-it-lacks-pre-ai-baseline-requiring-prospective-competency-assessment", "human-in-the-loop clinical AI degrades to worse-than-AI-alone because physicians both de-skill from reliance and introduce errors when overriding correct outputs"]
---
# No peer-reviewed evidence of durable physician upskilling from AI exposure as of mid-2026
The Heudel et al. scoping review examined literature through August 2025 across colonoscopy, radiology, pathology, and cytology. Authors conclude: 'empirical studies consistently demonstrate that AI can inadvertently impair physicians' performance.' The review found NO opposing evidence — no studies showed durable improvement in physician skills after AI exposure. This null result is itself significant: after 5+ years of clinical AI deployment, there is no peer-reviewed evidence of durable skill improvement. The authors searched for counter-evidence and found none. This creates a lopsided evidence base: strong consistent evidence of deskilling (colonoscopy ADR dropped 6.0 percentage points when AI removed; radiology false-positive recalls increased 12% from erroneous AI prompts; pathology showed 30%+ diagnosis reversals from incorrect AI suggestions) versus zero evidence of lasting upskilling. The absence of upskilling evidence is notable because it contradicts the common assumption that AI 'calibrates' or 'teaches' clinicians. If such effects existed and were durable, they should be detectable in the literature by now.
## Supporting Evidence
**Source:** Savardi et al., Insights into Imaging, PMC11780016, Jan 2025
Savardi et al. pilot study (n=8, single session) showed performance improvement only while AI was present. No washout condition or follow-up measurement without AI was conducted, so the study cannot demonstrate durable up-skilling. This adds to the evidence base that concurrent AI performance gains do not translate to retained skill after AI removal.


@ -1,18 +0,0 @@
---
type: claim
domain: health
description: PRAIM study's design allowed radiologists to voluntarily choose whether to consult AI after making their own primary read, potentially interrupting the deskilling pathway by preserving active clinical judgment for every case
confidence: experimental
source: PRAIM Study, Nature Medicine, January 2025
created: 2026-04-21
title: Optional-use AI deployment where clinicians form independent judgment before consulting AI may structurally prevent automation bias and deskilling mechanisms observed in mandatory-use systems
agent: vida
scope: structural
sourcer: Nature Medicine
challenges: ["human-in-the-loop-clinical-ai-degrades-to-worse-than-AI-alone-because-physicians-both-de-skill-from-reliance-and-introduce-errors-when-overriding-correct-outputs", "automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output"]
related: ["human-in-the-loop-clinical-ai-degrades-to-worse-than-AI-alone-because-physicians-both-de-skill-from-reliance-and-introduce-errors-when-overriding-correct-outputs", "automation-bias-in-medicine-increases-false-positives-through-anchoring-on-ai-output", "clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling"]
---
# Optional-use AI deployment where clinicians form independent judgment before consulting AI may structurally prevent automation bias and deskilling mechanisms observed in mandatory-use systems
The PRAIM study deployed AI mammography screening across 12 German sites with 463,094 women and 119 radiologists using an optional-use design: radiologists made their own primary read first, then voluntarily chose whether to consult AI. This design achieved a 17.6% increase in cancer detection (6.7 vs 5.7 per 1,000 screened) with no increase in recall rate. The structural argument is that optional-use deployment may prevent deskilling because it requires radiologists to exercise active clinical judgment for EVERY case regardless of AI use, positioning AI as a second opinion rather than a primary filter. This contrasts with mandatory or default-on AI deployment where clinicians may passively wait for AI output before forming their own judgment—the mechanism for automation bias and deskilling documented in other studies. The PRAIM study did not formally measure skill degradation, so this remains a plausible structural hypothesis rather than proven effect. The design principle is: if automation bias occurs when clinicians defer judgment to AI, then requiring independent judgment formation before AI consultation should interrupt that pathway.


@@ -1,13 +1,14 @@
 ---
+description: SAMHSA projects a 250K professional shortage while nearly half the US lives in mental health HPSAs and teletherapy has not improved access for high-deprivation populations creating a two-tier system where technology helps the insured while underserved populations fall further behind
 type: claim
 domain: health
-description: SAMHSA projects a 250K professional shortage while nearly half the US lives in mental health HPSAs and teletherapy has not improved access for high-deprivation populations creating a two-tier system where technology helps the insured while underserved populations fall further behind
-confidence: likely
-source: SAMHSA workforce projections 2025; KFF mental health HPSA data; PNAS Nexus telehealth equity analysis 2025; National Council workforce survey; Motivo Health licensure gap data 2025
 created: 2026-02-17
-supports: ["generic digital health deployment reproduces existing disparities by disproportionately benefiting higher income users despite nominal technology access equity"]
-reweave_edges: ["generic digital health deployment reproduces existing disparities by disproportionately benefiting higher income users despite nominal technology access equity|supports|2026-04-03"]
-related: ["the mental health supply gap is widening not closing because demand outpaces workforce growth and technology primarily serves the already-served rather than expanding access"]
+source: "SAMHSA workforce projections 2025; KFF mental health HPSA data; PNAS Nexus telehealth equity analysis 2025; National Council workforce survey; Motivo Health licensure gap data 2025"
+confidence: likely
+supports:
+- generic digital health deployment reproduces existing disparities by disproportionately benefiting higher income users despite nominal technology access equity
+reweave_edges:
+- generic digital health deployment reproduces existing disparities by disproportionately benefiting higher income users despite nominal technology access equity|supports|2026-04-03
 ---
 # the mental health supply gap is widening not closing because demand outpaces workforce growth and technology primarily serves the already-served rather than expanding access
@@ -39,10 +40,3 @@ Relevant Notes:
 Topics:
 - health and wellness
-## Extending Evidence
-**Source:** JMIR 2024 e59939; ASPE/HHS Medicaid telehealth trends 2019-2021
-Medicaid-accepting facilities are 25 percent less likely to offer telehealth services than non-Medicaid facilities, and facilities in counties with >20 percent Black residents are 42 percent less likely to offer telehealth. This is the structural mechanism: provider participation in telehealth follows the same disparities as in-person care, reproducing access gaps in digital form despite coverage expansion (46 states now reimburse audio-only telehealth). The coverage-to-access gap demonstrates that policy enabling reimbursement is insufficient without addressing provider participation patterns.


@@ -10,16 +10,8 @@ agent: vida
 scope: structural
 sourcer: USPSTF
 related_claims: ["[[GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035]]", "[[value-based care transitions stall at the payment boundary because 60 percent of payments touch value metrics but only 14 percent bear full risk]]"]
-related: ["uspstf-glp1-policy-gap-leaves-aca-mandatory-coverage-dormant", "acc-2025-distinguishes-glp1-symptom-improvement-from-mortality-reduction-in-hfpef", "glp-1-population-mortality-impact-delayed-20-years-by-access-and-adherence-constraints", "glp1-year-one-persistence-doubled-2021-2024-supply-normalization", "glp1-access-follows-systematic-inversion-highest-burden-states-have-lowest-coverage-and-highest-income-relative-cost"]
 ---
 # The USPSTF's 2018 adult obesity B recommendation predates therapeutic-dose GLP-1 agonists and remains unupdated, leaving the ACA mandatory coverage mechanism dormant for the drug class most likely to change obesity outcomes
-The USPSTF's 2018 Grade B recommendation for adult obesity covers only intensive multicomponent behavioral interventions (≥12 sessions in year 1). While the 2018 review examined pharmacotherapy, it covered only orlistat, lower-dose liraglutide, phentermine-topiramate, naltrexone-bupropion, and lorcaserin—therapeutic-dose GLP-1 agonists (Wegovy/semaglutide 2.4mg, Zepbound/tirzepatide) were entirely absent from the evidence base as they did not exist at scale. The recommendation explicitly declined to recommend pharmacotherapy due to 'data lacking about maintenance of improvement after discontinuation.' As of April 2026, this 2018 recommendation remains operative. The USPSTF website flags adult obesity as 'being updated' but the redirect points toward cardiovascular prevention (diet/physical activity), not GLP-1 pharmacotherapy. No formal petition or nomination for GLP-1 pharmacotherapy review has been publicly announced. This matters because a new USPSTF A/B recommendation covering GLP-1 pharmacotherapy would trigger ACA Section 2713 mandatory coverage without cost-sharing for all non-grandfathered insurance plans—the most powerful single policy lever available, more comprehensive than any Medicaid state-by-state expansion. The clinical evidence base that could support an A/B rating (STEP trials, SURMOUNT trials, SELECT cardiovascular outcomes data) exists and is substantial. Yet the policy infrastructure has not caught up to the clinical evidence, and no advocacy organization has apparently filed a formal nomination to initiate the review process. This represents a striking policy gap: the most powerful available mechanism for mandating GLP-1 coverage sits unused despite strong supporting evidence.
+The USPSTF's 2018 Grade B recommendation for adult obesity covers only intensive multicomponent behavioral interventions (≥12 sessions in year 1).
+While the 2018 review examined pharmacotherapy, it covered only orlistat, lower-dose liraglutide, phentermine-topiramate, naltrexone-bupropion, and lorcaserin—therapeutic-dose GLP-1 agonists (Wegovy/semaglutide 2.4mg, Zepbound/tirzepatide) were entirely absent from the evidence base as they did not exist at scale. The recommendation explicitly declined to recommend pharmacotherapy due to 'data lacking about maintenance of improvement after discontinuation.' As of April 2026, this 2018 recommendation remains operative. The USPSTF website flags adult obesity as 'being updated' but the redirect points toward cardiovascular prevention (diet/physical activity), not GLP-1 pharmacotherapy. No formal petition or nomination for GLP-1 pharmacotherapy review has been publicly announced. This matters because a new USPSTF A/B recommendation covering GLP-1 pharmacotherapy would trigger ACA Section 2713 mandatory coverage without cost-sharing for all non-grandfathered insurance plans—the most powerful single policy lever available, more comprehensive than any Medicaid state-by-state expansion. The clinical evidence base that could support an A/B rating (STEP trials, SURMOUNT trials, SELECT cardiovascular outcomes data) exists and is substantial. Yet the policy infrastructure has not caught up to the clinical evidence, and no advocacy organization has apparently filed a formal nomination to initiate the review process. This represents a striking policy gap: the most powerful available mechanism for mandating GLP-1 coverage sits unused despite strong supporting evidence.
-## Extending Evidence
-**Source:** WHO December 2025 guideline, USPSTF 2018 recommendation
-WHO's December 2025 endorsement creates a documented timeline for the policy gap: the global health authority moved 7+ years after USPSTF's 2018 recommendation and 3+ years after semaglutide's obesity approval, while USPSTF has not initiated a review. If USPSTF began review now, final recommendation would likely arrive 2028-2030, creating a 10-12 year lag from initial evidence to US preventive coverage mandate.


@ -1,24 +0,0 @@
---
type: claim
domain: health
description: The global health authority with broadest mandate but no US enforcement power has endorsed GLP-1s for obesity while the US authority governing ACA preventive coverage mandates has not updated its pre-semaglutide guidance
confidence: proven
source: WHO December 2025 guideline, USPSTF 2018 recommendation
created: 2026-04-21
title: WHO endorsed GLP-1s for obesity treatment in December 2025 while USPSTF maintains its 2018 recommendation excluding pharmacotherapy creating the largest international-US preventive coverage policy gap in modern history
agent: vida
scope: structural
sourcer: WHO
supports: ["glp-1-access-structure-inverts-need-creating-equity-paradox"]
related: ["federal-budget-scoring-methodology-systematically-undervalues-preventive-interventions-because-10-year-window-excludes-long-term-savings", "uspstf-glp1-policy-gap-leaves-aca-mandatory-coverage-dormant", "glp-1-access-structure-inverts-need-creating-equity-paradox", "glp-1-population-mortality-impact-delayed-20-years-by-access-and-adherence-constraints", "acc-2025-distinguishes-glp1-symptom-improvement-from-mortality-reduction-in-hfpef", "glp1-year-one-persistence-doubled-2021-2024-supply-normalization", "GLP-1 receptor agonists are the largest therapeutic category launch in pharmaceutical history but their chronic use model makes the net cost impact inflationary through 2035"]
---
# WHO endorsed GLP-1s for obesity treatment in December 2025 while USPSTF maintains its 2018 recommendation excluding pharmacotherapy creating the largest international-US preventive coverage policy gap in modern history
On December 1, 2025, WHO issued a formal clinical guideline recommending GLP-1 receptor agonists (liraglutide, semaglutide) and GIP/GLP-1 dual agonists (tirzepatide) as a long-term treatment option for obesity in adults. This was designated as a 'conditional recommendation, moderate-certainty evidence' acknowledging limited long-term data but sufficient evidence for endorsement. WHO also added GLP-1s to its Essential Medicines List in September 2025 for type 2 diabetes management, signaling directional intent toward obesity coverage.
Meanwhile, USPSTF's most recent obesity recommendation dates to 2018 and explicitly recommends intensive behavioral interventions while excluding pharmacotherapy. USPSTF governs ACA preventive coverage mandates under Section 2713, meaning its recommendations trigger mandatory coverage without cost-sharing. The WHO guideline creates no such mandate in the US.
This creates an unusual structural asymmetry: patients in high-income countries with WHO-aligned guidelines (Canada, UK, Australia) may access covered GLP-1 obesity treatment, while US patients cannot get ACA-mandated coverage without comorbidities like diabetes or cardiovascular disease. The gap is particularly striking because WHO moved unusually fast (typically 3-5 years from evidence to guideline) while USPSTF operates on a slower review cycle. If USPSTF began review now, a final recommendation covering GLP-1 pharmacotherapy would likely not arrive before 2028-2030.
The WHO's 'conditional' framing (versus 'strong' recommendation) acknowledges cost-effectiveness uncertainty for resource-constrained systems, limited long-term evidence (most trials under 2 years), and unclear durability of effects. WHO explicitly positioned GLP-1s as 'ONE component within a comprehensive approach requiring healthy diets, physical activity, professional support, and population-level policies' and stated that countries must 'consider local cost-effectiveness, budget impact, and ethical implications' before adoption. This framing is consistent with WHO's institutional mandate but does not diminish the policy gap: WHO has endorsed, USPSTF has not.


@@ -7,13 +7,10 @@ date: 2026-03-19
 domain: health
 secondary_domains: [ai-alignment]
 format: journal-article
-status: processed
-processed_by: vida
-processed_date: 2026-04-21
+status: unprocessed
 priority: high
 tags: [clinical-ai, deskilling, never-skilling, physician-skills, automation-bias, scoping-review]
 flagged_for_theseus: ["Clinical deskilling is domain-specific instance of general AI alignment failure; the cytology consolidation finding (80-85% training volume reduction) is the never-skilling pathway via structural destruction of training pipelines"]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,10 +7,9 @@ date: 2025-01-01
 domain: health
 secondary_domains: []
 format: government-report
-status: null-result
+status: unprocessed
 priority: high
 tags: [mental-health, workforce-shortage, rural-health, psychiatry, HPSA, access]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,10 +7,9 @@ date: 2026-03-02
 domain: health
 secondary_domains: []
 format: journal-article
-status: null-result
+status: unprocessed
 priority: high
 tags: [mental-health, telehealth, access-equity, rural-health, treatment-gap, mental-health-workforce]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,10 +7,9 @@ date: 2025-01-01
 domain: health
 secondary_domains: []
 format: policy-brief
-status: null-result
+status: unprocessed
 priority: medium
 tags: [mental-health, Medicaid, treatment-gap, access-equity, insurance-coverage]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,10 +7,9 @@ date: 2025-01-01
 domain: health
 secondary_domains: []
 format: journal-articles
-status: null-result
+status: unprocessed
 priority: medium
 tags: [mental-health, workforce-shortage, treatment-gap, psychiatric-nursing, access]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,10 +7,9 @@ date: 2025-02-01
 domain: health
 secondary_domains: []
 format: journal-article
-status: null-result
+status: unprocessed
 priority: high
 tags: [telehealth, mental-health, access-equity, deprivation, disparity, primary-care, psychiatry]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,12 +7,9 @@ date: 2025-01-01
 domain: health
 secondary_domains: [ai-alignment]
 format: journal-article
-status: processed
-processed_by: vida
-processed_date: 2026-04-21
+status: unprocessed
 priority: medium
 tags: [clinical-ai, mammography, radiology, detection, optional-use, deskilling-mitigation, real-world-evidence]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,12 +7,9 @@ date: 2026-04-21
 domain: health
 secondary_domains: [ai-alignment]
 format: null-result
-status: processed
-processed_by: vida
-processed_date: 2026-04-21
+status: unprocessed
 priority: medium
 tags: [clinical-ai, deskilling, never-skilling, null-result, physician-skills, calibration]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,12 +7,9 @@ date: 2025-01-29
 domain: health
 secondary_domains: [ai-alignment]
 format: journal-article
-status: processed
-processed_by: vida
-processed_date: 2026-04-21
+status: unprocessed
 priority: medium
 tags: [clinical-ai, deskilling, automation-bias, radiology, error-resilience, medical-education]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,12 +7,9 @@ date: 2025-01-01
 domain: health
 secondary_domains: []
 format: meta-analyses
-status: processed
-processed_by: vida
-processed_date: 2026-04-21
+status: unprocessed
 priority: high
 tags: [mental-health, digital-therapeutics, smartphone-apps, efficacy, attrition, access-equity, behavioral-health]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,12 +7,9 @@ date: 2025-07-01
 domain: health
 secondary_domains: []
 format: journal-article
-status: processed
-processed_by: vida
-processed_date: 2026-04-21
+status: unprocessed
 priority: medium
 tags: [telehealth, mental-health, access-equity, rural-health, disparity, Medicare]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content


@@ -7,12 +7,9 @@ date: 2025-12-01
 domain: health
 secondary_domains: []
 format: guideline
-status: processed
-processed_by: vida
-processed_date: 2026-04-21
+status: unprocessed
 priority: medium
 tags: [GLP-1, WHO, USPSTF, obesity, guideline, coverage-policy, access]
-extraction_model: "anthropic/claude-sonnet-4.5"
 ---
 ## Content