teleo-codex/inbox/queue/2026-04-21-smartphone-mental-health-apps-efficacy-attrition.md
Teleo Agents f0d6522cb4 vida: research session 2026-04-21 — 15 sources archived
Pentagon-Agent: Vida <HEADLESS>
2026-04-21 04:35:44 +00:00


---
type: source
title: "Smartphone mental health apps show modest efficacy (g=0.43) but 64% attrition in motivated samples — real-world population reach is severely limited by engagement failure"
author: "Multiple sources: Lancet Digital Health 2025; npj Digital Medicine 2025 meta-analysis (92 RCTs, n=16,728)"
url: https://www.thelancet.com/journals/landig/article/PIIS2589-7500(25)00105-0/fulltext
date: 2025-01-01
domain: health
secondary_domains: []
format: meta-analyses
status: unprocessed
priority: high
tags: [mental-health, digital-therapeutics, smartphone-apps, efficacy, attrition, access-equity, behavioral-health]
---
## Content
**Source 1:** "Efficacy of standalone smartphone apps for mental health: an updated systematic review and meta-analysis." The Lancet Digital Health. 2025. DOI: 10.1016/S2589-7500(25)00105-0.
Key findings:
- Depression apps: Hedges' g = 0.45 (small-to-moderate effect)
- Anxiety apps: Hedges' g = 0.35 (small effect)
- PTSD apps: Hedges' g = 0.15 (minimal effect)
**Source 2:** "A meta-analysis of persuasive design, engagement, and efficacy in 92 RCTs of mental health apps." npj Digital Medicine. 2025.
- 92 RCTs, 16,728 participants
- Apps significantly improved clinical outcomes vs. controls: g = 0.43
**Critical engagement/attrition data (npj Digital Medicine; also "Engagement and attrition in digital mental health" npj Digital Medicine 2025):**
- Attrition rates up to **64% in motivated, self-selected RCT participants** — the best-case scenario for engagement
- Retention: 26.15% at post-test; 18.34% at follow-up in some studies
- 1 in 4 participants drop out prematurely even in structured trial conditions
- Retention trajectory: ~90% at week 1 → ~50% by week 8
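The reported trajectory (~90% at week 1, ~50% at week 8) can be sketched as a constant weekly dropout process. This is an illustrative assumption, not a model from either source; the function name and the week-12 extrapolation are mine:

```python
def implied_weekly_retention(r_start: float, r_end: float, weeks: int) -> float:
    """Constant weekly retention factor consistent with two observed points."""
    return (r_end / r_start) ** (1 / weeks)

# ~90% retained at week 1, ~50% at week 8: 7 weekly steps in between.
r = implied_weekly_retention(0.90, 0.50, 7)
print(f"implied weekly retention: {r:.3f}")   # ~0.92, i.e. ~8% of remaining users lost each week

# Extrapolating the same rate out to week 12 (purely illustrative):
week12 = 0.90 * r ** 11
print(f"extrapolated week-12 retention: {week12:.2f}")
```

Under this geometric assumption the curve keeps decaying past the trial window, which is why trial-end retention likely overstates steady-state real-world engagement.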
**Factors contributing to poor engagement:**
- Poor usability and lack of user-centric design
- Privacy concerns
- Skepticism about effectiveness
- Limited digital literacy (structural barrier for underserved populations)
- Lack of personalization / one-size-fits-all approaches
- No cultural or linguistic adaptation for non-English speakers
**Effect size interpretation:**
- g = 0.43 (apps overall) compares favorably to some face-to-face interventions but is lower than psychotherapy effect sizes (typically g = 0.8-1.0)
- Critically: effect sizes in RCTs represent best-case conditions with motivated, self-selected, technically literate participants who complete the program. Real-world population-level effects are substantially lower due to 64% attrition and lower engagement in non-trial conditions.
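The dilution argument above can be made concrete with back-of-envelope arithmetic. Assuming (loudly: this is an assumption for illustration, not a figure from either source) that dropouts receive roughly zero benefit, an intention-to-treat-style population effect is the completer effect scaled by the completion rate:

```python
g_completers = 0.43    # meta-analytic effect among retained participants (npj Digital Medicine 2025)
completion = 1 - 0.64  # 36% completion under the reported worst-case 64% attrition

# Crude dilution: dropouts assumed to gain ~zero benefit (illustrative assumption).
g_population = completion * g_completers
print(f"population-averaged effect: {g_population:.2f}")  # ~0.15
```

Even this best-case-trial arithmetic lands below conventional "small effect" territory, before accounting for the additional attrition expected in non-self-selected, real-world populations.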
**Important null finding (npj Digital Medicine 2025):**
"Effect sizes of depression, anxiety, sleep problems, and PTSD apps were not significantly moderated by guidance, engagement, or dropout rates" — suggesting that the small proportion who complete apps benefit, but engagement doesn't predict who those completers will be.
## Agent Notes
**Why this matters:** This is the most comprehensive recent evidence base on whether smartphone mental health apps can close the mental health supply gap. The finding is nuanced: apps DO work (g=0.43 is a real effect) but with 64% attrition even in motivated samples, the population-level reach is severely limited. For underserved populations (lower digital literacy, privacy concerns, limited internet access), attrition would likely be substantially higher than the trial sample.
This directly addresses the KB claim that technology "primarily serves the already-served": the 64% attrition in motivated, self-selected RCT participants implies that in real-world conditions with non-self-selected users (including underserved), completion rates would be far lower. Apps that work for the 36% who complete them are still not solving population-level access.
**What surprised me:** The efficacy signal is real — g=0.43 is not trivial for a standalone smartphone app. But the finding that effect sizes are NOT moderated by engagement or dropout rates is counterintuitive: it suggests the benefit accrues to whoever completes, regardless of what drives completion. This creates a selection problem: we can't identify in advance who will complete and benefit.
**What I expected but didn't find:** Evidence that any specific app modality (text-based, CBT-structured, mindfulness) works better for underserved populations specifically. The literature is almost entirely in trial conditions with self-selected participants — essentially no equity-stratified efficacy data exists.
**KB connections:**
- [[the mental health supply gap is widening not closing because demand outpaces workforce growth and technology primarily serves the already-served rather than expanding access]] — direct evidence: apps work but only for the self-selected completer minority; underserved populations face additional attrition barriers
- [[prescription digital therapeutics failed as a business model because FDA clearance creates regulatory cost without the pricing power that justifies it for near-zero marginal cost software]] — the Pear/Akili collapse was partly about this engagement problem: even if an app is clinically effective, population-level impact requires engagement that DTx couldn't achieve
- [[social isolation costs Medicare 7 billion annually and carries mortality risk equivalent to smoking 15 cigarettes per day making loneliness a clinical condition not a personal problem]] — the people who most need mental health apps (socially isolated, severe mental illness) are least likely to engage with them
**Extraction hints:**
- "Mental health smartphone apps show small-to-moderate efficacy (Hedges' g = 0.43) in motivated, self-selected RCT participants but 64% attrition undermines population-level impact" — this is extractable as a claim that reframes the digital mental health access question
- The equity gap is implied but not directly measured: digital literacy barriers, privacy concerns, and cultural/linguistic adaptation gaps mean underserved populations face higher attrition than the already-high RCT rates
- Consider: should this create a divergence with an optimistic "apps can close the treatment gap" framing? The Lancet Digital Health 2025 shows efficacy; the attrition data shows reach failure. These are both true simultaneously.
## Curator Notes
PRIMARY CONNECTION: [[the mental health supply gap is widening not closing because demand outpaces workforce growth and technology primarily serves the already-served rather than expanding access]]
WHY ARCHIVED: The 64% attrition in motivated RCT participants is the key mechanism explaining why smartphone apps, despite real efficacy, fail to close the treatment gap at population scale. This is the strongest recent evidence for the structural limitation of digital mental health.
EXTRACTION HINT: The two-part finding is extractable together: (1) apps work at the individual level (g=0.43); (2) 64% attrition in best-case conditions limits population reach. The combination explains why efficacy doesn't translate to access expansion.