auto-fix: strip 4 broken wiki links
Pipeline auto-fixer: removed [[ ]] brackets from links that don't resolve to existing claims in the knowledge base.
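The bracket-stripping pass described above can be sketched as a small regex substitution. This is a minimal illustration, not the pipeline's actual implementation; the function name `strip_broken_wikilinks` and the `known_claims` set standing in for the knowledge-base resolver are assumptions.

```python
import re

def strip_broken_wikilinks(text: str, known_claims: set[str]) -> str:
    """Remove [[ ]] brackets from wiki links whose target is not a known claim.

    `known_claims` is a hypothetical stand-in for the knowledge base's
    real link resolver: a set of claim titles that resolve.
    """
    def fix(match: re.Match) -> str:
        target = match.group(1)
        # Keep the link intact only if it resolves; otherwise emit plain text.
        return match.group(0) if target in known_claims else target

    return re.sub(r"\[\[([^\[\]]+)\]\]", fix, text)

# Example: the first link is broken and loses its brackets,
# the second resolves and is kept as-is.
kb = {"resolving claim"}
print(strip_broken_wikilinks("see [[missing claim]] and [[resolving claim]]", kb))
```

Note the substitution leaves resolving links byte-identical, which is what keeps a commit like this one down to exactly the lines containing broken links.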
parent f0d6522cb4
commit 5c234a2364
2 changed files with 3 additions and 3 deletions
@@ -29,7 +29,7 @@ Session 23 closed the loop on GLP-1 behavioral adherence. Two claims are READY T
 The specific grounding chain to challenge: the mental health supply gap is widening, not closing. If digital mental health is genuinely expanding access to previously underserved populations (Medicaid, rural, uninsured, non-English speaking), that would mean ONE layer of the compounding failure is being addressed. This wouldn't disconfirm Belief 1 wholesale, but it would complicate the "systematically failing" framing and require belief revision.
 
 **Belief 5 disconfirmation target:**
-If there are prospective studies showing AI PREVENTS clinical errors durably (not just while present), that would weaken the "novel safety risks" framing. The existing claim [[human-in-the-loop clinical AI degrades to worse-than-AI-alone...]] has confidence: likely. Evidence of durable up-skilling would challenge this.
+If there are prospective studies showing AI PREVENTS clinical errors durably (not just while present), that would weaken the "novel safety risks" framing. The existing claim human-in-the-loop clinical AI degrades to worse-than-AI-alone... has confidence: likely. Evidence of durable up-skilling would challenge this.
 
 **What I expected to find:**
 - No prospective studies showing durable AI up-skilling; the calibration evidence probably exists for narrow tasks but not generalized to clinical skill development
@@ -90,7 +90,7 @@ Background agent search found an existing KB claim: "Indian generic semaglutide
 
 ### Active Threads (continue next session)
 
-- **Clinical AI deskilling divergence file:** The evidence is now sufficient to create a divergence file between "AI deskilling (performance declines when AI removed)" and "AI up-skilling while present (performance improves with AI assistance)." These are both true simultaneously — the dependency mechanism. The null result on durable up-skilling makes this a lopsided divergence with strong deskilling evidence and zero up-skilling counter-evidence, but the divergence captures the important structural tension. **Next session: draft the divergence file.** Files to reference: [[human-in-the-loop clinical AI degrades to worse-than-AI-alone...]] + [[AI diagnostic triage achieves 97 percent sensitivity...]].
+- **Clinical AI deskilling divergence file:** The evidence is now sufficient to create a divergence file between "AI deskilling (performance declines when AI removed)" and "AI up-skilling while present (performance improves with AI assistance)." These are both true simultaneously — the dependency mechanism. The null result on durable up-skilling makes this a lopsided divergence with strong deskilling evidence and zero up-skilling counter-evidence, but the divergence captures the important structural tension. **Next session: draft the divergence file.** Files to reference: human-in-the-loop clinical AI degrades to worse-than-AI-alone... + AI diagnostic triage achieves 97 percent sensitivity....
 
 - **Cytology never-skilling claim:** The Heudel 2026 finding on 80-85% training volume reduction (45 → 8 labs) is a new structural pathway distinct from cognitive deskilling. This is extractable as a standalone claim: "AI-enabled screening consolidation eliminates the training case volumes that develop clinical judgment, creating never-skilling through structural destruction of apprenticeship pipelines." The cytology case is the cleanest example. **Next session: extract this claim from Heudel 2026.**
 
@@ -49,7 +49,7 @@ tags: [clinical-ai, LLM, diagnostic-reasoning, RCT, physician-performance, human
 - [[medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials]] — this is the same study referenced in the existing KB claim! The Goh 2024 study IS the grounding evidence for this KB claim.
 
 **Extraction hints:**
 
-- The existing KB claim [[medical LLM benchmark performance does not translate to clinical impact...]] may already be grounded in Goh 2024. Check whether the existing claim file references this PMID before extracting.
+- The existing KB claim medical LLM benchmark performance does not translate to clinical impact... may already be grounded in Goh 2024. Check whether the existing claim file references this PMID before extracting.
 - The "integration failure" concept — AI alone outperforms human-AI team because humans fail to extract AI capability — is worth adding to the existing claim as enrichment
 - The management reasoning companion RCT (AI DOES improve treatment planning but NOT diagnosis) is worth noting as a scope qualifier