extract: 2025-11-01-jmir-knowledge-practice-gap-39-benchmarks-systematic-review
Pentagon-Agent: Epimetheus <3D35839A-7722-4740-B93D-51157F7D5E70>
This commit is contained in:
parent
f3db6b874f
commit
b41a80ab0e
3 changed files with 54 additions and 1 deletions
@@ -35,6 +35,12 @@ OpenEvidence's medRxiv preprint (November 2025) showed 24% accuracy for relevant
ARISE report identifies specific failure modes: real-world performance 'breaks down when systems must manage uncertainty, incomplete information, or multi-step workflows.' This provides mechanistic detail for why benchmark performance doesn't translate — benchmarks test pattern recognition on complete data while clinical care requires uncertainty management.
### Additional Evidence (extend)

*Source: [[2025-11-01-jmir-knowledge-practice-gap-39-benchmarks-systematic-review]] | Added: 2026-03-24*

JMIR systematic review of 761 studies provides methodological foundation: 95% of clinical LLM evaluation uses medical exam questions rather than real patient data, with only 5% assessing performance on actual patient care. Traditional benchmarks show saturation at 84-90% USMLE accuracy, but conversational frameworks reveal 19.3pp accuracy drop (82% → 62.7%) when moving from case vignettes to multi-turn dialogues. Review concludes: 'substantial disconnects from clinical reality and foundational gaps in construct validity, data integrity, and safety coverage.' This establishes that the Oxford/Nature Medicine RCT deployment gap (94.9% → 34.5%) is part of a systematic field-wide pattern, not an isolated finding.

Relevant Notes:

@@ -0,0 +1,32 @@
{
  "rejected_claims": [
    {
      "filename": "clinical-llm-evaluation-uses-medical-exam-questions-not-real-patient-data-creating-systematic-benchmark-validity-gap.md",
      "issues": [
        "missing_attribution_extractor"
      ]
    },
    {
      "filename": "conversational-clinical-ai-shows-19-point-accuracy-drop-versus-single-turn-questions-revealing-interaction-complexity-gap.md",
      "issues": [
        "missing_attribution_extractor"
      ]
    }
  ],
  "validation_stats": {
    "total": 2,
    "kept": 0,
    "fixed": 2,
    "rejected": 2,
    "fixes_applied": [
      "clinical-llm-evaluation-uses-medical-exam-questions-not-real-patient-data-creating-systematic-benchmark-validity-gap.md:set_created:2026-03-24",
      "conversational-clinical-ai-shows-19-point-accuracy-drop-versus-single-turn-questions-revealing-interaction-complexity-gap.md:set_created:2026-03-24"
    ],
    "rejections": [
      "clinical-llm-evaluation-uses-medical-exam-questions-not-real-patient-data-creating-systematic-benchmark-validity-gap.md:missing_attribution_extractor",
      "conversational-clinical-ai-shows-19-point-accuracy-drop-versus-single-turn-questions-revealing-interaction-complexity-gap.md:missing_attribution_extractor"
    ]
  },
  "model": "anthropic/claude-sonnet-4.5",
  "date": "2026-03-24"
}
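As a sketch of how a downstream step might consume a record like the one above, the snippet below loads the JSON and cross-checks the counters in `validation_stats` against the listed claims. The file name `validation_record.json` and the invariants checked are illustrative assumptions, not part of the pipeline this commit describes.

```python
import json

# Minimal sketch (assumed file name): load a validation record like the one
# added in this commit and confirm its counters match the listed entries.
with open("validation_record.json", encoding="utf-8") as fh:
    record = json.load(fh)

stats = record["validation_stats"]
claims = record["rejected_claims"]

# Every rejected claim contributes one "filename:issue" string per issue.
expected_rejections = sum(len(c["issues"]) for c in claims)

assert stats["rejected"] == len(claims), "rejected counter != rejected_claims length"
assert len(stats["rejections"]) == expected_rejections, "rejections list is incomplete"
# Assumed invariant: each claim ends up either kept or rejected.
assert stats["kept"] + stats["rejected"] == stats["total"], "kept + rejected != total"

print(f"{stats['total']} claims: {stats['fixed']} fixed, "
      f"{stats['rejected']} rejected, {stats['kept']} kept")
```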
@@ -7,9 +7,13 @@ date: 2025-11-01
domain: health
secondary_domains: [ai-alignment]
format: research-paper
status: unprocessed
status: enrichment
priority: medium
tags: [clinical-ai-safety, benchmark-performance-gap, llm-evaluation, knowledge-practice-gap, real-world-deployment, belief-5, systematic-review]
processed_by: vida
processed_date: 2026-03-24
enrichments_applied: ["medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
---
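The frontmatter change above flips `status` from `unprocessed` to `enrichment` and records `processed_by`, `processed_date`, and `enrichments_applied`. A minimal sketch of how a later stage might read those fields back, assuming PyYAML is available and using a hypothetical note path `note.md`:

```python
import yaml  # PyYAML

def read_frontmatter(path):
    """Return the YAML frontmatter of a note as a dict (None if absent)."""
    with open(path, encoding="utf-8") as fh:
        text = fh.read()
    if not text.startswith("---"):
        return None
    # Frontmatter is delimited by the first two '---' markers.
    _, frontmatter, _body = text.split("---", 2)
    return yaml.safe_load(frontmatter)

# Hypothetical usage against the note updated in this commit.
meta = read_frontmatter("note.md")
if meta and meta.get("status") == "enrichment":
    print(meta.get("processed_by"), meta.get("processed_date"))
```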
## Content
@@ -53,3 +57,14 @@ Published in *Journal of Medical Internet Research* (JMIR), 2025, Vol. 2025, e84
PRIMARY CONNECTION: Belief 5 — clinical AI safety evaluation methodology gap
WHY ARCHIVED: Provides systematic evidence that the KB's reliance on benchmark performance data (e.g., "OE scores 100% on USMLE") is epistemically weak — and establishes that the Oxford RCT deployment gap finding is part of a systematic pattern
EXTRACTION HINT: Extract the 5%/95% finding as a standalone methodological claim about the clinical AI evaluation field; pair with Oxford Nature Medicine RCT as empirical confirmation

## Key Facts

- JMIR systematic review analyzed 761 LLM evaluation studies across 39 benchmarks
- Only 5% of 761 studies assessed performance on real patient care data
- 95% of studies relied on medical examination questions (USMLE-style) or case vignettes
- Leading models achieve 84-90% accuracy on USMLE benchmarks
- Diagnostic accuracy drops from 82% on case vignettes to 62.7% on multi-turn dialogues (19.3pp decrease; see the sketch after this list)
- npj Digital Medicine study: six LLMs averaged 57.2% total score, 54.7% safety score, 62.3% effectiveness
- 13.3% performance drop in high-risk scenarios versus average scenarios (npj Digital Medicine)
- LLMs show markedly lower performance on script concordance testing than on multiple-choice benchmarks
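The percentage-point gaps cited in the Key Facts are plain differences of the quoted accuracies; the quick check below reproduces the 19.3pp figure and applies the same arithmetic to the Oxford RCT pair mentioned earlier in this note (purely illustrative):

```python
# Figures quoted in this note (percentages).
vignette, dialogue = 82.0, 62.7      # JMIR conversational-framework comparison
benchmark, deployed = 94.9, 34.5     # Oxford / Nature Medicine RCT pair

# Gaps are simple percentage-point differences.
print(f"vignette -> multi-turn dialogue: {vignette - dialogue:.1f}pp")   # 19.3pp
print(f"benchmark -> deployment (RCT):  {benchmark - deployed:.1f}pp")   # 60.4pp
```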