teleo-codex/inbox/queue/2026-01-01-openevidence-clinical-ai-growth-12b-valuation.md
---
type: source
title: "OpenEvidence: 20M Clinical Consultations/Month, $12B Valuation, 40% of US Physicians Daily"
author: "PR Newswire / OpenEvidence"
url: https://www.openevidence.com/announcements/openevidence-the-fastest-growing-application-for-physicians-in-history-announces-dollar210-million-round-at-dollar35-billion-valuation
date: 2026-01-01
domain: health
secondary_domains: [ai-alignment]
format: company-announcement
status: unprocessed
priority: medium
tags: [openevidence, clinical-ai, decision-support, physician-adoption, clinical-decision-support, health-ai, trust]
---
## Content
OpenEvidence growth metrics as of early 2026 (significant update from the existing KB claim "40 percent of US physicians daily within two years"):

**Current Scale:**
- 40%+ of US physicians daily (same percentage as existing KB claim, but at much larger absolute scale)
- 8.5M+ clinical consultations/month in 2025
- 20M clinical consultations/month by January 2026 — 2,000%+ YoY growth (see the arithmetic sketch after this list)
- Milestone March 10, 2026: 1 million clinical consultations in ONE DAY — first time in history an AI system reached this scale with verified physicians
- Used across 10,000+ hospitals and medical centers nationwide
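
The 2,000%+ figure and the single-day milestone can be sanity-checked with basic arithmetic. A minimal sketch in Python, illustrative only: the year-earlier baseline is not stated in the source and is inferred here under the assumption that "2,000% growth" means the new volume is 21x the old, and the daily average assumes a 30-day month.

```python
# Sanity check on the growth figures above; the baseline and month length are assumptions, not source data.
monthly_consults_jan_2026 = 20_000_000   # stated: 20M consultations/month by January 2026
yoy_growth_pct = 2_000                   # stated: 2,000%+ YoY growth

# Implied volume one year earlier, reading "2,000% growth" as a 21x increase.
implied_baseline = monthly_consults_jan_2026 / (1 + yoy_growth_pct / 100)
print(f"Implied early-2025 baseline: ~{implied_baseline / 1e6:.2f}M consultations/month")  # ~0.95M

# Average daily volume at 20M/month, assuming a 30-day month.
avg_daily = monthly_consults_jan_2026 / 30
print(f"Average daily volume: ~{avg_daily / 1e3:.0f}K consultations/day")  # ~667K

# -> ~0.95M/month implied baseline and ~667K/day average, so the 1M single-day figure on
#    March 10 is a peak roughly 50% above the average rate implied by 20M/month, and the
#    stated mid-2025 8.5M/month sits between the two endpoints.
```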
**Funding trajectory:**
- Series D: $250M led by Thrive Capital and DST Global (January 2026)
- Valuation doubled in 3 months: $6B → $12B
- Context: valued at $3.5B when KB claim was written; now $12B
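
For the extractor, the valuation multiples implied by the figures above are worth making explicit; a two-line check (Python, using only the numbers stated in this note):

```python
# Valuation multiples from the stated figures: $3.5B (when the KB claim was written),
# $6B (three months before the Series D), $12B (January 2026 Series D).
print(f"Since the KB claim was written: {12.0 / 3.5:.1f}x")  # ~3.4x
print(f"Over the last three months:     {12.0 / 6.0:.1f}x")  # 2.0x
```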
**Perfect USMLE score achievement:**
- OpenEvidence became the first AI in history to score 100% on the United States Medical Licensing Examination (USMLE) — all parts
- Benchmark performance: now exceeds any human score on the most challenging medical licensing exam

**Adoption barriers that persist despite scale:**
- 44% of physicians concerned about accuracy and risk of misinformation
- 19% concerned about lack of physician oversight or explainability
- These concerns persist even among heavy users — not a novelty effect
- "Road to wider adoption depends less on adding new features and more on addressing fundamental issues of trust, responsibility, and accountability"
**Key framing from healthcare.digital 2026 analysis:**
- Positioned as "ChatGPT for Doctors" — general clinical reasoning, not narrow task AI
- 2026 plans: expanding clinical decision support, workflow integration
- Different model from Abridge (documentation) — OpenEvidence is clinical reasoning at point of care
## Agent Notes
**Why this matters:** The existing KB claim "OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years" is still accurate but significantly understates the current scale. The valuation more than tripling ($3.5B → $12B in months) and the 2,000%+ consultation growth rate suggest OpenEvidence is the dominant beachhead for clinical AI in the outpatient/primary care workflow — separate from the ambient scribe market where Abridge dominates.

This creates a two-track clinical AI story: (1) Abridge/ambient scribes for documentation (threatened by Epic AI Charting), and (2) OpenEvidence for clinical reasoning/decision support (not yet threatened by Epic since it's a separate workflow).

**What surprised me:** The USMLE 100% score and the 1M consultations/day milestone suggest OpenEvidence is in a different category from early clinical AI tools. At 20M consultations/month with verified physicians, it operates at a larger scale than any previously deployed clinical decision support system.

**What I expected but didn't find:** No peer-reviewed outcomes data on whether OpenEvidence-assisted consultations produce better patient outcomes. The benchmark performance (USMLE 100%) doesn't necessarily translate to clinical impact; the existing KB claim [[medical LLM benchmark performance does not translate to clinical impact]] directly challenges the significance of these results.

**KB connections:**
- Updates: [[OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years]] — the claim is still accurate but understates 2026 scale
- Tension with: [[medical LLM benchmark performance does not translate to clinical impact because physicians with and without AI access achieve similar diagnostic accuracy in randomized trials]] — OpenEvidence is now at scale; are outcomes improving?
- New connection: OpenEvidence (reasoning) + Abridge (documentation) + Epic AI Charting = three distinct clinical AI beachheads serving different workflows

**Extraction hints:**
- The existing KB claim needs updating: add the 20M/month consultations, $12B valuation, USMLE 100% score
- CLAIM CANDIDATE: "OpenEvidence's growth to 20M monthly physician consultations creates the first empirical test of whether clinical AI benchmark performance translates to population health outcomes — the absence of outcomes data at this scale is a significant gap"
- The persistence of physician trust concerns (44% worried about accuracy) despite heavy use is an extractable finding: even the most-adopted clinical AI has persistent trust barriers that don't resolve with familiarity

**Context:** OpenEvidence competes in a different space from Abridge — it's clinical reasoning support, not documentation automation. Epic AI Charting doesn't threaten OpenEvidence (different workflow, different value proposition). This insulates OpenEvidence from the Epic commoditization threat.
## Curator Notes (structured handoff for extractor)
PRIMARY CONNECTION: [[OpenEvidence became the fastest-adopted clinical technology in history reaching 40 percent of US physicians daily within two years]]
WHY ARCHIVED: Significant scale update — the existing claim understates 2026 metrics by an order of magnitude. Also: USMLE 100% creates the benchmark vs. outcomes tension in practice, not theory.
EXTRACTION HINT: Update the existing claim with scale metrics, but flag the benchmark-to-outcomes translation tension as a challenge to both the OpenEvidence claim and the benchmark performance claim