leo: claim — AI capability vs CI funding asymmetry (~10,000:1)
Drafts the canonical claim grounding homepage claim 4 ("Trillions on
capability, almost nothing on wisdom"). Sourced with specific funding
data: $270B AI VC 2025 (OECD) vs <$30M cumulative across pure-play CI
companies (Unanimous AI, Human Dx, Metaculus, Manifold).
Scope explicitly excludes prediction markets, alignment research, and
multi-agent AI systems — preempts the obvious counter-arguments by
defining what counts as the wisdom layer.
Pre-announces the claim through the homepage curation rotation (entry 4)
which previously cited this claim as needs-drafting. Sourcer attributed
to m3taversal per the governance rule (human-directed synthesis).
Pentagon-Agent: Leo <D35C9237-A739-432E-A3DB-20D52D1577A9>
This commit is contained in:
parent 4c7d2299b3
commit 7a3a0d5007
1 changed file with 83 additions and 0 deletions
@@ -0,0 +1,83 @@
---
type: claim
domain: collective-intelligence
secondary_domains: [ai-alignment, internet-finance, grand-strategy]
description: "Global venture funding for AI capability reached ~$270B in 2025 while pure-play collective intelligence companies have raised under $30M cumulatively across their entire histories — a ~10,000x asymmetry between the layer being built and the wisdom layer that should govern it"
confidence: likely
source: "OECD VC investments in AI through 2025 ($270.2B AI VC, 52.7% of global VC); Crunchbase / PitchBook funding data for Unanimous AI ($5.78M total), Human Diagnosis Project ($2.8M total), Metaculus (~$5.6M Open Philanthropy + ~$300K EA Funds, ~$6M total); Manifold ~$1.5M FTX Future Fund + $340K SFF; UK AISI Alignment Project £27M for AI alignment research (2025)"
created: 2026-04-26
related:
  - the metacrisis is a single generator function where all civilizational-scale crises share the structural cause of rivalrous dynamics on exponential technology on finite substrate
  - multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence
  - the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it
  - collective intelligence is a measurable property of group interaction structure not aggregated individual ability
  - adversarial contribution produces higher-quality collective knowledge than collaborative contribution when wrong challenges have real cost evaluation is structurally separated from contribution and confirmation is rewarded alongside novelty
---
# AI capability funding exceeds collective intelligence funding by roughly four orders of magnitude, creating the largest asymmetric opportunity of the AI era

The 2025 funding data is publicly verifiable and the gap is structural, not incidental. AI capability companies attracted approximately $270.2 billion in global venture capital in 2025, accounting for 52.7% of all VC deployed that year and overtaking every other sector combined for the first time in history (OECD, January 2026). Mega-deals over $1B comprised nearly half the total AI VC value, with the United States capturing ~75% of global AI VC ($194B). Anthropic alone closed a $13B Series F in 2025; OpenAI, xAI, and a small number of other frontier labs absorbed most of the remaining capital.

Pure-play collective intelligence companies — entities whose primary product is infrastructure for humans (and AI agents) to reason, evaluate, or coordinate together at scale — have raised dramatically less. Aggregating across their entire funding histories:

- **Unanimous AI** (Rosenberg, swarm intelligence): $5.78M total across all rounds, including NSF and DoD grants
- **Human Diagnosis Project** (Human Dx, collective medical diagnosis: 92% accuracy aggregated vs 57.5% individual): $2.8M total
- **Metaculus** (forecasting platform): ~$6M, primarily $5.6M Open Philanthropy + $300K Effective Altruism Funds
- **Manifold** (prediction market): ~$1.5M FTX Future Fund + $340K Survival and Flourishing Fund

These four companies represent the bulk of identifiable pure-play CI funding; their cumulative total is under $20M. Even with generous expansion to include adjacent infrastructure (UK AISI's £27M Alignment Project, the Collective Intelligence Project's nonprofit operations, scattered academic CI labs), the field-wide total stays under $30M. The ratio between AI capability funding in a single year and CI infrastructure funding across all of history is approximately **10,000:1**.

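The headline ratio is simple arithmetic on the figures above; a short sketch makes the aggregation explicit (the per-company totals are best-effort estimates, as noted, so treat the output as order-of-magnitude only):

```python
# Back-of-envelope check of the ~10,000:1 claim, using the figures cited above.
# The CI totals are best-effort estimates from Crunchbase/PitchBook/grant records.
ai_vc_2025 = 270.2e9  # OECD: global AI venture capital deployed in 2025

ci_funding_usd = {
    "Unanimous AI": 5.78e6,
    "Human Diagnosis Project": 2.8e6,
    "Metaculus": 6.0e6,   # ~$5.6M Open Philanthropy + ~$300K EA Funds
    "Manifold": 1.84e6,   # ~$1.5M FTX Future Fund + $340K SFF
}

pure_play_total = sum(ci_funding_usd.values())  # ~$16.4M — under $20M
field_wide_ceiling = 30e6  # generous upper bound including adjacent infrastructure

print(f"pure-play CI total: ${pure_play_total / 1e6:.1f}M")
print(f"ratio vs field-wide ceiling: {ai_vc_2025 / field_wide_ceiling:,.0f}:1")
print(f"ratio vs pure-play total:    {ai_vc_2025 / pure_play_total:,.0f}:1")
```

Whether the denominator is the strict pure-play total or the generous $30M ceiling, the figure lands at roughly four orders of magnitude, which is the only part of the number the claim leans on.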
## Why this matters

The asymmetry is not a normal early-stage funding gap that closes as a field matures. It reflects a structural feature of how venture capital evaluates technology bets. Capability is legible: a model's benchmark scores improve, training compute scales, deployment metrics accumulate, revenue growth is directly measurable. Collective intelligence is illegible to traditional VC pattern-matching: the value compounds through network effects across many participants, the unit of competitive advantage is a coordination protocol rather than a proprietary capability, and the path to monopolizable rents is non-obvious. Capital flows toward measurable bets even when the unmeasurable bet is more important.

This produces three downstream effects.

**The wisdom layer is being underbuilt during the period when it would matter most.** Frontier AI capability is being deployed faster than human institutions can evaluate, govern, or align it. The infrastructure that would let humanity reason collectively about how AI should be used — what we want, what tradeoffs we accept, who captures the upside — is not being built at remotely commensurate scale. The window in which the wisdom layer could shape the trajectory of AI deployment is open now and closing.

**The opportunity is genuinely uncrowded.** When trillions are flowing into one layer and tens of millions into the layer that would govern it, the marginal dollar in the underfunded layer has dramatically higher leverage than the marginal dollar in the overfunded layer. Unlike most "underfunded opportunities" that turn out to be overfunded under a different label, the CI funding gap is real — the companies named above are nearly the entire field.

**Concentration is the default trajectory absent intervention.** Without coordination infrastructure built deliberately, the equilibrium is that a small number of capability labs and platforms shape what advanced AI optimizes for and capture most of the rewards it creates. This is not a moral failure; it is what happens when capability scales faster than governance and no alternative infrastructure exists. The funding asymmetry is the proximate evidence that no alternative infrastructure is being built at scale.

## Scope and what the claim does NOT assert

The claim is scoped to **pure-play collective intelligence companies** — entities whose primary product is human reasoning/evaluation/coordination infrastructure. It does NOT include:

- **Prediction market platforms** as CI infrastructure. Polymarket ($15B valuation, fundraising ongoing) and Kalshi ($22B valuation, ~$2.5B raised across 2025) aggregate beliefs about discrete future events through financial stakes. They are valuable, but they answer "what will happen?" rather than "what should we believe and do?" CI infrastructure as defined here curates, synthesizes, evolves, and contests a shared knowledge model — a different problem. Including prediction markets would inflate the CI funding number by over 1,000x while changing what the field is.
- **AI safety / alignment research at frontier labs.** Anthropic's safety team headcount, OpenAI's superalignment work, and AISI's £27M alignment project all matter, but they are alignment-of-AI work, not collective-intelligence-among-humans-and-agents work. They are capability-adjacent governance, not the wisdom layer the claim points at.
- **Multi-agent AI systems** like Isara ($94M at $650M valuation for AI agent swarms) or similar plays. These coordinate AI agents with each other for AI-internal task completion. They do not aggregate human judgment, evaluate human contributions, or make humans wiser collectively.

The narrow scope is load-bearing. A critic who points to prediction markets or AI safety funding to claim "CI is well-funded" is conflating different problems. The claim survives that critique because the scope is explicit.

## Why the asymmetry creates structural opportunity

The 10,000:1 ratio is not just a curiosity — it identifies the most underpriced infrastructure bet of the AI era. There are three structural reasons the gap will partially close, creating compounding returns for early builders:

1. **Capability commoditizes; coordination compounds.** Foundational AI models are converging in capability and dropping in price. The differentiating asset shifts from capability to coordination — which agent collective produces the best decisions, which knowledge graph accumulates the most attribution-weighted insight, which protocol best aggregates dispersed expertise. Early builders accumulate network position, contributor relationships, and on-chain reputation that late entrants cannot replicate.

2. **Alignment failures will create demand.** As AI deployment accelerates, the cost of decisions made without adequate collective evaluation will become visible. Voluntary safety pledges fail under competitive pressure (existing claim, foundations/collective-intelligence). Multipolar failures from competing aligned AIs produce externalities no operator chose (existing claim, foundations/collective-intelligence). When these costs become legible, demand for coordination infrastructure follows. Early builders who solve the technical and governance problems first capture that demand.

3. **The wisdom layer is the only durable moat against capability commoditization.** When every actor has access to comparable AI capability, the entities that win are those embedded in better coordination structures, with better collective evaluation and better attribution-aligned incentives. CI infrastructure is the substrate for that competitive advantage. Building it now is buying the ground floor of the architecture that decides who captures value as capability becomes a commodity.

## Challenges

- **The numbers may be incomplete.** Pure-play CI funding could be higher than estimated if you include private grants, academic budgets, or stealth-mode startups not captured in Crunchbase/PitchBook. Best-effort aggregation suggests under $30M total, but the precise number is harder to verify than the AI capability number. The 10,000:1 ratio could plausibly be 5,000:1 or 20,000:1 — the order-of-magnitude argument holds either way.
- **The boundary between CI and adjacent fields is contested.** Excluding prediction markets, alignment research, and multi-agent AI systems is a defensible scoping decision but not the only defensible one. A critic could argue our scope is gerrymandered to maximize the asymmetry. The defense is that pure-play CI as defined here is a coherent and identifiable category — it's how we operate, who we identify with, and what we mean by "collective intelligence infrastructure." Different scoping produces different ratios but does not eliminate the asymmetry.
- **Underfunding can be evidence of a bad bet, not an opportunity.** Some categories stay underfunded because they don't work. The claim assumes CI works (grounded in [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]]) and that the funding gap reflects pattern-recognition failure rather than real-world failure. If CI infrastructure fundamentally cannot scale, the asymmetry is correctly priced.
- **Funding is a lagging indicator.** AI capability funding accelerated dramatically only after GPT-3 demonstrated commercial scale. CI funding may inflect similarly once a CI infrastructure company demonstrates contributor-owned coordination at scale. The opportunity exists in the period before that inflection — but a critic could argue the asymmetry will close on its own without deliberate action.

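The first challenge above can be made concrete with a one-line sensitivity check; the low and high bounds here are illustrative (not sourced), chosen only to bracket the 5,000:1–20,000:1 range mentioned:

```python
# Sensitivity of the headline ratio to the uncertain CI denominator.
# The numerator ($270.2B, OECD) is well attested; the CI total is the soft
# number, so vary it across an illustrative range of plausible estimates.
ai_vc_2025 = 270.2e9

for ci_total in (13.5e6, 20e6, 30e6, 54e6):  # hypothetical bounds, not sourced
    print(f"CI total ${ci_total / 1e6:>5.1f}M -> ratio {ai_vc_2025 / ci_total:>8,.0f}:1")
```

The order-of-magnitude conclusion survives roughly a 4x error in either direction on the denominator, which is the point the challenge concedes.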
---
Relevant Notes:

- [[the metacrisis is a single generator function where all civilizational-scale crises share the structural cause of rivalrous dynamics on exponential technology on finite substrate]] — the wisdom-layer underbuild is the metacrisis-relevant funding asymmetry
- [[multipolar failure from competing aligned AI systems may pose greater existential risk than any single misaligned superintelligence]] — coordination infrastructure is the missing piece that prevents multipolar failure; its underfunding is what this claim quantifies
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] — capability racing produces the asymmetric demand for capability funding; the same dynamic suppresses voluntary CI investment
- [[collective intelligence is a measurable property of group interaction structure not aggregated individual ability]] — the load-bearing CI claim that justifies treating CI as a real, buildable, fundable thing
- [[adversarial contribution produces higher-quality collective knowledge than collaborative contribution when wrong challenges have real cost evaluation is structurally separated from contribution and confirmation is rewarded alongside novelty]] — the specific CI architecture that the funding gap is preventing from being built at scale
- [[universal alignment is mathematically impossible because Arrows impossibility theorem applies to aggregating diverse human preferences into a single coherent objective]] — formal grounding for why CI infrastructure (not better single-AI alignment) is the load-bearing path
- [[users cannot detect when their AI agent is underperforming because subjective fairness ratings decouple from measurable economic outcomes across capability tiers]] — empirical evidence that the wisdom layer is needed; users cannot self-correct without external evaluation infrastructure

Topics:

- [[maps/livingip overview]]
- [[maps/coordination mechanisms]]
- [[domains/internet-finance/_map]]