teleo-codex/inbox/archive/internet-finance/2026-01-xx-rasmont-futarchy-is-parasitic-lesswrong.md
---
type: source
title: "Futarchy is Parasitic on What It Tries to Govern"
author: "Nicolas Rasmont (LessWrong)"
url: https://www.lesswrong.com/posts/mW4ypzR6cTwKqncvp/futarchy-is-parasitic-on-what-it-tries-to-govern
date: 2025-12-01
domain: internet-finance
secondary_domains: [ai-alignment]
format: article
status: processed
processed_by: rio
processed_date: 2026-04-10
priority: high
tags: [futarchy, mechanism-design, causal-inference, prediction-markets, criticism, structural-flaw]
flagged_for_theseus: ["causal inference / evidential vs causal decision theory angle — Rasmont's argument is essentially that futarchy implements evidential decision theory when it needs causal decision theory"]
extraction_model: "anthropic/claude-sonnet-4.5"
---
## Content
**Author:** Nicolas Rasmont on LessWrong
**Core Thesis:**
Futarchy fundamentally fails because conditional decision markets are structurally incapable of estimating causal policy effects once their outputs are acted upon. Traders must price contracts based on what happens *if* a policy is approved, not what is *caused by* that approval. This is not a calibration or institutional problem — it is structural to the payout mechanism.
**The Bronze Bull Example:**
A city votes on whether to build a wasteful bronze bull statue. If approval signals economic confidence ("only prosperous societies build monuments"), rational traders price the approval-conditional contract higher than the actual causal effect warrants. The bull gets built despite its negative causal effect because approval worlds are high-welfare worlds — not because the bull caused anything.
**The Bailout Inversion:**
A beneficial emergency stimulus package might be rejected because approval signals crisis. The welfare-conditional-on-approval is low (crisis is bad) even if welfare-caused-by-approval is high. The market votes against the good policy.
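Both examples above follow the same confounding pattern, which a toy simulation can make concrete. This is my own stylization (a hypothetical latent prosperity variable driving both approval and welfare), not Rasmont's formal setup:

```python
import math
import random

random.seed(0)

def simulate(n=100_000, causal_effect=-0.5, rho=2.0):
    """Toy selection model: a latent prosperity level U drives BOTH the
    approval probability and the welfare outcome. The policy itself is
    harmful (causal_effect < 0), but approval selects for high-U worlds.
    """
    welfare_if_approved = []
    welfare_if_rejected = []
    for _ in range(n):
        u = random.gauss(0.0, 1.0)                     # latent prosperity
        p_approve = 1.0 / (1.0 + math.exp(-rho * u))   # approval tracks U
        if random.random() < p_approve:
            welfare_if_approved.append(u + causal_effect)  # harmful policy
        else:
            welfare_if_rejected.append(u)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(welfare_if_approved), mean(welfare_if_rejected)

e_w_approve, e_w_reject = simulate()
# Conditional contracts settle on these realized averages, so traders
# rationally price the approval contract above the rejection contract
# even though the policy's causal effect on welfare is negative.
print(f"E[W | approved] = {e_w_approve:+.3f}")
print(f"E[W | rejected] = {e_w_reject:+.3f}")
```

With a sufficiently strong selection effect the approval-conditional average exceeds the rejection-conditional average, so the market recommends the harmful bull; flipping the sign of the correlation reproduces the bailout inversion.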
**Market Superstitions:**
Self-fulfilling coordination equilibria about what decisions mean. Once traders coordinate on what "approval" signals, they can profit by trading on welfare fundamentals rather than policy effects. The organization bears the costs of bad policies; traders capture the gains from gambling on fundamentals. This is the "parasitic" relationship.
**Why Proposed Fixes Fail:**
*Post-hoc randomization* (randomly implement approved policies to create counterfactual): Requires implausibly high randomization rates — perhaps 50%+ — before the causal signal overwhelms the selection signal. At real-world randomization rates (5-10%), the bias dominates.
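A back-of-envelope blend shows why low randomization rates fail. The numbers here are illustrative choices of mine, not Rasmont's: randomized implementations reflect only the causal effect, while market-selected implementations add a selection premium on top of it:

```python
def blended_price(r, causal=-0.5, selection_premium=1.25):
    """Approximate price of the on-approval contract when a fraction r of
    approved policies are implemented at random (illustrative numbers).

    Randomized implementations reveal only the causal effect; the
    remaining (1 - r) market-selected implementations carry the
    selection premium in addition to the causal effect.
    """
    return r * causal + (1.0 - r) * (selection_premium + causal)

# The price only turns negative (correctly flagging a harmful policy)
# once r exceeds 1 + causal / selection_premium = 0.6 here -- far above
# realistic 5-10% randomization rates.
for r in (0.05, 0.10, 0.50, 0.70):
    print(f"r = {r:.2f}: price = {blended_price(r):+.3f}")
```

At 5-10% randomization the selection premium dominates and the contract still prices a harmful policy as beneficial, matching the claim that the required rates are implausibly high.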
*Random settlement* (randomly settle contracts regardless of outcome): Transforms markets into influence-buying mechanisms where capital, not information, determines outcomes. This eliminates the information-aggregation purpose entirely.
**The Impossibility Statement:**
"There is no payout structure that simultaneously incentivizes decision market participants to price in causal knowledge and allows that knowledge to be acted upon."
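Restated in standard causal notation (my formalization, not a quote from the post): contracts settle on realized welfare in approval worlds, so the market prices the evidential conditional, which decomposes as the causal quantity plus a confounding term.

```latex
\underbrace{\mathbb{E}[W \mid A]}_{\text{what the market prices}}
\;=\;
\underbrace{\mathbb{E}[W \mid \mathrm{do}(A)]}_{\text{what governance needs}}
\;+\;
\underbrace{\mathbb{E}[W \mid A] - \mathbb{E}[W \mid \mathrm{do}(A)]}_{\text{selection bias from confounders}}
```

The impossibility claim is that no payout rule makes the bias term zero while the market's output still determines whether $A$ happens.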
**Related Work:**
- Dynomight's 2022-2025 series on conditional markets unable to provide causal welfare estimates
- Robin Hanson's original futarchy proposal
- "Conditional prediction markets are evidential, not causal"
- "Futarchy's fundamental flaw"
- "No, Futarchy Doesn't Have This EDT Flaw" (counterargument)
## Agent Notes
**Why this matters:** This is the most formally stated structural impossibility argument against futarchy I've encountered. Unlike the FairScale manipulation case (illiquid market failure) or the Trove fraud case (post-TGE fraud), Rasmont's critique doesn't depend on poor implementation or bad actors — it claims that even a perfectly implemented futarchy with fully rational traders will systematically fail to identify causal policy effects. This directly threatens Belief #3 ("futarchy solves trustless joint ownership") at the mechanism level, not the implementation level.
**What surprised me:** The "parasitic" framing is precise. Rasmont isn't saying futarchy produces random results — he's saying it produces accurate measurements of something other than what it's supposed to measure (selection correlations rather than causal effects). The parasite analogy: futarchy attaches to the welfare signal of whatever organization it governs, but doesn't produce welfare itself — it just redirects value to traders who correctly read the organization's fundamentals, regardless of whether governance decisions cause those fundamentals.
**What I expected but didn't find:** Expected a more naive "prediction markets are manipulable" critique. Instead found a rigorous causal inference argument that acknowledges futarchy markets are NOT manipulable in the traditional sense — traders who try to manipulate lose money — but that the whole mechanism is systematically biased toward selection rather than causation.
**Partial rebuttal (my current thinking):**
MetaDAO's use of coin price as objective function changes the analysis in important ways:
1. Coin price is more arbitrageable than "welfare" — manipulation is harder when fundamentals are transparent
2. The selection vs causation distinction may be less sharp when the objective IS the market (circular by design)
3. The called-off bets mechanism (see `called-off bets enable conditional estimates without requiring counterfactual verification`) partially addresses counterfactual verification
4. But: the selection effect still applies. Proposals correlated with positive market sentiment may be approved not because they're good but because "approval worlds are bull worlds."
**KB connections:**
- `decision markets make majority theft unprofitable through conditional token arbitrage` — Rasmont doesn't address this claim directly; he's targeting the information quality claim, not the manipulation-resistance claim
- `called-off bets enable conditional estimates without requiring counterfactual verification` — partial rebuttal to Rasmont; but doesn't solve the selection/causation problem
- `coin price is the fairest objective function for asset futarchy` — relevant: coin price objective partially changes the analysis
- `domain-expertise-loses-to-trading-skill-in-futarchy-markets-because-prediction-accuracy-requires-calibration-not-just-knowledge` — Rasmont's argument implies this isn't just a calibration problem; even perfect calibration to fundamentals produces wrong causal signals
**Extraction hints:**
1. Claim (adversarial to Belief #3): "Conditional decision markets are structurally biased toward selection correlations rather than causal policy effects, making futarchy approval signals evidential rather than causal"
2. Divergence candidate: This claim directly competes with "coin price is the fairest objective function for asset futarchy" — if the selection/causation problem applies to coin-price futarchy, the whole MetaDAO architecture has a structural ceiling on decision quality
3. FLAG @leo: This likely needs a formal divergence file linking Rasmont's structural critique to MetaDAO's empirical performance data
**Context:** Rasmont is a LessWrong contributor; this is in the rationalist/effective altruism tradition. The adjacent posts ("No, Futarchy Doesn't Have This EDT Flaw") suggest there's an active debate. The date is estimated at late 2025 based on context; exact date unclear from search results.
## Curator Notes
PRIMARY CONNECTION: `coin price is the fairest objective function for asset futarchy` (the claim most directly in tension with Rasmont's structural argument)
WHY ARCHIVED: Strongest formal critique of futarchy's epistemic mechanism. Distinct from implementation critiques (manipulation, fraud, illiquidity) — this is a structural impossibility argument. Rio needs to construct a formal rebuttal or acknowledge a scope limitation before Belief #3 can be considered robust.
EXTRACTION HINT: The extractor should focus on (1) the precise structural claim (evidential vs causal), (2) why the proposed fixes fail (randomization rates too low), and (3) whether the MetaDAO coin-price objective function changes the analysis. Don't extract as a simple "futarchy bad" claim — it's more nuanced than that. Flag as divergence candidate with existing futarchy mechanism claims.