| type | domain | description | confidence | source | created | secondary_domains | related_claims |
|---|---|---|---|---|---|---|---|
| claim | grand-strategy | Trial and error requires survivable errors -- existential risks produce errors that terminate the process, eliminating the learning that makes trial-and-error work | likely | Bostrom 'Superintelligence' (2014), Ord 'The Precipice' (2020), Taleb 'Antifragile' (2012) | 2026-04-21 | | |

Existential risk breaks trial and error because the first failure is the last event
Every adaptive system -- evolution, markets, science, startups -- works by trying things, observing outcomes, and adjusting. The hidden assumption: failures are survivable. Evolution requires organisms to die, not species. Markets require companies to fail, not the economy. Science requires hypotheses to be falsified, not the laboratory destroyed.
Existential risks violate this assumption. A nuclear war, a misaligned superintelligence, a catastrophic pandemic, or irreversible ecological collapse are failures from which the system cannot recover to try again. The first instance of the failure is also the last instance of anything. Trial and error works because errors are informative -- but existential errors cannot inform because there is no one left to learn.
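The asymmetry above can be sketched as a toy simulation (not from the source; the decay rate and failure probability are illustrative assumptions): an agent repeats trials, and each survivable failure teaches it something, lowering the future failure rate. If failures are instead terminal, the process stops at the first one, and no learning accumulates.

```python
import random

def trial_and_error(p_fail, terminal, trials=1000, seed=0):
    """Toy model of an adaptive process.

    Each trial fails with probability p_fail. A survivable failure is
    informative: it reduces future p_fail slightly. A terminal failure
    ends the process -- the first failure is the last event.
    Returns the number of successful trials accumulated.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        if rng.random() < p_fail:
            if terminal:
                return successes  # no one left to learn
            p_fail *= 0.99        # survivable error: learn, improve
        else:
            successes += 1
    return successes

# Same starting conditions; only the recoverability of failure differs.
survivable = trial_and_error(0.5, terminal=False)
terminal = trial_and_error(0.5, terminal=True)
```

Under these toy assumptions the survivable regime accumulates far more successes than the terminal one, which is truncated at its first failure regardless of how much learning was still available.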
This is not an argument against risk-taking. It is an argument for categorical separation between risks that are survivable (and therefore learnable) and risks that are terminal (and therefore must be prevented a priori). Taleb's "Antifragile" framework makes this precise: systems should be antifragile (gaining from volatility) at the level of components but absolutely robust at the level of the whole. Individual firms should fail; the economy should not. Individual experiments should go wrong; civilization should not.
The implication for governance is that existential risks cannot be managed through the normal institutional processes designed for recoverable failures. Democratic deliberation is too slow. Market signals come too late. Scientific consensus forms after observation, but an existential failure permits no second observation. This creates a fundamental tension: the precautionary principle is both necessary (for existential risks) and paralyzing (if applied to all risks). The resolution is to distinguish risks by their recoverability, not their probability.
Evidence
- Ord (2020) -- estimates approximately 1/6 probability of existential catastrophe this century, dominated by unaligned AI and engineered pandemics
- Bostrom (2014) -- formalizes the argument that superintelligent AI is an existential risk category because a single failure may be unrecoverable
- Nuclear near-misses -- Petrov (1983), Cuban Missile Crisis (1962) demonstrate that existential risks can approach trigger conditions through normal institutional failures
- Taleb (2012) -- "Antifragile" formalizes the asymmetry: systems that gain from small shocks are destroyed by large ones; the distribution of shock sizes determines survival
Challenges
- The precautionary principle, if applied too broadly, prevents all innovation -- the challenge is correctly classifying which risks are truly existential vs. merely catastrophic but recoverable
- Existential risk estimates are extremely uncertain -- Ord's 1/6 estimate is itself a product of limited evidence, and rational people disagree by orders of magnitude