---
description: The intelligence explosion dynamic occurs when an AI crosses the threshold where it can improve itself faster than humans can, creating a self-reinforcing feedback loop
type: claim
domain: ai-alignment
created: 2026-02-16
source: "Bostrom, Superintelligence: Paths, Dangers, Strategies (2014)"
confidence: likely
---

Bostrom formalizes the dynamics of an intelligence explosion using two variables: optimization power (quality-weighted design effort applied to increase the system's intelligence) and recalcitrance (the inverse of the system's responsiveness to that effort). The rate of change in intelligence equals optimization power divided by recalcitrance. An intelligence explosion occurs when the system crosses a crossover point -- the threshold beyond which its further improvement is driven mainly by its own actions rather than by human work.

At the crossover point, a powerful positive feedback loop engages: the AI improves itself, the improved version is better at self-improvement, and that produces further improvements. The thing that does the improving is itself improving. This is qualitatively different from any human technology race, because humans cannot increase their own cognitive capacity in real time to accelerate their research. The result is that recalcitrance at the critical juncture is likely to be low: the step from human-level to radically superhuman intelligence may be far easier than the step from sub-human to human-level, because the latter requires fundamental breakthroughs while the former involves parameter optimization by an already-capable system.

Bostrom identifies several factors that make low recalcitrance at the crossover point plausible. If human-level AI is delayed because one key insight long eludes programmers, then when the final breakthrough occurs, the AI might leapfrog from below to radically above human level without touching the intermediate rungs.
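The kinetics described above can be sketched as a toy simulation. This is an illustrative model only, not Bostrom's: the parameter values and the specific functional form of optimization power (constant human effort below the crossover, dominated by the system's own intelligence above it) are assumptions chosen to exhibit the qualitative dynamic.

```python
# Toy model of takeoff kinetics: dI/dt = optimization_power / recalcitrance.
# All numbers and functional forms here are illustrative assumptions.

def simulate(steps=200, dt=0.1, crossover=1.0, human_power=0.05, recalcitrance=1.0):
    """Euler integration of dI/dt = O(I) / R.

    Below the crossover, optimization power is a constant human
    contribution; past it, the system's own intelligence dominates O,
    so the positive feedback loop engages.
    """
    intelligence = 0.5  # arbitrary sub-crossover starting level
    trajectory = [intelligence]
    for _ in range(steps):
        if intelligence < crossover:
            power = human_power                 # human-driven improvement
        else:
            power = human_power + intelligence  # self-improvement dominates
        intelligence += dt * power / recalcitrance
        trajectory.append(intelligence)
    return trajectory

traj = simulate()
# Growth is linear below the crossover; above it dI/dt is roughly I/R,
# so intelligence grows exponentially -- the explosion in miniature.
```

The shape of the curve, not the numbers, is the point: the same equation produces slow, human-paced gains and then an abrupt regime change once the system's own output becomes the dominant term in optimization power.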
Hardware that is already abundant but underutilized could be exploited immediately. And unlike biological cognition, digital minds benefit from hardware advantages of seven or more orders of magnitude in computational speed, along with software advantages like duplicability, memory sharing, and editability.

This connects to [[recursive improvement is the engine of human progress because we get better at getting better]] -- but with a critical difference. Human recursive improvement operates across generations and is mediated by cultural transmission. Machine recursive improvement operates in real time and is limited only by computational resources. The transition from one to the other could be abrupt.

---

Relevant Notes:

- [[the first mover to superintelligence likely gains decisive strategic advantage because the gap between leader and followers accelerates during takeoff]] -- recursive self-improvement is the engine that creates decisive strategic advantage: the gap widens because improvements compound
- [[capability control methods are temporary at best because a sufficiently intelligent system can circumvent any containment designed by lesser minds]] -- recursive improvement is why containment is temporary: the system improves faster than its constraints can be updated
- [[recursive improvement is the engine of human progress because we get better at getting better]] -- human recursive improvement is the slow-motion precedent for the explosive AI version
- [[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]] -- the intelligence explosion would be a discontinuity in the already exponential trend
- [[three paths to superintelligence exist but only collective superintelligence preserves human agency]] -- understanding takeoff dynamics is essential for choosing which path to pursue
- [[the transition from human-level to superintelligent AI may be explosive because recursive self-improvement creates a positive feedback loop]] -- source-faithful treatment of Bostrom's intelligence explosion argument with the crossover point and positive feedback dynamics
- [[the rate of intelligence gain equals optimization power divided by recalcitrance]] -- source-faithful treatment of Bostrom's formal framework for analyzing takeoff kinetics
- [[a fast takeoff is more probable than a slow one because recalcitrance at the critical juncture is low while optimization power is high]] -- source-faithful treatment of Bostrom's argument for why the transition likely takes weeks or months rather than decades
- [[Git-traced agent evolution with human-in-the-loop evals replaces recursive self-improvement as credible framing for iterative AI development]] -- reframes recursive self-improvement as governed evolution: more credible because the throttle is the feature, more novel because propose-review-merge is unexplored middle ground

Topics:

- [[livingip overview]]
- [[superintelligence dynamics]]