---
description: Bostrom's inversion of his 2014 caution -- non-development of SI means 170k daily deaths from aging and disease persist forever, qualifying as an existential catastrophe by his own definition
type: claim
domain: ai-alignment
created: 2026-02-17
source: "Bostrom, Optimal Timing for Superintelligence (2025 working paper); Bostrom interview with Adam Ford (2025)"
confidence: experimental
---

"It would be in itself an existential catastrophe if we forever failed to develop superintelligence."

This single sentence from Bostrom's 2025 working paper marks perhaps the most dramatic evolution in the AI safety landscape. The author of *Superintelligence* (2014), the foundational text warning of SI dangers, now explicitly argues that *not building* SI constitutes an existential catastrophe.

The argument is straightforward, but its implications are radical. Approximately 170,000 people die every day -- roughly 60 million a year -- from causes that a sufficiently advanced intelligence could plausibly prevent: aging, disease, poverty, environmental degradation. If we accept Bostrom's own framework from *Superintelligence* -- that existential catastrophe includes the permanent curtailment of humanity's potential -- then a world where these deaths continue indefinitely because we chose not to develop the technology that could prevent them meets the definition. The catastrophe is not a single dramatic event but a continuous, normalized hemorrhage of human potential.

This inverts the precautionary framing that dominated AI safety discourse from 2014 through roughly 2023. In that era, the burden of proof sat with developers: demonstrate safety before scaling capability. Bostrom's evolved position shifts the burden: the status quo of human mortality is itself an ongoing catastrophe, and those advocating delay must account for the deaths that occur during that delay. This does not eliminate the case for caution -- Bostrom still acknowledges a significant probability of catastrophic outcomes from misaligned SI -- but it reframes caution as a tradeoff rather than a default.

The Torres critique challenges this framing directly: being murdered by misaligned ASI differs fundamentally from dying of natural causes, and conflating the two is a category error. Additionally, the species could in principle persist for billions of years without SI, so there is no death sentence requiring emergency surgery. These are serious objections. But Bostrom's counterpoint is that, from a person-affecting utilitarian standpoint, the distinction between death from aging and death from AI matters less than the total expected loss of life-years across both scenarios (a toy version of that expected-value comparison is sketched at the end of this note).

---

Relevant Notes:

- [[developing superintelligence is surgery for a fatal condition not russian roulette because the baseline of inaction is itself catastrophic]] -- the surgery analogy is the metaphorical expression of this claim
- [[consciousness may be cosmically unique and its loss would be irreversible]] -- strengthens Bostrom's argument: if consciousness is cosmically rare, maximizing conscious life-years becomes even more urgent
- [[early action on civilizational trajectories compounds because reality has inertia]] -- delay in SI development compounds: each day of inaction is 170k irreversible deaths
- [[safe AI development requires building alignment mechanisms before scaling capability]] -- the tension: Bostrom's urgency argument pushes against "safety first" but does not abandon it

Topics:

- [[_map]]
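
A minimal sketch of the expected-value comparison referenced above. Every parameter besides the 170k/day figure is an illustrative assumption of mine, not a number from Bostrom's paper: the life-years lost per death, the population size, and the average remaining lifespan are placeholders, and the model is person-affecting in that it counts only people alive today.

```python
# Toy expected-value comparison for the delay-vs-build tradeoff.
# All parameters except DEATHS_PER_DAY are illustrative assumptions.

DEATHS_PER_DAY = 170_000         # baseline daily deaths SI might prevent (from the note)
LIFE_YEARS_PER_DEATH = 30        # assumed average life-years lost per death
POPULATION = 8_000_000_000       # rough current world population
AVG_REMAINING_YEARS = 40         # assumed mean remaining lifespan per person

def loss_from_delay(delay_years: float) -> float:
    """Expected life-years lost to the status-quo death rate during a delay."""
    return delay_years * 365 * DEATHS_PER_DAY * LIFE_YEARS_PER_DEATH

def loss_from_misalignment(p_catastrophe: float) -> float:
    """Expected life-years lost if deploying now risks killing everyone alive."""
    return p_catastrophe * POPULATION * AVG_REMAINING_YEARS

if __name__ == "__main__":
    for delay in (1, 10, 50):
        d = loss_from_delay(delay)
        # Break-even catastrophe probability: the p at which deploying now
        # and delaying `delay` years have equal expected life-years lost.
        p_even = d / loss_from_misalignment(1.0)
        print(f"delay {delay:>2} yr: {d:.2e} life-years lost; "
              f"break-even p(catastrophe) ~ {p_even:.3f}")
```

Under these assumed numbers, a 1-year delay breaks even with a ~0.6% catastrophe probability and a 50-year delay with ~29%, which illustrates the structure of Bostrom's reframing: short delays are dominated by even small misalignment risks on this model, while long delays are not. The conclusion is entirely driven by the placeholder parameters.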