| description | type | domain | created | confidence | source |
|---|---|---|---|---|---|
| AI accelerates biotech risk, climate destabilizes politics, political dysfunction reduces AI governance capacity -- pull any thread and the whole web moves | claim | teleohumanity | 2026-02-16 | likely | TeleoHumanity Manifesto, Chapter 6 |
existential risks interact as a system of amplifying feedback loops, not independent threats
Almost every analysis of existential risk gets the structure wrong by treating risks as independent line items: nuclear war, AI, pandemics, climate. Each gets its own chapter, experts, and policy proposals. But the risks do not exist in isolation. They interact, compound, and amplify each other in ways that make the whole dramatically more dangerous than the sum of its parts.
The feedback loops are concrete. AI acceleration compresses the timeline for every other risk by making it easier to design bioweapons, faster to identify infrastructure vulnerabilities, and harder for institutions to keep pace. Economic disruption from AI-driven job displacement generates political instability that reduces government capacity to coordinate on climate, biosecurity, and AI governance -- precisely when coordination is most needed.
Climate change is probably not an extinction risk alone, but it is a civilizational stress multiplier. Climate refugees create political pressure. Agricultural disruption increases resource competition. Both fuel nationalist backlash that undermines the international cooperation needed for everything else. Climate doesn't need to end civilization directly -- it just needs to make us too fractured to deal with the things that can.
Biotechnology is being democratized by AI. The knowledge barrier to engineering dangerous pathogens is dropping with every improvement in AI capability, on timescales of months. Nuclear risk hasn't disappeared -- it has become less predictable in a multipolar landscape.
Since existential risk breaks trial and error because the first failure is the last event, and since these risks form a coupled system rather than independent threats, the challenge is even harder than it appears when analyzing any single risk in isolation. This is why the manifesto argues no existing institution can handle it -- the institutional architecture is siloed by domain while the risks are connected across domains.
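The "more dangerous than the sum of its parts" claim can be made concrete with a toy calculation. The sketch below compares the chance of at least one catastrophe when risks are treated as independent line items versus when each realized stressor inflates the others. All numbers (the base probabilities and the amplification factor) are illustrative assumptions for the sketch, not estimates from the manifesto.

```python
# Toy model: independent vs coupled existential risks.
# All probabilities and the amplification factor are hypothetical.
base = {"ai": 0.05, "bio": 0.03, "climate": 0.04, "nuclear": 0.02}

# Independent treatment: P(at least one) = 1 - prod(1 - p_i)
p_independent = 1.0
for p in base.values():
    p_independent *= 1 - p
p_independent = 1 - p_independent

# Coupled treatment: each risk's probability is inflated in
# proportion to the combined stress from the other risks.
# amplification = 0.5 is an assumed coupling strength.
amplification = 0.5
coupled = {
    name: min(1.0, p * (1 + amplification * sum(q for n, q in base.items() if n != name)))
    for name, p in base.items()
}
p_coupled = 1.0
for p in coupled.values():
    p_coupled *= 1 - p
p_coupled = 1 - p_coupled

print(f"independent: {p_independent:.3f}, coupled: {p_coupled:.3f}")
```

Even with weak coupling, the coupled probability strictly exceeds the independent one, and the gap grows as the amplification factor rises, which is the note's structural point in miniature.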
Relevant Notes:
- existential risk breaks trial and error because the first failure is the last event -- the foundational impossibility claim this note extends by showing the risks compound
- AI alignment is a coordination problem not a technical problem -- AI risk cannot be separated from the system of risks it amplifies
- COVID proved humanity cannot coordinate even when the threat is visible and universal -- evidence that coordination fails even for simpler, isolated threats
Topics: