teleo-codex/inbox/archive/grand-strategy/2026-04-14-abiri-mutually-assured-deregulation-arms-race-mechanism.md
---
type: source
title: Mutually Assured Deregulation
author: Gilad Abiri
url: https://arxiv.org/abs/2508.12300
date: 2025-08-17
domain: grand-strategy
secondary_domains: ai-alignment
format: paper
status: unprocessed
priority: high
tags:
  - mutually-assured-deregulation
  - arms-race-narrative
  - regulation-sacrifice
  - cross-domain-governance
  - prisoner-dilemma
  - belief-1
  - belief-2
---

Content

Academic paper (arXiv 2508.12300, v3 revised February 4, 2026) by Gilad Abiri. First published August 2025; revised to incorporate 2025-2026 policy developments.

Core argument: Since 2022, policymakers worldwide have embraced the "Regulation Sacrifice" — the belief that dismantling safety oversight will deliver security through AI dominance. The paper argues this creates "Mutually Assured Deregulation": each nation's competitive sprint guarantees collective vulnerability across all safety governance domains.

The "Regulation Sacrifice" doctrine:

  • Premise: AI is strategically decisive; competitor deregulation = security threat; our regulation = competitive handicap; therefore regulation must be sacrificed
  • Effect: operates across all safety governance domains adjacent to AI infrastructure, not just AI-specific governance
  • Persistence mechanism: serves tech company interests (freedom from accountability) and political interests (simple competitive narrative) even though it produces shared harm

Why it's self-reinforcing (the prisoner's dilemma structure):

  • Each nation's deregulation creates competitive pressure on others to deregulate
  • Unilateral safety governance imposes relative costs on domestic AI industry
  • The exit option (unilateral reregulation) is politically untenable because it is framed as handing adversaries a competitive advantage
  • Unlike nuclear MAD, which stabilized through deterrence, MAD-R (Mutually Assured Deregulation) is destabilizing: deregulation weakens all actors simultaneously rather than creating mutual restraint
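The prisoner's dilemma structure above can be sketched as a minimal two-player payoff matrix. The numbers below are illustrative assumptions, not figures from the paper; they only preserve the ordering the argument requires (defecting against a cooperator beats mutual regulation, which beats mutual deregulation, which beats being the lone regulator):

```python
REGULATE, DEREGULATE = "regulate", "deregulate"

# (row payoff, column payoff); values are illustrative, not from Abiri's paper.
PAYOFFS = {
    (REGULATE, REGULATE): (3, 3),     # mutual safety governance
    (REGULATE, DEREGULATE): (0, 5),   # unilateral regulation: competitive handicap
    (DEREGULATE, REGULATE): (5, 0),
    (DEREGULATE, DEREGULATE): (1, 1), # MAD-R: shared vulnerability
}

def best_response(opponent_move):
    """Return the move that maximizes a state's own payoff."""
    return max((REGULATE, DEREGULATE),
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

# Deregulation is the dominant strategy whatever the other state does...
assert best_response(REGULATE) == DEREGULATE
assert best_response(DEREGULATE) == DEREGULATE

# ...yet the resulting equilibrium is Pareto-dominated by mutual regulation,
# which is exactly why each nation's rational sprint produces collective loss.
equilibrium = PAYOFFS[(DEREGULATE, DEREGULATE)]
cooperative = PAYOFFS[(REGULATE, REGULATE)]
assert all(c > e for c, e in zip(cooperative, equilibrium))
```

The point of the sketch is that no actor needs to be malicious for governance to erode: under this payoff ordering, deregulating is each state's best response to either choice by the other, so the race to the bottom is an equilibrium outcome of the structure itself.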

Three-horizon failure cascade:

  • Near-term: hands adversaries information warfare tools (deregulated AI + adversarial access)
  • Medium-term: democratizes bioweapon capabilities (AI-bio convergence without biosecurity governance)
  • Long-term: guarantees deployment of uncontrollable AGI systems (safety governance eroded before AGI threshold)

Why the narrative persists despite being self-defeating: "Tech companies prefer freedom to accountability. Politicians prefer simple stories to complex truths." Both groups benefit from the narrative even though both are harmed by its outcomes.

The AI Arms Race 2.0 (AI Now Institute parallel): The Trump administration's approach "has taken on a new character — taking shape as a slate of measures that go far beyond deregulation to incorporate direct investment, subsidies, and export controls in order to boost the interests of dominant AI firms under the argument that their advancement is in the national interest." Cloaks "one of the most interventionist approaches to technology governance in a generation" in the language of deregulation.

Agent Notes

Why this matters: This is the academic framework for the cross-domain governance erosion mechanism that Sessions 04-06 through 04-13 have been tracking empirically. The paper names the mechanism ("Regulation Sacrifice" / "Mutually Assured Deregulation"), explains why it's self-reinforcing (prisoner's dilemma), and predicts the three-horizon failure cascade. This is the strongest single source for the claim that the coordination wisdom gap (Belief 1) isn't just a failure to build coordination mechanisms — it's an active dismantling of existing coordination mechanisms through competitive structure.

What surprised me: The prisoner's dilemma framing is stronger than expected. Previous sessions framed governance laundering as "bad actors exploiting governance gaps." Abiri's framing says the competitive STRUCTURE makes governance erosion rational even for willing-to-cooperate actors. This has direct implications for whether coordination mechanisms can be built without first changing the competitive structure.

What I expected but didn't find: detailed evidence across ALL three failure horizons. The abstract confirms the three horizons; the paper body likely has more domain-specific evidence on biosecurity and AGI timelines. Need to read the full paper.

KB connections:

Extraction hints:

  1. CLAIM CANDIDATE: "The AI arms race creates a 'Mutually Assured Deregulation' structure where each nation's competitive sprint creates collective vulnerability across all safety governance domains — the structure is a prisoner's dilemma in which unilateral safety governance imposes competitive costs while bilateral deregulation produces shared vulnerability, making the exit from the race politically untenable even for willing parties." (confidence: experimental, domain: grand-strategy)
  2. ENRICHMENT to Belief 1 grounding: The "Regulation Sacrifice" mechanism provides a causal explanation for why coordination mechanisms don't just fail to keep up with technology — they are actively dismantled. This upgrades the Belief 1 grounding from descriptive ("gap is widening") to mechanistic ("competitive structure makes gap-widening structurally inevitable under current incentives").
  3. FLAG @Theseus: The three-horizon failure cascade (information warfare → bioweapon democratization → uncontrollable AGI) directly engages Theseus's domain. The biosecurity-to-AGI connection is particularly important for alignment research.
  4. FLAG @Rio: The "one of the most interventionist approaches in a generation cloaked in deregulation language" framing has direct parallels to how regulatory capture operates in financial systems. The industrial policy mechanics (subsidies, export controls) parallel financial sector state capture.

Curator Notes

PRIMARY CONNECTION: technology advances exponentially but coordination mechanisms evolve linearly, creating a widening gap + existential risks interact as a system of amplifying feedback loops, not independent threats.

WHY ARCHIVED: Provides the structural mechanism (prisoner's dilemma / Mutually Assured Deregulation) for the cross-domain governance erosion pattern tracked across 20+ sessions. This is the most important academic source found for Belief 1's core diagnosis. Also directly connects existential risk interconnection to a specific governance failure pathway.

EXTRACTION HINT: The extractor should focus on the MECHANISM ("Regulation Sacrifice" → prisoner's dilemma → collective vulnerability) rather than the nuclear or AI specifics; the mechanism generalizes across domains. The three-horizon failure cascade is secondary evidence that the mechanism produces compound existential risk. Read the full paper before extraction — the abstract provides the framework, but the paper body likely has the domain-specific evidence.